id | title | abstract | authors | published_date | link | markdown
---|---|---|---|---|---|---|
2305.18277 | 3DTeethSeg'22: 3D Teeth Scan Segmentation and Labeling Challenge | Teeth localization, segmentation, and labeling from intra-oral 3D scans are
essential tasks in modern dentistry to enhance dental diagnostics, treatment
planning, and population-based studies on oral health. However, developing
automated algorithms for teeth analysis presents significant challenges due to
variations in dental anatomy, imaging protocols, and limited availability of
publicly accessible data. To address these challenges, the 3DTeethSeg'22
challenge was organized in conjunction with the International Conference on
Medical Image Computing and Computer Assisted Intervention (MICCAI) in 2022,
with a call for algorithms tackling teeth localization, segmentation, and
labeling from intraoral 3D scans. A dataset comprising a total of 1800 scans
from 900 patients was prepared, and each tooth was individually annotated by a
human-machine hybrid algorithm. A total of 6 algorithms were evaluated on this
dataset. In this study, we present the evaluation results of the 3DTeethSeg'22
challenge. The 3DTeethSeg'22 challenge code can be accessed at:
https://github.com/abenhamadou/3DTeethSeg22_challenge | Achraf Ben-Hamadou, Oussama Smaoui, Ahmed Rekik, Sergi Pujades, Edmond Boyer, Hoyeon Lim, Minchang Kim, Minkyung Lee, Minyoung Chung, Yeong-Gil Shin, Mathieu Leclercq, Lucia Cevidanes, Juan Carlos Prieto, Shaojie Zhuang, Guangshun Wei, Zhiming Cui, Yuanfeng Zhou, Tudor Dascalu, Bulat Ibragimov, Tae-Hoon Yong, Hong-Gi Ahn, Wan Kim, Jae-Hwan Han, Byungsun Choi, Niels van Nistelrooij, Steven Kempers, Shankeeth Vinayahalingam, Julien Strippoli, Aurélien Thollot, Hugo Setbon, Cyril Trosset, Edouard Ladroit | 2023-05-29T17:49:58Z | http://arxiv.org/abs/2305.18277v1 | # 3DTeethSeg'22: 3D Teeth Scan Segmentation and Labeling Challenge
###### Abstract
Teeth localization, segmentation, and labeling from intra-oral 3D scans are essential tasks in modern dentistry to enhance dental diagnostics, treatment planning, and population-based studies on oral health. However, developing automated algorithms for teeth analysis presents significant challenges due to variations in dental anatomy, imaging protocols, and limited availability of publicly accessible data. To address these challenges, the 3DTeethSeg'22 challenge was organized in conjunction with the International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI) in 2022, with a call for algorithms tackling teeth localization, segmentation, and labeling from intraoral 3D scans. A dataset comprising a total of 1800 scans from 900 patients was prepared, and each tooth was individually annotated by a human-machine hybrid algorithm. A total of 6 algorithms were evaluated on this dataset. In this study, we present the evaluation results of the
3DTeethSeg'22 challenge. The 3DTeethSeg'22 challenge code can be accessed at: [https://github.com/abenhamadou/3DTeethSeg22_challenge](https://github.com/abenhamadou/3DTeethSeg22_challenge).
keywords: teeth localization, 3D teeth segmentation, 3D segmentation, 3D object detection, 3D intraoral scans, dentistry
## 1 Introduction
Computer-aided design (CAD) tools have become increasingly popular in modern dentistry for highly accurate treatment planning. In particular, in orthodontic CAD systems, advanced intraoral scanners (IOSs) are now widely used as they provide precise digital surface models of the dentition. Such models can dramatically help dentists simulate tooth extraction, movement, deletion, and rearrangement, and therefore ease the prediction of treatment outcomes. Hence, digital teeth models have the potential to release dentists from otherwise tedious and time-consuming tasks. Although IOSs are becoming widespread in clinical dental practice, there are only a few contributions on teeth segmentation/labeling available in the literature (Lian et al., 2019; Xu et al., 2018; Sun et al., 2020) and no publicly available database. A fundamental issue that arises with IOS data is the ability to reliably segment and identify teeth in scanned observations. Teeth segmentation and labeling is difficult as a result of the inherent similarities between tooth shapes as well as their ambiguous positions on jaws. In addition, it faces several challenges:
1. The teeth position and shape variation across subjects.
2. The presence of abnormalities in dentition, for example teeth crowding, which results in teeth misalignment and thus non-explicit boundaries between neighboring teeth. Moreover, missing teeth and gaps are commonly seen.
3. Damaged teeth.
4. The presence of braces and other dental equipment.

The challenge we propose particularly focuses on point 1, _i.e.,_ the teeth position and shape variation across subjects. As more data becomes available in the mid and long term, the other points will also be addressed in further editions of the challenge.
### Terminology
In this section, we will explore three essential terms used in the analysis of intraoral 3D scans: localization, segmentation, and labeling. Localization refers to the precise identification and positioning of a tooth, including the calculation of its 3D centroid. Segmentation, on the other hand, involves identifying the vertices that pertain to a detected tooth, allowing for the demarcation of
its boundaries. Labeling involves assigning a specific class to a detected and segmented tooth. In this work, we adhere to the FDI teeth numbering system.
### Prior work
The majority of relevant works on the topic fall into two categories: handcrafted feature-based approaches and learning-based approaches.
#### Handcrafted feature-based approaches
Former approaches were mainly grounded in the extraction of handcrafted geometric features to segment 3D dental scans. These approaches are broadly classified into three types: surface curvature-based methods, contour line-based methods, and harmonic field-based methods. Surface curvature is highly informative in IOSs for characterizing tooth surfaces and locating tooth/gum borders. This feature is used in (Zhao et al., 2006) to propose a semi-automatic method for teeth segmentation based on curvature thresholding, in which gum segregation is followed by 3D teeth boundary curve identification to ensure the segmentation process. Later, Yuan _et al._ developed an integrated single-tooth modeling scheme for region extraction and teeth separation based on surface minimum curvature calculation (Yuan et al., 2010). Wu _et al._ (Wu et al., 2014) proposed a morphological skeleton-based method for teeth segmentation from IOSs by separating teeth using region-growing operations. In the same vein, (Kronfeld et al., 2010) suggested a system for detecting the boundaries between teeth and gingiva based on active contour models. For the contour-line method, tooth boundary landmarks are manually selected by users on dental 3D scans. In (Sinthanayothin & Tharanont, 2008; Yaqi & Zhongke, 2010), for instance, the selected tooth boundary is used to calculate the contour lines from their geodesic information and generate the desired final tooth boundaries. We can also mention the harmonic-field method, which is more user-friendly for teeth segmentation than the previous approaches. In comparison to previous techniques, this method requires less user interaction, allowing users to select a limited number of surface points prior to the segmentation process (Zou et al., 2015; Liao et al., 2015).
The approaches described above fall short when it comes to robust and fully automated segmentation of dental 3D scans. Setting the optimal threshold value for surface curvature-based methods is not straightforward. Indeed, these methods are still sensitive to noise, and selecting the wrong threshold can systematically affect the segmentation accuracy, resulting in over- or under-segmentation. Furthermore, manual threshold selection will always keep curvature-based methods far from being applicable in a fully automatic mode. Also, contour-line approaches are time-consuming, hard to use, and rely heavily on human interaction. Finally, harmonic field techniques involve sophisticated and heavy pre-processing steps.
#### Learning-based approaches
Teeth segmentation techniques have recently shifted away from handcrafted features and toward learned features thanks to deep learning techniques. Indeed, it is nowadays clear that data-driven feature extraction, using CNNs for example, outperforms handcrafted features for many computer vision tasks, such as object detection (Ren et al., 2015), image classification (Wang et al., 2016), _etc._, and 3D teeth segmentation and labeling is no exception. Depending on the input data, feature learning methods can be divided into two main approaches: 2D image segmentation and 3D mesh segmentation. CNNs have been used in numerous studies to extract relevant features from 2D images. In particular, Cui _et al._ (Cui et al., 2019) introduced a two-stage deep supervised neural network architecture for automatic tooth instance segmentation and identification from Cone-Beam Computed Tomography (CBCT) images. A set of edge maps was first extracted from the CBCT slices with an autoencoder CNN and then fed to a Mask R-CNN network (He et al., 2017) for tooth segmentation and recognition. Another study fine-tuned a pre-trained AlexNet network on CBCT dental slices for automatic teeth classification (Miki et al., 2017). A symmetric fully convolutional residual neural network was suggested by (Rao et al., 2020) to generate a segmentation probability map for teeth in CBCT images. Following that, the dense conditional random field technique and the deep bottleneck architecture were used for teeth boundary smoothing and segmentation enhancement, respectively. Zhang _et al._ (Zhang et al., 2020) isomorphically mapped 3D dental scans into a 2D harmonic parameter space to generate 2D images that were fed into a CNN based on the U-Net architecture for tooth image segmentation.
Taking advantage of recent advances in deep learning techniques and hardware computing capability, researchers started to employ deep learning-based methods directly on 3D dental meshes. Sun _et al._ used FeaStNet (Verma et al., 2018), a graph CNN-based architecture, for automated tooth segmentation and labeling from 3D dental scans (Sun et al., 2020). They then extended this architecture and proposed an end-to-end graph convolutional network-based model for tooth segmentation and dense correspondence of 3D dental scans, in which a geodesic map and a probability matrix were used to improve the segmentation performance (Sun et al., 2020). Xu _et al._ (Xu et al., 2018) introduced a multi-stage framework based on a deep CNN architecture for 3D dental mesh segmentation, where the teeth-gingiva and inter-teeth labeling processes were achieved by training two independent CNNs. Similarly, Zanjani _et al._ proposed an end-to-end deep learning system based on the PointNet (Qi et al., 2017) network architecture for semantic segmentation of individual teeth and gingiva from point cloud representations, as well as a secondary neural network acting as a discriminator in an adversarial learning setting for teeth labeling refinement (Zanjani et al., 2019). A broader perspective was adopted by Lian _et al._, who modified the original PointNet architecture to incorporate a set of graph-constrained learning modules in order to extract multi-scale local contextual features for teeth segmentation and labeling on 3D intra-oral scans (Lian et al., 2020). Unlike (Lian et al., 2020; Zanjani et al., 2019), the authors in (Tian et al., 2019) added a preprocessing step that encodes the input 3D scans using sparse voxel octree partitioning before separately feeding a three-level hierarchical CNN for the segmentation process and a two-level hierarchical CNN for teeth recognition. A different approach was recently proposed in (Cui et al., 2020), where the pipeline is divided into two key components: a first CNN dedicated to 3D centroid prediction for teeth localization, followed by a second CNN applied separately on each pre-localized tooth crop for joint tooth/gum segmentation and tooth type recognition. In the same vein, in (Zanjani et al., 2019) a region proposal network (RPN) based on a Monte Carlo approach was proposed as a first teeth localization step. This RPN is followed by a Mask R-CNN-like architecture for instance-wise teeth segmentation. Finally, as a post-processing procedure, a look-up table on the teeth centroids assigns labels to the detected teeth. Another deep neural network architecture was suggested in (Ma et al., 2020) for pre-detected teeth classification on 3D scanned point clouds based on adjacency similarity and relative position feature vectors. It attempts to explicitly model the spatial relationship between adjacent teeth for recognition.
Zhao _et al._ (Zhao et al., 2021) proposed an end-to-end network that adopts a series of graph attentional convolution layers and a global structure branch to extract fine-grained local geometric features and global features from raw mesh data. Then, these features are fused to learn the segmentation and labeling tasks. Zhao _et al._ (Zhao et al., 2022) suggested a two-stream graph convolutional network (TSGCN) where the first stream captured coarse structures of teeth from the mesh 3D coordinates, while the second stream extracted distinctive structural details from its normal vectors. Since current learning-based methods mainly rely on expensive point-wise annotations, Qiu _et al._ (Qiu et al., 2022) introduced the Dental Arch (DArch) method for 3D tooth instance segmentation using weak low-cost annotated data (labeling all tooth centroids and only a few teeth for each dental scan). The DArch consists of two stages: tooth centroid detection and tooth instance segmentation where the dental arch is initially generated by Bezier curve regression, and then refined using a graph-based convolutional network (GCN).
## 2 Materials and challenge setup
### Data acquisition and annotation
#### 2.1.1 Data acquisition
In compliance with the European General Data Protection Regulation (GDPR) agreement, we obtained 3D intra-oral scans for 900 patients acquired by orthodontists/dental surgeons with more than 5 years of professional experience from partner dental clinics located mainly in France and Belgium. All data is completely anonymized, and the identity of the patients cannot be revealed. Two 3D scans are acquired for each patient, covering the upper and lower jaws separately. The following IOSs were used for scan acquisition: the Primescan from Dentsply, the Trios3 from 3Shape, and the iTero Element 2 Plus. These
scanners are representative and generate 3D scans with an accuracy between 10 and 90 micrometers and a point resolution between 30 and 80 pts/mm². No additional equipment other than the IOS itself was used during the acquisitions. All acquired clinical data were collected from patients requiring either orthodontic (50%) or prosthetic (50%) treatment. The provided dataset follows a real-world patient age distribution: 50% male and 50% female; about 70% under 16 years old, about 27% between 16 and 59 years old, and about 3% over 60 years old.
#### 2.1.2 Data annotation and processing
The data annotation, _i.e.,_ teeth segmentation and labeling, was performed in collaboration with clinical evaluators with more than 10 years of expertise in orthodontics, dental surgery, and endodontics. The detailed process is depicted in Fig. 1.
It consists of eight steps. First, the 3D scans are preprocessed (steps 1 and 2 in Fig. 1) by removing all degenerate and redundant mesh faces, as well as duplicated and irrelevant vertices. The dental mesh coordinates are then automatically centered and aligned with the occlusal plane by principal component analysis. This improves teeth visibility while also normalizing the 3D pose of all the input 3D IOSs. Then, we used a custom tool to manually crop each tooth from the 3D scans with a tight sphere that includes the detected tooth as well as its surroundings (_i.e.,_ neighboring teeth and gingiva). We decided to perform UV mapping in step 3 to flatten the cropped 3D meshes and show the maximum 3D curvature to make the annotation of tooth boundaries easier. This transformation is ensured by harmonic parameterization, a fixed-boundary parameterization algorithm that calculates the 2D coordinates of the flattened cropped tooth as two harmonic functions (Eck et al., 1995). The boundary vertices are mapped to a circle, and the 2D coordinates of the remaining vertices are calculated using the two harmonic functions and the circle boundary constraints. The benefits are twofold: the annotator can annotate the 2D polygons delimiting the tooth without changing the 3D point of view, and the 3D curvature overlay is informative on the boundaries of teeth.

Figure 1: Illustration of our annotation process. An input 3D IOS is annotated following eight steps, beginning with preprocessing and pose normalization and ending with clinical validation. The clinical validator can return the annotation to steps 2, 4, or 7, depending on the raised issue, which respectively corresponds to missing teeth, teeth border issues, or incorrect teeth instance labeling.
After the manual annotation of the UV maps (step 4 of Fig. 1), we back-propagate the tooth boundaries to the 3D crops in step 5. At this point, each separate tooth candidate has been manually segmented; however, they are all still represented in the same 3D coordinate system. The aim of the next step is to gather all the tooth crowns and prepare them for manual labeling, as shown in step 7 of Fig. 1. We followed the FDI World Dental Federation numbering system for teeth labeling, depicted in Fig. 2.
The final step 8 of the annotation process consists of the visual inspection and validation of the produced annotated 3D IOS. This step is carried out by our clinical partners, who are experienced orthodontists, dental surgeons, and endodontists. Their inspection targeted the identification of annotation issues, such as a missing tooth annotation (return to step 2), inaccurate tooth boundary annotation (return to step 4), or incorrect tooth labels (return to step 7). This validation/correction cycle was repeated until the entire dataset was correctly annotated and clinically validated.
## 3 Data records
A total of 1800 3D intra-oral scans have been collected for 900 patients, covering their upper and lower jaws separately. Fig. 3 shows some examples. The data are hosted on the Figshare platform 2. For the purpose of the segmentation and labeling challenge held at MICCAI 2022, the 3D scan data (obj format) are separated into training data (1200 scans, 16004 teeth) and test data (600 scans, 7995 teeth). Patients are anonymized by their universally unique identifier (uuid). The ground truth tooth labels and tooth instances for each vertex in the obj files are provided in JavaScript Object Notation (JSON) format. A JSON file example is shown below:

Figure 2: FDI World Dental Federation notation.
Footnote 2: The link is provided in [https://github.com/abenhamadou/3DTeethSeg22_challenge](https://github.com/abenhamadou/3DTeethSeg22_challenge)
```
{
    "id_patient": "YNKZHRPO",
    "jaw": "upper",
    "labels": [0, 0, 44, 33, 34, 0, 0, 45, 0, ..., 41, 0, 0, 37, 0, 34, 45, 0, 31, 36],
    "instances": [0, 0, 10, 2, 12, 0, 0, 9, 0, ..., 10, 0, 0, 8, 0, 1, 9, 0, 1, 8]
}
```
The length of the "labels" and "instances" arrays equals the total number of vertices in the corresponding 3D scan. The label and instance "0" are reserved by default for the gingiva. Other than "0", the number of unique values in the "instances" array indicates the number of teeth in the 3D scan.
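For concreteness, here is a minimal sketch of how a scan and its annotation file could be parsed; the file names are hypothetical, and `trimesh` is just one of several libraries that can read the obj scans:

```
import json

import numpy as np
import trimesh  # any obj-capable mesh reader works

# Hypothetical file names; the dataset pairs each obj scan with a JSON file.
mesh = trimesh.load("YNKZHRPO_upper.obj", process=False)  # keep vertex order
with open("YNKZHRPO_upper.json") as f:
    ann = json.load(f)

labels = np.asarray(ann["labels"])        # FDI label per vertex, 0 = gingiva
instances = np.asarray(ann["instances"])  # instance id per vertex, 0 = gingiva
assert len(labels) == len(mesh.vertices)  # one entry per vertex

# The unique non-zero instance ids give the number of teeth in the scan.
tooth_ids = np.unique(instances[instances > 0])
print(f"{len(tooth_ids)} teeth in this jaw")

# Example: centroid of one tooth instance (used by the localization metric).
centroid = mesh.vertices[instances == tooth_ids[0]].mean(axis=0)
```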
### Challenge setup
The 3DTeethSeg'22 challenge was organized as a Satellite Event at MICCAI 2022, with a specific focus on algorithms addressing teeth detection, segmentation, and labeling using intra-oral 3D scans. The challenge comprised three distinct phases: one training phase and two testing phases. During the training stage, participants were provided access to the scans along with their corresponding annotations. This allowed them to design and train their algorithms using the provided data. The first testing phase involved a preliminary evaluation, where participants could submit their algorithms and assess their performance on a limited dataset of 10 scans. In the final testing phase, participants were not granted access to the test scans directly. Instead, they were required to submit their code within a Docker container. The Docker containers were then evaluated on hidden test data, preventing any retraining on the test data or overfitting through fine-tuning. All the training and testing data, along with their annotations, are now publicly available. Additionally, we have open-sourced the data processing and evaluation scripts to facilitate further research and development. All the materials related to the 3DTeethSeg'22 challenge can be accessed at the following link: [https://github.com/abenhamadou/3DTeethSeg22_challenge](https://github.com/abenhamadou/3DTeethSeg22_challenge)

Figure 3: Frontal and occlusal rendering of annotated jaws for 6 randomly selected patients.
### Evaluation metrics
#### 3.2.1 Teeth localization accuracy
- Teeth localization accuracy (TLA): the mean of the normalized Euclidean distances between ground truth (GT) teeth centroids and the closest localized teeth centroids. Each computed Euclidean distance is normalized by the size of the corresponding GT tooth. In case of no centroid (e.g., the algorithm crashes or produces no output for a given scan), a nominal penalty of 5 per GT tooth is given, corresponding to a distance of 5 times the actual GT tooth size. As the number of teeth per patient is variable, the mean is computed over all gathered GT teeth in the two testing sets.
#### 3.2.2 Teeth segmentation accuracy
- Teeth segmentation accuracy (TSA): computed as the average F1-score over all tooth instance point clouds. The F1-score of each tooth instance is measured as:
\[F1=2\times\frac{precision\times recall}{precision+recall}\]
#### 3.2.3 Teeth identification rate
- Teeth identification rate (TIR): computed as the percentage of true identification cases relative to all GT teeth in the two testing sets. A true identification is counted when, for a given GT tooth, the closest detected tooth centroid is localized at a distance under half of the GT tooth size and is attributed the same label as the GT tooth.
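The three metrics can be sketched as follows; this is an illustrative reading of the definitions above, not the official evaluation script (which is available in the challenge repository):

```
import numpy as np

def f1_score(pred_mask, gt_mask):
    """Per-instance F1 between two boolean vertex masks."""
    tp = np.sum(pred_mask & gt_mask)
    if tp == 0:
        return 0.0
    precision = tp / pred_mask.sum()
    recall = tp / gt_mask.sum()
    return 2 * precision * recall / (precision + recall)

def tla_terms(gt_centroids, gt_sizes, pred_centroids):
    """Normalized distance from each GT tooth to the closest prediction."""
    terms = []
    for c, size in zip(gt_centroids, gt_sizes):
        if len(pred_centroids) == 0:
            terms.append(5.0)  # nominal penalty when no output is produced
        else:
            d = np.linalg.norm(pred_centroids - c, axis=1).min()
            terms.append(d / size)
    return terms  # TLA is the mean over all GT teeth in the two testing sets

def is_identified(gt_centroid, gt_size, gt_label, pred_centroids, pred_labels):
    """TIR rule: the closest predicted centroid lies within half the GT
    tooth size and carries the same label."""
    d = np.linalg.norm(pred_centroids - gt_centroid, axis=1)
    j = d.argmin()
    return d[j] < 0.5 * gt_size and pred_labels[j] == gt_label
```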
## 4 Methods
Over five hundred requests for data download and registration were received for the 3DTeethSeg'22 challenge. Forty-four teams registered and participated in the preliminary phase, and only ten teams uploaded their submissions to the leaderboard for the final phase of the challenge. Table 1 provides a brief synopsis of the six top-ranked methods, which are presented in more detail below in this section.
### CGIP Team - Hoyeon Lim et al.
As shown in Fig. 4, the proposed method consists of two main modules. First, a tooth instance segmentation pipeline predicts tooth instance labels for each vertex of the dental mesh. Then, the Point Grouping Module takes a sampled point cloud and outputs tooth semantic labels and tooth instance labels.
#### 4.1.1 Tooth instance segmentation pipeline
As shown in Figure 4(a), the proposed tooth instance segmentation pipeline takes the dental mesh and outputs tooth instance labels for each vertex of the dental mesh. The Tooth Group Network accepts features of sampled points and generates tooth instance labels. Two sampling methods are used to obtain the sampled points.
Figure 4: Proposed approach by CGIP team. (a) The tooth instance segmentation pipeline predicts tooth instance labels for each vertex of the dental mesh. (b) The Point Grouping Module takes a sampled point cloud and outputs tooth semantic labels and tooth instance labels. These tooth instance labels are then refined by the Tooth Cropping Module. The color of the tooth instance label represents each individual tooth instance, while the color of the tooth semantic label indicates the tooth class. For example, the green point in the tooth semantic label corresponds to the canine tooth.

To obtain the final instance segmentation result, it is necessary to predict tooth instance labels for every vertex of the dental mesh. The tooth instance label of a non-sampled point is determined by assigning it the label of its nearest neighbor point. Due to the nature of the 3D scanner, the sampling rate of the dental mesh is high near the boundary. Therefore, points near the boundary may be associated with multiple labels, which prevents obtaining fine-grained tooth instance labels. To address this issue, a Boundary Aware Point Sampling method is proposed, which aims to increase the number of sampled points near the boundary. Initially, \(n\) points are sampled by applying the Farthest Point Sampling technique to the vertices of the dental mesh. The Tooth Group Network takes these sampled points as input and generates tooth instance labels for them. By examining the predicted tooth instance labels, points located in close proximity to the boundary can be identified. Subsequently, additional points are sampled near the boundary using the Boundary Aware Point Sampling approach. Another instance of the Tooth Group Network is then employed to generate tooth instance labels for these newly sampled points situated near the boundary.
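Farthest Point Sampling itself is standard; a plain NumPy version is sketched below for reference (the boundary-aware resampling then adds extra points around the predicted label boundaries):

```
import numpy as np

def farthest_point_sampling(points, n_samples, seed=None):
    """Greedy FPS: each new sample maximizes its distance to those chosen."""
    rng = np.random.default_rng(seed)
    chosen = [int(rng.integers(len(points)))]   # random initial seed point
    dist = np.linalg.norm(points - points[chosen[0]], axis=1)
    for _ in range(n_samples - 1):
        idx = int(dist.argmax())                # farthest remaining point
        chosen.append(idx)
        dist = np.minimum(dist, np.linalg.norm(points - points[idx], axis=1))
    return np.array(chosen)                     # indices into `points`
```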
To derive the final tooth instance segmentation result, tooth instance labels are aggregated from both Farthest Point Sampling and Boundary Aware Point Sampling. The class of each tooth instance is determined by a majority vote among the tooth semantic labels within the corresponding tooth instance. The tooth semantic labels are obtained by feeding the points sampled through Farthest Point Sampling into the Tooth Group Network (see Fig. 4(b)).
#### 4.1.2 Tooth Group Network
As depicted in Fig. 4(b), the Tooth Group Network is composed of a Point Grouping Module (PGM) and a Tooth Cropping Module (TCM). The backbone network of both PGM and TCM is the Point Transformer (Zhao et al., 2021). The difference is that PGM has a regression head for offset prediction and a classification head for tooth semantic label prediction, while TCM has a single head to infer a tooth-gingiva mask.
PGM follows a process similar to PointGroup (Jiang et al., 2020). The backbone network of PGM takes a sampled point cloud that contains the coordinates and normals of \(n\) points. It predicts a tooth semantic label and an offset for each point. Then, a shifted point cloud is obtained by moving each point toward its tooth center according to the offsets. Points that are predicted as gingiva are filtered out of this point cloud. The tooth instance labels are obtained by clustering the shifted point cloud, because points closer to each other in the shifted point cloud are likely to belong to the same instance. DBSCAN (Ester et al., 1996) is used to group the 3D points. This clustering-based tooth instance labeling process is robust because each tooth instance inherently has a compact, roughly cylindrical shape that is easy to group.
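The clustering step can be reproduced with the reference DBSCAN implementation in scikit-learn; `eps` and `min_samples` below are illustrative values, not the team's actual settings:

```
import numpy as np
from sklearn.cluster import DBSCAN

def group_instances(points, offsets, is_gingiva, eps=0.05, min_samples=10):
    """Cluster the shifted point cloud (points moved toward their predicted
    tooth centers) into tooth instances; -1 marks gingiva and DBSCAN noise."""
    shifted = points + offsets
    tooth = ~is_gingiva                  # drop points predicted as gingiva
    labels = np.full(len(points), -1, dtype=int)
    labels[tooth] = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(
        shifted[tooth]
    )
    return labels                        # instance id per point
```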
In TCM, the sampled point cloud is cropped around the center points of the predicted tooth instances, and the crops are fed into the backbone network of TCM. As a result, a tooth-gingiva mask is generated as output. This mask is used to refine the tooth instance labels of PGM. The tooth instance label of a point is changed to gingiva if it is predicted as tooth in PGM but as gingiva in TCM. If the tooth instance label of a point is predicted as gingiva in PGM but as tooth in TCM, the tooth instance label of the point is determined by the label of its nearest neighbor point.
To prevent a decrease in tooth instance segmentation accuracy near the boundary, where the tooth segmentation label changes, the contrastive boundary learning framework (Tang et al., 2022) is adopted. This framework makes two points near the boundary have similar features if they have the same label. Conversely, if they have different labels, the backbone network learns to output contrastive features for the two points. The regression head for offsets and the classification head for tooth semantic labels can take advantage of these distinct features when predicting tooth instance labels near the boundary.
### FiboSeg Team - Mathieu Leclercq et al.
This work presents a deep learning-based method for 3D dental model segmentation. It consists of acquiring 2D views and extracting features from the surface, such as the normal vectors. The rendered images are analyzed with a 2D convolutional neural network, such as a U-Net.
#### 4.2.1 Rendering the 2D views
The PyTorch3D framework is used for rendering the 3D intraoral surface model from different viewpoints. The views are fed to a residual U-Net (Hatamizadeh et al., 2022) in an end-to-end training procedure. The rendering engine provides a map that relates pixels in the images to faces in the mesh, allowing rapid extraction of point data (normals, curvatures, labels, etc.) as well as setting information back into the mesh after inference. In order to get different views, random rotations are applied to the camera so that it moves on the surface of a unit sphere. For each snapshot, we generate two images.
Figure 5: Proposed approach by FiboSeg team. During training, the dental models are randomly rotated and random teeth are removed. The views are rendered from different viewpoints and colored using normal vectors encoded in the RGB components as well as a depth map used as a fourth component. The ground truth target images are also created during the rendering step. The images are fed to a Residual U-Net and the output and target images are used to compute the DiceCELoss. At inference time, a majority voting scheme is employed to assign labels to each face in the dental model. Subsequently, an island removal approach is applied to eliminate isolated regions, and the output boundaries of the segmented teeth are smoothed for a more refined result.
The first one contains the surface normals encoded in the RGB components, plus a depth map. The second one contains the ground truth label maps that are used as targets in the segmentation task. We set the resolution of the rendered images to 320 px. We use ambient lights so that the rendered images don't have any specular components.
#### 4.2.2 Training of the network
We augment the data by applying random rotations and randomly removing dental crowns (excluding wisdom teeth). The residual U-Net model is instantiated using the MONAI3 library with 4 input channels, 34 output channels, and encoder/decoder blocks with 2 residual units and channels 16, 32, 64, 128, 256 with stride 2.
Footnote 3: monai.io
We use the DiceCELoss, which computes the Dice Loss as well as Cross-Entropy Loss and returns the weighted sum of these two.
\[DiceCELoss=w_{0}\left(1-\frac{2\sum_{c=1}^{N}p_{c}y_{c}}{\sum_{c=1}^{N}p_{c}^{2}+\sum_{c=1}^{N}y_{c}^{2}}\right)-w_{1}\sum_{c=1}^{N}y_{c}\log(p_{c}) \tag{1}\]
The learning rate is set to \(10^{-4}\) using the AdamW optimizer. One important thing to note is that there is no prior pre-processing of the mesh, _i.e.,_ no sub-sampling of points/faces, nor any classification task to identify upper or lower jaws. The training learns to identify 34 different labels corresponding to the upper and lower crowns. We use one-hot encoding for the 34 different classes: 32 different crowns, in addition to the gum and the background.
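As described, the model and loss map directly onto MONAI components; the following sketch uses the hyper-parameters stated above and assumes everything else (batch shapes, default Dice/CE weights):

```
import torch
from monai.losses import DiceCELoss
from monai.networks.nets import UNet

model = UNet(
    spatial_dims=2,
    in_channels=4,                   # surface normals (RGB) + depth map
    out_channels=34,                 # 32 crowns + gum + background
    channels=(16, 32, 64, 128, 256),
    strides=(2, 2, 2, 2),
    num_res_units=2,
)

# Weighted sum of Dice loss and cross-entropy, as in Eq. (1).
loss_fn = DiceCELoss(to_onehot_y=True, softmax=True)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

def train_step(images, targets):
    """images: (B, 4, 320, 320); targets: (B, 1, 320, 320) integer labels."""
    optimizer.zero_grad()
    loss = loss_fn(model(images), targets)
    loss.backward()
    optimizer.step()
    return loss.item()
```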
#### 4.2.3 Prediction
The prediction has three major steps: (1) render 2D views from the 3D object; (2) run inference on the 2D views; (3) map the information back onto the 3D mesh. The PyTorch3D rasterizer returns a mapping that keeps track of the nearest face at each pixel. After running inference on the 2D views, we use a weighted majority voting scheme to put the information back into the 3D mesh, as a single face may be rendered in two separate views. Faces that are hit by zero pixels are given the value \(-1\).
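The voting step amounts to accumulating per-class scores over the faces visible in each view; a schematic NumPy version is shown below, where the pixel-to-face maps come from the rasterizer's `pix_to_face` output:

```
import numpy as np

def vote_faces(pix_to_face_maps, pred_maps, n_faces, n_classes=34):
    """Accumulate per-pixel predictions onto mesh faces across all views.

    pix_to_face_maps: list of (H, W) int arrays, -1 where no face is hit.
    pred_maps:        list of (H, W) int arrays of predicted class labels.
    """
    votes = np.zeros((n_faces, n_classes), dtype=np.int64)
    for p2f, pred in zip(pix_to_face_maps, pred_maps):
        hit = p2f >= 0
        np.add.at(votes, (p2f[hit], pred[hit]), 1)  # scatter-add the votes
    face_labels = votes.argmax(axis=1)
    face_labels[votes.sum(axis=1) == 0] = -1        # faces hit by no pixel
    return face_labels
```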
#### 4.2.4 Post-Processing
In the event that some faces of the surface are not assigned any label at the end of the prediction, we apply an 'island removal' approach that assigns the closest connected label. Finally, we apply a morphological closing operation to smooth the boundaries of the segmented teeth.
### IGIP team - Shaojie Zhuang et al.
As shown in Fig. 6, the proposed method is a multi-stage method for accurate teeth segmentation. First, the point cloud is down-sampled to \(N=32768\) points as the input. Then, the input is separated into teeth and gingiva. Centroids are predicted on the teeth points, around which patches are cropped, and the individual tooth in each patch is segmented. Next, teeth classification is done by combining local and global features, followed by a post-processing stage to correct potential classification errors.

**Teeth-gingiva separation.** In the first step, a binary classification between teeth and gingiva is made to remove most non-tooth points from the point cloud, in order to reduce the interference caused by the model base.
**Centroids prediction.** Using PointNet++ (Qi et al., 2017b), the input point cloud \(P\) with \(N\) points is down-sampled to a point cloud \(P^{\prime}\) with \(N^{\prime}\) points. Each point belonging to \(P^{\prime}\) is regressed to obtain an offset vector \(\Delta P^{\prime}\). The predicted centroids are finally obtained by \(\hat{C}=P^{\prime}+\Delta P^{\prime}\). Similar to Cui et al. (2020), for each predicted centroid \(\hat{c}_{i}\in\hat{C}\), the closest ground truth centroid \(c_{i}\) is its target, and the network uses the following loss function for supervision:
\[\begin{split} L_{cent}=&\frac{1}{M}\sum_{i=1}^{M}[L _{1}^{smooth}(\hat{c}_{i}-c_{i})+\lambda\frac{\left\lVert\hat{c}_{i}-c_{i1} \right\rVert_{2}}{\left\lVert\hat{c}_{i}-c_{i2}\right\rVert_{2}}]\\ &+L_{CD}(\hat{C},C),\end{split} \tag{2}\]
where the smooth L1 loss supervises the distance between the predicted centroids and the ground truth centroids; \(c_{i1}\) and \(c_{i2}\) are the ground truth centroids with the minimum and second minimum distance from \(\hat{c}_{i}\), respectively, used to push a predicted centroid away from the other ground truth centroids; \(\lambda\) is set to 0.2 after validation; and \(L_{CD}\) is the chamfer distance loss. After the prediction, the dense centroids are clustered using the density peaks clustering algorithm (Rodriguez & Laio, 2014) to get the final centroids.
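A PyTorch sketch of the supervision in Eq. (2), written directly from the formula; reduction choices and batch handling are assumptions:

```
import torch
import torch.nn.functional as F

def centroid_loss(pred, gt, lam=0.2):
    """pred: (M, 3) predicted centroids; gt: (K, 3) GT centroids, K >= 2."""
    d = torch.cdist(pred, gt)                 # (M, K) pairwise distances
    two_nn = d.topk(2, dim=1, largest=False)  # closest and second closest
    nearest = gt[two_nn.indices[:, 0]]        # target centroid per prediction
    loss = F.smooth_l1_loss(pred, nearest)
    loss = loss + lam * (two_nn.values[:, 0] / two_nn.values[:, 1]).mean()
    # Symmetric chamfer distance between the two centroid sets.
    chamfer = d.min(dim=1).values.mean() + d.min(dim=0).values.mean()
    return loss + chamfer
```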
Figure 6: Proposed approach by IGIP team. The centroids are predicted for each tooth, based on which patches are cropped, and then the curvature on the tooth crown is removed. Next, the tooth labels and masks are predicted and mapped back to the original model in the patch segmentation stage. Finally, a post-processing stage is applied to fix erroneous labels.

**Patch segmentation.** Patches are cropped around each predicted centroid. The patch size is set to \(N/8\), which ensures that at least one full tooth is contained in each patch. A distance weight is attached to each point to mark the tooth, calculated as:
\[w_{s_{i}}=e^{-2\times\|s_{i}-\hat{c}_{i}\|_{2}}, \tag{3}\]
where \(s_{i}\) is the \(i\)-th point in a patch, and \(\hat{c}_{i}\) is the predicted centroid. Besides, curvature, as a feature, is able to strengthen the saliency of the teeth-gingiva boundary. The curvature on the tooth crowns is removed using the binary mask obtained in the first step. The output is supervised by a cross-entropy loss. In addition, in case multiple centroids exist within one tooth, an overlap detection for patches is performed: if the predicted masks of two patches overlap substantially, they are merged into one patch.
**Teeth classification.** The importance of teeth classification is often overlooked in most algorithms. It is common to output point-wise labels for a whole model with instance segmentation networks, but such algorithms do not take the features of an individual tooth into account. Some methods, like Im et al. (2022) and Lian et al. (2019), only classify a tooth into 16, 14, or fewer classes, ignoring the quadrant in which the tooth is located. Ma et al. (2020) use teeth feature vectors with neighborhood relations for better labeling accuracy, but this relies on a regular teeth distribution.
To emphasize classification, the method proposes a new network and a post-processing stage to fix potential errors. When classifying a single tooth in a patch, the network combines the shape features extracted from the patch with a tooth mask, and the position features indicating the patch's location on the model. Using the shape features or position features alone is not enough for accurate teeth classification. These two features are concatenated and fed into a fully-connected layer to obtain a 33-D output (32 teeth and one background). This is a more human-like way to address the classification problem. The output is supervised by a cross-entropy loss. No prior information about whether the model belongs to the upper or lower jaw is needed, because the incisors of the upper and lower jaws have the most significant differences in shape and size, and the global feature extracted in this step captures this.
**Post-processing.** Deep learning methods do not always give the right label predictions, so a post-processing stage using the dental arch curve is proposed to correct potential classification errors. First, the predicted centroids are projected onto the \(xOy\) plane and a parabolic curve is fitted as the dental arch curve. Then, based on the curve, the relative positions of the teeth are calculated, and some typical errors can be fixed. For example, if two teeth have the same label, their labels can be corrected through the sorted label sequence; if the labels are disordered, they can be reordered based on the order of the teeth.
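The arch-curve fit in this post-processing step can be as simple as a least-squares parabola over the projected centroids; a sketch, with the subsequent error-fixing rules omitted:

```
import numpy as np

def dental_arch_order(centroids):
    """Project tooth centroids onto the xOy plane, fit a parabola
    y = a*x**2 + b*x + c, and order the teeth along the arch."""
    x, y = centroids[:, 0], centroids[:, 1]
    a, b, c = np.polyfit(x, y, deg=2)       # least-squares parabola
    order = np.argsort(x)                   # left-to-right along the arch
    residual = y - (a * x**2 + b * x + c)   # distance cue for outlier teeth
    return order, residual
```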
### TeethSeg team - Tudor Dascalu et al.
The team developed a multi-stage pipeline that aimed to label and segment teeth from 3D dental cast meshes (Fig. 7). The first step consisted of coarse segmentation of the dental structures. Next, the resulting outputs were refined using analysis of local geometric features.
#### 4.4.1 Coarse segmentation
The coarse segmentation phase consisted of indirect segmentation of the dental structures. The meshes were converted to binary volumetric images. Then, the resulting binary images featuring dental casts were segmented using the 3D U-Net architecture. Separate U-Net models were trained for the lower and upper jaws. Each model produced an N-dimensional 3D array of the same size as the input binary image of the dental cast. Individual teeth masks account for \(N-1\) of the channels in the array, while one mask corresponds to non-dental structures.
#### 4.4.2 Fine segmentation
The fine segmentation stage was introduced in order to tackle the loss of accuracy around tooth-tooth and tooth-gum edges, caused by the fact that the resolution of the binary images was much lower than the resolution of the dental casts. To generate the fine segmentation, the input dental cast was labeled using the binary U-Net result, by finding the closest grid point to a given mesh vertex. This labeling was coarse because of the aforementioned resolution discrepancy between the binary volumetric images and the meshes. In the refinement stage, the participants defined a function \(f\) that enhanced vertices close to edge regions. The output of the function \(f\) for a given vertex \(v\) was based on the convexity of the neighborhood surrounding \(v\). Then, the vertices of the dental cast were labeled using the Random Walker algorithm. The seed points were set to vertices belonging to the dental crown and gum regions, extracted from the coarse segmentation output. For each non-seed vertex, random walkers started navigating the mesh space until they reached a seed point. The movement of the walkers was steered using the function \(f\), such that they were less likely to step over edge regions. The results of the Random Walker algorithm were used as the final outputs of the framework.

Figure 7: Proposed approach by TeethSeg team. The mesh is segmented using the U-Net model. The inaccuracies around tooth-tooth and tooth-gum edges are corrected using local geometric features.
### OS team - Tae-Hoon Yong et al.
As shown in Fig. 8, the proposed approach consists of two stages: teeth centroid prediction and classification on a two-dimensional image using the Federation Dentaire Internationale (FDI) dental numbering system, followed by individual teeth segmentation on the mesh surfaces.
#### 4.5.1 Teeth centroids prediction and classification
Given an input mesh model, we first removed the outliers to obtain a normalized model with some noise removed. To identify the centroids of each tooth, we captured a two-dimensional teeth image from a top-view rendering perpendicular to the occlusal plane. From the captured 512\(\times\)512 image, we predicted the centroids of each tooth using a heatmap prediction method based on high-resolution networks (Wang et al., 2020), which consist of parallel subnetworks fusing low-to-high-resolution representations. In addition to detecting the centroids, each landmark was also classified over 16 channels. As the predicted centroids showed a clustering tendency, we applied the DBSCAN algorithm (Schubert et al., 2017) to all predicted centroids to remove redundant tooth centroids. To restore the three-dimensional coordinates closest to the two-dimensional coordinates among the points located in the crown, the three-dimensional coordinates were calculated by projecting the 3D coordinates into the two-dimensional space.

Figure 8: Proposed approach by OS team. In stage 1, we obtained a 2D captured image from the input mesh, pre-processed to remove outliers. A deep learning model with a 2D encoder-decoder structure predicted the centroids and FDI numbers of each tooth from the 2D images. In stage 2, we acquired a cropped mesh for each tooth based on the results of stage 1, from a mesh decimated to fewer than 50,000 faces. After each crown region segmented from the cropped mesh through the crown segmentation deep learning model was restored and aggregated onto the original mesh, the aggregated model was back-projected to the original using the K-Nearest Neighbor (KNN) algorithm.
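One way to realize this 2D-to-3D restoration step is to project all vertices into the top-view image plane and take, for each detected 2D centroid, the nearest projected vertex; the projection function below is an assumption standing in for the team's camera model:

```
import numpy as np

def backproject_centroids(centroids_2d, vertices, world_to_pixel):
    """centroids_2d: (T, 2) detected pixel coordinates.
    vertices: (V, 3) mesh vertices; world_to_pixel: (V, 3) -> (V, 2)."""
    proj = world_to_pixel(vertices)          # top-view projection
    out = []
    for c in centroids_2d:
        j = np.linalg.norm(proj - c, axis=1).argmin()
        out.append(vertices[j])              # closest crown point in 3D
    return np.stack(out)                     # (T, 3) tooth centroids
```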
#### 4.5.2 Individual teeth segmentation
Before cropping each tooth based on the stage 1 results, we decimated the input mesh model to 50,000 faces or fewer to compute efficiently. Based on the nearest adjacent centroid landmarks, we calculated the radius of each tooth for cropping the mesh into an ellipse or circle shape. The circle shape was applied to the molars, and the ellipse shape was applied to the other teeth. To segment the tooth region in the cropped individual tooth faces, we used binary tooth segmentation methods improving on the implementation of the graph-constrained learning module (Lian et al., 2020; Wu et al., 2022). Based on the results of the segmented individual teeth, we refined the results to obtain accurate crown areas using the graph-cut algorithm (Lian et al., 2020). After performing the refinement on each individual tooth area, we mapped each result onto the decimated model and aggregated them. Finally, the decimated models were up-sampled to the original scan model using the K-Nearest Neighbor (KNN) algorithm (Johnson et al., 2019), which preserved the results of the decimated model.
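The final up-sampling step is a nearest-neighbour label transfer from the decimated mesh back to the full-resolution scan; a scikit-learn sketch (the paper uses the GPU-accelerated KNN of Johnson et al., 2019):

```
from sklearn.neighbors import KNeighborsClassifier

def upsample_labels(dec_vertices, dec_labels, orig_vertices, k=3):
    """Propagate per-vertex labels from the decimated mesh to the original."""
    knn = KNeighborsClassifier(n_neighbors=k)
    knn.fit(dec_vertices, dec_labels)
    return knn.predict(orig_vertices)  # label per original vertex
```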
### Chompers team - Niels van Nistelrooij et al.
#### 4.6.1 3D Scan pre-processing
An intra-oral scan was represented by its vertices, with the coordinates and normals as vertex features. During pre-processing, the vertices were subsampled by overlaying a regular grid and sampling one point from each occupied grid cell. This resulted in a uniform-density point cloud with a variable number of points that remains true to scale. Random data augmentations scaled the coordinates, flipped the scan left to right, and rotated it around the longitudinal axis. The annotations on the source scan were used to determine the centroid and FDI label of each tooth instance. The 32 possible labels were subsequently translated to 7 classes by removing the upper/lower and left/right distinctions. Furthermore, the few third molars were translated to second molars.
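The grid subsampling described here can be written compactly in NumPy; `cell` is an assumed grid resolution:

```
import numpy as np

def grid_subsample(points, cell=0.5):
    """Keep one point per occupied grid cell, yielding a uniform-density
    point cloud whose size varies with the scanned surface area."""
    cells = np.floor(points / cell).astype(np.int64)
    # The first occurrence in each unique cell picks the representative point.
    _, keep = np.unique(cells, axis=0, return_index=True)
    return np.sort(keep)  # indices of the retained points
```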
#### 4.6.2 Proposed model architecture
The proposed model is largely inspired by TSegNet (Cui et al., 2020), which split the problem into two stages for centroid prediction and tooth segmentation, respectively.
_Encoder-Decoder Network._ An encoder-decoder architecture that was repeatedly used throughout the model was the Stratified Transformer (Lai et al., 2022). This Vision Transformer for point clouds uses window-based multi-head self-attention to learn contextual features. Its encoder path uses shifted windows in subsequent Transformer blocks to efficiently learn long-range dependencies (Liu et al., 2021). After each set of blocks, the current point cloud was subsampled with Farthest Point Sampling. The initial point to seed Farthest
Point Sampling was sampled randomly, such that the subsampled point cloud at the end of the encoder path was different for each forward pass.
In the decoder path, a smaller point cloud was upsampled using weighted 3-nearest neighbour interpolation. Its features were then summed to the features of a point cloud following a skip connection from the encoder path. The decoder path resulted in the same point cloud as the input, but with contextual features.
_Prediction of tooth centroids._ First, the input point cloud was processed by the encoder path of a Stratified Transformer. Each point of the resulting subsampled point cloud had 256 features. These features were further processed by two fully-connected heads.
The first head predicted the Euclidean distance from each point to its closest tooth centroid and was supervised with a smooth L1 loss. These distances were subsequently used to filter points in the periphery, such that only points on the dental arch were retained.
The second head predicted the x-, y-, and z-offsets from each point to its closest tooth centroid. These predictions were compared to the ground truth with the following loss function:
\[\mathcal{L}_{CP}=\overbrace{\frac{1}{K}\sum_{i=1}^{N}\frac{1}{r_{i}^{(1)}}\left\lVert\hat{p}_{i}+\hat{o}_{i}-c_{i}^{(1)}\right\rVert_{2}^{2}}^{\text{Normalized Euclidean}}+\overbrace{\frac{1}{K}\sum_{i=1}^{N}r_{i}^{(2)}\,\frac{\left\lVert\hat{p}_{i}+\hat{o}_{i}-c_{i}^{(1)}\right\rVert_{2}}{\left\lVert\hat{p}_{i}+\hat{o}_{i}-c_{i}^{(2)}\right\rVert_{2}}}^{\text{Separation}}\]
where
* \(K\) is the number of ground-truth tooth centroids;
* \(\hat{p}_{i}\in\mathbb{R}^{3}\) are the coordinates of a subsampled point after filtering;
* \(\hat{o}_{i}\in\mathbb{R}^{3}\) are the corresponding predicted offsets;
* \(c_{i}^{(k)}\in\mathbb{R}^{3}\) is the \(k^{\text{th}}\) closest ground-truth tooth centroid from \(\hat{p}_{i}\); and
* \(r_{i}^{(k)}\in\mathbb{R}\) is the radius of the tooth instance of which \(c_{i}^{(k)}\) is the centroid.
The Normalized Euclidean function computes the squared Euclidean distance between a point and its closest ground-truth tooth centroid. A bias in favor of larger teeth was remedied by normalizing by the radius of the tooth instance which has the closest centroid. Furthermore, the ground-truth centroid was chosen from the point's position, not from the point's predicted closest centroid.
The Separation function computes the difference in distances from a point to its closest and second-closest ground-truth centroids. This punishes the model whenever it predicts a further centroid. The normalization and separation resulted in a much better stratification of the predicted centroids among the tooth instances.
The Centroid Prediction stage was run six times to collect many centroid predictions. Finally, the DBSCAN algorithm was applied to cluster the centroids to one point per tooth instance (Ester et al., 1996b).
_Tooth Segmentation_. The first step processed the input point cloud with a Stratified Transformer, which resulted in the same point cloud with 48 features for each point. For each predicted centroid, 3,528 nearest neighbours were sampled to form a proposal. Each point of such a proposal had 52 features: 48 contextual features, 3 global coordinates, and the local distance from the predicted centroid to the point.
Each proposal was then independently processed by a cascade of two Stratified Transformers. The first network predicted the binary segmentation of the foreground tooth in the proposal. This 1-channel prediction was channel-wise concatenated to form proposals with points that have 53 features.
These proposals were subsequently processed by the second network, which also predicted the foreground segmentation. The predictions of both networks were supervised by the binary cross-entropy loss.
The second network additionally predicted the position of the foreground tooth, given the segmentation from the first network. To do so, it first applied global average pooling to the features in the most latent dimension, followed by fully-connected layers to return seven logits per proposal. This classification was supervised with the categorical cross-entropy loss. Because the classification head converged much faster than the segmentation heads, its learning rate was reduced by a factor of ten.
The Tooth Segmentation stage was run twice to collect multiple proposal predictions per tooth instance. For each run of the Tooth Segmentation stage, the same predicted centroids were used.
#### 4.6.3 Post-processing
First, pairs of proposals were merged whenever their foreground points had an intersection-over-union of at least 0.35. Segmentation and class logits from points that occurred in both proposals were summed. This naturally merged proposals from multiple runs and proposals with the same foreground tooth, thereby increasing effectiveness.
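The merging rule can be sketched as follows, with proposals represented as boolean masks over the sampled points and per-point logits summed on overlap:

```
import numpy as np

def merge_proposals(masks, logits, iou_thresh=0.35):
    """masks: list of boolean point masks; logits: matching per-point arrays.
    Greedily merges any pair of proposals with IoU >= iou_thresh."""
    merged = True
    while merged:
        merged = False
        for i in range(len(masks)):
            for j in range(i + 1, len(masks)):
                inter = np.sum(masks[i] & masks[j])
                union = np.sum(masks[i] | masks[j])
                if union > 0 and inter / union >= iou_thresh:
                    masks[i] = masks[i] | masks[j]
                    logits[i] = logits[i] + logits[j]  # sum shared evidence
                    del masks[j]
                    del logits[j]
                    merged = True
                    break
            if merged:
                break
    return masks, logits
```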
Then, the segmentations were interpolated to the points of the source point cloud. To this end, the 20% nearest neighbours were sampled around each predicted centroid. The segmentation logits were then piece-wise linearly interpolated to the source points, falling back to 3-nearest neighbours for extrapolation. The classification logits were not interpolated.
The next step translated the predicted positions back to FDI labels. Whether the jaw was an upper or lower jaw was prior information, and the left/right distinction was made by comparing a predicted centroid to the centroids of the central incisor proposals. The third molars were retrieved by incrementing the label of a second molar whenever there was an additional second molar anterior to it.
Finally, the interpolated tooth proposals were projected back to the source point cloud. Source points not present in any proposal or with only negative segmentation logits were attributed the gingiva label. Other source points were attributed the FDI label of the proposal that gave the point the highest segmentation logit. Furthermore, to allow for multiple tooth instances with the
same FDI label within a scan, the index of the attributing proposal was also returned.
#### 4.6.4 Implementation Details
The Centroid Prediction stage was trained for 500 epochs on 1191 scans from the challenge; 7 scans with braces were removed, as well as 2 scans with severely misaligned teeth.
The Tooth Segmentation stage was trained for 100 epochs on 1072 scans, where 119 scans were left for model selection. This 90%/10% split was determined with a second-order stratification based on the unique FDI labels of each scan (Szymanski and Kajdanowicz, 2017).
The AdamW optimizer was used with weight decay, as well as a cosine annealing learning rate scheduler with a linear warmup period (Loshchilov and Hutter, 2017). Lastly, all models were implemented from the ground up using PyTorch Lightning (Paszke et al., 2019).
## 5 Experimental results
We present the results of quantitative and qualitative evaluations. We first present the final ranking of the participating methods, as well as the results obtained for the different tasks of the challenge. Afterwards, we examine different cases to visually check the quality of the predictions obtained in the final phase of the challenge.
### Quantitative evaluation
The ranking shown in Table 2 is determined by the global score based on the metrics described in Section 3.2. It is worth noting that the ranking may differ depending on the specific task or metric being evaluated. In terms of overall performance, the method proposed by the CGIP team holds the top position. However, when focusing specifically on the teeth localization task, the FiboSeg team achieves the highest score, with an Exp(-TLA) of 0.9924. On the other hand, the CGIP team demonstrates exceptional performance in the teeth segmentation task, obtaining a TSA score of 0.9859. The IGIP team exhibits the best performance in the tooth labeling task, achieving a TIR score of 0.9289. These results emphasize the diversity and strengths of the different methods, showcasing their effectiveness in specific aspects of the challenge.
### Qualitative evaluation
Figure 9 provides a visual representation of the segmentation results obtained by the competing methods on four samples from the validation dataset. Overall, the visual evaluation of the obtained results aligns with the ranking provided in the quantitative evaluation. The CGIP team demonstrates superiority, particularly in the segmentation task, with consistently accurate segmentation results. However, it should be noted that the FiboSeg team exhibits lower segmentation accuracy, specifically at the gum-teeth border of most segmented teeth.
Missed tooth detections are observed across multiple teams, but they are more pronounced in the results of the IGIP team (sample d) and the OS team (samples a and b, for instance). In the case of the TeethSeg team, there are instances where the delineation of tooth boundaries appears inaccurate, as is evident in samples b, c, and d. These visual observations provide additional insights into the strengths and weaknesses of the competing methods.
## 6 Conclusion
The 3D Teeth Scan Segmentation and Labeling Challenge (3DTeethSeg'22) was conducted in conjunction with MICCAI 2022. The challenge provided a publicly available dataset consisting of 1800 intra-oral 3D scans obtained from 900 patients, which is currently the largest and most accurately annotated intra-oral 3D scan dataset. The challenge aimed to evaluate six algorithms on the teeth detection, segmentation, and labeling tasks. This paper presents an overview of the challenge setup, summarizes the participating algorithms, and compares their performance. During the final testing phase of the 3DTeethSeg'22 challenge, the CGIP team's algorithm achieved the highest overall score of 0.9539, making it the top-performing solution. Additionally, the same algorithm obtained the highest performance in the segmentation task, with a score of 0.9859 for the TSA metric. On the other hand, the FiboSeg team's algorithm excelled in the teeth detection task, securing the top position with a score of 0.9924 for the Exp(-TLA) metric. For the labeling task, the IGIP team's algorithm achieved the highest score, attaining a value of 0.9289 for the TIR metric.
Future directions could include the incorporation of more variabilities in the dataset, such as more challenging cases with missing or damaged teeth and ambiguous labeling scenarios to provide a more comprehensive evaluation and enhance the algorithms' capability to handle real-world scenarios effectively. Additionally, it is important to evaluate the accuracy and smoothness of the predicted gum/teeth boundaries in the next iteration of the challenge. In future iterations, evaluating the run-time and computational complexity of the algorithms would be beneficial, as these factors are key considerations for in
| **Team** | **Exp(-TLA)** | **TSA** | **TIR** | **Score** |
| --- | --- | --- | --- | --- |
| CGIP | 0.9658 | **0.9859** | 0.9100 | **0.9539** |
| FiboSeg | **0.9924** | 0.9293 | 0.9223 | 0.9480 |
| IGIP | 0.9244 | 0.9750 | **0.9289** | 0.9427 |
| TeethSeg | 0.9184 | 0.9678 | 0.8538 | 0.9133 |
| OS | 0.7845 | 0.9693 | 0.8940 | 0.8826 |
| Chompers | 0.6242 | 0.8886 | 0.8795 | 0.7974 |

Table 2: Obtained evaluation metrics for the participating teams. The given ranking is based on the final score (see last column); bold indicates the best value in each column.
Figure 9: Visual comparison of the obtained results by applying the competing methods on four samples, _i.e.,_ two lower (a,b) and two upper (c,d) jaws. The first row shows the ground truth related to teeth segmentation and labeling. The remaining rows show the results following the team global ranking.
|
2304.14764 | Bordism for the 2-group symmetries of the heterotic and CHL strings | In the presence of a nonzero B-field, the symmetries of the $\mathrm E_8\times \mathrm E_8$ heterotic string form a 2-group, or a categorified group, as do the symmetries of the CHL string. We express the bordism groups of the corresponding tangential structures as twisted string bordism groups, then compute them through dimension 11 modulo a few unresolved ambiguities. Then, we use these bordism groups to study anomalies and defects for these two string theories. | Arun Debray | 2023-04-28T11:18:52Z | http://arxiv.org/abs/2304.14764v1 |
###### Abstract.
In the presence of a nonzero B-field, the symmetries of the \(\mathrm{E}_{8}\times\mathrm{E}_{8}\) heterotic string form a 2-group, or a categorified group, as do the symmetries of the CHL string. We express the bordism groups of the corresponding tangential structures as twisted string bordism groups, then compute them through dimension 11 modulo a few unresolved ambiguities. Then, we use these bordism groups to study anomalies and defects for these two string theories.
###### Contents
* 0 Introduction
* 1 Tangential structures for heterotic and CHL string theories
* 2 Bordism computations
* 3 Consequences in string theory
## 0. Introduction
String theory has long been a place where higher-categorical structures in mathematics meet their applications. This is true for a few different reasons, but one crucial reason is that many fields in superstring and supergravity theories have mathematical incarnations that are higher-categorical objects, and so even precisely setting up mathematical questions coming out of string theory, let alone solving them, often requires engaging with or developing the foundations of various kinds of geometric objects with higher structure. This paper is concerned with the appearance of a higher structure called a _2-group_ in two specific string theories, and how including this structure affects computations of bordism groups for the tangential structures of these theories. These bordism groups control anomalies and extended objects for these theories. The main results of this paper are computations of bordism groups and their generating manifolds through dimension 11, except for a few ambiguities we did not address, for the tangential structures underlying these two string theories.
For the higher structures we investigate in this paper, the story begins with the _Kalb-Ramond field_, or the _B-field_. This is an analogue of the field strength of an electromagnetic field, represented as a closed differential 2-form with a quantization condition. Locality of quantum field theory means expressing the field strength of the electromagnetic field as a section of a sheaf, specifically as a connection on a principal \(\mathbb{T}\)-bundle, where \(\mathbb{T}\) is the circle group. For the B-field, everything is one degree higher: it comes to us as a closed differential 3-form with a quantization condition,
which we would like to express as a geometric object that sheafifies. This cannot be a connection on a principal \(G\)-bundle for a finite-dimensional Lie group \(G\); instead, one models the B-field as a connection on a \(\mathbb{T}\)_-gerbe_, which is a categorification of a principal \(\mathbb{T}\)-bundle. A \(\mathbb{T}\)-gerbe on a manifold \(M\) is, roughly speaking, a bundle of groupoids on \(M\) which is locally equivalent to \(\operatorname{pt}/\mathbb{T}\). There are several ways to make this precise; we discuss one, Murray's _bundle gerbes_[14], in Definition 1.1.
In this article, we consider higher structures in two string theories: the \(\operatorname{E}_{8}\times\operatorname{E}_{8}\) heterotic string, and the Chaudhuri-Hockney-Lykken (CHL) string. The former is a ten-dimensional superstring theory whose low-energy limit is ten-dimensional \(\mathcal{N}=1\) supergravity, and the latter is a nine-dimensional theory obtained from the \(\operatorname{E}_{8}\times\operatorname{E}_{8}\) heterotic string theory by compactifying on a circle. Both of these theories have B-fields, but Green and Schwarz [15] showed that in order to cancel an anomaly, the B-field and the gauge field must satisfy a relation known as a _Bianchi identity_. Fiorenza-Schreiber-Stasheff [16] and Sati-Schreiber-Stasheff [17] describe how the Bianchi identity mixes the data of the B-field and the gauge field into data that can be interpreted as a connection on a principal bundle for a 2-group \(\mathbb{G}\), specifically a _string \(2\)-group_\(\mathcal{S}tr(G,\mu)\) associated to the data of a compact Lie group \(G\) and a class \(\mu\in H^{4}(BG;\mathbb{Z})\); typically, \(G\) is the gauge group and \(\mu\) is determined by the anomaly polynomial.
2-groups have been used in the theoretical and mathematical physics literature for some time now. This program began in earnest with work of Baez, Crans, Lauda, Stevenson, and Schreiber [1, 2, 1, 10, 11]; more recently, 2-groups, their symmetries, and their anomalies have made a resurgence in quantum field theory following work of Cordova-Dumitrescu-Intriligator [18] and Benini-Cordova-Hsin [18] identifying many examples of 2-group symmetries in commonly studied QFTs. See also Sharpe [19] and the references therein.
In the first part of this article, we introduce the Bianchi identity and 2-groups, then review work of Fiorenza-Schreiber-Stasheff [16] and Sati-Schreiber-Stasheff [16] mentioned above. These authors work in the setting of stacks on the site \(\mathcal{M}an\) of smooth manifolds; the data of the B-field \((Q,\Theta_{Q})\) and the principal \(G\)-bundle with connection \((P,\Theta_{P})\) on a manifold \(M\) refine to maps from \(M\) to classifying stacks of these data. The data of an identification of two differential characteristic classes associated to \(\Theta_{P}\) and \(\Theta_{Q}\) gives rise to
1. a principal \(\mathcal{S}tr(G,\mu)\)-bundle lifting \(P\) for a specified choice of \(\mu\) (Proposition 1.35), and
2. local data of solutions to the Bianchi identity (Proposition 1.37, [16, SS6.3]).
Inspired by this, we introduce the tangential structures \(\xi^{\operatorname{het}}\) and \(\xi^{\operatorname{CHL}}\), which are special cases of a general construction of Sati-Schreiber-Stasheff [16, Definition 2.8]: a \(\xi^{\operatorname{het}}_{n}\)-structure on a spin manifold \(M\) is data of a principal \(\mathbb{G}^{\operatorname{het}}_{n}\)-bundle, where \(\mathbb{G}^{\operatorname{het}}_{n}\coloneqq\mathcal{S}tr(\operatorname{ Spin}_{n}\times(\operatorname{E}_{8}\times\operatorname{E}_{8}\rtimes\mathbb{Z}/2),c_{1}+c_{2}-\lambda)\) (1.42), whose associated \(\operatorname{Spin}_{n}\)-bundle via the quotient \(\mathbb{G}^{\operatorname{het}}_{n}\to\operatorname{Spin}_{n}\) is the principal \(\operatorname{Spin}_{n}\)-bundle of spin frames (Definition 1.41). This is compatible as \(n\) varies, allowing us to stabilize and define a \(\xi^{\operatorname{het}}\)-structure as usual. The definition of \(\xi^{\operatorname{CHL}}\) in Definition 1.52, which coincides with \(B\text{String}^{2a}\) in [16, SS2.2.3], is analogous. Related tangential structures appear in [17, 16, 18, 18, 19].
Given a tangential structure, we can compute bordism groups, and indeed the point of this paper is to compute \(\xi^{\operatorname{het}}\) and \(\xi^{\operatorname{CHL}}\) bordism groups in low dimensions. These bordism groups can then be used to learn more about the \(\operatorname{E}_{8}\times\operatorname{E}_{8}\) heterotic and CHL strings. We have two primary applications in mind.
1. The _cobordism conjecture_ of McNamara-Vafa [11] is an application to the question of what kinds of spacetime backgrounds are summed over in quantum gravity. Such backgrounds are often taken to be manifolds or something closely related equipped with data of a tangential structure \(\xi\). The cobordism conjecture says that if \(\xi\) is the most general tangential structure which can appear in this way in any particular \(d\)-dimensional theory of quantum gravity, then \(\Omega^{\xi}_{k}=0\) for \(3\leq k\leq d-1\). We will see that \(\Omega^{\xi^{\text{het}}}_{k}\) and \(\Omega^{\xi^{\text{CHL}}}_{k}\) are often nonzero in that range. This is consistent with the cobordism conjecture: it suggests that \(\xi^{\text{het}}\) and \(\xi^{\text{CHL}}\) are not the most general tangential structures that can be summed over. Typically these bordism groups are killed by allowing singular manifolds corresponding to considering the theory with branes or other defects, so one can use bordism computations to predict new defects in string theories.
2. A broad class of \(n\)-dimensional quantum field theories comes with data of an _anomaly_, which in many cases can roughly be described as an \((n+1)\)-dimensional invertible field theory \(\alpha\). In some cases one wants to trivialize \(\alpha\), meaning exhibiting an isomorphism from \(\alpha\) to the trivial field theory. By work of Freed-Hopkins-Teleman [10] and Freed-Hopkins [10], invertible field theories can be classified using bordism group computations. For both the \(\operatorname{E}_{8}\times\operatorname{E}_{8}\) heterotic string and the CHL string, the bordism groups indicating a potential anomaly are nonzero, and it would be interesting to check whether the corresponding anomalies are nontrivial.
See SS3, as well as Questions 0.1 to 0.3 below, for more on these applications and what we can learn from our bordism computations.
Our main theorems are the following two computations of the \(\xi^{\text{het}}\) and \(\xi^{\text{CHL}}\) bordism groups in low dimensions.
**Theorem A**.: _For \(k\leq 10\), the \(\xi^{\text{het}}\)-bordism groups are:_
\[\begin{aligned}
\Omega^{\xi^{\text{het}}}_{0}&\cong\mathbb{Z} & \Omega^{\xi^{\text{het}}}_{6}&\cong\mathbb{Z}/2\\
\Omega^{\xi^{\text{het}}}_{1}&\cong\mathbb{Z}/2\oplus\mathbb{Z}/2 & \Omega^{\xi^{\text{het}}}_{7}&\cong\mathbb{Z}/16\\
\Omega^{\xi^{\text{het}}}_{2}&\cong\mathbb{Z}/2\oplus\mathbb{Z}/2 & \Omega^{\xi^{\text{het}}}_{8}&\cong\mathbb{Z}^{3}\oplus(\mathbb{Z}/2)^{\oplus i}\\
\Omega^{\xi^{\text{het}}}_{3}&\cong\mathbb{Z}/8 & \Omega^{\xi^{\text{het}}}_{9}&\cong(\mathbb{Z}/2)^{\oplus j}\\
\Omega^{\xi^{\text{het}}}_{4}&\cong\mathbb{Z}\oplus\mathbb{Z}/2 & \Omega^{\xi^{\text{het}}}_{10}&\cong(\mathbb{Z}/2)^{\oplus k}.\\
\Omega^{\xi^{\text{het}}}_{5}&\cong 0 &&
\end{aligned}\]
_Here, either \(i=1\), \(j=4\), and \(k=4\), or \(i=2\), \(j=6\), and \(k=5\)._
\(\Omega^{\xi^{\text{het}}}_{11}\) _is an abelian group of order \(64\) isomorphic to one of \(\mathbb{Z}/8\oplus\mathbb{Z}/8\), \(\mathbb{Z}/16\oplus\mathbb{Z}/4\), \(\mathbb{Z}/32\oplus\mathbb{Z}/2\), or \(\mathbb{Z}/64\)._
This is a combination of Theorems 2.62 and 2.74. In SS2.2.1, we find manifold representatives for all classes in \(\Omega^{\xi^{\text{het}}}_{k}\) for \(k\leq 10\) except potentially for two missing classes \(X_{8}\) and \(X_{9}\) of dimensions \(8\), resp. 9 and their products with \(S^{1}_{nb}\). These classes may or may not be zero depending on the fate of an Adams differential. In SS2.2.2, we find a manifold representing \(X_{8}\): if the unaddressed Adams differential vanishes, \(X_{8}\) should be added to the list of generators in SS2.2.1, and if the differential does not vanish, then \(X_{8}\) bounds as a \(\xi^{\text{het}}\)-manifold.
Our calculation of \(\xi^{\mathrm{CHL}}\)-bordism builds on work of Hill [11, Theorem 1.1], who computes \(\Omega_{*}^{\mathrm{String}}(B\mathrm{E}_{8})\) in dimensions \(14\) and below.
**Theorem B**.: _For \(k\leq 11\), there is an abstract isomorphism from \(\Omega_{*}^{\xi^{\mathrm{CHL}}}\) to the free and \(2\)-torsion summands of \(\Omega_{*}^{\mathrm{String}}(B\mathrm{E}_{8})\). Therefore, by Hill's computation [11], there are isomorphisms_
\[\begin{aligned}
\Omega_{0}^{\xi^{\mathrm{CHL}}}&\cong\mathbb{Z} & \Omega_{6}^{\xi^{\mathrm{CHL}}}&\cong\mathbb{Z}/2\\
\Omega_{1}^{\xi^{\mathrm{CHL}}}&\cong\mathbb{Z}/2 & \Omega_{7}^{\xi^{\mathrm{CHL}}}&\cong 0\\
\Omega_{2}^{\xi^{\mathrm{CHL}}}&\cong\mathbb{Z}/2 & \Omega_{8}^{\xi^{\mathrm{CHL}}}&\cong\mathbb{Z}\oplus\mathbb{Z}\oplus\mathbb{Z}/2\\
\Omega_{3}^{\xi^{\mathrm{CHL}}}&\cong\mathbb{Z}/8 & \Omega_{9}^{\xi^{\mathrm{CHL}}}&\cong\mathbb{Z}/2\oplus\mathbb{Z}/2\oplus\mathbb{Z}/2\\
\Omega_{4}^{\xi^{\mathrm{CHL}}}&\cong\mathbb{Z} & \Omega_{10}^{\xi^{\mathrm{CHL}}}&\cong\mathbb{Z}/2\oplus\mathbb{Z}/2\\
\Omega_{5}^{\xi^{\mathrm{CHL}}}&\cong 0 & \Omega_{11}^{\xi^{\mathrm{CHL}}}&\cong\mathbb{Z}/8.
\end{aligned}\]
This is a combination of Theorems 2.90 and 2.92. We also obtain some information about manifold representatives of generators of these groups.
The computational tool we use to prove Theorems A and B is standard: the Adams spectral sequence. This spectral sequence has seen plenty of applications in the mathematical physics literature, and there is a standard procedure reviewed by Beaudry-Campbell [1] for simplifying the \(E_{2}\)-page for a wide class of tangential structures, namely those which can be described as oriented, spin\({}^{c}\), spin, or string bordism twisted by a virtual vector bundle. For example, the twisted string bordism computations of [13, 14, 15] make use of this simplifying technique. Unfortunately, this procedure is unavailable to us: in Lemma 2.2, we prove that \(\xi^{\mathrm{het}}\) and \(\xi^{\mathrm{CHL}}\) cannot be described as twists of this sort. However, we are still able to describe them as twists in a more general sense due to Ando-Blumberg-Gepner-Hopkins-Rezk [1, 2]: adapting an argument of Hebestreit-Joachim [12], one learns that the Thom spectra for \(\xi^{\mathrm{het}}\) and \(\xi^{\mathrm{CHL}}\) can be produced as the _MTString_-module Thom spectra associated to certain maps to \(B\mathrm{GL}_{1}(\textit{MTString})\). Using this structure, in upcoming joint work with Matthew Yu, we are able to prove a theorem simplifying the calculation of the \(E_{2}\)-page:
**Theorem C** (Debray-Yu [4]).: _In topological degrees \(15\) and below, the \(E_{2}\)-pages of the Adams spectral sequences computing \(2\)-completed twisted string bordism for a class of twists including those for \(\xi^{\mathrm{het}}\) and \(\xi^{\mathrm{CHL}}\) can be computed as \(\mathrm{Ext}\) over the subalgebra \(\mathcal{A}(2)\) of the Steenrod algebra._
What we prove is more precise and holds in more generality; see Theorem 2.20 and Corollary 2.22 for that version of the result.1
Footnote 1: Since [4] is not available yet, we provide a proof sketch of the case we need in Remark 2.26.
The \(\mathcal{A}(2)\)-module \(\mathrm{Ext}\) groups we have to compute are simpler than what one a priori has to work with over the entire Steenrod algebra \(\mathcal{A}\). We do not need this simplification at odd primes; there the full Adams spectral sequence is easier to work with, and the absence of a simplification does not hinder us (though see also [4, SS3.2]).
We computed these bordism groups with applications to physics in mind, specifically to anomalies and the cobordism conjecture. We discuss some implications of our calculations in SS3; for example, one of the \(\mathbb{Z}/2\) summands of \(\Omega_{1}^{\xi^{\mathrm{het}}}\) corresponds to the non-supersymmetric \(7\)-brane recently discovered by Kaidi-Ohmori-Tachikawa-Yonekura [16]. We end this section of the introduction with some questions related to these physics predictions.
**Question 0.1**.: What does the Kaidi-Ohmori-Tachikawa-Yonekura 7-brane correspond to in Horava-Witten theory, and what does this look like in bordism? Horava-Witten [11, 12, 13] proposed that the \(\mathrm{E}_{8}\times\mathrm{E}_{8}\) heterotic string can be identified with a certain limit of M-theory compactified on an interval; thus this ought to correspond to a notion of bordism of manifolds with boundary. Conner-Floyd [10, SS16] define a notion of bordism of compact manifolds with boundary -- is this the correct kind of bordism for applications to McNamara-Vafa's conjecture?
We discuss some additional extended objects predicted by our bordism computations to exist in the \(\mathrm{E}_{8}\times\mathrm{E}_{8}\) heterotic and CHL strings in SS3.1.
**Question 0.2**.: Is the \(\mathbb{Z}/2\) symmetry exchanging the two \(\mathrm{E}_{8}\)-bundles in \(\mathrm{E}_{8}\times\mathrm{E}_{8}\) heterotic string theory anomalous? Because \(\Omega^{\xi^{\mathrm{het}}}_{11}\) is nonzero, we were unable to rule out this anomaly.
Witten [13, SS4] and Tachikawa-Yonekura [11] show that the \(\mathrm{E}_{8}\times\mathrm{E}_{8}\) heterotic string is anomaly-free in certain cases, but they do not address the \(\mathbb{Z}/2\) symmetry.
**Question 0.3**.: Does the CHL string have an anomaly? This anomaly could be nontrivial, because \(\Omega^{\mathrm{\xi}^{\mathrm{CHL}}}_{10}\cong\mathbb{Z}/2\oplus\mathbb{Z}/2\).
There is another application of twisted string bordism to physics that we did not address in this paper: studying elliptic genera, the Witten genus and related invariants, along the lines of, e.g., Bunke-Naumann [1], McTague [10], Han-Huang-Duan [12], and Berwick-Evans [1]. It would be interesting to study whether the calculations in this paper could be applied in similar contexts.
**Outline**.: We begin in SS1.1 by introducing the fields present in 10d \(\mathcal{N}=1\) supergravity, the low-energy limit of heterotic string theory. We discuss how the Green-Schwarz anomaly cancellation condition imposes an equation called the _Bianchi identity_ (1.10) on the fields in this theory. We then generalize this to a _twisted Bianchi identity_ (1.12) associated to data of a Lie group \(G\) and a class \(\mu\in H^{4}(BG;\mathbb{Z})\). In SS1.2, we relate these Bianchi identities to the presence of a 2-group symmetry in this field theory. We begin by reviewing 2-groups, their principal bundles, and their connections, and in Example 1.22 define the string cover \(\mathcal{S}tr(G,\mu)\) corresponding to a group \(G\) and a class \(\mu\in H^{4}(BG;\mathbb{Z})\). Then we review work of Fiorenza-Schreiber-Stasheff [13] and Sati-Schreiber-Stasheff [13] relating the Bianchi identity to twisted string structures. Using this, we define the heterotic tangential structure in Definition 1.41, which is the topological part of the structure necessary for defining \(\mathcal{N}=1\) supergravity. Then, in SS1.3, we introduce the CHL string and define the CHL tangential structure using what we learned in SS1.2.
In SS2, we compute the bordism groups \(\Omega^{\xi^{\mathrm{het}}}_{*}\) and \(\Omega^{\xi^{\mathrm{CHL}}}_{*}\) in low degrees. For the latter we are able to completely compute them in dimensions 11 and below, but for the former, we have only partial information above dimension 7, occluded by Adams differentials and an extension problem we could not solve. We begin in SS2.1 by discussing how to simplify the Thom spectra \(\mathit{MT}\xi^{\mathrm{het}}\) and \(\mathit{MT}\xi^{\mathrm{CHL}}\); we prove in Lemma 2.2 that a standard approach does not work, and so we use a different idea: construct \(\mathit{MT}\xi^{\mathrm{het}}\) and \(\mathit{MT}\xi^{\mathrm{CHL}}\) as _MTString_-module Thom spectra using machinery developed by Ando-Blumberg-Gepner-Hopkins-Rezk. We review this machinery and discuss how it leads to Corollary 2.22, a special case of the main theorem of our work [4] joint with Matthew Yu, simplifying the calculation of the \(E_{2}\)-page of the Adams spectral sequence at 2 for a wide class of twisted string bordism groups. Next, in SS2.2, we undertake this computation for \(\xi^{\mathrm{het}}\). We do not have such a simplification at odd primes, so in SS2.3 we press ahead directly with the Adams
spectral sequence for \(\xi^{\text{het}}\), proving in Theorem 2.74 that \(\Omega_{*}^{\xi^{\text{het}}}\) lacks odd-primary torsion in degrees 11 and below. Finally, in SS2.4 we run the analogous calculations for the CHL string, again using Corollary 2.22 at \(p=2\) and arguing more directly at odd primes.
The final section, SS3, is about applications to string theory. We first discuss the cobordism conjecture of McNamara-Vafa [13] in SS3.1, and go over a few predictions that follow from the bordism group computations in SS2. In SS3.2, we briefly introduce anomalies of quantum field theories and their bordism-theoretic classification, and touch on questions raised by our bordism computations.
### Acknowledgements
I especially want to thank Miguel Montero both for suggesting this project and for many helpful conversations about the material in this paper, and Matthew Yu for many helpful discussions relating to [DY] and other ideas related to this paper. I also want to thank Markus Dierigl and the anonymous referee for helpful comments on a draft. In addition, this paper benefited from conversations with Ivano Basile, Matilda Delgado, Jacques Distler, Dan Freed, Jonathan J. Heckman, Justin Kaidi, Jacob McNamara, Yuji Tachikawa, and Roberto Tellez Dominguez; thank you to all!
Part of this project was completed while AD visited the Perimeter Institute for Theoretical Physics; research at Perimeter is supported by the Government of Canada through Industry Canada and by the Province of Ontario through the Ministry of Research & Innovation.
## 1. Tangential structures for heterotic and CHL string theories
The goal of this section is to define the tangential structures \(\xi^{\text{het}}\) and \(\xi^{\text{CHL}}\) that are necessary to formulate the (low-energy limits of) the \(\text{E}_{8}\times\text{E}_{8}\) heterotic string and the CHL string. By "tangential structure" we mean the topological part of the structure needed on a manifold to define a given field theory; see Definition 1.40 for the precise definition. The presence of a B-field in both theories means that these tangential structures arise as classifying spaces of higher groups. First, we introduce the heterotic string in SS1.1, and see what data and conditions are told to us by Green-Schwarz anomaly cancellation; then in SS1.2, we reinterpret that data as combining the gauge field and the B-field into a connection for a principal bundle for a higher group. Finally, in SS1.3, we use the general theory from SS1.2 to determine the tangential structure for the CHL string.
The material in this section is not new, though it was not always stated in this form before. The fact that a Bianchi identity/Green-Schwarz mechanism is expressing a lift to a connection for a higher-group principal bundle is well-known; see Fiorenza-Schreiber-Stasheff [10] and Sati-Schreiber-Stasheff [11].
### The \(\text{E}_{8}\times\text{E}_{8}\) heterotic string
Heterotic string theories are ten-dimensional superstring theories whose low-energy limits are 10d \(\mathcal{N}=1\) supergravity theories. These supergravity theories can have Yang-Mills terms, and so are parametrized by the data of the gauge group \(G\), a compact Lie group. However, not all choices of \(G\) yield valid supergravity theories; there is the potential for an anomaly that must be trivialized, and this is quite a strong constraint, implying that the connected component of the identity in \(G\) must be either \(\text{E}_{8}\times\text{E}_{8}\) or \(\text{SemiSpin}_{32}\)[12, 13]. The anomaly cancellation mechanism itself, due to Green-Schwarz [12], combines the different fields in the theory into a connection for a principal \(\mathbb{G}\)-bundle, where \(\mathbb{G}\) is a higher group; we use
this subsection to discuss the fields and the Green-Schwarz condition, and the next subsection to discuss the role of higher group. In this paper, we will focus solely on the \(\operatorname{E}_{8}\times\operatorname{E}_{8}\) case; it would be interesting to study the analogues of the computations and applications in this paper in the SemiSpin\({}_{32}\) case.
The group \(\mathbb{Z}/2\) acts on \(\operatorname{E}_{8}\times\operatorname{E}_{8}\) by exchanging the two factors, and the setup of heterotic string theory, including the low-energy supergravity limit and Green-Schwarz' anomaly cancellation, is invariant under this symmetry, so we can expand the gauge group to \(G\coloneqq(\operatorname{E}_{8}\times\operatorname{E}_{8})\rtimes\mathbb{Z}/2\).4 This appears to have first been noticed by McInnes [10, SSI]; see also [1, SS2.2.1].
Footnote 4: Though we often use the standard name “the \(\operatorname{E}_{8}\times\operatorname{E}_{8}\) heterotic string” to refer to this theory, we will always consider the larger gauge group \((\operatorname{E}_{8}\times\operatorname{E}_{8})\rtimes\mathbb{Z}/2\).
The fields of 10d \(\mathcal{N}=1\) supergravity on a manifold \(M\) include:
* a metric \(g\),
* a spin structure on \(M\),
* a principal \(G\)-bundle \(P\to M\) with connection \(\Theta_{P}\),
* a _B-field_ or _Kalb-Ramond field_, a gerbe \(Q\to M\) with connection \(\Theta_{Q}\), and
* several additional fields (the dilaton, dilatino, gravitino, and gaugino) which will not be directly relevant to this paper.
Let us say more about the B-field, since its model as a gerbe with connection may be less familiar. A _gerbe_ is a categorification of the idea of a principal \(\mathbb{T}\)-bundle; here \(\mathbb{T}\) is the circle group. Thus, for example, a principal \(\mathbb{T}\)-bundle \(P\to M\) is classified by its first Chern class \(c_{1}(P)\in H^{2}(M;\mathbb{Z})\), and a gerbe \(Q\to M\) is classified by its _Dixmier-Douady class_\(\operatorname{DD}(Q)\in H^{3}(M;\mathbb{Z})\)[11, 12]. A connection on a principal \(\mathbb{T}\)-bundle has holonomy around loops; a connection on a gerbe has holonomy on closed surfaces. And so on.
Gerbes were first introduced by Giraud [13]. There are several different and equivalent ways to precisely define gerbes and their connections; heuristically you can think of a gerbe on \(M\) as a sheaf of groupoids on \(M\) locally equivalent to the trivial sheaf with fiber \(\operatorname{pt}/\mathbb{T}\). One way to make this precise is the following.
If \(f\colon Y\to X\) is a map, we let \(Y^{[n]}\coloneqq Y\times_{X}Y\times_{X}\dots\times_{X}Y\); \(Y^{[n]}\) is the space of \(n\)-simplices in the Cech nerve for \(f\).
**Definition 1.1** (Murray [11]).: A _bundle gerbe_ over a manifold \(M\) is a surjective submersion \(\pi\colon Y\to M\), a \(\mathbb{T}\)-bundle \(P\to Y^{[2]}\), and an isomorphism \(\mu\colon\pi_{12}^{*}P\otimes\pi_{23}^{*}P\xrightarrow{\cong}\pi_{13}^{*}P\) of \(\mathbb{T}\)-bundles over \(Y^{[3]}\) satisfying the natural associativity condition (see below) over \(Y^{[4]}\).
Given two \(\mathbb{T}\)-bundles \(P_{1},P_{2}\to X\), their _tensor product_\(P_{1}\otimes P_{2}\) is the unit circle bundle inside the tensor product of the Hermitian line bundles \(L_{1},L_{2}\to X\) associated to \(P_{1}\), resp. \(P_{2}\). The maps \(\pi_{12},\pi_{23},\pi_{13}\colon Y^{[3]}\to Y^{[2]}\) are the three face maps in the Cech nerve \(Y^{\bullet}\) associated to \(f\), given explicitly by remembering two of the three copies of \(Y\) and forgetting the third.
The associativity condition in Definition 1.1 is a little unwieldy to state explicitly, but can be found in [11, Definition 4.1(2)].
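For intuition, it may help to spell out the standard special case where \(Y\) comes from an open cover (an illustration, not needed later): if \(\{U_{i}\}\) is an open cover of \(M\) and \(Y=\coprod_{i}U_{i}\) with \(\pi\) the evident map, then

\[Y^{[2]}=\coprod_{i,j}U_{i}\cap U_{j},\qquad Y^{[3]}=\coprod_{i,j,k}U_{i}\cap U_{j}\cap U_{k},\]

so the data \((P,\mu)\) of Definition 1.1 amounts to \(\mathbb{T}\)-bundles on double overlaps together with compatible gluing isomorphisms on triple overlaps, recovering the Cech description of a gerbe.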
**Definition 1.2** ([11]).: A _connection_\(\Theta_{Q}\) on a bundle gerbe \(Q=(Y,P,\mu)\) is data of a \(2\)-form \(B\in\Omega^{2}(Y)\) and a connection \(\Theta_{P}\) on \(P\) such that if \(\Omega_{P}\in\Omega^{2}(P)\) denotes the curvature of \(P\) and \(\pi_{1},\pi_{2}\colon Y^{[2]}\to Y\) are the two projections, then
\[\Omega_{P}=\pi_{2}^{*}B-\pi_{1}^{*}B. \tag{1.3}\]
The _curvature_ of \(\Theta_{Q}\) is \(\Omega_{Q}\coloneqq\mathrm{d}B\), which is a closed \(3\)-form.
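A short verification, implicit in this definition, that the curvature descends to the base: applying \(\mathrm{d}\) to (1.3) and using that the curvature of a connection on a principal \(\mathbb{T}\)-bundle is closed,

\[\pi_{2}^{*}(\mathrm{d}B)-\pi_{1}^{*}(\mathrm{d}B)=\mathrm{d}\Omega_{P}=0\quad\text{on }Y^{[2]},\]

so \(\mathrm{d}B\) has equal pullbacks along the two projections and therefore descends along the surjective submersion \(\pi\colon Y\to M\) to a \(3\)-form on \(M\), which is closed because \(\mathrm{d}(\mathrm{d}B)=0\).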
The key thing to know about this definition is that, just like a principal \(\mathbb{T}\)-bundle \(P\to M\) with connection locally has a connection \(1\)-form \(A\) and globally has a curvature \(2\)-form \(\Omega_{P}\) which locally satisfies \(\Omega_{P}=\mathrm{d}A\), a gerbe with connection \(Q\) locally has a connection \(2\)-form \(B\) and globally has a curvature \(3\)-form \(\Omega_{Q}\) which locally satisfies \(\Omega_{Q}=\mathrm{d}B\). For more information, see, e.g., Brylinski [10, SS5.3].
**Definition 1.4**.: Because \(\mathrm{E}_{8}\) is a simple, connected, simply connected, compact Lie group, there is a canonical isomorphism \(H^{4}(B\mathrm{E}_{8};\mathbb{Z})\stackrel{{\cong}}{{\to}} \mathbb{Z}\). Let \(c\) denote the generator corresponding to \(1\in\mathbb{Z}\). In \(B(\mathrm{E}_{8}\times\mathrm{E}_{8})\simeq B\mathrm{E}_{8}\times B\mathrm{E}_ {8}\), let \(c_{1}\) and \(c_{2}\) denote the copies of \(c\) coming from the first, resp. second copies of \(B\mathrm{E}_{8}\) via the Kunneth map.
The class \(c_{1}+c_{2}\) is invariant under the \(\mathbb{Z}/2\) swapping action, so descends via the Serre spectral sequence to a class in \(H^{4}(B((\mathrm{E}_{8}\times\mathrm{E}_{8})\rtimes\mathbb{Z}/2);\mathbb{Z})\), which we also call \(c_{1}+c_{2}\).
**Definition 1.5**.: \(\mathrm{Spin}_{n}\) is also a compact, connected, simply connected simple Lie group when \(n\geq 3\), and the generator of \(H^{4}(B\mathrm{Spin}_{n};\mathbb{Z})\cong\mathbb{Z}\) corresponding to \(1\) is denoted \(\lambda\).
The class \(\lambda\) is preserved under the standard embeddings \(\mathrm{Spin}_{n}\hookrightarrow\mathrm{Spin}_{n+k}\), so we often work with its stabilized avatar \(\lambda\in H^{4}(B\mathrm{Spin};\mathbb{Z})\). We use this to define \(\lambda\) for \(\mathrm{Spin}_{n}\) when \(n<3\). Because \(2\lambda=p_{1}\), the class \(\lambda\) is often denoted \(\frac{1}{2}p_{1}\). The mod \(2\) reduction of \(\lambda\) is the Stiefel-Whitney class \(w_{4}\).
**Lemma 1.6** (Whitney sum formula).: _Let \(X\) be a topological space and \(E_{1},E_{2}\to X\) be two vector bundles with spin structure. Then \(\lambda(E_{1}\oplus E_{2})=\lambda(E_{1})+\lambda(E_{2})\)._
Proof.: It suffices to prove the universal case, which amounts to the calculation of the pullback of \(\lambda\) by the map
\[\oplus\colon B\mathrm{Spin}_{k_{1}}\times B\mathrm{Spin}_{k_{2}}\longrightarrow B \mathrm{Spin}_{k_{1}+k_{2}}. \tag{1.7}\]
For \(n\geq 3\), \(\mathrm{Spin}_{n}\) is a connected, simply connected, compact simple Lie group, so \(H^{\ell}(B\mathrm{Spin}_{n};\mathbb{Z})\) vanishes for \(\ell=1,2,3\) and is isomorphic to \(\mathbb{Z}\) for \(\ell=0,4\). For \(n<3\), \(H^{*}(B\mathrm{Spin}_{n};\mathbb{Z})\) is still trivial or free abelian in degrees \(4\) and below. Therefore by the Kunneth formula, for all \(k_{1},k_{2}\), \(H^{4}(B\mathrm{Spin}_{k_{1}}\times B\mathrm{Spin}_{k_{2}};\mathbb{Z})\) is a free abelian group, meaning that if we can show \(2\lambda(E_{1}\oplus E_{2})=2\lambda(E_{1})+2\lambda(E_{2})\), then we can deduce \(\lambda(E_{1}\oplus E_{2})=\lambda(E_{1})+\lambda(E_{2})\).
As \(2\lambda=p_{1}\), we have reduced to the Whitney sum formula for \(p_{1}\). The Whitney sum formula \(p_{1}(E_{1}\oplus E_{2})=p_{1}(E_{1})+p_{1}(E_{2})\) does not actually hold for all vector bundles, but Brown [10, Theorem 1.6] (see also Thomas [11]) showed that the difference \(p_{1}(E_{1}\oplus E_{2})-p_{1}(E_{1})-p_{1}(E_{2})\) vanishes when \(E_{1}\) and \(E_{2}\) are orientable, so in our setting of spin vector bundles, we can conclude.
_Remark 1.8_.: There are other ways to prove Lemma 1.6: for example, it follows immediately from a result of Jenquin [12, Corollary 4.9] in a simple generalized cohomology theory. Johnson-Freyd and Treumann [13, SS1.4] sketch another proof of Lemma 1.6.
Next, we introduce the Chern-Weil homomorphism. Let \(G\) be a Lie group with Lie algebra \(\mathfrak{g}\), and let \(f\in\mathrm{Sym}^{k}(\mathfrak{g}^{\vee})\), i.e. \(f\) is a degree-\(k\) polynomial function on \(\mathfrak{g}\) which is invariant under the adjoint \(G\)-action on \(\mathfrak{g}\). Given a manifold \(M\), a principal \(G\)-bundle \(P\to M\), and a connection \(\Theta\) on \(P\), let \(\Omega\in\Omega_{P}^{2}(\mathfrak{g})\) denote the curvature \(2\)-form. Then one can evaluate \(f\) on \(\Omega^{\wedge k}\in\Omega_{P}^{2k}(\mathfrak{g}^{\otimes k})\), producing a form
\(f(\Omega^{\wedge k})\in\Omega_{P}^{2k}\); because \(f\) is \(\operatorname{Ad}\)-invariant, \(f(\Omega^{\wedge k})\) descends to a form \(w(\Theta)\in\Omega_{M}^{2k}\), which is always closed. This defines a ring homomorphism, called the _Chern-Weil homomorphism_[12, 13],
\[w\colon\operatorname{Sym}^{\bullet}(\mathfrak{g}^{\vee})\longrightarrow H^{*} _{\operatorname{dR}}(M), \tag{1.9a}\]
which doubles the degree and is natural in \(M\); moreover, the de Rham class of \(w(\Theta)\) depends on \(P\) but not on the connection. Using de Rham's theorem and naturality, \(w\) upgrades to a ring homomorphism
\[w\colon\operatorname{Sym}^{\bullet}(\mathfrak{g}^{\vee})\longrightarrow H^{*} (BG;\mathbb{R}), \tag{1.9b}\]
which Chern and Weil showed is an isomorphism when \(G\) is compact [13, 14]. Thus, when \(G\) is compact, a class \(x\in H^{2*}(BG;\mathbb{Z})\) defines a polynomial \(\operatorname{CW}_{x}\in\operatorname{Sym}^{*}(\mathfrak{g}^{\vee})\), the \(w\)-preimage of the de Rham class of \(x\). We will also write \(\operatorname{CW}_{x}(\Theta)\) to denote the form defined by evaluating the polynomial \(\operatorname{CW}_{x}\) on the curvature form of \(\Theta\).
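As the simplest example (standard, and with the usual normalization convention for Chern classes): take \(G=\mathbb{T}\), so \(\mathfrak{g}\cong i\mathbb{R}\), and let \(x=c_{1}\in H^{2}(B\mathbb{T};\mathbb{Z})\). For a principal \(\mathbb{T}\)-bundle with connection \(\Theta\) and curvature \(\Omega\), which descends to an \(i\mathbb{R}\)-valued \(2\)-form on \(M\) because \(\mathbb{T}\) is abelian,

\[\operatorname{CW}_{c_{1}}(\Theta)=\frac{i}{2\pi}\,\Omega,\]

whose de Rham class represents the image of \(c_{1}(P)\) in real cohomology.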
Returning to 10d \(\mathcal{N}=1\) supergravity, Green-Schwarz [15] noticed that in order to trivialize an anomaly, one has to impose a relation between \(P\) and \(Q\) and their connections, so that \(Q\) is not quite a gerbe, but instead something twisted. Specifically, the curvature \(\Omega_{Q}\) is no longer closed, but instead satisfies the equation
\[\operatorname{d}\!\Omega_{Q}=\operatorname{CW}_{c_{1}+c_{2}}(\Theta_{P})- \operatorname{CW}_{\lambda}(\Theta^{\operatorname{LC}}), \tag{1.10}\]
where \(\Theta^{\operatorname{LC}}\) is the Levi-Civita connection on the principal \(\operatorname{Spin}_{n}\)-bundle of frames of \(M\).5 This is called a _Bianchi identity_ in the physics literature, motivating the following definition.
Footnote 5: Before Green-Schwarz, it was already known that \(\operatorname{CW}_{c_{1}+c_{2}}(\Theta_{P})\) and \(\operatorname{d}\!\Omega_{Q}\) had to mix in order to preserve supersymmetry, thanks to work of Bergshoeff-de Roo-de Wit-van Nieuwenhuizen [1] and Chapline-Manton [16].
**Definition 1.11**.: Given data of a compact Lie group \(G\) and a class \(\mu\in H^{4}(BG;\mathbb{Z})\), the _twisted Bianchi identity_ is the equation
\[\operatorname{d}\!H=\operatorname{CW}_{\mu}(\Theta_{P}), \tag{1.12}\]
where \(H\) is a \(3\)-form and \(\Theta_{P}\) is a connection on a principal \(G\)-bundle.
As in the case of (1.10), we think of this as mixing the data of two connections, one on a principal \(G\)-bundle and one on a gerbe. In the next section, we interpret twisted Bianchi identities as coming from connections on higher groups.
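One immediate consequence is worth recording: a solution of (1.12) exhibits \(\operatorname{CW}_{\mu}(\Theta_{P})\) as exact, so

\[[\operatorname{CW}_{\mu}(\Theta_{P})]=0\in H^{4}_{\mathrm{dR}}(M),\]

i.e. the image of \(\mu(P)\) in real cohomology must vanish. For the heterotic identity (1.10), this says that \((c_{1}+c_{2})(P)-\lambda(M)\) is a torsion class.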
### From the Bianchi identity to higher groups
In this section, we show that the twisted Bianchi identity (1.12) is a natural consequence of combining a principal \(G\)-bundle and a gerbe, each with connections, into a principal \(\mathbb{G}\)-bundle, where \(\mathbb{G}\) is a certain Lie \(2\)-group built from \(G\) and \(\mu\), together with additional data that we think of as a connection on \(\mathbb{G}\). First we introduce \(2\)-groups and their principal bundles; then, following [15, 16], we recover the twisted Bianchi identity. As a result, we can precisely define the tangential structure for the \(\operatorname{E}_{8}\times\operatorname{E}_{8}\) heterotic string, i.e. the topological part of the data which, when put on a manifold \(M\), allows one to study \(\operatorname{E}_{8}\times\operatorname{E}_{8}\) heterotic string theory on that manifold.
**Definition 1.13**.: A \(2\)_-group \(\mathbb{G}\)_ is a group object in the bicategory of small categories.
**Definition 1.14**.: A _Lie \(2\)-group_ is a \(2\)-group \(\mathbb{G}\) whose underlying category has been given the structure of a category object in smooth manifolds.
This means that the sets of objects and morphisms are smooth manifolds, and assignments such as the source of a map or the composition of two maps are smooth. \(2\)-groups were first introduced by Hoang Xuan Sinh in her thesis [10], and Lie \(2\)-groups were introduced by Baez [2, SS2].
We call a \(2\)-group _strict_ if it is strict as a monoidal category, i.e. its associators and unitors are all identity maps. Mac Lane's coherence theorem [11, Chapter 7] implies every \(2\)-group is equivalent to a strict \(2\)-group, but the analogous statement is false for Lie \(2\)-groups; see Remark 1.25.
**Example 1.15**.: If \(G\) is a group, it defines a monoidal groupoid with \(G\) as its set of objects, tensor product \(g\otimes h\coloneqq gh\), and only the identity morphisms. This is a \(2\)-group, and inherits the structure of a Lie \(2\)-group if \(G\) is a Lie group.
This procedure embeds the bicategory of groups, group homomorphisms, and identity \(2\)-morphisms into the bicategory of \(2\)-groups, and we will therefore abuse notation and call this \(2\)-group \(G\) again.
**Example 1.16**.: Let \(A\) be an abelian group, and let \(A[1]\) denote the monoidal groupoid with a single object \(*\) and \(\operatorname{Hom}_{A[1]}(*,*)\coloneqq A\). This is a \(2\)-group, and if \(A\) is Lie, \(A[1]\) is a Lie \(2\)-group.
It turns out every \(2\)-group \(\mathbb{G}\) factors as an extension of these examples. Let \(e\) be the identity object of \(\mathbb{G}\) and \(\pi_{0}(\mathbb{G})\) be the group of isomorphism classes of objects in \(\mathbb{G}\). Then there is a short exact sequence of \(2\)-groups
\[1\longrightarrow\operatorname{Aut}_{\mathbb{G}}(e)[1]\longrightarrow\mathbb{G}\longrightarrow\pi_{0}(\mathbb{G})\longrightarrow 1. \tag{1.17}\]
The Eckmann-Hilton theorem guarantees \(\operatorname{Aut}_{\mathbb{G}}(e)\) is abelian. Extensions (1.17) are classified by the data of:
1. an action of \(\pi_{0}(\mathbb{G})\) on \(\operatorname{Aut}_{\mathbb{G}}(e)\), and
2. a cohomology class \(k\in H^{3}(B\pi_{0}(\mathbb{G});\operatorname{Aut}_{\mathbb{G}}(e))\), called the _\(k\)-invariant_ of \(\mathbb{G}\).
When \(\mathbb{G}\) has the discrete topology, this is unambiguous, but when \(\mathbb{G}\) is a Lie \(2\)-group, one must be careful what kind of cohomology is used here. The correct notion of cohomology is the _Segal-Mitchison cohomology_[21, 22] of \(\pi_{0}(\mathbb{G})\) valued in the abelian Lie group \(\operatorname{Aut}_{\mathbb{G}}(e)\), as shown by Schommer-Pries [21, Theorem 1].
Now we want to discuss principal \(\mathbb{G}\)-bundles. The idea is that if \(G\) is a group, a principal \(G\)-bundle is a submersion which is locally trivial, and whose fibers are \(G\)-torsors. For a Lie \(2\)-group \(\mathbb{G}\), we need the fibers to locally look like \(\mathbb{G}\), meaning they must be categorified somehow.
**Definition 1.18** (Bartels [1], Nikolaus-Waldorf [23, Definition 6.1.5]).: Let \(\mathbb{G}\) be a Lie \(2\)-group. A _principal \(\mathbb{G}\)-bundle_ over a smooth manifold \(M\) is a Lie groupoid \(\mathcal{P}\) with a surjective submersion \(\operatorname{obj}(P)\to M\) and a smooth right action \(\rho\) of \(\mathbb{G}\) on \(\mathcal{P}\) such that the map
\[(\operatorname{pr}_{1},\rho)\colon\mathcal{P}\times\mathbb{G}\longrightarrow \mathcal{P}\times_{M}\mathcal{P} \tag{1.19}\]
is a weak equivalence of Lie groupoids.
See Nikolaus-Waldorf [23, SS6] for more details. The principal \(\mathbb{G}\)-bundles on a manifold \(M\) form a \(2\)-groupoid \(\mathcal{B}\mathit{un}_{\mathbb{G}}(X)\)[23, Theorem 6.2.1].
**Definition 1.20**.: Let \(\mathbb{G}\) be a \(2\)-group, and let \(C_{\mathbb{G}}\) be the bicategory with a single object \(*\) and morphism category \(\operatorname{Hom}_{C_{\mathbb{G}}}(*,*)\coloneqq\mathbb{G}\). The _classifying space_ of \(\mathbb{G}\), denoted \(B\mathbb{G}\), is the geometric realization of the nerve of \(C_{\mathbb{G}}\).6
Footnote 6: There are many different definitions of the nerve of a bicategory; the fact that their geometric realizations are canonically homotopy equivalent is a theorem of Carrasco-Cegarra-Garzon [10], allowing us to speak about \(B\mathbb{G}\) without specifying which kind of bicategorical nerve to use.
When \(\mathbb{G}\) is a Lie \(2\)-group, we make the same definition. This time \(C_{\mathbb{G}}\) is a topological bicategory, so its nerve is a simplicial space, and geometrically realizing, we obtain the space \(B\mathbb{G}\).
**Theorem 1.21** (Nikolaus-Waldorf [13, Theorems 4.6, 5.3.2, 7.1]).: _If \(\mathbb{G}\) is a strict Lie \(2\)-group, then there is a natural equivalence \([X,B\mathbb{G}]\stackrel{{\simeq}}{{\to}}\pi_{0}(\mathcal{B} \mathit{un}_{\mathbb{G}}(X))\)._
Nikolaus-Waldorf's proof builds on Baez-Stevenson's related but distinct characterization of \([X,B\mathbb{G}]\)[11, Theorem 1] in terms of nonabelian Cech cohomology.
When \(G\) is an ordinary group, if \(G\) has the discrete topology, \(BG\) has only one nonzero homotopy group, which is \(\pi_{1}(BG)=G\); likewise if \(\mathbb{G}\) is a discrete \(2\)-group, \(\pi_{i}(B\mathbb{G})\) is nontrivial only for \(i=1,2\); \(\pi_{1}(B\mathbb{G})=\pi_{0}(\mathbb{G})\) and \(\pi_{2}(B\mathbb{G})=\operatorname{Aut}_{\mathbb{G}}(e)\). When \(\mathbb{G}\) is a Lie \(2\)-group, we have no control over its homotopy groups in general, just like \(BG\) when \(G\) is positive-dimensional.
If \(\mathbb{G}\) has the discrete topology, the data classifying (1.17), namely the action of \(\pi_{0}(\mathbb{G})\) on \(\operatorname{Aut}_{\mathbb{G}}(e)\) and the \(k\)-invariant, is equivalent to the Postnikov data of \(B\mathbb{G}\), worked out by Mac Lane-Whitehead [12]: this data classifies fibrations over \(BG\) with fiber the Eilenberg-Mac Lane space \(K(\operatorname{Aut}_{\mathbb{G}}(e),2)\). The total space of the fibration with this Postnikov data is homotopy equivalent to \(B\mathbb{G}\).
**Example 1.22**.: Let \(G\) be a compact Lie group; then, the Segal-Mitchison cohomology group \(H^{3}_{\operatorname{SM}}(G;\mathbb{T})\) classifying Lie \(2\)-group extensions of \(G\) by \(\mathbb{T}[1]\) is naturally isomorphic to \(H^{4}(BG;\mathbb{Z})\)[13, Corollary 97]. Therefore given a class \(\mu\in H^{4}(BG;\mathbb{Z})\), we obtain a Lie \(2\)-group \(\mathcal{S}\mathit{tr}(G,\mu)\) fitting into a central extension
\[1\longrightarrow\mathbb{T}[1]\longrightarrow\mathcal{S}tr(G,\mu)\longrightarrow G\longrightarrow 1, \tag{1.23}\]
which is sometimes called the _string \(2\)-group_ or _string cover_ associated to \(G\) and \(\mu\). Of all the string covers, the most commonly studied one is \(\operatorname{String}_{n}\coloneqq\mathcal{S}tr(\operatorname{Spin}_{n},\lambda)\), which is called _the_ string \(2\)-group.
This class of \(2\)-groups was first studied by Baez-Lauda [1, SS8.5].
The sequence (1.23) implies that upon taking classifying spaces,
\[B\mathbb{G}\longrightarrow BG\stackrel{{\mu}}{{\longrightarrow}}K (\mathbb{Z},4) \tag{1.24}\]
is a fibration.
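As a consistency check (standard, and not logically needed below), the long exact sequence in homotopy groups of the fibration (1.24) pins down how \(B\mathbb{G}\) differs from \(BG\): since \(\pi_{k}(K(\mathbb{Z},4))\) vanishes for \(k\neq 4\), there is an exact sequence

\[0\longrightarrow\pi_{4}(B\mathbb{G})\longrightarrow\pi_{4}(BG)\xrightarrow{\;\mu_{*}\;}\mathbb{Z}\longrightarrow\pi_{3}(B\mathbb{G})\longrightarrow\pi_{3}(BG),\]

and \(\pi_{k}(B\mathbb{G})\cong\pi_{k}(BG)\) for \(k\neq 3,4\). For example, for \(G=\mathrm{Spin}_{n}\) (\(n\geq 5\)) and \(\mu=\lambda\) a generator, \(\mu_{*}\) is an isomorphism and \(\pi_{3}(B\mathrm{Spin}_{n})=0\), so \(\pi_{3}(B\mathrm{String}_{n})=\pi_{4}(B\mathrm{String}_{n})=0\): passing to the string cover precisely kills \(\lambda\).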
_Remark 1.25_.: Theorem 1.21 classified principal \(\mathbb{G}\)-bundles when \(\mathbb{G}\) is a strict \(2\)-group, but it is a theorem of Baez-Lauda [1, Corollary 60] that there is no strict Lie \(2\)-group model for \(\mathcal{S}\mathit{tr}(G,\mu)\) when \(G\) is simply connected and \(\mu\neq 0\). However, there is a fix: in the setting of _Frechet Lie \(2\)-groups_, where we allow the spaces of objects and morphisms of \(\mathbb{G}\) to be Frechet manifolds, there is a strict model for \(\mathcal{S}\mathit{tr}(G,\mu)\)[1, 12], so \(B\mathcal{S}\mathit{tr}(G,\mu)\) actually classifies principal \(\mathcal{S}\mathit{tr}(G,\mu)\)-bundles. This suffices for studying bordism groups.
Following Sati-Schreiber-Stasheff [14], we now relate \(\mathcal{S}\mathit{tr}(G,\mu)\) to the twisted Bianchi identity for \(G\) and \(\mu\). To do so, we use the language of stacks and differential cohomology,
following [11, 12, 13, 14, 15]. Make the category \(\mathcal{M}\)_an_ into a site by defining the covers to be surjective submersions, and define a _stack_ to be a functor of \(\infty\)-categories \(\mathcal{M}\)_an_\({}^{op}\to\mathcal{T}\)_op_ which satisfies descent for hypercovers. This defines a presentable \(\infty\)-category \(\mathcal{S}t\) of stacks [10, Proposition 6.5.2.14], and the Yoneda embedding \(h\colon\mathcal{M}\)_an_\(\to\mathcal{S}t\) embeds \(\mathcal{M}\)_an_ as a full subcategory. We will often simply write \(M\) for the stack \(h(M)\); we never compare these two notions directly, so this will not introduce confusion.
For any space \(X\), the functor \(\operatorname{Map}(\neg,X)\colon\mathcal{M}\)_an_\(\to\)_Top_ is a sheaf, and this procedure defines a functor of \(\infty\)-categories \(\Gamma^{*}\colon\mathcal{T}\)_op_\(\to\)_\(\mathcal{S}t\). The values of the stacks produced by \(\Gamma^{*}\) evaluated on manifolds \(M\) are homotopy-invariant in \(M\). \(\Gamma^{*}\) has a left adjoint \(\Gamma_{\sharp}\colon\mathcal{S}t\to\mathcal{T}\)_op_ (see [11, Proposition 8.3], Morel-Voevodsky [12, Proposition 3.3.3], and [1, Proposition 4.3.1]); \(\Gamma_{\sharp}(\mathbf{X})\) for a stack \(\mathbf{X}\) can be thought of as the best approximation to \(\mathbf{X}\) by a stack whose values on manifolds are homotopy-invariant.
Let \(\Delta^{n}_{\mathrm{alg}}\coloneqq\{(t_{0},\ldots,t_{n})\mid t_{0}+\cdots+t_{n}= 1\}\subset\mathbb{R}^{n+1}\). These "algebraic \(n\)-simplices" assemble into a cosimplicial manifold \(\Delta^{\bullet}_{\mathrm{alg}}\), and [1, Corollary 5.1.4] there is a natural homotopy equivalence \(\Gamma_{\sharp}(\mathbf{X})\simeq|\mathbf{X}(\Delta^{\bullet}_{\mathrm{alg}})|\), where as usual \(|\)-\(|\) denotes geometric realization.
Thus, for a manifold \(M\), there is a natural homotopy equivalence \(\Gamma_{\sharp}(M)\stackrel{{\simeq}}{{\to}}M\), so a map \(M\to\mathbf{X}\) naturally induces a map \(M\to\Gamma_{\sharp}(\mathbf{X})\).
**Lemma 1.26**.: _Suppose \(\mathbf{X}\to\mathbf{Y}\leftarrow\mathbf{Z}\) is a diagram in \(\mathcal{S}t\), and that \(\mathbf{Y}(\Delta^{n}_{\mathrm{alg}})\) and \(\mathbf{Z}(\Delta^{n}_{\mathrm{alg}})\) are connected for all \(n\). Then_
\[\Gamma_{\sharp}(\mathbf{X}\times_{\mathbf{Y}}\mathbf{Z})\simeq\Gamma_{\sharp }(\mathbf{X})\times_{\Gamma_{\sharp}(\mathbf{Y})}\Gamma_{\sharp}(\mathbf{Z}). \tag{1.27}\]
Proof.: Pullbacks of sheaves can be computed by first taking pointwise pullbacks and then sheafifying, so given a pullback diagram \(\mathbf{X}\to\mathbf{Y}\leftarrow\mathbf{Z}\) in \(\mathcal{S}t\), for each \(n\geq 0\) the pullback of
\[\mathbf{X}(\Delta^{n}_{\mathrm{alg}})\longrightarrow\mathbf{Y}(\Delta^{n}_{ \mathrm{alg}})\longleftarrow\mathbf{Z}(\Delta^{n}_{\mathrm{alg}}) \tag{1.28}\]
is \((\mathbf{X}\times_{\mathbf{Y}}\mathbf{Z})(\Delta^{n}_{\mathrm{alg}})\). The Bousfield-Friedlander theorem [1, 1] implies that, given the hypotheses on \(\mathbf{Y}\) and \(\mathbf{Z}\) in the theorem statement, the homotopy pullback of the geometric realizations of \(\mathbf{X}\), \(\mathbf{Y}\), and \(\mathbf{Z}\) is the geometric realization of the levelwise homotopy pullback (1.28) (see [10, p. 14-9] for this specific consequence of the Bousfield-Friedlander theorem).
**Example 1.29**.: For \(G\) a Lie group, there is a stack \(B_{\nabla}G\) whose value on a manifold \(M\) is the geometric realization of the nerve of the groupoid of principal \(G\)-bundles on \(M\) with connection [13]. This object is denoted \(\mathbf{B}G_{\mathrm{conn}}\) in [11, 12, 13], \(\mathbb{B}G^{\nabla}\) in [1, SS5], and \(\operatorname{Bun}_{G}^{\nabla}\) in [1].
There is a natural homotopy equivalence \(\Gamma_{\sharp}(B_{\nabla}G)\stackrel{{\simeq}}{{\to}}BG\)[1, Corollary 13.3.29], which can be interpreted as forgetting from a principal bundle with connection to a principal bundle.
**Example 1.30**.: For \(k\geq 0\), there is a stack \(B^{k}_{\nabla}\mathbb{T}\) whose value on a manifold \(M\) is the geometric realization of the nerve of the \(\infty\)-groupoid of cocycles for the differential cohomology group \(\tilde{H}^{k+1}(M;\mathbb{Z})\). This object is studied in [11, 12, 13], where it is denoted \(\mathbf{B}^{k}U(1)_{\mathrm{conn}}\).
**Lemma 1.31**.: _There is a homotopy equivalence \(\Gamma_{\sharp}(B^{k}_{\nabla}\mathbb{T})\simeq K(\mathbb{Z},k+1)\)._
Proof.: Schreiber [13, Observation 1.2.134] produces the following pullback square in \(\mathcal{S}t\):
\[\begin{array}{ccc}B^{k}_{\nabla}\mathbb{T}&\xrightarrow{\;\mathit{curv}\;}&\Omega^{k+1}_{c\ell}\\ {\scriptstyle\mathit{char}}\big\downarrow&&\big\downarrow\\ K(\mathbb{Z},k+1)&\longrightarrow&K(\mathbb{R},k+1)\end{array} \tag{1.32}\]
where \(\Omega_{c\ell}^{k+1}\) is the stack of closed \((k+1)\)-forms. For that stack and \(K(\mathbb{R},n+1)\), the values on each \(\Delta_{\mathrm{alg}}^{n}\) are connected spaces, so Lemma 1.26 identifies \(\Gamma_{\sharp}(B_{\nabla}^{k}\mathbb{T})\simeq K(\mathbb{Z},k+1)\times_{K( \mathbb{R},k+1)}\Gamma_{\sharp}(\Omega_{c\ell}^{k+1})\). To finish, observe that, essentially by the de Rham theorem, the map \(\Omega_{c\ell}^{k+1}\to K(\mathbb{R},k+1)\) passes to a homotopy equivalence after applying \(\Gamma_{\sharp}\). This follows from [16, Lemma 7.15] together with the Dold-Kan theorem.
These stacks are the universal setting for the Chern-Weil map.
**Theorem 1.33** (Cheeger-Simons [13, Theorem 2.2], Bunke-Nikolaus-Volkl [16, SS5.2]).: _Let \(G\) be a compact Lie group and \(c\in H^{k}(BG;\mathbb{Z})\), where \(k\) is even. Then there is a map \(\tilde{c}\colon B_{\nabla}G\to B_{\nabla}^{k-1}\mathbb{T}\) natural in \((G,c)\) such that for any manifold \(M\) and map \(f\colon M\to B_{\nabla}G\), interpreted as a principal \(G\)-bundle \(P\to M\) with connection \(\Theta\),_
1. _if_ \(\text{char}\colon\tilde{H}^{*}({-};\mathbb{Z})\to H^{*}({-};\mathbb{Z})\) _denotes the characteristic class map, then_ \(\text{char}(\tilde{c}\circ f)=c(P)\)_, and_
2. _if_ \(\text{curv}\colon\tilde{H}^{*}({-};\mathbb{Z})\to\Omega_{c\ell}^{*}\) _denotes the curvature map, then_ \(\text{curv}(\tilde{c}\circ f)=\operatorname{CW}_{c}(\Theta)\)_._
Cheeger-Simons lifted the Chern-Weil map to differential cohomology; Bunke-Nikolaus-Volkl recast it in terms of \(B_{\nabla}G\). The map _char_ in Theorem 1.33 is the map down the left of the square (1.32); _curv_ is the map across the top of (1.32).
**Definition 1.34** (Fiorenza-Schreiber-Stasheff [13, SS6.2]).: Given a compact Lie group \(G\) and a class \(\mu\in H^{4}(BG;\mathbb{Z})\), let \(\mathbf{BStr}(G,\mu)\) denote the fiber of the map \(\tilde{\mu}\colon B_{\nabla}G\to B_{\nabla}^{3}\mathbb{T}\).
We will see momentarily that maps to \(\mathbf{BStr}(G,\mu)\) lead to solutions to the twisted Bianchi identity for \(G\) and \(\mu\).
**Proposition 1.35**.: _There is a natural homotopy equivalence \(\Gamma_{\sharp}(\mathbf{BStr}(G,\mu))\simeq B\mathcal{S}tr(G,\mu)\)._
For this reason we think of \(\mathbf{BStr}(G,\mu)\) as the classifying stack of principal \(\mathcal{S}tr(G,\mu)\)-bundles with connection, though this is only a heuristic.7
Footnote 7: There are at least _five_ notions of a connection on principal \(\mathbb{G}\)-bundles for \(\mathbb{G}\) a 2-group: three are discussed by Waldorf [16, §5], a fourth by Rist-Saemann-Wolf [13], and a fifth, defined only for \(\mathbb{G}=\text{String}_{n}\), by Waldorf [16].
Proof.: Apply Lemma 1.26 to the diagram
\[B_{\nabla}G\stackrel{{\tilde{\mu}}}{{\longrightarrow}}B_{ \nabla}^{3}\mathbb{T}\longleftarrow*, \tag{1.36}\]
as the values of both \(*\) and \(B_{\nabla}^{3}\mathbb{T}\) are connected on \(\Delta_{\mathrm{alg}}^{n}\) for each \(n\). This implies that \(\Gamma_{\sharp}(\mathbf{BStr}(G,\mu))\) is the fiber of \(\mu\colon BG\to K(\mathbb{Z},4)\), which we identified with \(B\mathbb{G}\) in (1.24).
**Proposition 1.37** (Fiorenza-Schreiber-Stasheff [13, SS6.3]).: _Let \(G\) be a compact Lie group, \(U\subset\mathbb{R}^{n}\) be an open set, and \(P\to U\) be a principal \(G\)-bundle with connection \(\Theta\). A lift of the corresponding map \(f_{P,\Theta}\colon U\to B_{\nabla}G\) to a map \(\widetilde{f}_{P,\Theta}\colon U\to\mathbf{BStr}(G,\mu)\) induces a form \(H\in\Omega^{3}(U)\) such that \(H\) and \(\Theta\) satisfy the twisted Bianchi identity (1.12)._
The idea here is that we have specified a trivialization of the differential characteristic class \(\tilde{\mu}(P,\Theta)\). Applying the curvature map \(\mathit{curv}\colon B^{3}_{\nabla}\mathbb{T}\to\Omega^{4}_{\mathit{c}\ell}\), we have also specified a trivialization of \(\mathrm{CW}_{\mu}(\Theta)\), which locally is the data \(H\) showing that \(\mathrm{CW}_{\mu}(\Theta)\) is exact.
A map to \(\mathbf{BStr}(G,\mu)\) is more data than what we get from Proposition 1.37, as we have trivialized not just the Chern-Weil form, but also the differential characteristic class. This can be interpreted as saying the data \(H\) specifying the trivialization is quantized to form a twisted version of a gerbe with connection.
To summarize, given a map \(M\to\mathbf{BStr}(G,\mu)\), the stack which we think of as modeling \(\mathcal{S}tr(G,\mu)\)-bundles with connection, we obtain:
1. a principal \(\mathcal{S}tr(G,\mu)\)-bundle \(\mathcal{P}\to M\) by Proposition 1.35, and
2. a "twisted gerbe with connection," i.e. local data of a gerbe \(Q\to M\) such that \(\Omega_{Q}\) and the \(G\)-connection \(\Theta\) induced by the map \(\mathbf{BStr}(G,\mu)\to B_{\nabla}G\) satisfy the twisted Bianchi identity (1.12) by Proposition 1.37.
Motivated by this, we define the tangential structure for the \(\mathrm{E}_{8}\times\mathrm{E}_{8}\) heterotic string. This first appears in [12, SS3.2], with [12, 13] considering some related examples.
**Definition 1.38**.: Let \(G\coloneqq(\mathrm{E}_{8}\times\mathrm{E}_{8})\rtimes\mathbb{Z}/2\). A _differential \(\xi^{\mathrm{het}}_{n}\)-structure_ on a manifold \(M\) is the following data:
1. a Riemannian metric and spin structure on \(M\),
2. a principal \(G\)-bundle \(P\to M\) with connection \(\Theta\), and
3. a lift of the map \[((B_{\mathrm{Spin}}(M),\Theta^{\mathrm{LC}}),(P,\Theta))\colon M\longrightarrow B_{\nabla}(\mathrm{Spin}_{n}\times G) \tag{1.39}\] to a map \(M\to\mathbf{BStr}(\mathrm{Spin}_{n}\times G,c_{1}+c_{2}-\lambda)\).
Here \(B_{\mathrm{Spin}}(M)\to M\) is the principal \(\mathrm{Spin}_{n}\)-bundle of frames of \(M\), and \(\Theta^{\mathrm{LC}}\) denotes its Levi-Civita connection.
For bordism groups we want the topological version of this.
**Definition 1.40**.: A _tangential structure_ is a space \(B\) and a map \(\xi\colon B\to B\mathrm{O}\). Given a tangential structure \(\xi\), a _\(\xi\)-structure_ on a virtual vector bundle \(E\to X\) is a lift of the classifying map \(f_{E}\colon X\to B\mathrm{O}\) to a map \(\widetilde{f}_{E}\colon X\to B\) such that \(\xi\circ\widetilde{f}_{E}=f_{E}\). A _\(\xi\)-structure_ on a manifold \(M\) is a \(\xi\)-structure on its tangent bundle.
We make the analogous definition with maps \(\xi_{n}\colon B_{n}\to B\mathrm{O}_{n}\); in this case, we only refer to \(\xi_{n}\)-structures on \(n\)-manifolds.
Lashof [14] defined bordism groups \(\Omega^{\xi}_{*}\) of manifolds with \(\xi\)-structure, and Boardman [1, SSV.1] defined a Thom spectrum \(\mathit{MT}\xi\) whose homotopy groups are naturally isomorphic to \(\Omega^{\xi}_{*}\) via the Pontrjagin-Thom construction.8 We think of the category of tangential structures as the category of spaces over \(B\mathrm{O}\), and bordism groups and Thom spectra are functorial in this category. That is, taking bordism groups and Thom spectra is functorial as long as one commutes with the map down to \(B\mathrm{O}\).
Footnote 8: In homotopy theory, it is common to study the Thom spectra \(M\xi\) representing \(\xi\)-structures on the stable _normal_ bundle \(\nu_{M}\) of a manifold \(M\), and indeed many of the results we cite about _MTSO_, _MTString_, etc. are stated for _MSO_, _MString_, etc., or about Thom spectra \(M\xi\) in general. This is not a problem: for any tangential structure \(\xi\), there is a tangential structure \(\xi^{\perp}\) such that a \(\xi\)-structure on \(TM\) is equivalent data to a \(\xi^{\perp}\)-structure on \(\nu_{M}\) and vice versa, so that \(\mathit{MT}\xi\simeq M\xi^{\perp}\), so the general theory is the same. And for \(\xi=\mathrm{O}\), \(\mathrm{SO}\), \(\mathrm{Spin}\), \(\mathrm{Spin}^{c}\), and String, \(\xi\simeq\xi^{\perp}\) and in those cases we can ignore the difference between \(M\xi\) and \(\mathit{MT}\xi\).
The following definition is a special case of a definition due to Sati-Schreiber-Stasheff [33, Definition 2.8]. See [12, 13, 14, 15] for other related examples.
**Definition 1.41**.: Let \(G_{n}\coloneqq\operatorname{Spin}_{n}\times(\operatorname{E}_{8}\times \operatorname{E}_{8})\rtimes\mathbb{Z}/2\) and
\[\mathbb{G}_{n}^{\operatorname{het}}\coloneqq\mathcal{S}\text{tr}(G_{n},c_{1}+ c_{2}-\lambda). \tag{1.42}\]
The \(\operatorname{E}_{8}\times\operatorname{E}_{8}\)_heterotic tangential structure_ is the tangential structure
\[\xi_{n}^{\operatorname{het}}\colon B\mathbb{G}_{n}^{\operatorname{het}} \longrightarrow B\text{Spin}_{n}\longrightarrow B\text{O}_{n}, \tag{1.43}\]
where the first map comes from the quotient of \(\mathbb{G}_{n}^{\operatorname{het}}\) by \(\mathbb{T}[1]\), followed by projection onto the \(\operatorname{Spin}_{n}\) factor in \(G_{n}\). We also define \(\mathbb{G}^{\operatorname{het}}\) and \(\xi^{\operatorname{het}}\) analogously by stabilizing in \(n\).
In other words: a differential \(\xi_{n}^{\operatorname{het}}\)-structure is a lift of a map \(M\to B_{\nabla}(\operatorname{Spin}_{n}\times G)\) to a map \(M\to\mathbf{BStr}(\operatorname{Spin}_{n}\times G,c_{1}+c_{2}-\lambda)\); by Proposition 1.35, a topological \(\xi_{n}^{\operatorname{het}}\)-structure is the image of this data under \(\Gamma_{\sharp}\). In particular, a \(\xi_{n}^{\operatorname{het}}\)-structure on an \(n\)-manifold \(M\) includes data of a principal \(\mathbb{G}_{n}^{\operatorname{het}}\)-bundle \(\mathcal{P}\to M\).
Taking the quotient of \(\mathbb{G}^{\operatorname{het}}\) by \(\mathbb{T}[1]\) induces a map of tangential structures
\[\phi\colon B\mathbb{G}^{\operatorname{het}}\longrightarrow B\text{Spin} \times B(\operatorname{E}_{8}^{2}\rtimes\mathbb{Z}/2). \tag{1.44}\]
Thus, much like a \(\operatorname{spin}^{c}\) manifold \(M\) has an associated \(\mathbb{T}\)-bundle \(P\) with \(c_{1}(P)\bmod 2=w_{2}(M)\), a \(\xi^{\operatorname{het}}\)-manifold has an associated \((\operatorname{E}_{8}^{2}\rtimes\mathbb{Z}/2)\)-bundle \(P\). From this perspective, a \(\xi^{\operatorname{het}}\)-structure on a manifold \(M\) is the following data:
* a spin structure on \(M\),
* a double cover \(\pi\colon\widetilde{M}\to M\),
* two principal \(\operatorname{E}_{8}\)-bundles \(P,Q\to\widetilde{M}\) which are exchanged by the nonidentity deck transformation of \(\pi\), and
* a trivialization of the class \(\lambda(M)-(c(P)+c(Q))\in H^{4}(M;\mathbb{Z})\).
By a _trivialization_ of a cohomology class \(\alpha\in H^{n}(M;A)\) we mean a null-homotopy of the classifying map \(f_{\alpha}\colon M\to K(A,n)\). Thus orientations are identified with trivializations of \(w_{1}\), etc. To make the trivialization of \(\lambda(M)-(c(P)+c(Q))\) precise, we have to descend the class \(c(P)+c(Q)\), a priori an element of \(H^{4}(\widetilde{M};\mathbb{Z})\), to \(H^{4}(M;\mathbb{Z})\). We can do this because, as noted in Definition 1.4, the class \(c_{1}+c_{2}\) descends through the Serre spectral sequence to the base.
_Remark 1.45_.: We can combine some of the data of a \(\xi^{\operatorname{het}}\) structure on \(M\) into a twisted characteristic class. Let \(\mathbb{Z}^{\sigma}\) be the \(\mathbb{Z}[\mathbb{Z}/2]\)-module isomorphic to \(\mathbb{Z}^{2}\) as an abelian group, and in which the nontrivial element of \(\mathbb{Z}/2\) swaps the two factors. Then, let \(\mathbb{Z}_{\pi}^{\sigma}\) denote the local system on \(M\) which is the associated bundle \(\widetilde{M}\times_{\mathbb{Z}/2}\mathbb{Z}^{\sigma}\). A pair of classes \(x,y\in H^{k}(\widetilde{M};\mathbb{Z})\) exchanged by the deck transformation thus defines a class in \(H^{k}(M;\mathbb{Z}_{\pi}^{\sigma})\), so the classes \(c(P)\) and \(c(Q)\) in \(H^{4}(\widetilde{M};\mathbb{Z})\) together define a class \(\widetilde{c}(P,Q)\in H^{4}(M;\mathbb{Z}_{\pi}^{\sigma})\), which is a characteristic class of an \(((\operatorname{E}_{8}\times\operatorname{E}_{8})\rtimes\mathbb{Z}/2)\)-bundle.
If \(\mathbb{Z}\) denotes the \(\mathbb{Z}[\mathbb{Z}/2]\)-module isomorphic to \(\mathbb{Z}\) as an abelian group and with trivial \(\mathbb{Z}/2\)-action, then taking the quotient of \(\mathbb{Z}^{\sigma}\) by the submodule generated by \((1,-1)\) defines a map of \(\mathbb{Z}[\mathbb{Z}/2]\)-modules \(q\colon\mathbb{Z}^{\sigma}\to\mathbb{Z}\), hence also a map between the corresponding twisted cohomology groups, and this map sends \(\widetilde{c}(P,Q)\mapsto c(P)+c(Q)\). Therefore one could recast a \(\xi^{\operatorname{het}}\)-structure on a spin manifold \(M\) as the data of a principal \(((\operatorname{E}_{8}\times\operatorname{E}_{8})\rtimes\mathbb{Z}/2)\)-bundle \((P,Q,\pi)\) together with a trivialization of \(\lambda(M)-q(\widetilde{c}(P,Q))\).
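Explicitly (unwinding the definitions): under the isomorphism \(\mathbb{Z}^{\sigma}/\langle(1,-1)\rangle\cong\mathbb{Z}\), the map \(q\) is the sum map \(q(a,b)=a+b\), which is well-defined on the quotient and invariant under the swap of \(a\) and \(b\), matching the formula \(\widetilde{c}(P,Q)\mapsto c(P)+c(Q)\).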
Bott-Samelson [14, Theorems IV, V(e)] showed that the map \(B\operatorname{E}_{8}\to K(\mathbb{Z},4)\) defined by the characteristic class \(c\) is \(15\)-connected. This implies that up to isomorphism, a principal
(\((\mathrm{E}_{8}\times\mathrm{E}_{8})\rtimes\mathbb{Z}/2\))-bundle on a manifold of dimension 15 or lower is equivalent data to its characteristic class \(\widetilde{c}\).
_Remark 1.46_.: One might want to simplify by restricting to the special case where \(\pi\colon\widetilde{M}\to M\) is trivial (as done in, e.g., [13]), in which case the data of a \(\xi^{\mathrm{het}}\)-structure simplifies to the data of a spin structure on \(M\), two principal \(\mathrm{E}_{8}\)-bundles \(P,Q\to M\), and a trivialization of \(\lambda(M)-c(P)-c(Q)\). This corresponds to the tangential structure \(\xi^{r,\mathrm{het}}\colon B\mathcal{S}tr(\mathrm{Spin}\times\mathrm{E}_{8} \times\mathrm{E}_{8},c_{1}+c_{2}-\lambda)\to B\mathrm{Spin}\to B\mathrm{O}\).
### The CHL string
Eleven-dimensional \(\mathcal{N}=1\) supergravity admits a time-reversal symmetry, allowing it to be defined on \(\mathrm{pin}^{+}\) 11-manifolds.9 Therefore we can compactify it on a Möbius strip with certain boundary data to obtain a nine-dimensional supergravity theory; the goal of this subsection is to determine the tangential structure of this theory. Eleven-dimensional \(\mathcal{N}=1\) supergravity is expected to be the low-energy limit of a theory called M-theory,10 and compactifying M-theory on the Möbius strip is expected to produce a string theory called the _Chaudhuri-Hockney-Lykken (CHL) string_[11] whose low-energy limit is the nine-dimensional supergravity theory just described; we study the tangential structure of this supergravity theory with the aim of also learning about the CHL string.
Footnote 9: In addition to the \(\mathrm{pin}^{+}\) structure, one needs the additional data of a lift of \(w_{4}(TM)\) to \(w_{1}(TM)\)-twisted integral cohomology. See [13, 14, 15].
Footnote 10: M-theory is expected to require additional data on top of the tangential structure described above for 11-dimensional \(\mathcal{N}=1\) supergravity. See [16, Table 1] and the references listed there.
However, we do not want our perspective on the CHL string to be overly one-sided. There is another way to produce the CHL string by compactifying: consider the circle with its nontrivial principal \(\mathbb{Z}/2\)-bundle \(P\to S^{1}\). Via the map \(\mathbb{Z}/2\hookrightarrow\mathrm{Spin}\times((\mathrm{E}_{8}\times\mathrm{E}_ {8})\rtimes\mathbb{Z}/2)\), this bundle defines a \(\mathrm{Spin}\times((\mathrm{E}_{8}\times\mathrm{E}_{8})\rtimes\mathbb{Z}/2)\)-structure on \(S^{1}\) for which \(\lambda\) and \(c_{1}+c_{2}\) are both trivial, so this structure lifts to define a \(\xi^{\mathrm{het}}\)-structure on \(S^{1}\). We will call the circle with this \(\xi^{\mathrm{het}}\)-structure \(\mathbb{RP}^{1}\), as \(S^{1}\cong\mathbb{RP}^{1}\) as manifolds and the \(\xi^{\mathrm{het}}\)-structure comes from the double cover \(S^{1}\to\mathbb{RP}^{1}\). The CHL string is precisely what one obtains by compactifying the \(\mathrm{E}_{8}^{2}\) heterotic string on \(\mathbb{RP}^{1}\).
We want to determine the tangential structure \(\xi^{\mathrm{CHL}}\) such that the product of \(\mathbb{RP}^{1}\) with a manifold with \(\xi^{\mathrm{CHL}}\)-structure has an induced \(\xi^{\mathrm{het}}\)-structure. In general, keeping track of how the tangential structure changes under compactification can be subtle; for a careful analysis, see Schommer-Pries [17, §9]. But for the CHL string, we can get away with a more ad hoc approach: following Chaudhuri-Polchinski [11] (see also [1, §2.2.1]) we restrict to the case where the principal \(\mathbb{Z}/2\)-bundle on \(\mathbb{RP}^{1}\times M\) obtained by the quotient map (1.44) is the pullback of the Möbius bundle \(S^{1}\to\mathbb{RP}^{1}\) along the projection \(\mathrm{pr}_{1}\colon\mathbb{RP}^{1}\times M\to\mathbb{RP}^{1}\).
**Proposition 1.47**.: _Let \(M\) be a spin manifold and \(P\to M\) be a principal \(\mathrm{E}_{8}\)-bundle. The data of a trivialization \(\mathfrak{s}\) of \(\lambda(M)-2c(P)\) induces a \(\xi^{\mathrm{het}}\)-structure on \(\mathbb{RP}^{1}\times M\) whose associated principal \(\mathbb{Z}/2\)-bundle is the Möbius bundle \(S^{1}\times M\to\mathbb{RP}^{1}\times M\). Moreover, if \(\dim(M)\leq 14\), this assignment is a natural bijection from the set of isomorphism classes of data \((P,\mathfrak{s})\) to the set of \(\xi^{\mathrm{het}}\)-structures on \(\mathbb{RP}^{1}\times M\) whose associated \(\mathbb{Z}/2\)-bundle is \(S^{1}\times M\to\mathbb{RP}^{1}\times M\)._
Proof.: Let \(\pi\colon S^{1}\times M\to M\) be the projection onto the second factor. Given \(P\to M\) and \(\mathfrak{s}\), the pair of \(\mathrm{E}_{8}\)-bundles \((\pi^{\ast}P,\pi^{\ast}P)\to S^{1}\times M\) is exchanged by the deck transformation for \(S^{1}\times M\to\mathbb{RP}^{1}\times M\), and \((c_{1}+c_{2})\) evaluated on the pair \((\pi^{\ast}P,\pi^{\ast}P)\) is \(2c(P)\in H^{4}(\mathbb{RP}^{1}\times M;\mathbb{Z})\). Choosing the string structure on \(\mathbb{RP}^{1}\) induced from the bounding framing, we obtain a canonical
trivialization of \(\lambda(\mathbb{RP}^{1}\times M)-\lambda(M)\in H^{4}(\mathbb{RP}^{1}\times M; \mathbb{Z})\) from the two-out-of-three property of string structures. Putting all of this together, we see that we have data of two \(\mathrm{E}_{8}\)-bundles on \(S^{1}\times M\) exchanged by the deck transformation, and a trivialization of \(\lambda-(c_{1}+c_{2})\) on \(\mathbb{RP}^{1}\times M\), thus defining a \(\xi^{\mathrm{het}}\)-structure as claimed.
To see that this produces all \(\xi^{\mathrm{het}}\)-structures associated with \(S^{1}\times M\to\mathbb{RP}^{1}\times M\), recall from Remark 1.45 that the \(((\mathrm{E}_{8}\times\mathrm{E}_{8})\rtimes\mathbb{Z}/2)\)-bundle associated to a \(\xi^{\mathrm{het}}\)-structure is classified by a characteristic class in twisted cohomology. The assumption that the associated \(\mathbb{Z}/2\)-bundle is \(S^{1}\times M\to\mathbb{RP}^{1}\times M\) implies this class belongs to \(H^{4}(\mathbb{RP}^{1}\times M;\underline{\mathbb{Z}}\oplus\underline{\mathbb{Z}})\), where a generator of \(\pi_{1}(\mathbb{RP}^{1})\) acts on \(\underline{\mathbb{Z}}\oplus\underline{\mathbb{Z}}\) by swapping the two factors, and \(\pi_{1}(M)\) acts trivially. The twisted Künneth formula [10, Theorem 1.7] gives us an isomorphism
\[H^{4}(\mathbb{RP}^{1}\times M;\underline{\mathbb{Z}}\oplus\underline{\mathbb{ Z}})\stackrel{{\cong}}{{\longrightarrow}}H^{4}(M;\mathbb{Z}), \tag{1.48}\]
meaning that the pair of \(\mathrm{E}_{8}\)-bundles on the double cover \(S^{1}\times M\) pulls back from bundles on \(M\), which must be isomorphic in order to be exchanged by the \(\mathbb{Z}/2\)-action.
The Bianchi identity corresponding to this data can therefore be simplified to use a single bundle \(P\to M\) and the class \(c(P)+c(P)\): we obtain
\[\mathrm{d}H=\mathrm{CW}_{2c}(\Theta_{P})-\mathrm{CW}_{\lambda}(\Theta^{\mathrm{ LC}}), \tag{1.49}\]
i.e. the twisted Bianchi identity for \(G=\mathrm{Spin}\times\mathrm{E}_{8}\) and \(\mu=2c-\lambda\). Then, following Definitions 1.38 and 1.41, we make the following definitions.
**Definition 1.50**.: A _differential \(\xi^{\mathrm{CHL}}_{n}\)-structure_ on a manifold \(M\) is the following data:
1. a Riemannian metric and spin structure on \(M\),
2. a principal \(\mathrm{E}_{8}\)-bundle \(P\to M\) with connection \(\Theta\), and
3. a lift of (1.51) \[((B_{\mathrm{Spin}}(M),\Theta^{\mathrm{LC}}),(P,\Theta))\colon M\longrightarrow B _{\nabla}(\mathrm{Spin}_{n}\times\mathrm{E}_{8})\] to a map \(M\to\mathbf{BStr}(\mathrm{Spin}_{n}\times\mathrm{E}_{8},2c-\lambda)\).
What we call \(B\mathbb{G}^{\mathrm{CHL}}_{n}\) coincides with Sati-Schreiber-Stasheff's \(B\)String\({}^{2a}\)[20, (2.18), §2.3.3] and also appears in work of Fiorenza-Sati-Schreiber [21, Remark 4.1.1], though those papers do not discuss its relationship with the CHL string.
**Definition 1.52** (Sati-Schreiber-Stasheff [20, (2.18), §2.3.3]).: Let
\[\mathbb{G}^{\mathrm{CHL}}_{n}\coloneqq\mathcal{S}tr(\mathrm{Spin}_{n}\times \mathrm{E}_{8},2c-\lambda). \tag{1.53}\]
The _CHL tangential structure_ is the tangential structure
\[\xi^{\mathrm{CHL}}_{n}\colon B\mathbb{G}^{\mathrm{CHL}}_{n}\longrightarrow B \mathrm{Spin}_{n}\longrightarrow BO_{n}, \tag{1.54}\]
where the first map comes from the quotient of \(\mathbb{G}^{\mathrm{CHL}}\) by \(\mathbb{T}[1]\), followed by projection onto the \(\mathrm{Spin}_{n}\) factor. Stabilizing in \(n\), we also obtain \(\mathbb{G}^{\mathrm{CHL}}\) and a tangential structure \(\xi^{\mathrm{CHL}}\).
A \(\xi^{\mathrm{CHL}}\)-structure on an \(n\)-manifold \(M\) in particular comes with data of a principal \(\mathbb{G}^{\mathrm{CHL}}_{n}\)-bundle \(\mathcal{P}\to M\), and can be formulated as the data of a principal \(\mathrm{E}_{8}\)-bundle \(P\to M\) and a trivialization of \(\lambda(M)-2c(P)\in H^{4}(M;\mathbb{Z})\).
_Remark 1.55_.: Since a \(\xi^{\mathrm{CHL}}\) structure includes data identifying \(\lambda\) as twice another class, it induces a trivialization of the mod \(2\) reduction of \(\lambda\), which is \(w_{4}\). That is, a \(\xi^{\mathrm{CHL}}\) structure induces
a \(\operatorname{Spin}\langle w_{4}\rangle\) structure, where \(B\!\operatorname{Spin}\langle w_{4}\rangle\) is the homotopy fiber of \(w_{4}\colon B\!\operatorname{Spin}\to K(\mathbb{Z},4)\). This structure has been studied in, e.g. [21, 10, 11] for applications to M-theory.
_Remark 1.56_ (Variation of the tangential structure along the moduli space).: There is a moduli space of CHL string theories, not just one, and the gauge group depends on where in the moduli space one is; this moduli space was first studied by Chaudhuri-Polchinski [13]. At a generic point, the gauge group is broken to \(\mathbb{T}^{8}\), and at various special points the gauge group enhances to \(\mathrm{E}_{8}\) or other nonabelian groups: see [12, Table 3]. We work only at the \(\mathrm{E}_{8}\) point of the moduli space in this paper; it would be interesting to apply the techniques in this paper to other points in the CHL moduli space.
There has been quite a bit of recent research studying the moduli spaces of compactifications of the \(\mathrm{E}_{8}\times\mathrm{E}_{8}\) heterotic string and the CHL string, and investigating which gauge groups can occur [11, 10, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23].
## 2. Bordism computations
Now it is time to compute. We will use the Adams spectral sequence to compute \(\Omega_{*}^{\xi^{\text{het}}}\) and \(\Omega_{*}^{\xi^{\text{CHL}}}\); this is a standard tool in computational homotopy theory that now appears frequently in the mathematical physics literature, and we point the interested reader to Beaudry-Campbell's introductory article [1].
Applications of the Adams spectral sequence to mathematical physics questions tend to follow the same formula. Suppose that we want to compute \(\Omega_{*}^{\xi}\) for some tangential structure \(\xi\).
1. First, express \(\xi\) as a "twisted \(\xi^{\prime}\)-structure," where \(\xi^{\prime}\) is one of SO, Spin, \(\operatorname{Spin}^{c}\), or String: prove that a \(\xi\)-structure on a vector bundle \(E\to M\) is equivalent data to an auxiliary vector bundle \(V\to M\) and a \(\xi^{\prime}\)-structure on \(E\oplus V\); a standard instance is spelled out just after this list. This implies that \(\mathit{MT}\xi\simeq\mathit{MT}\xi^{\prime}\wedge X\) for some Thom spectrum \(X\) that is usually not too complicated.
2. Next, invoke a change-of-rings theorem to greatly simplify the calculation of the \(E_{2}\)-page for \(\xi^{\prime}\)-bordism of spaces or spectra. Then run the Adams spectral sequence, taking advantage of the extra structure afforded by the change-of-rings theorem.
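To give a standard instance of step (1): a \(\operatorname{spin}^{c}\) structure on a vector bundle \(E\to M\) is equivalent data to a complex line bundle \(L\to M\) and a spin structure on \(E\oplus L_{\mathbb{R}}\), where \(L_{\mathbb{R}}\) is the underlying rank-2 real bundle; passing through the Pontrjagin-Thom construction, this yields the equivalence \(\mathit{MTSpin}^{c}\simeq\mathit{MTSpin}\wedge(B\mathbb{T})^{L-2}\), in the Thom spectrum notation also used in Remark 2.16 below.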
This recipe goes back to work of Anderson-Brown-Peterson [1] and Giambalvo [15, 16, 17] computing twisted spin bordism. It is most commonly applied in the case \(\xi^{\prime}=\operatorname{Spin}\), where it has been used to compute bordism groups for tangential structures representing field theories with fermions; \(\xi^{\prime}=\operatorname{String}\) is less common but still appears in physically motivated examples, including the tangential structure of the Sugimoto string [20] and \(\xi=\operatorname{String}^{c}\)[10, 11].
Unfortunately, \(\xi^{\text{het}}\) and \(\xi^{\text{CHL}}\) do not belong to this class of examples: we will see in Lemma 2.2 that there is no way to write these tangential structures as twisted string structures in the sense above.11,12 So we have to do something different.
Footnote 11: The presence of the B-field, and how the Bianchi identity mixes it with the principal \(\operatorname{Spin}_{n}\)-bundle of frames, rules out \(\xi^{\prime}=\operatorname{SO}\), Spin, or \(\operatorname{Spin}^{c}\).
Footnote 12: This problem also happens to the tangential structures studied in [11, 19].
At odd primes, we plow ahead with the unsimplified Adams spectral sequence, though since we only care about dimensions 11 and below the computations are very tractable. At \(p=2\), though, we can modify the above strategy to simplify the computation: in §2.1, we generalize the notion of
"twisted string bordism" for which the change-of-rings trick works to include string covers (in the sense of Example 1.22) of groups of the form \(\operatorname{Spin}\times G\). This applies to both \(\xi^{\operatorname{het}}\) and \(\xi^{\operatorname{CHL}}\), and so we are off to the races.
_Remark 2.1_.: We are far from the first to compute bordism groups for a tangential structure \(\xi\colon B\to B\mathrm{O}\) where \(B\) is the classifying space of a \(2\)-group. For example, \(\Omega_{*}^{\operatorname{String}}\) has been calculated in a range of degrees by [11, 12, 13, 14]; other examples include [10, 1, 16, 17, 18, 19, 20, 21, 22].
### Twists of string bordism
_"Started out with a twist, how did it end up like this?_
_It was only a twist, it was only a twist..."_
Once the tangential structure for a bordism question is known, the next step is typically to prove a "shearing" theorem simplifying the tangential structure. For example, the usual route to computing \(\operatorname{pin}^{-}\) bordism [11, §7] first establishes an isomorphism between \(\operatorname{pin}^{-}\) bordism and the spin bordism of the Thom spectrum \(\Sigma^{-1}\mathit{MO}_{1}\), and then computes the latter groups using something like the Adams or Atiyah-Hirzebruch spectral sequence.
There are a few different approaches to shearing theorems, such as those in [11, 12], but generally they work with Thom spectra of vector bundles; for example, the above simplification of \(\operatorname{pin}^{-}\) bordism begins with the observation that a \(\operatorname{pin}^{-}\) structure on a bundle \(E\to M\) is equivalent data to a real line bundle \(L\to M\) and a spin structure on \(E\oplus L\), which follows from a characteristic class computation, and then passes the data of "\(L\) and a spin structure on \(E\oplus L\)" through the Pontrjagin-Thom theorem.
This approach does not work for the heterotic and CHL tangential structures.
**Lemma 2.2**.: _There is no spin vector bundle \(V\) on \(B((\mathrm{E}_{8}\times\mathrm{E}_{8})\rtimes\mathbb{Z}/2)\) such that \(\lambda(V)=c_{1}+c_{2}\), and there is no spin vector bundle \(W\) on \(B\mathrm{E}_{8}\) such that \(\lambda(W)=2c\)._
This means there is no way to express a \(\xi^{\operatorname{het}}\)-structure as "a \(G\)-bundle \(P\) and a string structure on \(E\oplus V\) for some vector bundle \(V\) associated to \(P\)," and likewise for \(\xi^{\operatorname{CHL}}\).
Proof.: Let \(G\) be a compact, simple, simply connected Lie group and \(\rho\colon G\to\operatorname{SU}_{n}\) be a representation. \(H^{4}(BG;\mathbb{Z})\) and \(H^{4}(B\mathrm{SU}_{n};\mathbb{Z})\) are both canonically isomorphic to \(\mathbb{Z}\), so the pullback map \(\rho^{*}\) on \(H^{4}\) is a map \(\mathbb{Z}\to\mathbb{Z}\), necessarily multiplication by some integer \(\delta(\rho)\). Because \(\operatorname{SU}_{n}\) is compact, connected, and simply connected, the standard inclusion \(\operatorname{SU}_{n}\to\operatorname{GL}_{2n}(\mathbb{R})\) lifts to a map \(\operatorname{SU}_{n}\to\operatorname{Spin}_{2n}\). Choices of this lift are a torsor over \(H^{1}(B\mathrm{SU}_{n};\mathbb{Z}/2)=0\), meaning that the characteristic class \(\lambda\) is uniquely defined for \(\operatorname{SU}_{n}\)-representations. Moreover, \(\lambda\) of the defining representation is a generator of \(H^{4}(B\mathrm{SU}_{n};\mathbb{Z})\); because \(H^{4}(B\mathrm{SU}_{n};\mathbb{Z})\) is torsion-free, it suffices to show \(2\lambda=p_{1}\) is twice a generator, which is standard. The _Dynkin index_ of \(G\) is the minimum value of \(|\delta(\rho)|\) over all such representations \(\rho\). Laszlo-Sorger [10, Proposition 2.6] show that the Dynkin index of \(\mathrm{E}_{8}\) is \(60\), meaning that for any vector bundle \(V\to B\mathrm{E}_{8}\) with \(\operatorname{SU}\)-structure induced from a representation, \(\lambda(V)\) is at least \(60\) times a generator.
We would like to generalize to real representations.
**Lemma 2.3**.: _The complexification map \(\operatorname{Spin}_{n}\to\mathrm{O}_{n}\to\mathrm{U}_{n}\) has image contained in \(\operatorname{SU}_{n}\)._
Proof.: A representation \(\rho\colon G\to\mathrm{U}_{n}\) has image contained in \(\operatorname{SU}_{n}\) if and only if \(c_{1}\) of the complex vector bundle associated to \(\rho\) vanishes. When one pulls back across the complexification
map \(B\mathrm{O}_{n}\to B\mathrm{U}_{n}\), \(c_{1}\) is sent to the image of \(w_{1}\) under the Bockstein map \(\beta\colon H^{1}(B\mathrm{O}_{n};\mathbb{Z}/2)\to H^{2}(B\mathrm{O}_{n}; \mathbb{Z})\); when we pull back further to \(B\mathrm{Spin}_{n}\), \(w_{1}\mapsto 0\), so \(c_{1}=\beta w_{1}\mapsto 0\) too.
Thus the Dynkin index fact we mentioned above applies to complexifications of representations landing in \(\mathrm{Spin}_{n}\).
If \(V\) is a real representation of a group \(G\), \(V\otimes\mathbb{C}\cong V\oplus V\) as real representations, so using the Whitney sum formula for \(\lambda\) (Lemma 1.6), \(\lambda(V\otimes\mathbb{C})=2\lambda(V)\). Therefore if \(V\) is any real spin representation of \(\mathrm{E}_{8}\), \(\lambda(V\otimes\mathbb{C})\) is at least \(60\) times a generator, so \(\lambda(V)\) is at least \(30\) times a generator. Thus the class defining \(\mathbb{G}^{\mathrm{CHL}}\), which is twice a generator, is not \(\lambda\) of any spin representation of \(\mathrm{E}_{8}\); likewise for \(\mathbb{G}^{\mathrm{het}}\), as one could restrict to either factor of \(\mathrm{E}_{8}\) inside \(\mathrm{E}_{8}^{2}\rtimes\mathbb{Z}/2\) and obtain a representation with \(\lambda\) equal to the generator.
Finally, the Atiyah-Segal completion theorem extends this from representations to all vector bundles. Because \(\lambda\) is additive (Lemma 1.6), it factors through the Grothendieck group \(\mathit{KSpin}(BG)\) of spin vector bundles on \(BG\), and similarly, evaluated on spin representations, \(\lambda\) factors through the corresponding Grothendieck group \(\mathit{RSpin}(G)\). Atiyah-Segal [1, §7, §8] show that taking the associated bundle of an arbitrary representation exhibits the Grothendieck ring \(\mathit{KO}^{0}(BG)\) of all vector bundles on \(BG\) as the completion of the representation ring \(\mathit{RO}(G)\) at its augmentation ideal. Thus given a \(\mathbb{Z}\)-valued characteristic class \(c\) of arbitrary vector bundles on \(BG\) which satisfies the Whitney sum formula, passing from representations of \(G\) to vector bundles on \(BG\) does not decrease the minimal value of \(|c|\).
In order to use the Atiyah-Segal theorem, we need to get from spin representations and vector bundles to arbitrary ones. We will do so, at the cost of lowering the minimum value of \(\lambda\) a little bit. For any vector bundle \(V\), \(V^{\oplus 4}\) admits a canonical spin structure: the Whitney sum formula for Stiefel-Whitney classes shows a spin structure exists; then choose a spin structure universally over \(B\mathrm{O}\). Therefore we can define \(\lambda\) of an arbitrary representation of \(\mathrm{E}_{8}\) or vector bundle on \(B\mathrm{E}_{8}\) by \(\lambda(V)\coloneqq\frac{1}{4}\lambda(V^{\oplus 4})\), valued in \(\frac{1}{4}\mathbb{Z}\). Therefore passing from \(\mathit{RO}(\mathrm{E}_{8})\to\mathit{KO}^{0}(B\mathrm{E}_{8})\) to \(\mathit{RSpin}(\mathrm{E}_{8})\to\mathit{KSpin}(B\mathrm{E}_{8})\) divides the minimal value of \(\lambda\) by at most \(4\), and now we can invoke Atiyah-Segal, so it is still not possible to realize \(2c\), proving the claim for \(\xi^{\mathrm{CHL}}\); arguing likewise with \(\mathrm{E}_{8}^{2}\rtimes\mathbb{Z}/2\) in place of \(\mathrm{E}_{8}\) shows that the characteristic class for \(\xi^{\mathrm{het}}\) cannot be achieved either.
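To spell out the two computational steps here: for any real vector bundle \(V\), the Whitney sum formula gives \[w_{1}(V^{\oplus 4})=4w_{1}(V)=0,\qquad w_{2}(V^{\oplus 4})=4w_{2}(V)+6w_{1}(V)^{2}=0\pmod{2},\] so \(V^{\oplus 4}\) is indeed spin; and since any real spin representation of \(\mathrm{E}_{8}\) has \(\lambda\) at least \(30\) times a generator, any real representation \(V\) has \(\lambda(V)=\frac{1}{4}\lambda(V^{\oplus 4})\geq\frac{30}{4}>2\) times a generator, a bound which Atiyah-Segal then transfers to arbitrary vector bundles on \(B\mathrm{E}_{8}\).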
So we take a different approach: we cannot get Thom spectra corresponding to vector bundles, but we can still obtain _MTString_-module Thom spectra. We accomplish this using the theory of Ando-Blumberg-Gepner-Hopkins-Rezk [1, 2] (ABGHR), which we briefly summarize.
The idea behind the ABGHR perspective on Thom spectra is to generalize the notion of local coefficients to generalized cohomology theories. Given a based, connected space \(X\) and a homomorphism \(\rho\colon\pi_{1}(X)\to\mathrm{GL}_{1}(\mathbb{Z})\cong\{\pm 1\}\), one obtains a local coefficient system \(\mathbb{Z}_{\rho}\) on \(X\): this is a bundle on \(X\) with fiber \(\mathbb{Z}\), and whose monodromy around a loop \(\gamma\in\pi_{1}(X)\) is precisely \(\rho(\gamma)\). Given \(\mathbb{Z}_{\rho}\), we can take twisted cohomology groups: if \(\widetilde{X}\to X\) denotes the universal cover, then the cochain complex \(C^{*}(\widetilde{X};\mathbb{Z})\) has a \(\pi_{1}(X)\)-action induced from the \(\pi_{1}(X)\)-action on \(\widetilde{X}\). If \(C^{*}(X;\mathbb{Z}_{\rho})\) denotes the subcomplex of \(C^{*}(\widetilde{X};\mathbb{Z})\) of cochains which transform under this \(\pi_{1}(X)\)-action by \(\rho\), then \(H^{*}(X;\mathbb{Z}_{\rho})\coloneqq H^{*}(C^{*}(X;\mathbb{Z}_{\rho}))\).
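As a standard illustration: take \(X=S^{1}\) and let \(\rho\colon\pi_{1}(S^{1})\cong\mathbb{Z}\to\{\pm 1\}\) be the sign homomorphism. The complex of cochains on the universal cover \(\mathbb{R}\) transforming by \(\rho\) reduces to \(\mathbb{Z}\xrightarrow{\,2\,}\mathbb{Z}\), so \(H^{0}(S^{1};\mathbb{Z}_{\rho})=0\) and \(H^{1}(S^{1};\mathbb{Z}_{\rho})=\mathbb{Z}/2\).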
Another way to say this is that if \(\mathrm{pt}/G\) denotes the category with one object \(*\) and \(\mathrm{Hom}(*,*)=G\), \(\rho\) defines a \(\mathrm{pt}/\pi_{1}(X)\)-shaped diagram of chain complexes of abelian groups:
\[\mathrm{pt}/\pi_{1}(X)\stackrel{{\rho}}{{\longrightarrow}} \mathrm{pt}/\{\pm 1\}\longrightarrow\mathcal{C}h_{\mathbb{Z}}, \tag{2.4}\]
sending pt to \(C^{*}(\widetilde{X};\mathbb{Z})\), and sending \(g\in\pi_{1}(X)\) to the action by \(\rho(g)\). The subcomplex of cochains that transform by \(\rho\) is precisely the limit of this diagram. For functoriality reasons, we envision this complex as cochains on some object \(\mathcal{X}\) which is a _colimit_ of a diagram akin to (2.4).
To summarize, twisted cohomology, i.e. cohomology of the Thom spectrum, is expressed as a colimit of a diagram of chain complexes of \(\mathbb{Z}\)-modules induced from a map \(X\to B\mathrm{Aut}(\mathbb{Z})\). Ando-Blumberg-Gepner-Hopkins-Rezk lift this to spectra. Specifically, given a ring spectrum \(R\), they naturally associate a topological group13 \(\mathrm{GL}_{1}(R)\), thought of as the group of units or group of automorphisms of \(R\). The classifying space \(B\mathrm{GL}_{1}(R)\) carries the universal local system of \(R\)-lines; a local system of \(R\)-lines over \(X\) is equivalent data to a map \(X\to B\mathrm{GL}_{1}(R)\).
Footnote 13: \(\mathrm{GL}_{1}(R)\) is not exactly a topological group, but the homotopy-coherent version thereof: a grouplike \(A_{\infty}\)-space.
**Definition 2.5** (Ando-Blumberg-Gepner-Hopkins-Rezk [1, Definition 2.20]).: The _Thom spectrum_\(Mf\) associated to a map \(f\colon X\to B\mathrm{GL}_{1}(R)\) is the colimit of the diagram \(X\to B\mathrm{GL}_{1}(R)\to\mathcal{M}\mathit{od}_{R}\), where we think of \(X\) as its fundamental \(\infty\)-groupoid.
When \(R=\mathbb{S}\), this is due to Lewis [13, Chapter IX]. In Definition 2.5, we have to consider the fundamental \(\infty\)-groupoid, rather than just \(\pi_{1}\), because spectra are derived objects, so \(R\) can have higher automorphisms.
The Thom spectrum of a map to \(B\mathrm{GL}_{1}(R)\) is an \(R\)-module.
**Example 2.6** (Twisted ordinary cohomology).: It turns out \(B\mathrm{GL}_{1}(H\mathbb{Z})\simeq K(\mathbb{Z}/2,1)\), so the ABGHR viewpoint recovers \(\mathrm{Aut}(\mathbb{Z})\) and the usual notion of cohomology twisted by a local system. To prove this homotopy equivalence, use the homotopy pullback square of \(E_{\infty}\)-spaces [1, Definition 2.1]
\[\begin{array}{ccc}\mathrm{GL}_{1}(H\mathbb{Z})&\xrightarrow{\ \varphi\ }&\{\pm 1\}=(\pi_{0}H\mathbb{Z})^{\times}\\ \downarrow&&\downarrow\\ \Omega^{\infty}H\mathbb{Z}&\xrightarrow{\ \psi\ }&\pi_{0}H\mathbb{Z}=\mathbb{Z}\end{array} \tag{2.7}\]
Here \(\Omega^{\infty}H\mathbb{Z}\simeq\mathbb{Z}\) as \(E_{\infty}\)-spaces, so \(\psi\) is a homotopy equivalence of \(E_{\infty}\)-spaces. Therefore \(\varphi\) is also a homotopy equivalence of \(E_{\infty}\)-spaces, and we conclude \(B\mathrm{GL}_{1}(H\mathbb{Z})\simeq B(\mathbb{Z}/2)=K(\mathbb{Z}/2,1)\).
**Example 2.8** (Thom spectra from vector bundles).: Boardman's original definition of Thom spectra [1, §V.1] associates them to virtual vector bundles \(V\to X\). Let us connect this to the ABGHR definition. Virtual vector bundles are classified by maps \(f_{V}\colon X\to B\mathrm{O}\), and one avatar of the \(J\)-homomorphism [16] is a map \(J\colon\mathrm{O}\to\mathrm{GL}_{1}(\mathbb{S})\)[1, Example 3.15], which deloops to a map of spaces \(BJ\colon B\mathrm{O}\to B\mathrm{GL}_{1}(\mathbb{S})\). A map with this signature is a natural assignment from virtual vector bundles \(V\to X\) to local systems of invertible \(\mathbb{S}\)-modules, and \(BJ\) assigns to \(V\) the local system with fiber \(\mathbb{S}^{V_{x}}\) at each \(x\in X\). Putting these maps together, we have an \(X\)-shaped diagram
\[X\xrightarrow{f_{V}}B\mathrm{O}\xrightarrow{BJ}B\mathrm{GL}_{1}(\mathbb{S}) \longrightarrow\mathcal{S}p, \tag{2.9}\]
and the colimit of this diagram, which is a Thom spectrum in the ABGHR sense, coincides with the Thom spectrum \(X^{V}\) in the usual sense. This is a combination of theorems of Lewis [13, Chapter IX] and Ando-Blumberg-Gepner-Hopkins-Rezk [1, Corollary 3.24].
This approach to Thom spectra plays well with multiplicative structures. If \(R\) is an \(E_{\infty}\)-ring spectrum, then the grouplike \(A_{\infty}\)-structure on \(\operatorname{GL}_{1}(R)\) refines to a grouplike \(E_{\infty}\)-structure, making \(\operatorname{GL}_{1}(R)\) and therefore \(B\!\operatorname{GL}_{1}(R)\) into infinite loop spaces. For \(1\leq k\leq\infty\), if \(X\) is a \(k\)-fold loop space and \(f\colon X\to B\!\operatorname{GL}_{1}(R)\) is a \(k\)-fold loop map, then the Thom spectrum \(Mf\) inherits the structure of an \(E_{k}\)-ring spectrum. This is a theorem of Lewis [13, Theorem IX.7.1] for \(R=\mathbb{S}\) and Ando-Blumberg-Gepner [1, Theorem 1.7] for more general \(R\).
\(B\mathrm{O}\) has an infinite loop space structure coming from direct sum of vector bundles, an addition-like operation on \(B\mathrm{O}\). The \(J\)-homomorphism \(BJ\colon B\mathrm{O}\to B\mathrm{GL}_{1}(\mathbb{S})\) is an infinite loop map, so we get an \(E_{\infty}\)-ring structure on \(\mathit{MT}\xi\) if \(\xi\) is a tangential structure satisfying a _2-out-of-3 property_, i.e. whenever any two of \(E\), \(F\), and \(E\oplus F\) have a \(\xi\)-structure, the third has an induced \(\xi\)-structure. The idea is that the 2-out-of-3 property implies that \(\xi\colon B\to B\mathrm{O}\) is an infinite loop map, so passing to \(B\mathrm{GL}_{1}(\mathbb{S})\) and taking the Thom spectrum, we obtain an \(E_{\infty}\)-ring spectrum. This applies to \(\mathit{MTO}\), \(\mathit{MTSO}\), \(\mathit{MTSpin}^{c}\), \(\mathit{MTSpin}\), and \(\mathit{MTString}\); however, some commonly considered tangential structures appearing in physics do not have this property, including \(\mathrm{Pin}^{\pm}\).
**Proposition 2.10**.: _Let \(B\) and \(X\) be infinite loop spaces and \(\xi\colon B\to B\!\operatorname{O}\) and \(f\colon B\to X\) be infinite loop maps, so that the fiber \(\eta\colon F\to B\) of \(f\) is also a map of infinite loop spaces. This data naturally defines twists of the Thom spectrum \(M(\xi\circ\eta)\) over \(X\), i.e. a map \(X\to B\!\operatorname{GL}_{1}(M(\xi\circ\eta))\)._
Proof.: The fiber of \(\eta\colon F\to B\) is another infinite loop map \(\zeta\colon\Omega X\to F\), so the induced map of Thom spectra (where the maps down to \(B\!\operatorname{O}\) are \(\xi\circ\eta\circ\zeta\) and \(\xi\circ\eta\) respectively) is a map of \(E_{\infty}\)-ring spectra. Because \(\xi\circ\eta\circ\zeta\) is nullhomotopic, its Thom spectrum is a suspension spectrum, so we have a map of \(E_{\infty}\)-ring spectra \(\Sigma_{+}^{\infty}\Omega X\to M(\xi\circ\eta)\).
Ando-Blumberg-Gepner-Hopkins-Rezk [1, (1.4), (1.7)] prove that \(\Sigma_{+}^{\infty}\) and \(\operatorname{GL}_{1}\) are an adjoint pair on the categories of infinite loop spaces and \(E_{\infty}\)-ring spectra. Applying this adjunction, we have a map of infinite loop spaces \(\Omega X\to\operatorname{GL}_{1}(M(\xi\circ\eta))\); deloop to obtain the map in the statement of the proposition.
**Theorem 2.11** (Beardsley [16, Theorem 1]).: _With notation as in Proposition 2.10, the Thom spectrum of the "universal twist" \(X\to B\!\operatorname{GL}_{1}(M(\xi\circ\eta))\) is canonically equivalent to \(M\xi\)._
**Corollary 2.12**.:
1. _There is a map_ \(\widehat{w}_{1}\colon K(\mathbb{Z}/2,1)\to B\!\operatorname{GL}_{1}(M\!TSO)\) _which, after taking the quotient_ \(M\!TSO\to H\mathbb{Z}\)_, passes to the usual homotopy equivalence_ \(K(\mathbb{Z}/2,1)\to B\!\operatorname{GL}_{1}(H\mathbb{Z})\) _from Example_ 2.6_._
2. _There is a map_ \(\widehat{w}_{2}\colon K(\mathbb{Z}/2,2)\to B\!\operatorname{GL}_{1}(M\!TSpin)\) _which, after composing with the Atiyah-Bott-Shapiro map_ \(M\!TSpin\to ko\) [1, 12]_, is the usual map_ \(K(\mathbb{Z}/2,2)\hookrightarrow B\!\operatorname{GL}_{1}(ko)\) [1, 12]_._
3. _There is a map_ \(\widehat{\beta w}_{2}\colon K(\mathbb{Z},3)\to B\!\operatorname{GL}_{1}(M\!TSpin^{c})\) _which, after composing with the Atiyah-Bott-Shapiro map_ \(M\!TSpin^{c}\to ku\) [1, 12]_, is the usual twist of_ \(K\)_-theory by degree-_\(3\) _classes_ \(K(\mathbb{Z},3)\to B\!\operatorname{GL}_{1}(ku)\) [1, 12]_._
4. _There is a map_ \(\widehat{\lambda}\colon K(\mathbb{Z},4)\to B\!\operatorname{GL}_{1}(M\!TString)\) _which, when composed with the Ando-Hopkins-Rezk orientation_ \(M\!TString\to tmf\) [1, 12]_, is the Ando-Blumberg-Gepner map_ \(K(\mathbb{Z},4)\to B\!\operatorname{GL}_{1}(tmf)\) [1, Proposition 8.2]_._
Part (3) is a theorem of Hebestreit-Joachim [12, Appendix C]. The other parts are surely known, though we were unable to find them in the literature.
Proof.: Apply Proposition 2.10 to the four maps
1. \(w_{1}\colon B\mathrm{O}\to K(\mathbb{Z}/2,1)\), whose fiber is \(B\mathrm{SO}\);
2. \(w_{2}\colon B\mathrm{SO}\to K(\mathbb{Z}/2,2)\), whose fiber is \(B\mathrm{Spin}\);
3. \(\beta\circ w_{2}\colon B\mathrm{SO}\to K(\mathbb{Z},3)\), whose fiber is \(B\mathrm{Spin}^{c}\), where \(\beta\colon H^{k}(-;\mathbb{Z}/2)\to H^{k+1}(-;\mathbb{Z})\) is the Bockstein; and
4. \(\lambda\colon B\mathrm{Spin}\to K(\mathbb{Z},4)\), whose fiber is \(B\mathrm{String}\).
All four of these are infinite loop maps, because these characteristic classes are additive in direct sums. For compatibility with preexisting twists, we use the fact that in the \(\mathrm{spin}^{c}\) and string cases, Ando-Blumberg-Gepner [1, §7, §8] construct the desired twists \(K(\mathbb{Z},3)\to B\mathrm{GL}_{1}(ku)\) and \(K(\mathbb{Z},4)\to B\mathrm{GL}_{1}(\mathit{tmf})\) in the same way as we construct the twists of \(\mathit{MTSpin}^{c}\) and \(\mathit{MTString}\), so compatibility follows from functoriality. The cases of \(\mathit{ko}\) and \(H\mathbb{Z}\) are analogous.
The homotopy groups of the Thom spectra of the twists in Corollary 2.12 have bordism interpretations. Looking at \(\widehat{w}_{2}\) for example: a spin structure on an oriented manifold is a trivialization of \(w_{2}(TM)\); given a space \(X\) and a degree-2 cohomology class \(B\), thought of as a map \(f_{B}\colon X\to K(\mathbb{Z}/2,2)\), the homotopy groups of \(\mathit{MT}(\widehat{w}_{2}\circ f_{B})\) are the bordism groups of oriented manifolds \(M\) together with a map \(g\colon M\to X\) and a trivialization of \(w_{2}(TM)+g^{*}B\), as was shown by Hebestreit-Joachim [11, Corollary 3.3.8]. The other three cases are analogous; in particular, we have described the Thom spectra for \(\xi^{\mathrm{het}}\) and \(\xi^{\mathrm{CHL}}\) as \(\mathit{MTString}\)-module Thom spectra.
These kinds of twisted bordism have been studied before: \(\mathrm{spin}^{c}\) structures twisted by a degree-3 cohomology class were first studied by Douglas [10, §5], and they appear implicitly in work of Freed-Witten [13] on anomaly cancellation. Twisted spin and string structures of the sort appearing in Corollary 2.12 were first considered by B.L. Wang [14, Definitions 8.2, 8.4]. See [1, 12, 13, 14, 15, 16, 17, 18, 19] for more examples of twisted generalized cohomology theories from a similar point of view and some applications in physics.
The first case, involving twists of \(\mathit{MTSO}\) by degree-1 \(\mathbb{Z}/2\)-cohomology classes, is the notion of a twisted orientation from the beginning of this section: given a real line bundle \(L\to X\), we ask for data of a map \(g\colon M\to X\) and an orientation on \(TM\oplus g^{*}(L)\). In the ABGHR perspective this says that the map \(\widehat{w}_{1}\) factors through \(B\mathrm{O}_{1}\) as
\[K(\mathbb{Z}/2,1)\stackrel{{\simeq}}{{\to}}B\mathrm{O}_{1} \hookrightarrow B\mathrm{O}\to B\mathrm{GL}_{1}(\mathbb{S})\to B\mathrm{GL}_{1} (\mathit{MTSO}). \tag{2.13}\]
But the others do not factor this way.
_Remark 2.14_.: There is a complex version of (2.13). Let \(\mathcal{W}\) denote _Wall's bordism spectrum_[15], whose homotopy groups are the bordism groups of manifolds with an integral lift of \(w_{1}\). Explicitly, if \(\xi\colon F\to B\mathrm{O}\) is the fiber of \(\beta w_{1}\colon B\mathrm{O}\to K(\mathbb{Z},2)\), then \(\mathcal{W}\coloneqq\mathit{MT}\xi\). Proposition 2.10 then produces a map \(\widehat{\beta w_{1}}\colon K(\mathbb{Z},2)\to B\mathrm{GL}_{1}(\mathcal{W})\), but degree-2 cohomology classes are equivalent to complex line bundles, and \(\widehat{\beta w_{1}}\) factors as
\[K(\mathbb{Z},2)\stackrel{{\simeq}}{{\to}}B\mathbb{T}\to B \mathrm{O}_{2}\to B\mathrm{O}\to B\mathrm{GL}_{1}(\mathbb{S})\to B\mathrm{GL}_{ 1}(\mathcal{W}). \tag{2.15}\]
_Remark 2.16_.: One consequence of the fact that \(\widehat{w}_{1}\) (resp. \(\widehat{\beta w}_{1}\)) factors as in (2.13) (resp. (2.15)), i.e. as a twist associated to a real (resp. complex) line bundle \(L\to X\) is that the associated \(\mathit{MTSO}\)-module (resp. \(\mathcal{W}\)-module) Thom spectrum splits as \(\mathit{MTSO}\wedge X^{L-1}\) (resp. \(\mathcal{W}\wedge X^{L-2}\)). Working universally over \(B\mathrm{O}_{1}\) and \(B\mathbb{T}\), Theorem 2.11 gives us homotopy equivalences \(\mathit{MTSO}\wedge(B\mathrm{O}_{1})^{L-1}\simeq\mathit{MTO}\) and \(\mathcal{W}\wedge(B\mathbb{T})^{L-2}\simeq\mathit{MTO}\); the former is a theorem of Atiyah [11, Proposition 4.1].
We will apply Corollary 2.12 to the degree-4 characteristic classes that the Bianchi identity dictates for the heterotic and CHL tangential structures. Given a space \(X\) with a class \(\mu\in H^{4}(X;\mathbb{Z})\), let \(\mathcal{B}(X)\) denote the homotopy fiber of \(\lambda+\mu\colon B\mathrm{Spin}\times X\to K(\mathbb{Z},4)\), and let \(\xi^{\mu}\) denote the tangential structure
\[\xi^{\mu}\colon\mathcal{B}(X)\longrightarrow B\mathrm{Spin}\times X \longrightarrow B\mathrm{O}. \tag{2.17}\]
\(\mathit{MT}\xi^{\mu}\) is equivalent to the _MTString_-module Thom spectrum associated to the twist \(\widehat{\lambda}\circ\mu\colon X\to B\mathrm{GL}_{1}(\mathit{MTString})\). If \(X=BG\) for a Lie group \(G\), \(\mathcal{B}(X)\) is the classifying space of the string \(2\)-group \(\mathcal{S}(\mathrm{Spin}\times G,\lambda+\mu)\). Let \(\mathcal{A}\) denote the \(2\)-primary Steenrod algebra and for \(n\geq 0\), let \(\mathcal{A}(n)\) denote the subalgebra of \(\mathcal{A}\) generated by \(\mathrm{Sq}^{1},\dots,\mathrm{Sq}^{2^{n}}\). In joint work with Matthew Yu, to appear [DY], we compute the \(\mathcal{A}\)-module structure on \(H^{*}(\mathit{MT}\xi^{\mu};\mathbb{Z}/2)\).
**Definition 2.18**.: Let \(R\) denote the \(\mathbb{Z}/2\)-algebra \(\mathcal{A}(1)[S]\), i.e. the algebra with generators \(\mathrm{Sq}^{1}\), \(\mathrm{Sq}^{2}\), and \(S\), and with Adem relations for \(\mathrm{Sq}^{1}\) and \(\mathrm{Sq}^{2}\). Given \(X\) and \(\mu\) as above, define the \(\mathcal{A}(1)\)-module \(T(X,\mu)\coloneqq H^{*}(X;\mathbb{Z}/2)\), and give \(T(X,\mu)\) an \(R\)-module structure by defining
\[S(x)\coloneqq\mu x+\mathrm{Sq}^{4}(x). \tag{2.19}\]
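To unwind (2.19) in the lowest degrees, write \(\mu\) also for the mod \(2\) reduction of \(\mu\in H^{4}(X;\mathbb{Z})\): then \(S(1)=\mu\cdot 1+\mathrm{Sq}^{4}(1)=\mu\), while \(S(\mu)=\mu^{2}+\mathrm{Sq}^{4}(\mu)=\mu^{2}+\mu^{2}=0\), since \(\mathrm{Sq}^{4}\) squares a degree-\(4\) class. In particular, \(S\) raises degrees by \(4\), as \(\mathrm{Sq}^{4}\) should.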
We want to think of \(S\) as \(\mathrm{Sq}^{4}\) and \(T(X,\mu)\) as an \(\mathcal{A}(2)\)-module, but a priori it is not clear that this \(S\)-action satisfies the Adem relations.
**Theorem 2.20** ([DY]).:
1. _The_ \(R\)_-module structure on_ \(T(X,\mu)\) _satisfies the Adem relations for_ \(\mathrm{Sq}^{1}\)_,_ \(\mathrm{Sq}^{2}\)_, and_ \(\mathrm{Sq}^{4}=S\)_, hence induces an_ \(\mathcal{A}(2)\)_-module structure on_ \(T(X,\mu)\)_._
2. _There is a map of_ \(\mathcal{A}\)_-modules_ \[H^{*}(\mathit{MT}\xi^{\mu};\mathbb{Z}/2)\longrightarrow\mathcal{A}\otimes_{\mathcal{A}(2)}T(X,\mu), \tag{2.21}\] _natural in the data_ \((X,\mu)\)_, which is an isomorphism in degrees_ \(15\) _and below._
As [DY] is not yet available, we describe a proof of this theorem in Remark 2.26.
**Corollary 2.22**.: _For \(t-s\leq 15\), the \(E_{2}\)-page of the Adams spectral sequence computing \(2\)-completed \(\xi^{\mu}\)-bordism is_
\[E_{2}^{s,t}=\mathrm{Ext}_{\mathcal{A}(2)}^{s,t}(T(X,\mu),\mathbb{Z}/2). \tag{2.23}\]
As \(\mathcal{A}(2)\) is much smaller than \(\mathcal{A}\), this is much easier to work with.
Proof.: This follows from the change-of-rings formula: if \(\mathcal{B}\) is a graded Hopf algebra, \(\mathcal{C}\) is a graded Hopf subalgebra of \(\mathcal{B}\), \(M\) is a graded \(\mathcal{C}\)-module, and \(N\) is a graded \(\mathcal{B}\)-module, then there is a natural isomorphism
\[\mathrm{Ext}_{\mathcal{B}}^{s,t}(\mathcal{B}\otimes_{\mathcal{C}}M,N) \stackrel{{\cong}}{{\longrightarrow}}\mathrm{Ext}_{\mathcal{C}}^ {s,t}(M,N). \tag{2.24}\]
One can think of this as the derived version of the perhaps more familiar isomorphism
\[\mathrm{Hom}_{\mathcal{B}}(\mathcal{B}\otimes_{\mathcal{C}}M,N)\stackrel{{ \cong}}{{\longrightarrow}}\mathrm{Hom}_{\mathcal{C}}(M,N). \tag{2.25}\]
In our example, \(\mathcal{B}\) is the Steenrod algebra, which is a Hopf algebra, and \(\mathcal{C}\) is \(\mathcal{A}(2)\), which is indeed a Hopf subalgebra of \(\mathcal{A}\), so we can invoke (2.24) and conclude.
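As a consistency check on (2.23): taking \(X=\mathrm{pt}\) and \(\mu=0\), so that \(\mathcal{B}(\mathrm{pt})=B\mathrm{String}\) and \(T(\mathrm{pt},0)=\mathbb{Z}/2\), the corollary reads \(E_{2}^{s,t}=\mathrm{Ext}_{\mathcal{A}(2)}^{s,t}(\mathbb{Z}/2,\mathbb{Z}/2)\), the usual \(E_{2}\)-page for \(2\)-completed string bordism in this range of degrees, consistent with the isomorphism \(H^{*}(\mathit{MTString};\mathbb{Z}/2)\to\mathcal{A}\otimes_{\mathcal{A}(2)}\mathbb{Z}/2\) in degrees \(15\) and below quoted in Remark 2.26.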
We will use this simplification in the cases \(\xi^{\mu}=\xi^{\mathrm{het}},\xi^{\mathrm{CHL}}\) to run the Adams spectral sequences computing \(\Omega_{*}^{\xi^{\mathrm{het}}}\) and \(\Omega_{*}^{\xi^{\mathrm{CHL}}}\) at \(p=2\).
_Remark 2.26_ (Proof sketch of Theorem 2.20).: To prove (1), check the Adem relations for \(\mathcal{A}(2)\) directly. The first step in proving part (2) is to establish a Thom isomorphism for mod \(2\) cohomology. We make use of the _Thom diagonal_, a map of _MTString_-modules
\[MT\xi^{\mu}\xrightarrow{\Delta^{*}}MT\xi^{\mu}\wedge_{\text{{MTString}}}\text{{MTString}}\wedge\Sigma^{\infty}_{+}X \tag{2.27}\]
defined as follows: the diagonal map \(\Delta\colon X\to X\times X\) is a map of spaces over \(B\mathrm{GL}_{1}(\text{{MTString}})\) if we give \(X\) the map \(\widehat{\lambda}\circ\mu\) to \(B\mathrm{GL}_{1}(\text{{MTString}})\) and we give \(X\times X\) the map \((\widehat{\lambda}\circ\mu,*)\). Applying the _MTString_-module Thom spectrum functor to \(\Delta\) produces (2.27). Smash (2.27) with \(H\mathbb{Z}/2\). The result is the Thom diagonal for a twist of \(H\mathbb{Z}/2\), but all such twists are trivializable (i.e. all local systems of \(H\mathbb{Z}/2\)-lines are trivializable). Therefore by [1, Proposition 3.26] the following composition is an equivalence:
\[MT\xi^{\mu}\wedge H\mathbb{Z}/2\xrightarrow{\Delta^{*}}MT\xi^{\mu}\wedge \Sigma^{\infty}_{+}X\wedge H\mathbb{Z}/2\longrightarrow\text{{MTString}} \wedge\Sigma^{\infty}_{+}X\wedge H\mathbb{Z}/2, \tag{2.28}\]
which is the \(\mathbb{Z}/2\)-homology Thom isomorphism. The analogous fact is true for mod \(2\) cohomology.
The Thom diagonal makes \(H^{*}(MT\xi^{\mu};\mathbb{Z}/2)\) into a free, rank-\(1\) module over \(H^{*}(\mathcal{B}(X);\mathbb{Z}/2)\), generated by the Thom class \(U\). As the Thom diagonal is a map of spectra, we may use the Cartan formula to compute the Steenrod squares of an arbitrary element of \(H^{*}(MT\xi^{\mu};\mathbb{Z}/2)\) in terms of Steenrod squares in \(\mathcal{B}(X)\) and \(\mathrm{Sq}(U)\). As both \(\mathrm{Sq}(U)\) and our desired isomorphism in (2.21) are natural in \(X\) and \(\mu\), it suffices to understand the universal case, where \(X=K(\mathbb{Z},4)\) and \(\mu\) is the tautological class \(\tau\in H^{4}(K(\mathbb{Z},4);\mathbb{Z})\). In this case, Theorem 2.11 implies \(MT\xi^{\mu}\simeq\text{{MTSpin}}\). By work of Anderson-Brown-Peterson [1], if \(J\) is the \(\mathcal{A}(1)\)-module \(\mathcal{A}(1)/\mathrm{Sq}^{3}\) and \(M\) is the \(\mathcal{A}(1)\)-module \(\mathbb{Z}/2\oplus\Sigma^{8}\mathbb{Z}/2\oplus\Sigma^{10}J\), then there is a map of \(\mathcal{A}\)-modules
\[H^{*}(\text{{MTSpin}};\mathbb{Z}/2)\longrightarrow\mathcal{A}\otimes_{ \mathcal{A}(1)}M \tag{2.29}\]
which is an isomorphism in degrees \(15\) and below. And Giambalvo [11, Corollary 2.3] shows that there is a map \(H^{*}(\text{{MTString}};\mathbb{Z}/2)\to\mathcal{A}\otimes_{\mathcal{A}(2)} \mathbb{Z}/2\) which is also an isomorphism in degrees \(15\) and below. Therefore by the change-of-rings theorem (2.24) it suffices to exhibit a map of \(\mathcal{A}(2)\)-modules
\[T(K(\mathbb{Z},4),\tau)\longrightarrow\mathcal{A}(2)\otimes_{\mathcal{A}(1)}M \tag{2.30}\]
which is an isomorphism in degrees \(15\) and below. This can be verified directly, using as input the \(\mathcal{A}(2)\)-module structure on \(H^{*}(K(\mathbb{Z},4);\mathbb{Z}/2)\) calculated by Serre [10, SS10].
### \(\xi^{\mathrm{het}}\) bordism at \(p=2\)
In this section we will first compute \(H^{*}(BG;\mathbb{Z}/2)\) as an \(\mathcal{A}(2)\)-module in low degrees, where \(G\coloneqq\mathrm{E}^{2}_{8}\rtimes\mathbb{Z}/2\); then, using Corollary 2.22, we run the Adams spectral sequence computing \(2\)-completed \(\xi^{\mathrm{het}}\) bordism in degrees \(11\) and below.
First, though, we reformulate the problem slightly. Consider the tangential structure \(\xi^{\mathrm{het}^{\prime}}\colon B^{\mathrm{het}^{\prime}}\to B\mathrm{O}\) defined in the same manner as \(\xi^{\mathrm{het}}\), but with \(K(\mathbb{Z},4)\) replacing \(B\mathrm{E}_{8}\). In a little more detail, \(\mathbb{Z}/2\) acts on \(K(\mathbb{Z},4)\times K(\mathbb{Z},4)\) by swapping the two factors; taking the Borel construction
\[B\coloneqq(K(\mathbb{Z},4)\times K(\mathbb{Z},4))\times_{\mathbb{Z}/2}E \mathbb{Z}/2 \tag{2.31}\]
produces a fiber bundle
\[K(\mathbb{Z},4)\times K(\mathbb{Z},4)\longrightarrow B\longrightarrow B \mathbb{Z}/2. \tag{2.32}\]
For \(i=1,2\), let \(c_{i}\in H^{4}(K(\mathbb{Z},4)\times K(\mathbb{Z},4);\mathbb{Z})\) be the tautological class for the \(i^{\mathrm{th}}\)\(K(\mathbb{Z},4)\) factor. The class \(c_{1}+c_{2}\) is invariant under the \(\mathbb{Z}/2\)-action, so we can follow it through the Serre spectral
sequence to learn that it defines a nonzero class \(c_{1}+c_{2}\in H^{4}(B;\mathbb{Z})\). Define \(f\colon B^{\operatorname{het}^{\prime}}\to B\text{Spin}\times B\) to be the fiber of \(\lambda-(c_{1}+c_{2})\colon B\text{Spin}\times B\to K(\mathbb{Z},4)\); then the tangential structure \(\xi^{\operatorname{het}^{\prime}}\) is the composition
\[\xi^{\operatorname{het}^{\prime}}\colon B^{\operatorname{het}^{\prime}}\xrightarrow{\ f\ }B\text{Spin}\times B\longrightarrow B\text{Spin}\longrightarrow B\text{O}. \tag{2.33}\]
That is, a \(\xi^{\operatorname{het}^{\prime}}\) structure on a manifold \(M\) is a spin structure, a principal \(\mathbb{Z}/2\)-bundle \(P\to M\), two classes \(c_{1},c_{2}\in H^{4}(P;\mathbb{Z})\) which are exchanged under the deck transformation, and a trivialization of \(\lambda(M)-(c_{1}+c_{2})\) (where the latter class is descended to \(M\)). This is the same data as a \(\xi^{\operatorname{het}}\) structure, except that we do not ask for \(c_{1}\) or \(c_{2}\) to come from principal \(\text{E}_{8}\)-bundles; therefore there is a map of tangential structures \(\widetilde{c}\colon\xi^{\operatorname{het}}\to\xi^{\operatorname{het}^{ \prime}}\), i.e. a map of spaces \(B\mathbb{G}^{\operatorname{het}}\to B^{\operatorname{het}^{\prime}}\) commuting with the maps down to \(B\text{O}\). Like for \(\xi^{\operatorname{het}}\), a \(\xi^{\operatorname{het}^{\prime}}\)-structure is a twisted string structure in the sense of Corollary 2.12, via the class \(\lambda-(c_{1}+c_{2})\colon B\to K(\mathbb{Z},4)\).
Bott-Samelson [1, Theorems IV, V(e)] showed that the characteristic class \(c\in H^{4}(B\text{E}_{8};\mathbb{Z})\) we defined in Definition 1.4, interpreted as a map \(c\colon B\text{E}_{8}\to K(\mathbb{Z},4)\), is \(15\)-connected. This means that the homomorphism \(\widetilde{c}\) induces on bordism groups, \(\widetilde{c}\colon\Omega_{k}^{\xi^{\operatorname{het}}}\to\Omega_{k}^{\xi^ {\operatorname{het}^{\prime}}}\), is an isomorphism in degrees \(14\) and below. For our string-theoretic purposes, we only care about \(k\leq 12\), so we may as well compute \(\xi^{\operatorname{het}^{\prime}}\)-bordism. In the rest of this subsection, we often blur the distinction between \(\xi^{\operatorname{het}}\) and \(\xi^{\operatorname{het}^{\prime}}\); we will point out where it matters which one we are looking at.
_Remark 2.34_.: Turning off the \(\mathbb{Z}/2\) symmetry switching the two \(\text{E}_{8}\) factors, i.e. passing to a \(\xi^{r,\operatorname{het}}\)-structure as in Remark 1.46, simplifies this story considerably: the bordism groups were known decades ago. Specifically, replace \(B\text{E}_{8}\) with \(K(\mathbb{Z},4)\) in the definition of \(\xi^{r,\operatorname{het}}\) to define a tangential structure \(\xi^{r,\operatorname{het}^{\prime}}\), which on a manifold \(M\) consists of a spin structure on \(M\), two classes \(c_{1},c_{2}\in H^{4}(M;\mathbb{Z})\), and a trivialization of \(\lambda(M)-c_{1}-c_{2}\). As Witten [20, §4] noticed, this data is equivalent to a spin structure and the single class \(c_{1}\), which may be freely chosen; then \(c_{2}\) must be \(\lambda(M)-c_{1}\). Therefore \(\xi^{r,\operatorname{het}^{\prime}}\) is simply the tangential structure \(B\text{Spin}\times K(\mathbb{Z},4)\to B\text{O}\), and just as for \(\xi^{\operatorname{het}}\), the map \(\text{MT}\xi^{r,\operatorname{het}}\to\text{MT}\xi^{r,\operatorname{het}^{\prime}}\simeq\text{MT}\text{Spin}\wedge K(\mathbb{Z},4)_{+}\) is an isomorphism on homotopy groups in degrees \(14\) and below. Stong [19] computes \(\Omega_{*}^{\text{Spin}}(K(\mathbb{Z},4))\) in degrees \(12\) and below.
As we discussed in §1.2, the data of a trivial principal \(\mathbb{Z}/2\)-bundle on a manifold \(M\) and two principal \(\text{E}_{8}\)-bundles \(P,Q\to M\) define a principal \(\text{E}_{8}^{2}\rtimes\mathbb{Z}/2\)-bundle on \(M\) with \(c_{1}+c_{2}\) equal to \(c(P)+c(Q)\); data trivializing \(c(P)+c(Q)-\lambda(M)\) therefore defines a \(\xi^{\operatorname{het}}\) structure. Analogously, the trivial \(\mathbb{Z}/2\)-bundle and a pair \(c_{1},c_{2}\in H^{4}(M;\mathbb{Z})\) with a trivialization of \(c_{1}+c_{2}-\lambda\) define a \(\xi^{\operatorname{het}^{\prime}}\) structure.
**Lemma 2.35**.: _A spin manifold \(M\) has a canonical \(\xi^{\operatorname{het}^{\prime}}\) structure specified as above by the trivial principal \(\mathbb{Z}/2\)-bundle, the cohomology classes \(c_{1}=\lambda\) and \(c_{2}=0\), and the canonical trivialization of \(\lambda-\lambda=0\in H^{4}(M;\mathbb{Z})\)._
This defines a map of tangential structures and therefore a map of Thom spectra \(s_{1}\colon\text{MTSpin}\to\text{MT}\xi^{\operatorname{het}^{\prime}}\). A \(\xi^{\operatorname{het}^{\prime}}\)-structure includes data of a spin structure; forgetting the rest of the \(\xi^{\operatorname{het}^{\prime}}\)-structure defines a map \(s_{2}\colon\text{MT}\xi^{\operatorname{het}^{\prime}}\to\text{MTSpin}\). The composition \(s_{2}\circ s_{1}\) is homotopic to the identity, because the underlying spin structure of the \(\xi^{\operatorname{het}^{\prime}}\) manifold built in Lemma 2.35 is the same spin structure we began with.
**Corollary 2.36**.: _There is a spectrum \(\mathcal{Q}\) and a splitting_
\[(s_{2},q)\colon\text{MT}\xi^{\text{het}^{\prime}}\stackrel{{\simeq }}{{\longrightarrow}}\text{MTSpin}\vee\mathcal{Q}. \tag{2.37}\]
We will use this later to reduce the amount of spectral sequence computations we have to make.
Both Lemma 2.35 and Corollary 2.36 require us to use \(\xi^{\text{het}^{\prime}}\) and not \(\xi^{\text{het}}\), though of course the consequence on low-degree bordism groups is true for both.
When \(K\) is a finite group, Nakaoka [13, Theorem 3.3] proved that there is a ring isomorphism from the mod \(2\) cohomology of \(B(\mathbb{Z}/2\ltimes(K\times K))\) to the \(E_{2}\)-page of the Serre spectral sequence
\[E_{2}^{p,q}=H^{p}(B\mathbb{Z}/2;\underline{H^{q}(BK\times BK;\mathbb{Z}/2)}) \Longrightarrow H^{p+q}(B(\mathbb{Z}/2\ltimes(K\times K));\mathbb{Z}/2). \tag{2.38}\]
Here the underline denotes the local coefficient system arising from the \(\mathbb{Z}/2\)-action on \(BK\times BK\) by switching the two factors. Since this local coefficient system can be nontrivial, one has to be careful defining the multiplicative structure on the \(E_{2}\)-page of (2.38), but here it can be made explicit. As a \(\mathbb{Z}/2[\mathbb{Z}/2]\)-module, \(\underline{H^{*}(BK\times BK;\mathbb{Z}/2)}\) is a direct sum of:
* the subalgebra \(\mathcal{H}_{1}\) spanned by the classes of the form \(x\otimes x\) for \(x\in H^{*}(BK;\mathbb{Z}/2)\), each of which is fixed by \(\mathbb{Z}/2\); and
* the submodule \(\mathcal{H}_{2}\) spanned by classes of the form \(x\otimes y\) where \(x\) and \(y\) are linearly independent.
Since \(\mathbb{Z}/2\) acts trivially on \(\mathcal{H}_{1}\) and \(\mathcal{H}_{1}\) is a ring, \(H^{*}(B\mathbb{Z}/2;\mathcal{H}_{1})\) has a ring structure. And as a \(\mathbb{Z}/2[\mathbb{Z}/2]\)-module, \(\mathcal{H}_{2}\) is of the form \(M\oplus M\) where \(\mathbb{Z}/2\) acts by swapping the two factors, so \(H^{*}(B\mathbb{Z}/2;\mathcal{H}_{2})\) vanishes in positive degrees.14 In degree zero, we obtain invariants, spanned by elements of the form \(x\otimes y+y\otimes x\), with \(x,y\in H^{*}(BK;\mathbb{Z}/2)\). \(\mathcal{H}_{1}\oplus(\mathcal{H}_{2})^{\mathbb{Z}/2}=E_{2}^{0,\bullet}\) is a subalgebra of \(H^{*}(BK\times BK;\mathbb{Z}/2)\).
Footnote 14: To see this, first observe that mod \(2\) group cohomology for \(G\) is additive in the \(\mathbb{Z}/2[G]\)-module of coefficients, so it suffices to prove that \(H^{*}(B\mathbb{Z}/2;M\oplus M)\) vanishes in positive degrees when \(M=\mathbb{Z}/2\). But \(\mathbb{Z}/2\oplus\mathbb{Z}/2\) is isomorphic to \(\mathbb{Z}/2[\mathbb{Z}/2]\) as \(\mathbb{Z}/2[\mathbb{Z}/2]\)-modules (i.e. as vector spaces with \(\mathbb{Z}/2\)-representations, \(\mathbb{Z}/2\oplus\mathbb{Z}/2\) is isomorphic to the vector space of functions on the group \(\mathbb{Z}/2\)), and group cohomology valued in the group ring is trivial, e.g. because the group ring is its own free resolution.
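To illustrate the decomposition in the simplest case (not the one we need): for \(K=\mathbb{Z}/2\), \(H^{*}(BK\times BK;\mathbb{Z}/2)\cong\mathbb{Z}/2[t_{1},t_{2}]\) with \(|t_{i}|=1\); \(\mathcal{H}_{1}\) is spanned by the powers \((t_{1}t_{2})^{i}\), and \(\mathcal{H}_{2}\) by the monomials \(t_{1}^{i}t_{2}^{j}\) with \(i\neq j\), which the swap exchanges in pairs \(t_{1}^{i}t_{2}^{j}\leftrightarrow t_{1}^{j}t_{2}^{i}\).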
So far we have specified ring structures on \(H^{*}(B\mathbb{Z}/2;\mathcal{H}_{1})\supsetneq E_{2}^{>0,\bullet}\) and \(\mathcal{H}_{1}\oplus(\mathcal{H}_{2})^{\mathbb{Z}/2}=E_{2}^{0,\bullet}\), and these ring structures agree where they overlap. Therefore to specify a ring structure on the entirety of the \(E_{2}\)-page, it suffices to write down the product of an element in \((\mathcal{H}_{2})^{\mathbb{Z}/2}\) and an element in positive \(p\)-degree. We say that all such products vanish; this is the ring structure that appears in Nakaoka's theorem.
Of course, \(\mathrm{E}_{8}\) is not a finite group. Nakaoka's theorem is true in quite great generality [11, 13, 14]; the version we need is proven by Evens [11], who proves the same ring isomorphism when \(K\) is a compact Lie group. Thus this applies to \(\xi^{\text{het}}\), and not necessarily to \(\xi^{\text{het}^{\prime}}\), but since their cohomology rings are isomorphic in degrees \(14\) and below, it does not matter which one we use in this calculation.
Now we make this ring structure and \(\mathcal{A}(2)\)-module structure explicit. Since \(c\colon B\mathrm{E}_{8}\to K(\mathbb{Z},4)\) is \(15\)-connected, it induces an isomorphism in cohomology in degrees \(14\) and below, so we can use the cohomology of \(K(\mathbb{Z},4)\) as a stand-in for the cohomology of \(B\mathrm{E}_{8}\). Serre [11, §10] computed the mod \(2\) cohomology of \(K(\mathbb{Z},4)\). It is an infinitely generated polynomial algebra; in degrees \(12\) and below the generators are: the tautological class \(D\in H^{4}(K(\mathbb{Z},4);\mathbb{Z}/2)\), \(F\coloneqq\mathrm{Sq}^{2}D\), \(G\coloneqq\mathrm{Sq}^{3}D\), \(J\coloneqq\mathrm{Sq}^{4}F\), and \(K\coloneqq\mathrm{Sq}^{5}F\).
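For the degree bookkeeping below, recall that \(\mathrm{Sq}^{i}\) raises cohomological degree by \(i\), so
\[|D|=4,\qquad|F|=6,\qquad|G|=7,\qquad|J|=10,\qquad|K|=11.\]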
If \(C\) is one of \(D\), \(F\), \(G\), \(J\), or \(K\), we let \(C_{1}\) denote the class coming from the first copy of \(B\mathrm{E}_{8}\) and \(C_{2}\) denote the class coming from the second copy. Thus we have the following additive basis for the low-degree cohomology of \(BG\):
1. In \(\mathcal{H}_{1}\), \(D_{1}D_{2}x^{k}\) and \(F_{1}F_{2}x^{k}\) for \(k\geq 0\).
2. In \((\mathcal{H}_{2})^{\mathbb{Z}/2}\), \(D_{1}+D_{2}\), \(F_{1}+F_{2}\), \(G_{1}+G_{2}\), \(D_{1}^{2}+D_{2}^{2}\), \(J_{1}+J_{2}\), \(D_{1}F_{1}+D_{2}F_{2}\), \(D_{1}F_{2}+D_{2}F_{1}\), \(D_{1}G_{1}+D_{2}G_{2}\), \(D_{1}G_{2}+D_{2}G_{1}\), \(K_{1}+K_{2}\), \(F_{1}^{2}+F_{2}^{2}\), \(D_{1}^{3}+D_{2}^{3}\), and \(D_{1}^{2}D_{2}+D_{1}D_{2}^{2}\).
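For later reference, the degrees of these basis elements (computed from the degrees of \(D\), \(F\), \(G\), \(J\), \(K\) recorded above, together with \(|x|=1\)) are
\[|D_{1}D_{2}x^{k}|=8+k,\qquad|F_{1}F_{2}x^{k}|=12+k,\]
and the listed elements of \((\mathcal{H}_{2})^{\mathbb{Z}/2}\) have degrees \(4\), \(6\), \(7\), \(8\), \(10\), \(10\), \(10\), \(11\), \(11\), \(11\), \(12\), \(12\), and \(12\), respectively.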
Next, we determine the \(\mathcal{A}(2)\)-module structure using a theorem of Quillen.
**Theorem 2.39** (Quillen's detection theorem [15, Proposition 3.1]).: _Let \(X\) be a space and let \(\mathbb{Z}/k\) act on \(X^{k}\) by cyclic permutations. Let \(Y\coloneqq E\mathbb{Z}/k\times_{\mathbb{Z}/k}X^{k}\), which is a fiber bundle over \(B\mathbb{Z}/k\) with fiber \(X^{k}\). Let \(i_{1}\colon X^{k}\to Y\) be inclusion of the fiber at the basepoint and \(i_{2}\colon B\mathbb{Z}/k\times X\to Y\) be induced by the diagonal map; then_
\[(i_{1}^{*},i_{2}^{*})\colon H^{*}(Y;\mathbb{Z}/k)\longrightarrow H^{*}(X^{k} ;\mathbb{Z}/k)\oplus H^{*}(B\mathbb{Z}/k\times X;\mathbb{Z}/k) \tag{2.40}\]
_is injective._
For us, \(k=2\), \(X=B\mathrm{E}_{8}\), and \(Y=BG\). Thus, to compute Steenrod squares for classes in \(H^{*}(BG;\mathbb{Z}/2)\), we can assume we are in \(B\mathrm{E}_{8}^{2}\) if the class is in \((\mathcal{H}_{2})^{\mathbb{Z}/2}\); for \(\mathcal{H}_{1}\), we also need to know \(\mathrm{Sq}(x)\), and \(i_{2}^{*}\) tells us \(\mathrm{Sq}(x)=x+x^{2}\). Thus we can compute the \(\mathcal{A}(2)\)-module structure on \(H^{*}(BG;\mathbb{Z}/2)\), hence also on \(T(-(c_{1}+c_{2}))\); we focus on the latter. Like most calculations of this form, it is a little tedious but straightforward, and can be done by hand in a reasonable length of time.
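As a sample of how these computations go (an instance of the Cartan formula, using \(\mathrm{Sq}^{1}D=0\) since \(D\) is the reduction of an integral class, and \(\mathrm{Sq}(x)=x+x^{2}\)):
\[\mathrm{Sq}^{2}(D_{1}D_{2})=\mathrm{Sq}^{2}D_{1}\cdot D_{2}+\mathrm{Sq}^{1}D_{1}\cdot\mathrm{Sq}^{1}D_{2}+D_{1}\cdot\mathrm{Sq}^{2}D_{2}=F_{1}D_{2}+D_{1}F_{2},\qquad\mathrm{Sq}^{1}(D_{1}D_{2}x)=D_{1}D_{2}x^{2}.\]
After working through the full calculation, we have learned the following.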
**Proposition 2.41**.: _Let \(\mathcal{M}\) be the quotient of \(T(-(c_{1}+c_{2}))\) by all elements in degrees \(14\) and higher. Then \(\mathcal{M}\) is the direct sum of the following submodules._
1. \(M_{1}\)_, the summand containing the Thom class_ \(U\)_._
2. \(M_{2}\coloneqq\widetilde{H}^{*}(\mathbb{RP}^{\infty};\mathbb{Z}/2)\) _modulo those elements in degrees_ \(13\) _and above._
3. \(M_{3}\)_, the summand containing_ \(U(D_{1}^{2}+D_{2}^{2})\)_._
4. \(M_{4}\)_, the summand containing_ \(UD_{1}D_{2}\)_._
5. \(M_{5}\)_, the summand containing_ \(UD_{1}D_{2}x\)_._
6. \(M_{6}\)_, the summand containing_ \(U(D_{1}F_{1}+D_{2}F_{2})\)_._
7. \(M_{7}\)_, the summand containing_ \(U(D_{1}D_{2}^{2}+D_{1}^{2}D_{2})\)_._
We draw this decomposition in Figure 1.
Recall from Corollary 2.36 that \(\mathit{MT}\xi^{\mathrm{het}\prime}\) splits as \(\mathit{MTSpin}\vee\mathcal{Q}\). Since \(\Omega_{*}^{\xi^{\mathrm{het}}}\cong\Omega_{*}^{\xi^{\mathrm{het}\prime}}\) in the range we need and \(\Omega_{*}^{\mathrm{Spin}}\) is known thanks to work of Anderson-Brown-Peterson [1], we focus on \(\pi_{*}(\mathcal{Q})\). To do so, we will identify the submodule of the \(E_{2}\)-page of the Adams spectral sequence for \(\xi^{\mathrm{het}\prime}\) coming from spin bordism via \(s_{1}\colon\mathit{MTSpin}\to\mathit{MT}\xi^{\mathrm{het}\prime}\); the \(E_{2}\)-page for \(\mathcal{Q}\) is then a complementary submodule.
The canonical \(\xi^{\mathrm{het}\prime}\)-structure on a spin manifold from Lemma 2.35 can be rephrased as follows: a spin structure on a manifold \(M\) is equivalent to the data of a spin structure on \(M\), a map \(c\colon M\to K(\mathbb{Z},4)\), and a trivialization of \(c-\lambda(M)\). Thus spin structures are twisted string structures in the sense of Corollary 2.12 (in fact the universal twist in the sense of Remark 2.16), so the map
\[(1,0)\colon K(\mathbb{Z},4)\longrightarrow(K(\mathbb{Z},4)\times K(\mathbb{Z}, 4))\times_{\mathbb{Z}/2}E\mathbb{Z}/2=B \tag{2.42}\]
lifts to a map of \(\mathit{MTString}\)-module Thom spectra \(s_{1}\colon\mathit{MTSpin}\to\mathit{MT}{\xi^{\mathrm{het}}}^{\prime}\). Naturality of Theorem 2.20 then tells us the image of \(s_{1}^{*}\) on mod \(2\) cohomology, allowing us to determine which of the
summands in Proposition 2.41 correspond to _MTSpin_ and which correspond to \(\mathcal{Q}\). Specifically, the pullback map sends \(x\mapsto 0\), is nonzero on \(D_{1}\), \(F_{1}\), \(G_{1}\), etc., and sends \(D_{2}\), \(F_{2}\), \(G_{2}\), etc., to zero. This implies that in the direct-sum decomposition \(\mathit{MT\xi^{\mathrm{het}\prime}}\simeq\mathit{MTSpin}\vee\mathcal{Q}\), the summands \(M_{1}\), \(M_{3}\), and \(M_{6}\) come from the cohomology of _MTSpin_, and the remaining summands come from the cohomology of \(\mathcal{Q}\).
In order to run the Adams spectral sequence for \(\mathcal{Q}\), we need to compute the \(\mathrm{Ext}\) of \(M_{2}\), \(M_{4}\), \(M_{5}\), and \(M_{7}\) over \(\mathcal{A}(2)\). After we compute this, we will display the \(E_{2}\)-page in Figure 3. For an \(\mathcal{A}(2)\)-module \(M\), \(\mathrm{Ext}^{*,*}_{\mathcal{A}(2)}(M,\mathbb{Z}/2)\), which we will usually denote \(\mathrm{Ext}_{\mathcal{A}(2)}(M)\) or \(\mathrm{Ext}(M)\), is a bigraded module over the bigraded \(\mathbb{Z}/2\)-algebra \(\mathrm{Ext}_{\mathcal{A}(2)}(\mathbb{Z}/2)\); both the algebra and module structures arise from the _Yoneda product_ [23, §4] (see [1, §4.2] for a review). This module structure is helpful for determining differentials in the Adams spectral sequence: differentials are equivariant with respect to the action. The module structure also constrains extensions on the \(E_{\infty}\)-page.
May (unpublished) and Shimada-Iwai [10, §8] determined the algebra \(\mathrm{Ext}_{\mathcal{A}(2)}(\mathbb{Z}/2)\). We will only need to track the actions of three elements: \(h_{0}\in\mathrm{Ext}^{1,1}_{\mathcal{A}(2)}(\mathbb{Z}/2)\), \(h_{1}\in\mathrm{Ext}^{1,2}_{\mathcal{A}(2)}(\mathbb{Z}/2)\), and \(h_{2}\in\mathrm{Ext}^{1,4}_{\mathcal{A}(2)}(\mathbb{Z}/2)\). These elements are in the image of the map \(\mathrm{Ext}_{\mathcal{A}}(\mathbb{Z}/2)\to\mathrm{Ext}_{\mathcal{A}(2)}(\mathbb{Z}/2)\) induced by the quotient \(\mathcal{A}\to\mathcal{A}(2)\), so we do not have to worry about whether Corollary 2.22 is compatible with the \(\mathrm{Ext}_{\mathcal{A}(2)}(\mathbb{Z}/2)\)-action on the \(E_{2}\)-page of the Adams spectral sequence. (It is, though.) When we draw \(\mathrm{Ext}\) charts as in Figure 3, we denote \(h_{0}\)-actions as vertical lines, \(h_{1}\)-actions as diagonal lines with slope \(1\), and \(h_{2}\)-actions as diagonal lines with slope \(1/3\). When one of these lines is not present, the corresponding \(h_{i}\) acts as \(0\).
Figure 1. The \(\mathcal{A}(2)\)-module \(T(-(c_{1}+c_{2}))\) in low degrees. The pictured submodule contains all classes in degrees \(12\) and below.
Often one computes \(\operatorname{Ext}\) groups of \(\mathcal{A}(2)\)-modules using the computer programs developed by Bruner [10] and Chatham-Chua [12], or tools such as the May spectral sequence [11] or the Davis-Mahowald spectral sequence [13, 14] (see also [1, Chapter 2]), but for the four modules we care about, we can get away with simpler calculations by hand and computations already in the literature.
1. Davis-Mahowald [13, Table 3.2] compute \(\operatorname{Ext}_{\mathcal{A}(2)}(M_{2})\) in the degrees we need.
2. In degrees 13 and below, \(M_{4}\) is isomorphic to \(\Sigma^{8}(\mathcal{A}(2)\otimes_{\mathcal{A}(0)}\mathbb{Z}/2)\); therefore the \(\operatorname{Ext}\) groups of these two \(\mathcal{A}(2)\)-modules, as algebras over \(\operatorname{Ext}_{\mathcal{A}(2)}(\mathbb{Z}/2)\), are isomorphic in topological degrees 12 and below. Thus we can compute with the change-of-rings theorem (2.24): as \(\operatorname{Ext}_{\mathcal{A}(2)}(\mathbb{Z}/2)\)-algebras, \[\operatorname{Ext}_{\mathcal{A}(2)}(\mathcal{A}(2)\otimes_{\mathcal{A}(0)}\mathbb{Z}/2)\cong\operatorname{Ext}_{\mathcal{A}(0)}(\mathbb{Z}/2)\cong\mathbb{Z}/2[h_{0}], \tag{2.43}\] with \(h_{0}\in\operatorname{Ext}^{1,1}\). This identification of \(\operatorname{Ext}_{\mathcal{A}(0)}(\mathbb{Z}/2)\) follows from Koszul duality [1, Example 4.5.5] (see also the resolution displayed after this list).
3. \(M_{5}\) looks a lot like \(M_{2}\), which gives us a technique to compute \(\operatorname{Ext}_{\mathcal{A}(2)}(M_{5})\). Specifically, if \(\tau_{\leq k}M\) denotes the quotient of an \(\mathcal{A}(2)\)-module \(M\) by the submodule of elements in degrees greater than \(k\), then there is a short exact sequence of \(\mathcal{A}(2)\)-modules \[0\longrightarrow\Sigma^{13}\mathbb{Z}/2\longrightarrow\tau_{\leq 13}M_{5}\longrightarrow\tau_{\leq 13}\Sigma^{8}M_{2}\longrightarrow 0. \tag{2.44}\] We draw this sequence in Figure 2, left. (2.44) induces a long exact sequence in \(\operatorname{Ext}\) groups; passage between \(M\) and \(\tau_{\leq 13}M\) does not change \(\operatorname{Ext}\) groups in degrees 12 and below, and since we only care about degrees 12 and below, we can and do pass between \(\tau_{\leq 13}M\) and \(M\) without comment.
We already know \(\operatorname{Ext}_{\mathcal{A}(2)}(\mathbb{Z}/2)\) and \(\operatorname{Ext}_{\mathcal{A}(2)}(M_{2})\), so we can run the long exact sequence associated to (2.44) to compute \(\operatorname{Ext}_{\mathcal{A}(2)}(M_{5})\) in degrees 12 and below; we draw this long exact sequence in Figure 2, right. In the range we care about, there is exactly one boundary map that is not forced to be zero for degree reasons, namely
\[\partial\colon\operatorname{Ext}^{0,13}_{\mathcal{A}(2)}(\Sigma^{13}\mathbb{ Z}/2)\longrightarrow\operatorname{Ext}^{1,13}_{\mathcal{A}(2)}(\Sigma^{8}M_{2}); \tag{2.45}\]
it must be nonzero, because that is the only way to obtain \(\operatorname{Ext}^{0,13}_{\mathcal{A}(2)}(M_{5})=\operatorname{Hom}_{ \mathcal{A}(2)}(M_{5},\Sigma^{13}\mathbb{Z}/2)=0\), and by inspection of Figure 1 this \(\operatorname{Hom}\) group vanishes.
Figure 2. Left: the short exact sequence (2.44) of \(\mathcal{A}(2)\)-modules. Right: the associated long exact sequence in \(\operatorname{Ext}\). See the discussion after (2.45) for why the pictured boundary map (black arrow) is nonzero.
4. If \(C\eta\coloneqq\Sigma^{-2}\widetilde{H}^{*}(\mathbb{CP}^{2};\mathbb{Z}/2)\), there is a \(14\)-connected quotient map \(M_{7}\to\Sigma^{12}C\eta\), so \(\operatorname{Ext}_{\mathcal{A}(2)}(\Sigma^{12}C\eta)\) and \(\operatorname{Ext}_{\mathcal{A}(2)}(M_{7})\) do not differ in the range we care about. Bruner-Rognes [1, Figure 0.15] compute \(\operatorname{Ext}_{\mathcal{A}(2)}(C\eta)\).
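To make the appeal to Koszul duality in item (2) of this list concrete: since \(\mathcal{A}(0)=\Lambda(\mathrm{Sq}^{1})\) is an exterior algebra on one generator, the minimal free resolution of \(\mathbb{Z}/2\) over \(\mathcal{A}(0)\) is periodic,
\[\cdots\longrightarrow\Sigma^{2}\mathcal{A}(0)\xrightarrow{\ \mathrm{Sq}^{1}\ }\Sigma\mathcal{A}(0)\xrightarrow{\ \mathrm{Sq}^{1}\ }\mathcal{A}(0)\longrightarrow\mathbb{Z}/2\longrightarrow 0,\]
and applying \(\operatorname{Hom}_{\mathcal{A}(0)}(-,\mathbb{Z}/2)\) gives zero differentials, so \(\operatorname{Ext}^{s,t}_{\mathcal{A}(0)}(\mathbb{Z}/2)\cong\mathbb{Z}/2\) when \(t=s\) and vanishes otherwise; that is, \(\operatorname{Ext}_{\mathcal{A}(0)}(\mathbb{Z}/2)\cong\mathbb{Z}/2[h_{0}]\) with \(h_{0}\in\operatorname{Ext}^{1,1}\).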
Using these computations, we obtain the following description of the \(E_{2}\)-page of the Adams spectral sequence for the summand \(\mathcal{Q}\) of \(\mathit{MT}\xi^{\operatorname{het}^{\prime}}\).
**Proposition 2.46**.: _The \(E_{2}\)-page of the Adams spectral sequence for \(\mathcal{Q}\) in topological degrees \(12\) and below is as given in Figure 3. In particular, in this range, the \(E_{2}\)-page is generated as an \(\operatorname{Ext}_{\mathcal{A}(2)}(\mathbb{Z}/2)\)-module by eight elements: \(p_{1}\in\operatorname{Ext}^{0,1}\), \(p_{3}\in\operatorname{Ext}^{0,3}\), \(p_{7}\in\operatorname{Ext}^{0,7}\), \(a\in\operatorname{Ext}^{0,8}\), \(b\in\operatorname{Ext}^{2,10}\), \(c\in\operatorname{Ext}^{0,9}\), \(d\in\operatorname{Ext}^{0,11}\), and \(e\in\operatorname{Ext}^{0,12}\)._
There are plenty of differentials in this Adams spectral sequence which could be nonzero, even when we take into account the fact that Adams differentials commute with \(h_{0}\), \(h_{1}\), and \(h_{2}\) (the bidegree conventions are recalled in the display after this list):
(D1) \(d_{2}\colon E_{2}^{0,8}\to E_{2}^{2,9}\), whose value on \(a\) could be \(h_{2}^{2}p_{1}\), \(h_{0}^{2}p_{7}\), or a linear combination of those two elements.
(D2) \(d_{2}\colon E_{2}^{1,9}\to E_{2}^{3,10}\), which could send \(h_{0}a\) or \(h_{1}p_{7}\) to \(h_{0}^{3}p_{7}\).
(D3) \(d_{2}\colon E_{2}^{0,9}\to E_{2}^{2,10}\) and \(d_{2}\colon E_{2}^{1,11}\to E_{2}^{3,12}\), intertwined by an \(h_{1}\)-action, which could send \(c\mapsto b\) and \(h_{1}c\mapsto h_{1}b\).
(D4) \(d_{2}\colon E_{2}^{0,12}\to E_{2}^{2,13}\), which could send \(e\mapsto h_{1}^{2}c=h_{0}^{2}d\).
(D5) If the differentials in (D1) and (D2) vanish, \(d_{3}\colon E_{3}^{0,8}\to E_{3}^{3,10}\) could be nonzero on \(a\).
(D6) If the differential in (D4) vanishes, \(d_{5}\colon E_{5}^{0,12}\to E_{5}^{5,16}\) (and its image under \(h_{0}\)) or \(d_{6}\colon E_{6}^{0,12}\to E_{6}^{6,17}\) could be nonzero.
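For reference, the bidegree conventions implicit in this list are the standard Adams ones:
\[d_{r}\colon E_{r}^{s,t}\longrightarrow E_{r}^{s+r,\,t+r-1},\]
so each \(d_{r}\) raises the Adams filtration \(s\) by \(r\) and lowers the topological degree \(t-s\) by \(1\); e.g. in (D1), \(a\in E_{2}^{0,8}\) lies in topological degree \(8\), and its potential targets \(h_{2}^{2}p_{1},h_{0}^{2}p_{7}\in E_{2}^{2,9}\) lie in topological degree \(7\).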
Figure 3. In Corollary 2.36, we showed \(\mathit{MT}\xi^{\operatorname{het}^{\prime}}\simeq\mathit{MTSpin}\vee \mathcal{Q}\); this figure denotes the \(E_{2}\)-page of the Adams spectral sequence computing \(\pi_{*}(\mathcal{Q})\) in degrees \(12\) and below. This corresponds to a subset of the summands in Figure 1. In Lemma 2.50, we show that the solid gray differential beginning at \(a\) is nonzero; we leave open the other two differentials, which are dashed in this figure.
**Lemma 2.47**.: _The differentials (D2), (D5), and (D6) vanish._
Proof.: Our strategy is to use the fact that \(\mathbb{G}^{\mathrm{het}}\to\mathbb{Z}/2\) splits to zero out differentials. This splitting does not extend to a splitting of \(\mathit{MT}\xi^{\mathrm{het}}\), but it will be close enough.
The inclusion \(\iota\colon\mathbb{Z}/2\hookrightarrow\mathbb{G}^{\mathrm{het}}\) defines a map \(\iota^{\prime}\colon\mathit{MTString}\wedge B\mathbb{Z}/2\to\mathit{MT}\xi^{ \mathrm{het}}\) which on Adams \(E_{2}\)-pages is precisely the inclusion of the summand \(\mathrm{Ext}(\mathit{M}_{2})\). Quotienting \(\mathbb{G}^{\mathrm{het}}\) by \(\mathbb{T}[1]\), then by \(\mathrm{E}_{8}\times\mathrm{E}_{8}\), produces a map
\[p\colon\mathit{MT}\xi^{\mathrm{het}}\xrightarrow{\ \phi\ }\mathit{MTSpin}\wedge(B((\mathrm{E}_{8}\times\mathrm{E}_{8})\rtimes\mathbb{Z}/2))_{+}\longrightarrow\mathit{MTSpin}\wedge(B\mathbb{Z}/2)_{+}, \tag{2.48}\]
and \(p\circ\iota\colon\mathit{MTString}\wedge(B\mathbb{Z}/2)\to\mathit{MTSpin} \wedge(B\mathbb{Z}/2)_{+}\) is the usual map \(\mathit{MTString}\to\mathit{MTSpin}\) together with the addition of a basepoint. This means that any element of \(\widetilde{\Omega}_{*}^{\mathrm{String}}(B\mathbb{Z}/2)\) whose image in \(\widetilde{\Omega}_{*}^{\mathrm{Spin}}(B\mathbb{Z}/2)\) is nonzero must also be nonzero in \(\Omega_{*}^{\xi^{\mathrm{het}}}\), which kills many differentials to or from \(\mathrm{Ext}(\mathit{M}_{2})\). To produce such elements, study the map of Adams spectral sequences induced by \(p\circ\iota\), which on \(E_{2}\)-pages is the map
\[\mathrm{Ext}_{\mathcal{A}(2)}(\widetilde{H}^{*}(B\mathbb{Z}/2;\mathbb{Z}/2)) \longrightarrow\mathrm{Ext}_{\mathcal{A}(1)}(H^{*}(B\mathbb{Z}/2;\mathbb{Z}/ 2)). \tag{2.49}\]
Davis-Mahowald [14, Table 3.2] compute \(\mathrm{Ext}_{\mathcal{A}(2)}(H^{*}(B\mathbb{Z}/2;\mathbb{Z}/2))\) in the degrees we need, and Gitler-Mahowald-Milgram [11, §2] compute \(\mathrm{Ext}_{\mathcal{A}(1)}(H^{*}(B\mathbb{Z}/2;\mathbb{Z}/2))\). We draw the map (2.49) in Figure 4. All differentials in the spectral sequence over \(\mathcal{A}(1)\) vanish using \(h_{0}\)- and \(h_{1}\)-equivariance, and by inspection there are no hidden extensions. Therefore we can identify some classes whose images under \(p\circ\iota\) are nonzero and use this to trivialize some differentials in Figure 3.
* By computing the image of \(p\circ\iota\) on \(\mathrm{Ext}\) groups, we learn that the map \(\widetilde{\Omega}_{7}^{\mathrm{String}}(B\mathbb{Z}/2)\to\widetilde{\Omega}_{7}^{\mathrm{Spin}}(B\mathbb{Z}/2)\) can be identified with the map \(\mathbb{Z}/16\oplus\mathbb{Z}/2\to\mathbb{Z}/16\) sending \((1,0)\mapsto 1\) and \((0,1)\mapsto 0\).15 Therefore, any differential to or from the four summands in topological degree \(7\) linked by \(h_{0}\)-actions must vanish, including (D2) and (D5). Footnote 15: Alternatively, one could show that the \(\mathbb{Z}/16\subset\widetilde{\Omega}_{7}^{\mathrm{String}}(B\mathbb{Z}/2)\) is mapped injectively into \(\widetilde{\Omega}_{7}^{\mathrm{Spin}}(B\mathbb{Z}/2)\) by checking on a generator. One can show that \(\mathbb{R}\mathbb{P}^{7}\) admits a string structure; then the generator of that \(\mathbb{Z}/16\) subgroup of \(\widetilde{\Omega}_{7}^{\mathrm{String}}(B\mathbb{Z}/2)\) is \(\mathbb{R}\mathbb{P}^{7}\) with its nontrivial principal \(\mathbb{Z}/2\)-bundle. Its image in \(\widetilde{\Omega}_{7}^{\mathrm{Spin}}(B\mathbb{Z}/2)\) has order at least \(16\), because the \(\eta\)-invariant of a suitable twisted Dirac operator associated to the \(\mathbb{Z}/2\)-bundle defines a bordism invariant \(\Omega_{7}^{\mathrm{Spin}}(B\mathbb{Z}/2)\to\mathbb{R}/\mathbb{Z}\), and on \((\mathbb{R}\mathbb{P}^{7},S^{7}\to\mathbb{R}\mathbb{P}^{7})\), this \(\eta\)-invariant is \(\ell/16\bmod 1\) for some odd \(\ell\), as follows from a formula of Donnelly [13, Proposition 4.1].
* Similarly, the map \(\widetilde{\Omega}_{11}^{\mathrm{String}}(B\mathbb{Z}/2)\to\widetilde{\Omega}_ {11}^{\mathrm{Spin}}(B\mathbb{Z}/2)\) can be identified with the inclusion \(\mathbb{Z}/8\hookrightarrow\mathbb{Z}/128\oplus\mathbb{Z}/8\oplus\mathbb{Z}/2\) sending \(1\mapsto(16,0,0)\), which follows either by computing \(p\circ\iota\) on \(\mathrm{Ext}\) groups or computing \(\eta\)-invariants on the generator of \(\widetilde{\Omega}_{11}^{\mathrm{String}}(B\mathbb{Z}/2)\), which can be taken to be the product of \(\mathbb{R}\mathbb{P}^{3}\) with a Bott manifold.16 Thus (D6) vanishes.
Footnote 16: All orientable \(3\)-manifolds have trivializable tangent bundles, hence string structures; for a construction of a Bott manifold with string structure, see [12, §5.3].
**Lemma 2.50**.: _The differential (D1) is nonzero; specifically, \(d_{2}(a)=h_{2}^{2}p_{1}\)._
We will deduce this from the following fact.
**Proposition 2.51**.: _The map \(\Omega_{4}^{\xi^{r,\mathrm{het}}}\to\Omega_{4}^{\xi^{\mathrm{het}}}\) is surjective, at least after \(2\)-completion._
Recall that \(\xi^{r,\mathrm{het}}\) is the analogue of \(\xi^{\mathrm{het}}\) but with \((\mathrm{E}_{8}\times\mathrm{E}_{8})\rtimes\mathbb{Z}/2\) replaced with \(\mathrm{E}_{8}\times\mathrm{E}_{8}\).
Proof of Lemma 2.50 assuming Proposition 2.51.: In this proof, implicitly \(2\)-complete all abelian groups. If \(d_{2}(a)=0\), then \(h_{2}^{2}p_{1}\in E_{2}^{2,9}\) survives to the \(E_{\infty}\)-page, so the \(h_{2}\)-action \(E_{\infty}^{1,5}\to E_{\infty}^{2,9}\) is
nonzero. This lifts to imply that taking the product with \(S^{3}\) with string structure induced from its Lie group framing, which defines a map \(\Omega_{4}^{\xi^{\text{het}}}\to\Omega_{7}^{\xi^{\text{het}}}\), is also nonzero. Direct products with framed manifolds correspond to action by elements of \(\pi_{*}(\mathbb{S})\) on homotopy groups, so this product with \(S^{3}\) is natural with respect to maps of spectra.
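The framed-bordism input here is standard: \(S^{3}\) with its Lie group framing represents
\[\nu\in\pi_{3}(\mathbb{S})\cong\mathbb{Z}/24,\]
which is detected by \(h_{2}\) in the Adams spectral sequence for the sphere; accordingly, multiplication by \([S^{3}]=\nu\) on bordism groups is detected by the \(h_{2}\)-action on \(E_{\infty}\)-pages.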
Since \(\Omega_{4}^{\xi^{r,\mathrm{het}}}\to\Omega_{4}^{\xi^{\mathrm{het}}}\) is surjective, we may compute the product with \(S^{3}\) as a map
\[-\times S^{3}\colon\Omega_{4}^{\xi^{r,\mathrm{het}}}\longrightarrow\Omega_{7}^{\xi^{r,\mathrm{het}}} \tag{2.52}\]
and then map back to \(\Omega_{7}^{\xi^{\mathrm{het}}}\). However, as we noted in Remark 2.34, \(\Omega_{7}^{\xi^{r,\mathrm{het}}}\cong\Omega_{7}^{\mathrm{Spin}}(K(\mathbb{Z},4))\), and Stong [11] showed \(\Omega_{7}^{\mathrm{Spin}}(K(\mathbb{Z},4))=0\). Thus taking the product with \(S^{3}\) is the zero map \(\Omega_{4}^{\xi^{\mathrm{het}}}\to\Omega_{7}^{\xi^{\mathrm{het}}}\), which is incompatible with \(d_{2}(a)\) vanishing.
Proof of Proposition 2.51.: Let \(F\) be the fiber of the map \(\phi\colon\mathit{MT}\xi^{r,\mathrm{het}}\to\mathit{MT}\xi^{\mathrm{het}}\), so that there is a long exact sequence
\[\cdots\longrightarrow\Omega_{4}^{\xi^{r,\mathrm{het}}}\overset{\phi}{\longrightarrow}\Omega_{4}^{\xi^{\mathrm{het}}}\overset{\partial}{\longrightarrow}\pi_{3}(F)\longrightarrow\Omega_{3}^{\xi^{r,\mathrm{het}}}\longrightarrow\cdots \tag{2.53}\]
We will show \(\pi_{3}(F)_{2}^{\wedge}=0\), which implies the proposition statement by exactness. To do so, we must understand \(F\).
Let \(V\) be the rank-zero stable vector bundle on \(B\mathbb{G}^{\text{het}}\) classified by the map \(\xi^{\text{het}}\colon B\mathbb{G}^{\text{het}}\to B\mathbb{O}\) and let \(\sigma\to B\mathbb{G}^{\text{het}}\) be the line bundle classified by the map quotienting by \(\mathbb{T}[1]\), then by Spin, then by \(\text{E}_{8}^{2}\):
\[B\mathbb{G}^{\text{het}}\longrightarrow B(\text{E}_{8}^{2}\rtimes\mathbb{Z}/ 2)\longrightarrow B\mathbb{Z}/2. \tag{2.54}\]
Then, inclusion of the zero section of \(\sigma\) defines a map of spaces over \(\mathbb{Z}\times B\mathbb{O}\): \(\phi\colon(B\mathbb{G}^{\text{het}},V)\to(B\mathbb{G}^{\text{het}},V\oplus\sigma)\). Here we use the notation \((B,\xi)\) to denote a space \(B\) and a map \(\xi\colon B\to\mathbb{Z}\times B\mathbb{O}\), and we use \(\mathbb{Z}\times B\mathbb{O}\) instead of \(B\mathbb{O}\) because \(\sigma\) is not rank \(0\). Let \(M^{-}\) denote the Thom spectrum of \(V\oplus\sigma\colon B\mathbb{G}^{\text{het}}\to\mathbb{Z}\times B\mathbb{O}\), and let \(\widetilde{\phi}\colon\mathit{MT}\xi^{\mathrm{het}}\to M^{-}\) denote the map of Thom spectra induced by \(\phi\);
we claim \(F\simeq\Sigma^{-1}M^{-}\). To see this, we will use a theorem in [13] which identifies the fiber of \(\widetilde{\phi}\) with \(\mathit{MT}\xi^{r,\mathrm{het}}\), mapping to \(\mathit{MT}\xi^{\mathrm{het}}\) as usual. Specifically, [13] shows that the fiber of \(\widetilde{\phi}\) is the Thom spectrum of the pullback of \(V\) to the sphere bundle \(S(\sigma)\) of \(\sigma\). This sphere bundle is the pullback of the universal sphere bundle over \(B\mathbb{Z}/2\) by the classifying map of \(\sigma\):
\[\begin{array}{ccc}S(\sigma)&\longrightarrow&S(L)\\ \downarrow&&\downarrow\\ B\mathbb{G}^{\mathrm{het}}&\longrightarrow&B\mathbb{Z}/2\end{array} \tag{2.55}\]
The sphere bundle of the tautological line bundle \(L\to B\mathbb{Z}/2\) is \(E\mathbb{Z}/2\to B\mathbb{Z}/2\), which is contractible, so the pullback diagram (2.55) simplifies to a fiber diagram, and the sphere bundle is the fiber of (2.54). Since (2.54) was induced from a group homomorphism by taking classifying spaces, one can compute its fiber by taking the classifying space of the kernel of the homomorphism, which is \(\mathcal{S}(\mathrm{Spin}\times\mathrm{E}_{8}^{2},c_{1}+c_{2}-\lambda)\). In Remark 1.46 we saw that applying the Thom spectrum functor to \(B\mathcal{S}(\mathrm{Spin}\times\mathrm{E}_{8}^{2},c_{1}+c_{2}-\lambda)\to B \mathbb{G}^{\mathrm{het}}\), i.e. to the map \(S(\sigma)\to B\mathbb{G}^{\mathrm{het}}\), produces the map \(\mathit{MT}\xi^{r,\mathrm{het}}\to\mathit{MT}\xi^{\mathrm{het}}\), and therefore the fiber of this map is \(\Sigma^{-1}M^{-}\).
To finish the proof, attack \(F\) with the Adams spectral sequence, using its description as the Thom spectrum \(\Sigma^{-1}M^{-}\) to get a description in terms of \(\operatorname{Ext}\) of an \(\mathcal{A}(2)\)-module by using [13] again. Recall from Figure 2, left, the \(\mathcal{A}(2)\)-module \(\tau_{\leq 13}M_{5}\); the result of the computation here is that the \(\mathcal{A}(2)\)-module relevant for computing \(\pi_{*}(F)_{2}^{\wedge}\) agrees with \(\Sigma^{-9}(\tau_{\leq 13}M_{5})\) in degrees 4 and below. Then, Figure 2, right, computes \(\operatorname{Ext}_{\mathcal{A}(2)}(\Sigma^{-9}(\tau_{\leq 13}M_{5}))\), which is the \(E_{2}\)-page of the Adams spectral sequence computing \(\pi_{*}(F)_{2}^{\wedge}\), in degrees 3 and below (shift the topological degree of everything in Figure 2, right, down by 9). The \(E_{2}\)-page vanishes in topological degree 3, which implies \(\pi_{3}(F)_{2}^{\wedge}=0\).
**Lemma 2.56**.: _The differential (D4) vanishes._
Proof.: The source of this differential is \(E_{2}^{0,12}\cong\mathbb{Z}/2\cdot e\) in Adams filtration zero. Classes \(\alpha\) in Adams filtration \(0\) are canonically identified with classes \(c_{\alpha}\) forming a subgroup of mod 2 cohomology, and \(\alpha\) survives to the \(E_{\infty}\)-page if and only if the bordism invariant \(\int c_{\alpha}\) is nonzero. Here, \(\alpha=e\) and \(c_{\alpha}=D_{1}D_{2}^{2}+D_{1}^{2}D_{2}\), so our differential vanishes if and only if \(e\) survives to the \(E_{\infty}\)-page if and only if the following invariant is nonzero:
\[\int\bigl{(}D_{1}D_{2}^{2}+D_{1}^{2}D_{2}\bigr{)}\colon\Omega_{12}^{\xi^{ \mathrm{het}}}\longrightarrow\mathbb{Z}/2. \tag{2.57}\]
We will produce a manifold on which this invariant is nonzero.
The quaternionic projective plane \(\mathbb{HP}^{2}\) has \(H^{*}(\mathbb{HP}^{2};\mathbb{Z})\cong\mathbb{Z}[x]/(x^{3})\) with \(|x|=4\) and \(\lambda(\mathbb{HP}^{2})=x\) [1, §15.5, §15.6] (see also [14, §5.2]). The Künneth formula tells us \(H^{*}(\mathbb{HP}^{2}\times S^{4};\mathbb{Z})\cong\mathbb{Z}[x,y]/(x^{3},y^{2})\), with \(|y|=4\); since \(TS^{4}\) is stably trivial, \(\lambda(S^{4})\) vanishes and the Whitney sum formula (Lemma 1.6) implies \(\lambda(\mathbb{HP}^{2}\times S^{4})=x\).
To define a \(\xi^{\mathrm{het}}\)-structure on \(\mathbb{HP}^{2}\times S^{4}\), it suffices to produce two \(\mathrm{E}_{8}\)-bundles \(P,Q\to\mathbb{HP}^{2}\times S^{4}\) and a trivialization of \(\lambda(\mathbb{HP}^{2}\times S^{4})-c(P)-c(Q)\). Since we can freely prescribe \(c(P)\) and \(c(Q)\), choose \(P\) and \(Q\) such that \(c(P)=y\) and \(c(Q)=x-y\); then \(\lambda(\mathbb{HP}^{2}\times S^{4})-c(P)-c(Q)=0\), so we can choose a trivialization. Since \(D_{1}=c(P)\) mod 2 and \(D_{2}=c(Q)\) mod 2,
\[\int_{\mathbb{HP}^{2}\times S^{4}}\bigl{(}D_{1}D_{2}^{2}+D_{1}^{2}D_{2}\bigr{)} =\left(\int_{\mathbb{HP}^{2}\times S^{4}}(yx^{2}+xy^{2})\right)\text{ mod }2=1.\qed \tag{2.58}\]
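Unwinding the mod \(2\) arithmetic in (2.58): with \(D_{1}=y\) and \(D_{2}=x+y\) mod \(2\), and \(y^{2}=0\) in \(H^{*}(\mathbb{HP}^{2}\times S^{4};\mathbb{Z}/2)\),
\[D_{1}D_{2}^{2}+D_{1}^{2}D_{2}=y(x+y)^{2}+y^{2}(x+y)=yx^{2},\]
and \(x^{2}y\) generates \(H^{12}(\mathbb{HP}^{2}\times S^{4};\mathbb{Z}/2)\), so the integral is \(1\).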
Now we have to tackle extension questions. In this part of the computation, it will be helpful to reference Figure 3, as we will use the description of the \(E_{\infty}\)-page of this spectral sequence several times while addressing extension questions.
**Lemma 2.59**.: _In degrees \(10\) and below, all extension questions in the Adams spectral sequence for \(\pi_{*}(\mathcal{Q})_{2}^{\wedge}\) either split or are detected by \(h_{0}\) on the \(E_{\infty}\)-page, except possibly for the extensions involving the classes \(c\in E_{\infty}^{0,9}\), \(h_{1}^{2}p_{7}\in E_{\infty}^{2,11}\), and \(h_{1}b\in E_{\infty}^{3,12}\)._
The classes \(h_{1}b\) and \(c\) may vanish on the \(E_{\infty}\)-page, depending on the fate of the differentials in (D3).
Proof.: The \(h_{0}\)-action alone solves all extensions in this range except in degrees \(8\), \(9\), and \(10\).
If the \(d_{2}\)s in (D3) vanish, there is an extension question in degree \(8\). The \(h_{0}\)-actions in the tower generated by \(h_{0}a\) lift to produce a \(\mathbb{Z}\) in \(\Omega_{8}^{\xi^{\text{\rm het}}}\), so the only question is whether there is an extension involving \(h_{1}p_{7}\) and \(b\). Suppose this extension does not split, so \(\pi_{8}(\mathcal{Q})_{2}^{\wedge}\cong\mathbb{Z}\oplus\mathbb{Z}/4\). We can choose a generator \(x\) of this \(\mathbb{Z}/4\) such that the image of \(x\) in the Adams \(E_{\infty}\)-page is \(h_{1}p_{7}\in E_{\infty}^{1,9}\); since this is \(h_{1}\) times another class on the \(E_{\infty}\)-page, \(x\) is \(\eta\) times a class \(y\in\pi_{7}(\mathcal{Q})_{2}^{\wedge}\), where \(\eta\) is the generator of \(\pi_{1}(\mathbb{S})\cong\mathbb{Z}/2\). Since \(2\eta=0\), \(2x=2\eta y=0\); since \(x\) was supposed to generate a \(\mathbb{Z}/4\), this is a contradiction, and therefore this extension splits.
The same trick splits all extensions in degree \(10\), and all extensions involving the class in \(E_{\infty}^{4,13}\).
**Proposition 2.60**.: _All extension questions in \(\pi_{9}(\mathcal{Q})_{2}^{\wedge}\) split, so \(\pi_{9}(\mathcal{Q})_{2}^{\wedge}\cong(\mathbb{Z}/2)^{\oplus 4}\) if the differentials in (D3) vanish, and \(\pi_{9}(\mathcal{Q})_{2}^{\wedge}\cong(\mathbb{Z}/2)^{\oplus 2}\) if they do not._
Proof.: If the differentials in (D3) do not vanish, this is a consequence of Lemma 2.59, so assume that those differentials vanish.
First suppose we can split all extensions involving \(c\). Then the only extension remaining is between \(h_{1}^{2}p_{7}\) and \(h_{1}b\). In Lemma 2.59, we split the extension between \(h_{1}p_{7}\) and \(b\), so the classes \(h_{1}p_{7}\) and \(b\) lift to classes \(\underline{h_{1}p_{7}}\), resp. \(\underline{b}\), which generate a \(\mathbb{Z}/2\oplus\mathbb{Z}/2\subset\pi_{8}(\mathcal{Q})_{2}^{\wedge}\). The action by \(h_{1}\) lifts to imply that the images of \(\eta\cdot\underline{h_{1}p_{7}}\) and \(\eta\cdot\underline{b}\) in the \(E_{\infty}\)-page are \(h_{1}^{2}p_{7}\), resp. \(h_{1}b\), and \(\eta\) carries the \(\mathbb{Z}/2\oplus\mathbb{Z}/2\) generated by \(\underline{h_{1}p_{7}}\) and \(\underline{b}\) to a \(\mathbb{Z}/2\oplus\mathbb{Z}/2\subset\pi_{9}(\mathcal{Q})_{2}^{\wedge}\) generated by \(\eta\underline{h_{1}p_{7}}\) and \(\eta\underline{b}\), thus splitting the extension between \(h_{1}^{2}p_{7}\) and \(h_{1}b\).
Now we need to prove that \(c\) lifts to a class \(\underline{c}\) such that \(2\underline{c}=0\). Let \(X\) be the pullback
\[\begin{array}{ccc}X&\longrightarrow&B\mathbb{G}^{\mathrm{het}}\\ \downarrow&&\downarrow\\ \mathbb{RP}^{2}&\longrightarrow&B\mathbb{Z}/2\end{array} \tag{2.61}\]
and let \(\xi\colon X\to B\mathrm{O}\) be the pullback of \(\xi^{\text{\rm het}}\) to \(X\). Both vertical arrows in (2.61) are fibrations with fiber \(BE_{8}^{2}\); using the induced map of Serre spectral sequences, we learn \(H^{*}(X;\mathbb{Z}/2)\cong H^{*}(B\mathbb{G}^{\text{\rm het}};\mathbb{Z}/2)/(x^ {3})\), where \(x\in H^{1}(B\mathbb{G}^{\text{\rm het}};\mathbb{Z}/2)\) is the generator. One can replay the whole argument we ran with \(\xi\) in place of \(\xi^{\text{\rm het}}\), defining \(\xi^{\prime}\) analogously to \(\xi^{\text{\rm het}^{\prime}}\), and deduce the following.
1. The map \(c\colon B\mathrm{E}_{8}\to K(\mathbb{Z},4)\) induces an isomorphism \(\Omega_{*}^{\xi}\to\Omega_{*}^{\xi^{\prime}}\) in degrees \(14\) and below,
2. there is a spectrum \(\mathcal{Q}^{\prime}\) and a splitting \(\mathit{MT}\xi^{\prime}\simeq\mathit{MTSpin}\vee\mathcal{Q}^{\prime}\), and
3. the map \(X\to B\mathbb{G}^{\text{het}}\) induces a map \(\mathit{MT}\xi^{\prime}\to\mathit{MT}\xi^{\text{het}^{\prime}}\) which is the identity on the _MTSpin_ factors and sends \(\mathcal{Q}^{\prime}\to\mathcal{Q}\).
The analogue of Proposition 2.41 for \(\xi^{\prime}\) is exactly the same, except replacing \(M_{2}\) with \(\Sigma C2\) and \(M_{5}\) with \(\Sigma^{9}C2\), where \(C2\) is the \(\mathcal{A}(2)\)-module \(\Sigma^{-1}\widetilde{H}^{*}(\mathbb{RP}^{2};\mathbb{Z}/2)\). Bruner-Rognes [1, §6.1] compute \(\operatorname{Ext}_{\mathcal{A}(2)}(C2)\), and using that we can draw the \(E_{2}\)-page of the Adams spectral sequence computing \(\pi_{*}(\mathcal{Q}^{\prime})^{\wedge}_{2}\) in Figure 5. For the classes \(p_{1}\), \(a\), and \(c\) we considered in the \(E_{2}\)-page of the Adams spectral sequence for \(\mathcal{Q}\), let \(p^{\prime}_{1}\), \(a^{\prime}\), and \(c^{\prime}\) be the corresponding classes in the \(E_{2}\)-page for \(\mathcal{Q}^{\prime}\): they live in the same bidegrees and the map \(\mathcal{Q}^{\prime}\to\mathcal{Q}\) carries \(x^{\prime}\mapsto x\) for \(x\in\{p_{1},a,c\}\).
The point of all of this is that if the differentials in (D3) vanish, so that both \(c\) and \(h_{1}^{2}p_{7}\) live to the \(E_{\infty}\)-page for \(\mathcal{Q}\), then both classes are in the image of the map \(\Phi\) on \(E_{\infty}\)-pages induced by \(\mathcal{Q}^{\prime}\to\mathcal{Q}\): \(c=\Phi(c^{\prime})\), and Bruner-Rognes [1, Corollary 4.3] define a class \(\widetilde{h}_{2}^{2}\in\operatorname{Ext}^{2,9}_{\mathcal{A}(2)}(C2)=\operatorname{Ext}^{2,10}_{\mathcal{A}(2)}(\Sigma C2)\) such that \(h_{1}^{2}p_{7}=\Phi(\widetilde{h}_{2}^{2})\). And looking at Figure 5, in the \(E_{\infty}\)-page for \(\mathcal{Q}^{\prime}\), \(h_{1}(\widetilde{h}_{2}^{2})\neq 0\) and \(h_{1}(wp^{\prime}_{1})\neq 0\), so the \(2\eta=0\) trick from Lemma 2.59 splits the extensions in \(\pi_{9}(\mathcal{Q}^{\prime})^{\wedge}_{2}\). Thus there is a class \(\underline{c}^{\prime}\in\pi_{9}(\mathcal{Q}^{\prime})^{\wedge}_{2}\) such that \(2\underline{c}^{\prime}=0\) and whose image in the \(E_{\infty}\)-page is \(c^{\prime}\). Since \(\Phi(c^{\prime})=c\), we learn \(c\) lifts to \(\Phi(\underline{c}^{\prime})\in\pi_{9}(\mathcal{Q})^{\wedge}_{2}\), and twice this class is \(0\), as we wanted to prove.
We have therefore proven the following theorem.
Figure 5. The \(E_{2}\)-page of the Adams spectral sequence computing \(\pi_{*}(\mathcal{Q}^{\prime})^{\wedge}_{2}\), where \(\mathcal{Q}^{\prime}\) is the spectrum defined in the proof of Proposition 2.60. By comparing with the Adams spectral sequence for \(\mathcal{Q}\), we learn \(d_{2}(a^{\prime})=h_{2}^{2}p_{1}^{\prime}\) from Lemma 2.50, and that the dashed differentials (e.g. \(d_{2}(c^{\prime})\), \(d_{2}(h_{1}c^{\prime})\)) vanish if and only if the differentials in (D3) vanish.
**Theorem 2.62**.: _Ignoring odd-primary torsion, there are isomorphisms_
\[\begin{aligned}
\Omega_{0}^{\xi^{\text{het}}}&\cong\mathbb{Z} & \Omega_{6}^{\xi^{\text{het}}}&\cong\mathbb{Z}/2\\
\Omega_{1}^{\xi^{\text{het}}}&\cong\mathbb{Z}/2\oplus\mathbb{Z}/2 & \Omega_{7}^{\xi^{\text{het}}}&\cong\mathbb{Z}/16\\
\Omega_{2}^{\xi^{\text{het}}}&\cong\mathbb{Z}/2\oplus\mathbb{Z}/2 & \Omega_{8}^{\xi^{\text{het}}}&\cong\mathbb{Z}^{3}\oplus(\mathbb{Z}/2)^{\oplus i}\\
\Omega_{3}^{\xi^{\text{het}}}&\cong\mathbb{Z}/8 & \Omega_{9}^{\xi^{\text{het}}}&\cong(\mathbb{Z}/2)^{\oplus j}\\
\Omega_{4}^{\xi^{\text{het}}}&\cong\mathbb{Z}\oplus\mathbb{Z}/2 & \Omega_{10}^{\xi^{\text{het}}}&\cong(\mathbb{Z}/2)^{\oplus k}\\
\Omega_{5}^{\xi^{\text{het}}}&\cong 0 & \Omega_{11}^{\xi^{\text{het}}}&\cong A,
\end{aligned}\]
_where:_
* \(A\) _is an abelian group of order_ \(64\) _isomorphic to one of_ \(\mathbb{Z}/8\oplus\mathbb{Z}/8\)_,_ \(\mathbb{Z}/16\oplus\mathbb{Z}/4\)_,_ \(\mathbb{Z}/32\oplus\mathbb{Z}/2\)_, or_ \(\mathbb{Z}/64\)_, and_
* _either_ \(i=1\)_,_ \(j=4\)_, and_ \(k=4\)_, or_ \(i=2\)_,_ \(j=6\)_, and_ \(k=5\)_._
#### 2.2.1. Some manifold generators
We finish this section by giving manifold representatives for all the generators for the groups we found in dimensions \(10\) and below, except possibly for two classes in degrees \(9\) and \(10\) if the differentials in (D3) vanish. We also give partial information in dimension \(11\). In this list, we implicitly localize at \(2\), though we will soon see in Theorem 2.74 that this does not lose any information.
The map \(\mathit{MTSpin}\vee(\mathit{MTString}\wedge B\mathbb{Z}/2)\to\mathit{MT}\xi^{\mathrm{het}}\) is surjective on homotopy groups in degrees \(7\) and below, quickly giving us many of the generators we need. The low-dimensional generators of spin bordism are standard; for \(\widetilde{\Omega}_{*}^{\text{String}}(B\mathbb{Z}/2)\), we use the \(h_{2}\)-action on the \(E_{\infty}\)-page together with the map \(\widetilde{\Omega}_{*}^{\text{String}}(B\mathbb{Z}/2)\to\widetilde{\Omega}_{*}^{\text{Spin}}(B\mathbb{Z}/2)\), as in the proof of Lemma 2.47 (see Figure 4), to deduce generators.
1. \(\Omega_{0}^{\xi^{\text{het}}}\cong\mathbb{Z}\), generated by the point.
2. \(\Omega_{1}^{\xi^{\text{het}}}\cong\mathbb{Z}/2\oplus\mathbb{Z}/2\). The first summand comes from \(\Omega_{1}^{\text{Spin}}\), hence is generated by \(S^{1}_{nb}\), the circle with \(\xi^{\text{het}}\)-structure induced from its nonbounding framing. The other summand, corresponding to \(p_{1}\in E_{\infty}^{0,1}\) of the Adams spectral sequence for \(\mathcal{Q}\), is in Adams filtration zero, hence corresponds to a mod \(2\) cohomology class and is detected by that class. Looking at Figure 1, this class is the generator of \(H^{1}(B\mathbb{Z}/2;\mathbb{Z}/2)\) evaluated on the principal \(\mathbb{Z}/2\)-bundle associated to a \(\xi^{\text{het}}\)-structure. Thus we can take as our generator \(S^{1}\) with \(\xi^{\text{het}}\)-structure induced by the nontrivial \(\mathbb{Z}/2\)-bundle and the inclusion \(\mathbb{Z}/2\hookrightarrow\text{E}_{8}^{2}\rtimes\mathbb{Z}/2\). We will call this generator \(\mathbb{RP}^{1}\), so that we can represent its \(\mathbb{Z}/2\)-bundle by \(S^{1}\to\mathbb{RP}^{1}\).
3. An action by \(h_{1}\) in the \(E_{\infty}\)-page of an Adams spectral sequence calculating bordism lifts to taking the product with \(S^{1}_{nb}\) on manifold generators. Acting by \(h_{1}\) defines an isomorphism from the \(1\)-line of the \(E_{\infty}\)-page to the \(2\)-line, so we can take \(S^{1}_{nb}\times S^{1}_{nb}\) and \(\mathbb{RP}^{1}\times S^{1}_{nb}\) to be our two generators of \(\Omega_{2}^{\xi^{\text{het}}}\).
4. \(\Omega_{3}^{\xi^{\text{het}}}\cong\mathbb{Z}/8\); there is a generator whose image in the Adams \(E_{\infty}\)-page is \(p_{3}\). The sequence of maps \[\widetilde{\Omega}_{3}^{\text{String}}(B\mathbb{Z}/2)\overset{\iota}{\longrightarrow}\Omega_{3}^{\xi^{\text{het}}}\overset{p}{\longrightarrow}\Omega_{3}^{\text{Spin}}(B\mathbb{Z}/2) \tag{2.63}\]
consists of two isomorphisms \(\mathbb{Z}/8\stackrel{{\cong}}{{\to}}\mathbb{Z}/8\stackrel{{ \cong}}{{\to}}\mathbb{Z}/8\), so it suffices to find a generator of \(\Omega_{3}^{\mathrm{Spin}}(B\mathbb{Z}/2)\) that admits a string structure. The standard generator is \(\mathbb{RP}^{3}\) with principal \(\mathbb{Z}/2\)-bundle \(S^{3}\to\mathbb{RP}^{3}\), and because \(\mathbb{RP}^{3}\) is parallelizable, it admits a string structure.
5. \(\Omega_{4}^{\xi^{\mathrm{het}}}\cong\mathbb{Z}\oplus\mathbb{Z}/2\). The free summand comes from \(\Omega_{4}^{\mathrm{Spin}}\), hence is generated by the K3 surface with trivial \(\mathbb{Z}/2\)-bundle, and \(\mathrm{E}_{8}\)-bundles with characteristic classes \(-\lambda(\mathrm{K3})\) and \(0\). The \(\mathbb{Z}/2\) corresponds to \(E_{\infty}^{1,5}\cong\mathbb{Z}/2\cdot h_{2}p_{1}\). Action by \(h_{2}\) lifts to the product with \(S^{3}\) with its Lie group framing, so we can generate this summand with \(S^{3}\times\mathbb{RP}^{1}\). _Remark 2.64_.: In Proposition 2.51, we showed \(\Omega_{4}^{\mathrm{Spin}}(K(\mathbb{Z},4))\cong\Omega_{4}^{\xi^{r,\mathrm{het}}}\to\Omega_{4}^{\xi^{\mathrm{het}}}\) is surjective; using this, we can replace \(S^{3}\times\mathbb{RP}^{1}\) by a generator we will need later. Stong [14] showed \(\Omega_{4}^{\mathrm{Spin}}(K(\mathbb{Z},4))\cong\mathbb{Z}\oplus\mathbb{Z}\); one \(\mathbb{Z}\) factor comes from \(\Omega_{4}^{\mathrm{Spin}}\), hence is represented by the K3 surface with trivial map to \(K(\mathbb{Z},4)\). The other is detected by the bordism invariant which, given a \(4\)-dimensional spin manifold \(X\) and a map \(f\colon X\to K(\mathbb{Z},4)\), sends \(X\mapsto\int_{X}f^{*}c\), where \(c\in H^{4}(K(\mathbb{Z},4);\mathbb{Z})\) is the tautological class. For example, this invariant equals \(1\) on \(S^{4}\) with its standard orientation and unique spin structure inducing that orientation, with the map to \(K(\mathbb{Z},4)\) given by the class \(1\in H^{4}(S^{4};\mathbb{Z})\cong\mathbb{Z}\).
The images of the two classes \((\mathrm{K3},0)\) and \((S^{4},1)\) in \(\Omega_{4}^{\xi^{\mathrm{het}}}\) must generate. Unsurprisingly, the K3 surface is sent to a generator of the \(\mathbb{Z}\) summand we described above; this summand is detected by \(\int p_{1}\). As this invariant vanishes on \((S^{4},1)\), surjectivity of the map on \(\Omega_{4}\) implies that \((S^{4},1)\) maps to the class of \(\mathbb{RP}^{1}\times S^{3}\).17 Thus the \(\mathbb{Z}/2\) summand in \(\Omega_{4}^{\xi^{\mathrm{het}}}\) can be generated by \(S^{4}\) with trivial \(\mathbb{Z}/2\)-bundle and two \(\mathrm{E}_{8}\)-bundles with characteristic classes \(c=\pm 1\in H^{4}(S^{4};\mathbb{Z})\).
Footnote 17: We thank Justin Kaidi for informing us of this fact.
The map on Adams spectral sequences induced from the map of spectra \(\mathit{MT}\xi^{r,\mathrm{het}}\to\mathit{MT}\xi^{\mathrm{het}}\) sends the class in the \(E_{\infty}\)-page representing \((S^{4},1)\) to \(0\) (see Francis [16, §2] or Lee-Yonekura [15, §3.5] for the Adams spectral sequence for \(\Omega_{*}^{\xi^{r,\mathrm{het}}}=\Omega_{*}^{\mathrm{Spin}}(K(\mathbb{Z},4))\)), so the fact that the image of \((S^{4},1)\) is nonzero in \(\Omega_{4}^{\xi^{\mathrm{het}}}\) is analogous to a hidden extension.
6. \(\Omega_{5}^{\xi^{\mathrm{het}}}=0\).
7. \(\Omega_{6}^{\xi^{\mathrm{het}}}\cong\mathbb{Z}/2\), and the image of a generator on the \(E_{\infty}\)-page is \(h_{2}p_{3}\), which lifts to imply that we can take \(S^{3}\times\mathbb{RP}^{3}\) as a generator.
8. \(\Omega_{7}^{\xi^{\mathrm{het}}}\cong\mathbb{Z}/16\). This \(\mathbb{Z}/16\) is detected by \(\Omega_{7}^{\mathrm{Spin}}(B\mathbb{Z}/2)\) much like \(\mathbb{RP}^{3}\) was, and we learn that this summand is generated by \(\mathbb{RP}^{7}\) with \(\mathbb{G}^{\mathrm{het}}\)-bundle induced from the \(\mathbb{Z}/2\)-bundle \(S^{7}\to\mathbb{RP}^{7}\), and is detected in the \(E_{\infty}\)-page by \(p_{7}\).
9. \(\Omega_{8}^{\xi^{\mathrm{het}}}\cong\mathbb{Z}^{2}\oplus\mathbb{Z}\oplus\mathbb{Z}/2\), together with an additional \(\mathbb{Z}/2\) summand if the differentials in (D3) vanish. * The first two free summands come from \(\Omega_{*}^{\mathrm{Spin}}\); their generators may be taken to be the quaternionic projective plane \(\mathbb{HP}^{2}\) and a Bott manifold \(B\). One can choose \(B\) to have a string structure [16, §5.3] and we do so. In both cases, the \(\mathbb{Z}/2\)-bundle associated to the \(\xi^{\mathrm{het}}\)-structure is trivial; since \(B\) is string, we give it the \(\xi^{\mathrm{het}}\)-structure in which both principal \(\mathrm{E}_{8}\)-bundles are trivial. For \(\mathbb{HP}^{2}\), \(H^{4}(\mathbb{HP}^{2};\mathbb{Z})\cong\mathbb{Z}\) with generator \(x\), as we discussed in the proof of Lemma 2.56; we choose a \(\xi^{\mathrm{het}}\)-structure on \(\mathbb{HP}^{2}\) with principal \(\mathrm{E}_{8}\)-bundles \(P,Q\to\mathbb{HP}^{2}\) with \(c(P)=-x\) and \(Q\) trivial. * The third free summand comes from the green \(h_{0}\)-tower in topological degree \(8\) in the Adams spectral sequence for \(\pi_{*}(\mathcal{Q})\). This summand is detected by the bordism
invariant \[f\coloneqq\int c(P)c(Q)\colon\Omega_{8}^{\xi^{\mathrm{het}}}\longrightarrow\mathbb{Z}, \tag{2.65}\] because this quantity can be nonzero (as we show below), because it vanishes on the two generators we discovered for the other two free summands, and because it must vanish on the remaining, torsion summand. It is a consequence of Lemma 2.50 that the mod 2 reduction of (2.65), which is \(\int D_{1}D_{2}\), vanishes. This is because every class \(x\in E_{2}^{0,t}\) has an associated degree-\(t\) \(\mathbb{Z}/2\) cohomology class \(c_{x}\), and \(x\) lives to the \(E_{\infty}\)-page if and only if the bordism invariant \(\int c_{x}\) is nonvanishing. Thus the minimum nonzero value of \(|f(M)|\), where \(M\) is a closed, 8-dimensional \(\xi^{\mathrm{het}}\)-manifold, is at least 2. Recall from the proof of Lemma 2.56 that \(H^{*}(\mathbb{HP}^{2};\mathbb{Z})\cong\mathbb{Z}[x]/(x^{3})\) with \(|x|=4\) and \(\lambda(\mathbb{HP}^{2})=x\). Consider the two \(\mathrm{E}_{8}\)-bundles \(P,Q\to\mathbb{HP}^{2}\) prescribed by \(c(P)=2x\) and \(c(Q)=-x\); then \(\lambda(\mathbb{HP}^{2})-c(P)-c(Q)=0\), so this data lifts to a \(\xi^{\mathrm{het}}\)-structure, and
\[\int_{\mathbb{HP}^{2}}c(P)c(Q)=2, \tag{2.66}\] achieving the minimum. Therefore \(\mathbb{HP}^{2}\) with these two principal \(\mathrm{E}_{8}\)-bundles generates the final free summand.
* The \(\mathbb{Z}/2\) summand that we know is present independent of any unresolved differentials is generated by \(h_{1}p_{7}\), so as usual lifts to \(S^{1}_{nb}\times\mathbb{RP}^{7}\).
* If \(d_{2}(c)=0\), there is an additional \(\mathbb{Z}/2\) summand represented in the \(E_{\infty}\)-page by \(b\). We will discuss this summand, and its generator \(X_{8}\), in §2.2.2.
10. \(\Omega_{9}^{\xi^{\mathrm{het}}}\cong(\mathbb{Z}/2)^{\oplus 2}\oplus(\mathbb{Z}/2)^{\oplus 2}\), and if the differentials in (D3) vanish, there is an additional \(\mathbb{Z}/2\oplus\mathbb{Z}/2\) summand.
* Two of the \(\mathbb{Z}/2\) summands come from \(\Omega_{9}^{\text{Spin}}\cong(\mathbb{Z}/2)^{\oplus 2}\), where they are represented by the generators \(\mathbb{HP}^{2}\times S^{1}_{nb}\) and \(B\times S^{1}_{nb}\), with \(\xi^{\text{het}}\)-structure induced from the corresponding generators in \(\Omega_{8}^{\xi^{\text{\tiny{het}}}}\).
* The other two \(\mathbb{Z}/2\) summands that are present no matter the value of the undetermined differentials are in the image of the map \(\iota\colon\widetilde{\Omega}_{9}^{\text{String}}(B\mathbb{Z}/2)\to\Omega_{9}^{\xi^{\mathrm{het}}}\). The generator of the summand in lower Adams filtration has image in the \(E_{\infty}\)-page equal to \(h_{1}^{2}p_{7}\), so we obtain \(S^{1}_{nb}\times S^{1}_{nb}\times\mathbb{RP}^{7}\). The summand in higher Adams filtration has nonzero image in \(\widetilde{\Omega}_{9}^{\text{Spin}}(B\mathbb{Z}/2)\cong\mathbb{Z}/2\oplus\mathbb{Z}/2\), by inspection of Figure 4. The two generators of \(\widetilde{\Omega}_{9}^{\text{Spin}}(B\mathbb{Z}/2)\) can be taken to be \(\mathbb{HP}^{2}\times\mathbb{RP}^{1}\) and \(B\times\mathbb{RP}^{1}\); to determine which we get, compose further with the Atiyah-Bott-Shapiro [1] map \(\widetilde{\Omega}_{9}^{\text{Spin}}(B\mathbb{Z}/2)\to\widetilde{ko}_{9}(B\mathbb{Z}/2)\cong\mathbb{Z}/2\), which sends \([\mathbb{HP}^{2}\times\mathbb{RP}^{1}]\mapsto 0\) and \([B\times\mathbb{RP}^{1}]\) to the generator. Since the image of the map of Adams spectral sequences in Figure 4 is contained in the summand whose image under the Atiyah-Bott-Shapiro map is nonzero, the image of our generator in \(\widetilde{\Omega}_{9}^{\text{Spin}}(B\mathbb{Z}/2)\) is bordant to \(B\times\mathbb{RP}^{1}\); finally, since \(B\) and \(\mathbb{RP}^{1}\) are both string, we can take \(B\times\mathbb{RP}^{1}\) as our last generator in this dimension.
* If \(d_{2}(h_{1}c)=0\), there is another \(\mathbb{Z}/2\) summand whose image in the \(E_{\infty}\)-page is \(h_{1}b\). Thus as usual it lifts to \(S^{1}_{nb}\times X_{8}\), where \(X_{8}\) is the manifold we describe in §2.2.2.
* If \(d_{2}(c)=0\), there is another \(\mathbb{Z}/2\) summand whose image in the \(E_{\infty}\)-page is \(c\). We were unable to find a manifold \(X_{9}\) representing this generator. Because \(c\) is in Adams
filtration \(0\), corresponding to the mod \(2\) cohomology class \(D_{1}D_{2}x\), if \(X_{9}\) exists then one can detect it by showing \(\int_{X_{9}}D_{1}D_{2}x=1\).
11. \(\Omega_{10}^{\xi^{\mathrm{het}}}\cong(\mathbb{Z}/2)^{\oplus 3}\oplus\mathbb{Z}/2\), together with potentially another \(\mathbb{Z}/2\) summand if the differentials in (D3) vanish.
* Three of the \(\mathbb{Z}/2\) summands in \(\Omega_{10}^{\xi^{\mathrm{het}}}\) come from \(\Omega_{10}^{\mathrm{Spin}}\cong(\mathbb{Z}/2)^{\oplus 3}\). Their generators are known to be \(B\times S^{1}_{nb}\times S^{1}_{nb}\), \(\mathbb{HP}^{2}\times S^{1}_{nb}\times S^{1}_{nb}\), and a _Milnor hypersurface_ \(X_{10}\), defined to be a smooth degree-\((1,1)\) hypersurface in \(\mathbb{CP}^{2}\times\mathbb{CP}^{4}\). Milnor [111, §3] showed that \(X_{10}\) generates the last \(\mathbb{Z}/2\) summand in \(\Omega_{10}^{\mathrm{Spin}}\).
* The next \(\mathbb{Z}/2\) summand is detected by the maps \(\widetilde{\Omega}_{10}^{\text{String}}(B\mathbb{Z}/2)\to\Omega_{10}^{\xi^{\mathrm{het}}}\) and \(\Omega_{10}^{\xi^{\mathrm{het}}}\to\Omega_{10}^{\mathrm{Spin}}(B\mathbb{Z}/2)\), and by a similar argument to the one we gave for the higher-filtration orange \(\mathbb{Z}/2\) summand in degree \(9\), we may choose \(B\times\mathbb{RP}^{1}\times S^{1}_{nb}\) as the generator.
* If \(d_{2}(h_{1}c)=0\), then there is an additional \(\mathbb{Z}/2\) summand whose image in the \(E_{\infty}\)-page is \(h_{1}c\). Thus we can take \(S^{1}_{nb}\times X_{9}\) for a manifold representative, though as discussed above we do not know what \(X_{9}\) is.
12. We have not determined generators for \(\Omega_{11}^{\xi^{\mathrm{het}}}\), nor even its isomorphism type. This is a question whose answer would be useful for anomaly cancellation for the \(\mathrm{E}_{8}\times\mathrm{E}_{8}\) heterotic string; see Question 0.3 and §3.2.1. Nonetheless, the Adams argument we gave above implies \(\Omega_{11}^{\xi^{\mathrm{het}}}\) contains a \(\mathbb{Z}/8\) subgroup, the image of \(\iota\colon\widetilde{\Omega}_{11}^{\text{String}}(B\mathbb{Z}/2)\to\Omega_{11}^{\xi^{\mathrm{het}}}\). By comparing with the map \(\widetilde{\Omega}_{11}^{\text{String}}(B\mathbb{Z}/2)\to\widetilde{\Omega}_{11}^{\text{Spin}}(B\mathbb{Z}/2)\) as in Figure 4, one learns that the class of \(B\times\mathbb{RP}^{3}\) generates this \(\mathbb{Z}/8\).
#### 2.2.2. \(X_{8}\), a potentially nonzero class in \(\Omega_{8}^{\xi^{\mathrm{het}}}\)
Though we were unable to determine whether the class \(b\in E_{2}^{2,10}\) survives to the \(E_{\infty}\)-page, we are able to write down a manifold representative \(X_{8}\) of the class it determines in \(\Omega_{8}^{\xi^{\mathrm{het}}}\); if \(b\) does survive, \(X_{8}\) should be added to the list of generators above.
**Definition 2.67**.: Let \(\mathbb{Z}/2\) act on \(S^{3}\times S^{3}\times S^{2}\) by the antipodal map on \(S^{2}\) and the first copy of \(S^{3}\), and a reflection through a plane on the second \(S^{3}\). This is a free action; let \(X_{8}\) denote the quotient, which is a smooth manifold.
\(X_{8}\) is a generalized Dold manifold of the sort studied by Nath-Sankaran [10]. Manifolds similar to \(X_{8}\) frequently appear as generators of bordism groups: see [11, §5.5.1] and [13, §14.3.3] for related examples.
**Lemma 2.68**.: \(X_{8}\) _admits a string structure, and one can choose a string structure on \(X_{8}\) so that the induced string structure on \(S^{3}\times S^{3}\) is the one induced by the Lie group framing on \(S^{3}\times S^{3}\cong\operatorname{SU}_{2}\times\operatorname{SU}_{2}\)._
Proof.: Adding the normal bundles for \(S^{k-1}\hookrightarrow\mathbb{R}^{k}\) defines an isomorphism
\[T(S^{3}\times S^{3}\times S^{2})\oplus\underline{\mathbb{R}}^{3}\xrightarrow{ \cong}\underline{\mathbb{R}}^{4}\oplus\underline{\mathbb{R}}^{4}\oplus \underline{\mathbb{R}}^{3}. \tag{2.69}\]
To understand \(TX_{8}\), we will study (2.69) when we introduce the \(\mathbb{Z}/2\)-action on \(S^{3}\times S^{3}\times S^{2}\) whose quotient is \(X_{8}\). Since the outward unit normal vector field on \(S^{k}\) is \(\mathrm{O}_{k+1}\)-invariant, \(\mathbb{Z}/2\) acts trivially on the \(\underline{\mathbb{R}}^{3}\) on the left side of (2.69), since the outward unit normal vector field provides the trivializations of the normal bundles giving that \(\underline{\mathbb{R}}^{3}\) factor. On the right-hand side, \(\mathbb{Z}/2\) acts by the antipodal map on the first factor of \(S^{3}\), so it acts by \(-1\) on each \(\underline{\mathbb{R}}\) summand of the first \(\underline{\mathbb{R}}^{4}\). The reflection on the second \(S^{3}\) factor means \(\mathbb{Z}/2\) acts on the second \(\underline{\mathbb{R}}^{4}\) by \(-1\), \(1\), \(1\), and \(1\) on the four
\(\underline{\mathbb{R}}\) summands. Finally, the antipodal map on \(S^{2}\) implies \(\mathbb{Z}/2\) acts by \(-1\) on the remaining three \(\underline{\mathbb{R}}\) summands.
Passing from equivariant vector bundles on \(S^{3}\times S^{3}\times S^{2}\) to nonequivariant vector bundles on the quotient, (2.69) induces an isomorphism
\[TX_{8}\oplus\underline{\mathbb{R}}^{3}\xrightarrow{\cong}\sigma^{\oplus 8} \oplus\underline{\mathbb{R}}^{3}, \tag{2.70}\]
where \(\sigma\to X_{8}\) is pulled back from the tautological line bundle \(\sigma\to\mathbb{RP}^{2}\). The Whitney sum formula implies \(\sigma^{\oplus 8}\to\mathbb{RP}^{2}\) is spin, and since the string obstruction lives in \(H^{4}(\mathbb{RP}^{2};\mathbb{Z})=0\), \(\sigma^{\oplus 8}\) is string. Thus the pullback to \(X_{8}\) is also string, so \(TX_{8}\) is string.
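Explicitly, writing \(a\coloneqq w_{1}(\sigma)\in H^{1}(\mathbb{RP}^{2};\mathbb{Z}/2)\) (so \(a^{3}=0\)), the Whitney sum formula gives
\[w(\sigma^{\oplus 8})=(1+a)^{8}=1+\binom{8}{1}a+\binom{8}{2}a^{2}=1+8a+28a^{2}\equiv 1\bmod 2,\]
so \(w_{1}(\sigma^{\oplus 8})=w_{2}(\sigma^{\oplus 8})=0\), as used above.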
For the Lie group framing string structure, use the fact that the involutions on each \(S^{3}\) summand can be described in terms of Lie groups: since the quotient of \(S^{3}\cong\mathrm{SU}_{2}\) by the antipodal map is \(\mathbb{RP}^{3}\cong\mathrm{SO}_{3}\), the Lie group framing on \(S^{3}\) is equivariant for the antipodal map. Compatibility for the reflection comes from the action of a reflection in \(\mathrm{Pin}_{3}^{+}\supset\mathrm{SU}_{2}\).
**Proposition 2.71**.: _With the string structure described in Lemma 2.68 and the \(\mathbb{Z}/2\)-bundle \(\sigma\to X_{8}\), \([X_{8}]\) is linearly independent from \([S^{1}_{nb}\times\mathbb{RP}^{7}]\) in \(\widetilde{\Omega}_{8}^{\mathrm{String}}(B\mathbb{Z}/2)\cong\mathbb{Z}/2\oplus \mathbb{Z}/2\), so the image of \([X_{8}]\) in the \(E_{\infty}\)-page for \(\Omega_{*}^{\mathrm{String}}(B\mathbb{Z}/2)\) is the nonzero class in \(E_{\infty}^{2,10}\cong\mathbb{Z}/2\) (perhaps plus a term in lower filtration)._
Proof.: Let \(f\colon\widetilde{\Omega}_{8}^{\mathrm{String}}(\mathbb{RP}^{2})\to\widetilde{\Omega}_{8}^{\mathrm{String}}(B\mathbb{Z}/2)\) be the map induced by \(\mathbb{RP}^{2}\hookrightarrow\mathbb{RP}^{\infty}\simeq B\mathbb{Z}/2\). The map this induces on Adams spectral sequences is not hard to analyze: Bruner-Rognes [1, §4.4, Chapter 6, §12.1] run the whole Adams spectral sequence for \(\widetilde{\mathit{tmf}}_{*}(\mathbb{RP}^{2})\), using the identification \(\Sigma^{\infty}\mathbb{RP}^{2}\simeq\Sigma\mathbb{S}/2\), and as discussed above _tmf_- and _MTString_-homology agree in degrees 14 and below.18 Likewise, Davis-Mahowald [13, Table 3.2] compute the \(E_{2}\)-page of the Adams spectral sequence for \(\widetilde{\Omega}_{*}^{\mathrm{String}}(B\mathbb{Z}/2)\) in the range we need, and with their calculation, \(h_{i}\)-linearity of differentials, and the \(2\eta=0\) trick from the proof of Lemma 2.59 one sees that \(\widetilde{\Omega}_{8}^{\mathrm{String}}(B\mathbb{Z}/2)\cong\mathbb{Z}/2\oplus\mathbb{Z}/2\). As discussed in §2.2.1, one of the \(\mathbb{Z}/2\) summands is detected by \(\mathbb{RP}^{7}\times S^{1}_{nb}\), whose image in the \(E_{\infty}\)-page is in filtration 1. Consider the map
Footnote 18: See also the closely related work of Beaudry-Bobkova-Pham-Xu [1], who compute \(\textit{tmf}_{*}(\mathbb{RP}^{2})\) using the elliptic spectral sequence.
\[\Psi\colon\operatorname{Ext}_{\mathcal{A}(2)}(\widetilde{H}^{*}(\mathbb{RP}^ {2};\mathbb{Z}/2))\longrightarrow\operatorname{Ext}_{\mathcal{A}(2)}( \widetilde{H}^{*}(B\mathbb{Z}/2;\mathbb{Z}/2)) \tag{2.72}\]
induced by \(\mathbb{RP}^{2}\to\mathbb{RP}^{\infty}\simeq B\mathbb{Z}/2\); we draw this map in Figure 6. \(\Psi\) is also the map between the \(E_{2}\)-pages of these two Adams spectral sequences; looking at Figure 6, \(\Psi\) is injective in topological degree 8, with image containing the nonzero element of \(E_{2}^{2,10}\) but not the nonzero class in \(E_{2}^{1,9}\). As both of these elements survive to the \(E_{\infty}\)-page, this lifts to imply that \(f\colon\widetilde{\Omega}_{8}^{\mathrm{String}}(\mathbb{RP}^{2})\to\widetilde{ \Omega}_{8}^{\mathrm{String}}(B\mathbb{Z}/2)\) is injective and that if one wants to find a class in \(\widetilde{\Omega}_{8}^{\mathrm{String}}(B\mathbb{Z}/2)\) linearly independent from \(\mathbb{RP}^{7}\times S^{1}_{nb}\), it suffices to find a nonzero class in \(\widetilde{\Omega}_{8}^{\mathrm{String}}(\mathbb{RP}^{2})\).
The map \(\sigma\colon X_{8}\to B\mathbb{Z}/2\) factors through \(\mathbb{RP}^{2}\) by definition, so we are done if we can show \(X_{8}\), with its map to \(\mathbb{RP}^{2}\), is nonbounding. To do so, consider the transfer map \(\Sigma^{\infty}\mathbb{RP}^{2}\to\Sigma^{\infty}S^{2}\) associated to the double cover \(S^{2}\to\mathbb{RP}^{2}\); this induces on string bordism a map \(\widetilde{\Omega}_{*}^{\mathrm{String}}(\mathbb{RP}^{2})\to\widetilde{\Omega}_{*}^{\mathrm{String}}(S^{2})\) sending \((M,f\colon M\to\mathbb{RP}^{2})\) to the double cover \(M^{\prime}\to M\) associated to the line bundle \(f^{*}\sigma\), together with the map \(M^{\prime}\to S(\sigma)=S^{2}\).
The map \(\Omega_{k}^{\xi}\to\widetilde{\Omega}_{k+\ell}^{\xi}(S^{\ell})\) sending \(M\mapsto(M\times S^{\ell},\mathrm{proj}_{2}\colon M\times S^{\ell}\to S^{\ell})\) (where \(S^{\ell}\) carries the bounding stable framing, which with the \(\xi\)-structure on \(M\) induces a \(\xi\)-structure on \(M\times S^{\ell}\)) is always an isomorphism (e.g. check this with the Atiyah-Hirzebruch spectral sequence), and
\(\Omega_{6}^{\mathrm{String}}\cong\mathbb{Z}/2\) [12, §3, §4], generated by \(S^{3}\times S^{3}\) with its Lie group framing, because it is represented by \(h_{2}^{2}\) in the Adams spectral sequence. Therefore \(\widetilde{\Omega}_{8}^{\mathrm{String}}(S^{2})\cong\mathbb{Z}/2\) is generated by \(S^{3}\times S^{3}\times S^{2}\), with the map to \(S^{2}\) given by projection onto the third factor. The image of \(X_{8}\) under the transfer is its double cover, which is \(S^{3}\times S^{3}\times S^{2}\), with the correct string structure and map to \(S^{2}\), so \([X_{8}]\neq 0\) in \(\widetilde{\Omega}_{8}^{\mathrm{String}}(\mathbb{RP}^{2})\), which suffices to prove the theorem.
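To spell out the Atiyah-Hirzebruch check mentioned in the previous paragraph: in the reduced Atiyah-Hirzebruch spectral sequence

\[E_{p,q}^{2}=\widetilde{H}_{p}(S^{\ell};\Omega_{q}^{\xi})\Longrightarrow\widetilde{\Omega}_{p+q}^{\xi}(S^{\ell}),\]

the \(E^{2}\)-page is concentrated in the single column \(p=\ell\), so the spectral sequence collapses and the edge map gives \(\widetilde{\Omega}_{k+\ell}^{\xi}(S^{\ell})\cong\widetilde{H}_{\ell}(S^{\ell};\Omega_{k}^{\xi})\cong\Omega_{k}^{\xi}\).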
Finally, by looking at the map \(\widetilde{\Omega}_{*}^{\mathrm{String}}(B\mathbb{Z}/2)\to\Omega_{*}^{\xi^{ \mathrm{het}}}\), we conclude:
**Corollary 2.73**.: _Suppose \(d_{2}(c)=0\) in the Adams spectral sequence for \(\xi^{\mathrm{het}}\). Then \([X_{8}]\neq 0\) in \(\Omega_{8}^{\xi^{\mathrm{het}}}\), and its image in the \(E_{\infty}\)-page is the class \(b\in E_{\infty}^{2,10}\) (perhaps plus some elements in lower filtration)._
### \(\xi^{\mathrm{het}}\) bordism at odd primes
**Theorem 2.74**.: \(\Omega_{*}^{\xi^{\mathrm{het}}}\) _has no odd-primary torsion in degrees \(11\) and below._
Proof.: This amounts to a direct computation with the Adams spectral sequence. We will go over the case \(p=3\) in detail; for \(p=5,7\) the story is similar but easier, and for \(p\geq 11\) it is trivial because the degrees of the Steenrod powers are too high for the Adams spectral sequence to produce torsion.
First we compute \(H^{*}(B\mathbb{G}^{\rm het};\mathbb{Z}/3)\) as a module over the Steenrod algebra \(\mathcal{A}\) in low degrees in Proposition 2.77, then we do the same for \(H^{*}(\mathit{MT}\xi^{\rm het};\mathbb{Z}/3)\) in Proposition 2.83. Once we have this, we can run the Adams spectral sequence, and do so in Proposition 2.86.
Throughout this subsection, \(\mathcal{P}^{i}\) refers to the \(i^{\rm th}\) Steenrod power, a degree-\(4i\) operation on mod \(3\) cohomology, and \(\beta\) is the Bockstein homomorphism for the sequence \(0\to\mathbb{Z}/3\to\mathbb{Z}/9\to\mathbb{Z}/3\to 0\).
**Lemma 2.75**.: _Let \(C\in H^{3}(K(\mathbb{Z},3);\mathbb{Z}/3)\) denote the mod \(3\) reduction of the tautological class. Then_
\[H^{*}(K(\mathbb{Z},3);\mathbb{Z}/3)\cong\mathbb{Z}/3[C,\mathcal{P}^{1}C, \beta\mathcal{P}^{1}C,\dots]/(C^{2},\dots), \tag{2.76}\]
_where all missing generators and relations are in degrees \(14\) and above._
Proof.: This is a standard application of the Serre spectral sequence for the fibration \(K(\mathbb{Z},2)\to*\to K(\mathbb{Z},3)\), so we will be succinct. \(E_{2}^{0,*}\cong H^{*}(K(\mathbb{Z},2);\mathbb{Z}/3)\cong\mathbb{Z}/3[x]\), with \(|x|=2\); by the \(E_{\infty}\)-page, all powers of \(x\) must be killed by differentials.
The only way to kill \(x\) is with a transgressing \(d_{3}\colon E_{3}^{0,2}\to E_{3}^{3,0}\). Let \(C\coloneqq d_{3}(x)\). \(C^{2}=0\) follows by graded commutativity. The Leibniz rule for differentials means that when \(3\nmid k\), \(d_{3}(x^{k})=\pm x^{k-1}C\), and if \(3\mid k\), \(d_{3}(x^{k})=0\).
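Explicitly, since \(x\) has even degree, the Leibniz rule gives

\[d_{3}(x^{k})=k\,x^{k-1}\,d_{3}(x)=k\,x^{k-1}C,\]

which vanishes mod \(3\) exactly when \(3\mid k\).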
So \(x^{3}\) survives to the \(E_{4}\)-page. The only remaining differential that can kill \(x^{3}\) is the transgressing \(d_{7}\colon E_{7}^{0,6}\to E_{7}^{7,0}\), so \(d_{7}(x^{3})\neq 0\); by the Kudo transgression theorem [10], because \(x^{3}=\mathcal{P}^{1}(x)\), \(d_{7}(x^{3})=\mathcal{P}^{1}C\). The Leibniz rule then implies \(d_{7}(x^{6})=x^{3}\mathcal{P}^{1}C\), so by the \(E_{8}\)-page, everything on the line \(p=0\) in total degree less than \(18\) has been killed.
Because \(d_{3}(x^{3})=0\), \(x^{2}C\) survives to the \(E_{4}\)-page; the only remaining way for it to support a differential is to have a new class \(w\in H^{8}(K(\mathbb{Z},3);\mathbb{Z}/3)\) such that \(d_{5}(x^{2}C)=w\). To see that \(\beta(\mathcal{P}^{1}C)=\pm w\), compare with the analogous spectral sequence for \(\mathbb{Z}/9\)-valued cohomology to see that \(\mathcal{P}^{1}C\) is not in the image of the mod \(3\) reduction map from \(\mathbb{Z}/9\) cohomology to \(\mathbb{Z}/3\) cohomology.19
Footnote 19: Alternatively, one could deduce this Bockstein by setting up the Serre spectral sequence for \(K(\mathbb{Z},3)\to*\to K(\mathbb{Z},4)\) and Hill’s calculation [12, Corollary 2.9, Figure 1(a)] of the low-degree mod \(3\) cohomology of \(K(\mathbb{Z},4)\) as an \(\mathcal{A}\)-module: \(C\) transgresses to a degree-\(4\) class \(D\), and Hill shows \(\beta(\mathcal{P}^{1}D)\neq 0\), so by the Kudo transgression theorem [10], \(\beta(\mathcal{P}^{1}C)\neq 0\) in \(H^{*}(K(\mathbb{Z},3);\mathbb{Z}/3)\).
**Proposition 2.77**.: _Let \(D\in H^{4}(B\mathrm{E}_{8};\mathbb{Z}/3)\) be the mod \(3\) reduction of the class \(c\) from Definition 1.4, and let \(D_{1}\) and \(D_{2}\) be the two copies of \(D\) in \(H^{*}(B\mathrm{E}_{8}^{2};\mathbb{Z}/3)\) coming from the two factors of \(B\mathrm{E}_{8}\). In degrees \(13\) and below, the pullback map on \(\mathbb{Z}/3\) cohomology induced by \(\phi\colon B\mathbb{G}^{\rm het}\to B\mathrm{Spin}\times B(\mathrm{E}_{8}^{2} \rtimes\mathbb{Z}/2)\) is the quotient ring homomorphism sending \(\lambda-D_{1}-D_{2}\mapsto 0\), \(-p_{2}-\mathcal{P}^{1}(D_{1}+D_{2})\mapsto 0\), and \(\beta\mathcal{P}^{1}(D_{1}+D_{2})\mapsto 0\)._
Here \(\phi\) is the map we constructed in (1.44) which forgets the B-field.
Proof.: Throw the Serre spectral sequence at the fibration
\[K(\mathbb{Z},3)\longrightarrow B\mathbb{G}^{\rm het}\longrightarrow B\mathrm{ Spin}\times B(\mathrm{E}_{8}^{2}\rtimes\mathbb{Z}/2). \tag{2.78}\]
The base space is not simply connected, so we might have to worry about local coefficients, but this turns out not to be the case, because the \(\mathbb{Z}/2\) symmetry swapping the two \(\mathrm{E}_{8}\) factors, which is the origin of the \(\pi_{1}\) in the base, acts trivially on the B-field, which gives us the fiber in (2.78).
In order to run the Serre spectral sequence for (2.78), we need to know the cohomology of \(B\mathrm{Spin}\) and \(B(\mathrm{E}_{8}^{2}\rtimes\mathbb{Z}/2)\). The former is the polynomial ring on the mod \(3\) reductions of the Pontrjagin
classes, which is a theorem of Borel-Hirzebruch [1, §30.2]; for the latter, run the Serre spectral sequence for the fibration
\[B\mathrm{E}_{8}^{2}\longrightarrow B(\mathrm{E}_{8}^{2}\rtimes\mathbb{Z}/2) \longrightarrow B\mathbb{Z}/2. \tag{2.79}\]
Because \(H^{*}(B\mathbb{Z}/2;\mathbb{Z}/3)\) vanishes in positive degrees, this Serre spectral sequence collapses to imply
\[H^{*}(B(\mathrm{E}_{8}^{2}\rtimes\mathbb{Z}/2);\mathbb{Z}/3)\xrightarrow{ \cong}H^{*}(B\mathrm{E}_{8}^{2};\mathbb{Z}/3)^{\mathbb{Z}/2}. \tag{2.80}\]
The answer now follows from the Künneth formula, the fact that we can replace \(B\mathrm{E}_{8}\) with \(K(\mathbb{Z},4)\) in the range we need by the result of Bott-Samelson [1, Theorems IV, V(e)] we mentioned in §2.2, and the mod \(3\) cohomology of \(K(\mathbb{Z},4)\) in low degrees, worked out by Cartan [13] and Serre [12], and stated explicitly by Hill [14, Corollary 2.9].
Now back to (2.78) and its Serre spectral sequence. The fibration (2.78) is classified by the degree-\(4\) cohomology class \(\lambda-D_{1}-D_{2}\), i.e. it is the pullback of the universal \(K(\mathbb{Z},3)\)-bundle
\[K(\mathbb{Z},3)\longrightarrow EK(\mathbb{Z},3)\longrightarrow BK(\mathbb{Z},3)\simeq K(\mathbb{Z},4) \tag{2.81}\]
by the map \(B\mathrm{Spin}\times B(\mathrm{E}_{8}^{2}\rtimes\mathbb{Z}/2)\to K(\mathbb{Z},4)\) classified by \(\lambda-D_{1}-D_{2}\).20 In the Serre spectral sequence for (2.81), the class \(C\in E_{2}^{0,3}=H^{3}(K(\mathbb{Z},3);\mathbb{Z}/3)\) must transgress to the generator of \(E_{2}^{4,0}=H^{4}(K(\mathbb{Z},4);\mathbb{Z}/3)\), and this generator pulls back to \(\lambda-D_{1}-D_{2}\), enforcing the relation \(\lambda-D_{1}-D_{2}=0\) in the \(E_{5}\)-page.
Footnote 20: This map, and hence also the fibration, is only determined up to homotopy, but any two choices of representative give isomorphic answers.
The other two pullbacks to zero in the theorem statement then follow from the Kudo transgression theorem [11]: \(\mathcal{P}^{1}C\in E_{2}^{0,7}=H^{7}(K(\mathbb{Z},3);\mathbb{Z}/3)\) must transgress to \(\mathcal{P}^{1}(\lambda-D_{1}-D_{2})\), and analogously for \(\beta\mathcal{P}^{1}C\). To compute these, we must determine how \(\mathcal{P}^{1}\) acts on the mod \(3\) reductions of Pontrjagin classes. Shay [15] proves a formula for Steenrod powers of Chern classes, which yields the formula for Pontrjagin classes by pullback. Hence, as worked out by Nordstrom [12], \(\mathcal{P}^{1}p_{1}=p_{2}\); then an Adem relation tells us
\[\mathcal{P}^{1}p_{2}=\mathcal{P}^{1}\mathcal{P}^{1}p_{1}=-\mathcal{P}^{2}p_{1 }=p_{1}^{3}, \tag{2.82}\]
the last equality because \(\mathcal{P}^{i}\) is the cup product cube on classes of degree \(2i\). Thus we see that \(\mathcal{P}^{1}C\) transgresses to \(-p_{2}-\mathcal{P}^{1}(D_{1}+D_{2})\) and \(\beta\mathcal{P}^{1}C\) transgresses to \(\beta\mathcal{P}^{1}(D_{1}+D_{2})\), killing those classes by the \(E_{10}\)-page.
Now, the Leibniz rule cleans up the rest of the Serre spectral sequence in total degree at most \(13\): by the \(E_{10}\)-page, everything in this range is concentrated on the line \(q=0\). Therefore on the \(E_{\infty}\)-page, the extension question is trivial in this range, and we conclude.
**Proposition 2.83**.: _Let \(\mathcal{M}_{3}\) denote the quotient of \(H^{*}(\text{MT}\xi^{\mathrm{het}};\mathbb{Z}/3)\) by all elements of degree \(14\) or higher, \(\mathcal{M}_{3}^{\mathrm{SO}}\) denote the quotient of \(H^{*}(\text{MTSO};\mathbb{Z}/3)\) by all elements of degree \(14\) or higher, and \(C\alpha\) denote the \(\mathcal{A}\)-module which consists of two \(\mathbb{Z}/3\) summands in degrees \(0\) and \(4\) linked by \(\mathcal{P}^{1}\). Then, there is an isomorphism of \(\mathcal{A}\)-modules_
\[\mathcal{M}_{3}\cong\mathcal{M}_{3}^{\mathrm{SO}}\oplus\Sigma^{8}C\alpha\oplus \Sigma^{12}\mathbb{Z}/3. \tag{2.84}\]
Proof.: In Proposition 2.77, we discovered that the map \(\phi\colon B\mathbb{G}^{\mathrm{het}}\to B\mathrm{Spin}\times B(\mathrm{E}_{8}^{2}\rtimes\mathbb{Z}/2)\) induces a surjection on mod \(3\) cohomology in degrees \(13\) and below. As \(\phi\) commutes with the maps down to \(B\mathrm{O}\) that are part of the definition of these tangential structures, \(\phi\) induces a map on Thom
spectra
\[\mathit{MT}\xi^{\mathrm{het}}\to\mathit{MTSpin}\wedge B(\mathrm{E}_{8}^{2}\rtimes \mathbb{Z}/2)_{+}. \tag{2.85}\]
Both of these tangential structures' maps to \(B\mathrm{O}\) factor through \(B\mathrm{SO}\), so the Thom isomorphism for mod \(3\) cohomology untwists. The Thom isomorphism is natural for maps of tangential structures, so we conclude that the pullback map on mod \(3\) cohomology induced by (2.85) is a surjection in degrees \(13\) and below -- and therefore that we can compute Steenrod powers in the cohomology of the latter Thom spectrum. And the map \(\mathit{MTSpin}\to\mathit{MTSO}\) is an equivalence away from \(2\), so we may work with \(\mathit{MTSO}\) in place of \(\mathit{MTSpin}\). Milnor [23, Theorem 4] computed the Steenrod module structure on \(H^{*}(\mathit{MTSO};\mathbb{Z}/3)\), showing that it is a free \(\mathcal{A}/\beta\)-module. Using this, we can determine the Steenrod powers of \(Up_{i}\), where \(U\) is the Thom class; and this and the Cartan formula finish the proof.
**Proposition 2.86**.: _In topological degrees \(12\) and below, the Adams \(E_{2}\)-page computing \((\Omega_{*}^{\xi^{\mathrm{het}}})_{3}^{\wedge}\) consists of \(h_{0}\)-towers concentrated in even topological degrees, and therefore this Adams spectral sequence collapses in degrees \(12\) and below._
Proof.: The direct-sum decomposition in Proposition 2.83 means that it suffices to prove the statement about \(h_{0}\)-towers for \(\mathcal{M}_{3}^{\mathrm{SO}}\), \(\Sigma^{8}C\alpha\), and \(\Sigma^{12}\mathbb{Z}/3\) separately. As usual, with \(M\) an \(\mathcal{A}\)-module, we write \(\mathrm{Ext}(M)\) to denote \(\mathrm{Ext}_{\mathcal{A}}^{*,*}(M,\mathbb{Z}/3)\). The first ingredient we need is \(\mathrm{Ext}(\mathbb{Z}/3)\) itself; the computation of \(\mathrm{Ext}_{\mathcal{A}}(\mathbb{Z}/3)\) in degrees \(t-s\leq 11\) is due to Gershenson [10]; this computation was later expanded to \(t-s\leq 88\) [11, 12]. In topological degrees \(2\) and below, \(\mathrm{Ext}(\mathbb{Z}/3)\) consists of a single \(h_{0}\)-tower in topological degree \(0\), implying the conclusion for \(\Sigma^{12}\mathbb{Z}/3\).
Next, we compute \(\mathrm{Ext}(C\alpha)\) using the fact that a short exact sequence of \(\mathcal{A}\)-modules induces a long exact sequence in \(\mathrm{Ext}\) groups. Specifically, factor \(C\alpha\) as an extension of \(\mathcal{A}\)-modules
\[0\longrightarrow\Sigma^{4}\mathbb{Z}/3\longrightarrow C\alpha\longrightarrow\mathbb{Z}/3\longrightarrow 0, \tag{2.87}\]
which we draw in Figure 7, left, and compute the corresponding long exact sequence in \(\mathrm{Ext}\) in Figure 7, right. There is one potentially nonzero boundary map in range: \(\partial\colon\mathrm{Ext}_{\mathcal{A}}^{0,4}(\Sigma^{4}\mathbb{Z}/3)\to\mathrm{Ext}_{\mathcal{A}}^{1,4}(\mathbb{Z}/3)\). This map must be nonzero because \(\mathrm{Ext}_{\mathcal{A}}^{0,4}(C\alpha)=\mathrm{Hom}_{\mathcal{A}}(C\alpha,\Sigma^{4}\mathbb{Z}/3)=0\). We see that in degrees \(6\) and below, \(\mathrm{Ext}(C\alpha)\) consists solely of \(h_{0}\)-towers in even degrees, which implies the part of the proposition statement coming from \(\Sigma^{8}C\alpha\).
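To see the vanishing of \(\mathrm{Hom}_{\mathcal{A}}(C\alpha,\Sigma^{4}\mathbb{Z}/3)\) used above: if \(u\) denotes the degree-\(0\) generator of \(C\alpha\), any \(\mathcal{A}\)-linear map \(f\colon C\alpha\to\Sigma^{4}\mathbb{Z}/3\) satisfies \(f(u)=0\) for degree reasons, and hence \(f(\mathcal{P}^{1}u)=\mathcal{P}^{1}f(u)=0\), so \(f\) kills both generators.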
Finally, \(\mathcal{M}_{3}^{\mathrm{SO}}\). Milnor [23, Theorem 4] showed that this module coincides with a free \(\mathcal{A}/\beta\)-module in degrees \(13\) and below, and proves (_ibid._, Lemma 5) that the \(\mathrm{Ext}\) groups of such a module consist solely of \(h_{0}\)-towers in even topological degree. Therefore in topological degrees \(12\) and below, \(\mathrm{Ext}(\mathcal{M}_{3}^{\mathrm{SO}})\) also consists solely of \(h_{0}\)-towers in even topological degrees.
This suffices to prove Theorem 2.74 for \(p=3\): \(h_{0}\)-towers on the \(E_{\infty}\)-page lift to \(\mathbb{Z}_{3}\) (i.e. the \(3\)-adic integers) summands in \((\Omega_{*}^{\xi^{\mathrm{het}}})_{3}^{\wedge}\), so there is no \(3\)-torsion in this range.
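(The reason \(h_{0}\)-towers assemble to \(\mathbb{Z}_{3}\) summands is the standard one: \(h_{0}\) detects multiplication by \(3\), so an infinite tower on the \(E_{\infty}\)-page with no differentials entering or leaving it is the associated graded of a \(\mathbb{Z}_{3}\) summand of the \(3\)-completed homotopy; in particular such towers contribute no torsion.)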
_Remark 2.88_.: The change-of-rings technique we used at \(p=2\) has an analogue at \(p=3\) for twists of _tmf_ (hence also \(3\)-local twisted string bordism in degrees \(15\) and below, because the Ando-Hopkins-Rezk map [1] \(\mathit{MTString}_{(3)}\to\mathit{tmf}_{(3)}\) is \(15\)-connected [11, 23]): using Baker-Lazarev's version of the Adams spectral sequence [1], we can take \(\operatorname{Ext}\) over the algebra
\[\mathcal{A}^{\mathit{tmf}}\coloneqq\pi_{-*}\mathrm{Map}_{\mathit{tmf}}(H \mathbb{Z}/3,H\mathbb{Z}/3), \tag{2.89}\]
where \(H\mathbb{Z}/3\) is made into a _tmf_-algebra spectrum by the ring spectrum maps \(\mathit{tmf}\xrightarrow{\tau_{\leq 0}}H\mathbb{Z}\to H\mathbb{Z}/3\), where the first map is the Postnikov \(0\)-connected quotient and the second map is induced from \(\mathbb{Z}\twoheadrightarrow\mathbb{Z}/3\). The algebra \(\mathcal{A}^{\textit{tmf}}\) was explicitly calculated by Henriques and Hill, using work of Behrens [1] and unpublished work of Hopkins-Mahowald; see Henriques [1, §13.3], Hill [11], and Bruner-Rognes [2, §13] for computations with this Adams spectral sequence.
Just like at \(p=2\), there is a little more work to do to apply this spectral sequence to twisted string bordism when the twist does not arise from a vector bundle. We will take up this question in future work joint with Matthew Yu [13], where we will see how to work over \(\mathcal{A}^{\textit{tmf}}\) for non-vector-bundle twists and that it simplifies the \(3\)-primary computation of \(\Omega_{*}^{\xi^{\mathrm{het}}}\) in degrees relevant to string theory.
### \(\xi^{\text{\rm{CHL}}}\) bordism
In this section, we compute the \(\xi^{\text{\rm{CHL}}}\) bordism groups. Just like for the \(\xi^{\text{\rm{het}}}\) bordism groups, we use the change-of-rings trick from Corollary 2.22 at \(p=2\) and work more directly with the Adams spectral sequence at odd primes. This time, however, we can deduce a lot of information from abstract isomorphisms with the Adams spectral sequences for the string bordism of \(B\text{E}_{8}\), which has been studied by Hill [11].
#### 2.4.1. \(2\)-primary computation
**Theorem 2.90**.: _In degrees \(11\) and below, the \(2\)-completions of \(\Omega_{*}^{\xi^{\mathrm{CHL}}}\) and \(\Omega_{*}^{\mathrm{String}}(B\mathrm{E}_{8})\) are abstractly isomorphic._
Proof.: By Corollary 2.22, the Adams \(E_{2}\)-page in this range coincides with the \(\operatorname{Ext}\) of \(T(-2c)\) over \(\mathcal{A}(2)\). The \(\mathcal{A}(2)\)-module structure on \(T(\mu)\) only depends on the underlying group \(BG\) and on \(\mu\bmod 2\), and \(2c\bmod 2=0\), so as \(\mathcal{A}(2)\)-modules, \(T(-2c)\cong T(0)=H^{*}(B\mathrm{E}_{8};\mathbb{Z}/2)\). So the Adams \(E_{2}\)-page coincides in the range we care about with the \(E_{2}\)-page for \(\mathit{MT}\xi^{0}=\mathit{MTString}\wedge(B\mathrm{E}_{8})_{+}\). Hill [11, Figure 3] computes the \(E_{2}\)-page corresponding to the reduced string bordism of \(B\mathrm{E}_{8}\), which we use to draw the full \(E_{2}\)-page for \(\Omega_{*}^{\xi^{\mathrm{CHL}}}\) in Figure 8.
This is an abstract isomorphism and does not a priori tell us about differentials or extensions. However, quotienting by \(\mathbb{T}[1]\) defines a map \(\mathbb{G}^{\mathrm{CHL}}\to\mathrm{Spin}\times\mathrm{E}_{8}\), which induces a map on Adams spectral sequences for Thom spectra of classifying spaces, and this map of Adams spectral sequences is identified with the map induced by \(\mathit{MTString}\wedge(B\mathrm{E}_{8})_{+}\to\mathit{MTSpin}\wedge(B\mathrm{E}_{8})_{+}\), so any differential for the string bordism of \(B\mathrm{E}_{8}\) deduced by pulling back from the Adams spectral sequence for \(\mathit{MTSpin}\wedge B\mathrm{E}_{8}\) remains valid in our Adams spectral sequence for \(\xi^{\mathrm{CHL}}\) bordism.

Figure 7. Left: the extension (2.87) of \(\mathcal{A}\)-modules at \(p=3\). Right: the associated long exact sequence in \(\operatorname{Ext}\). The dashed gray lines are actions by elements of \(\operatorname{Ext}_{\mathcal{A}}(\mathbb{Z}/3)\) that cannot be seen from this long exact sequence and must be deduced another way; we do not need them in this paper, so do not go into the details.
Moreover, we can realize the part of \(\Omega_{*}^{\xi^{\mathrm{CHL}}}\) corresponding to the gray summands in Figure 8 by string manifolds with trivial \(\mathrm{E}_{8}\)-bundle, so the gray summands split off of the rest of the Adams spectral sequence.
Looking at the black summands in Figure 8, linearity of differentials with respect to the \(\mathrm{Ext}_{\mathcal{A}}(\mathbb{Z}/2)\)-action on the \(E_{2}\)-page means the only possible nonzero differentials in the range we care about are \(d_{2}\colon E_{2}^{0,10}\to E_{2}^{2,11}\) and \(d_{2}\colon E_{2}^{1,12}\to E_{2}^{3,13}\). Hill [11, §3.3] uses the map to \(\mathit{MTSpin}\wedge(B\mathrm{E}_{8})_{+}\) to show that these two differentials are nontrivial, so as we noted above, the same is true for \(\xi^{\mathrm{CHL}}\) bordism.
As there are no more differentials, and all extensions by \(2\) in range follow from \(\mathrm{Ext}_{\mathcal{A}}(\mathbb{Z}/2)\)-action without additional information, we have proven the theorem.
_Remark 2.91_.: As described in Remark 1.55, the map \(c\colon B\mathrm{E}_{8}\to K(\mathbb{Z},4)\) defines a map from \(\xi^{\mathrm{CHL}}\) structures to \(\mathrm{Spin}\langle w_{4}\rangle\) structures, i.e. the data of a spin structure and a trivialization of \(w_{4}\). This is the CHL analogue of the passage from \(\xi^{\mathrm{het}}\) structures to \(\xi^{\mathrm{het}^{\prime}}\) structures from §2.2 -- and just as in that case, because \(c\) is \(15\)-connected, the induced map \(\Omega_{k}^{\xi^{\mathrm{CHL}}}\to\Omega_{k}^{\mathrm{Spin}\langle w_{4}\rangle}\) is an isomorphism for \(k\leq 14\), so the computations in this section also give \(\mathrm{Spin}\langle w_{4}\rangle\) bordism groups.
An alternate point of view due to Sati-Schreiber-Stasheff [23, (2.17)] is that \(\mathrm{Spin}\langle w_{4}\rangle\) structures are twisted string structures in the sense of Corollary 2.12: the trivialization of \(w_{4}(M)\) is equivalent data to a class \(\mu\in H^{4}(M;\mathbb{Z})\) and an identification of \(2\mu\) and \(\lambda(M)\), so a \(\mathrm{Spin}\langle w_{4}\rangle\)-structure is a twisted string structure for the map \(-2\colon K(\mathbb{Z},4)\to K(\mathbb{Z},4)\) (corresponding to the classifying space Sati-Schreiber-Stasheff denote \(B\mathrm{String}^{2\mathrm{DD}_{2}}\)). See also [16, Remark C.18].
Figure 8. The \(E_{2}\)-page for the Adams spectral sequence computing \(2\)-completed \(\xi^{\mathrm{CHL}}\) bordism. The gray summands correspond to classes with trivial \(\mathrm{E}_{8}\)-bundle. See Theorem 2.90 for more information. This figure is adapted from [11, Figure 3].
The proof of Theorem 2.90 took advantage of an abstract isomorphism, so it tells us nothing about the generators. The elements of \(\Omega_{*}^{\text{String}}(\text{BE}_{8})\) coming from \(\Omega_{*}^{\text{String}}(\text{pt})\) are represented by string manifolds with trivial \(\text{E}_{8}\)-bundle; these vacuously satisfy the condition \(2c=\lambda\), so define classes in \(\Omega_{*}^{\xi^{\text{CHL}}}\) representing the same elements under the abstract isomorphism with \(\Omega_{*}^{\text{String}}(\text{BE}_{8})\).
That leaves a few elements left: copies of \(\mathbb{Z}\) in degrees 4 and 8, and copies of \(\mathbb{Z}/2\) in degrees 9 and 10. We can represent the generator of \(\Omega_{4}^{\xi^{\text{CHL}}}\cong\mathbb{Z}\) by a K3 surface with an \(\text{E}_{8}\)-bundle chosen to satisfy the Bianchi identity; it would be interesting to determine generators of \(\Omega_{k}^{\xi^{\text{CHL}}}\) for \(k=8,9,10\).
#### 2.4.2. Odd-primary computation
**Theorem 2.92**.: _For \(k\leq 12\), \(\Omega_{k}^{\xi^{\text{CHL}}}\) has no odd-primary torsion._
Proof.: First we show the result for \(p=3\). The mod 3 cohomology, as an \(\mathcal{A}\)-module, of the string cover \(\mathcal{S}(G,\lambda)\) only depends on \(\lambda\bmod 3\). Therefore in the CHL case, where \(\lambda=2c\), we might as well work with \(\lambda=-c\) -- or replacing our \(K(\mathbb{Z},4)\) class with its opposite, \(\lambda=c\). This string cover corresponds to the universal twist of _MTString_ over \(K(\mathbb{Z},4)\) from Corollary 2.12, which means that by Theorem 2.11, the Thom spectrum for this twist is _MTSpin_ again! That is, the \(E_{2}\)-page of the 3-primary Adams spectral sequence for CHL bordism coincides with the \(E_{2}\)-page for spin bordism -- or for oriented bordism, because the forgetful map _MTSpin_\(\to\)_MTSO_ is a 3-primary equivalence.
Milnor [20, Theorem 4] shows that the mod 3 cohomology of _MTSO_ is free as an \(\mathcal{A}/\beta\)-module on even-degree generators, where \(\beta\) is the mod 3 Bockstein; then, he proves (_ibid._, Theorem 1) that for any spectrum with that property and satisfying a finiteness condition, there is no odd-primary torsion in homotopy. The CHL bordism spectrum satisfies these conditions, so we conclude.
For \(p\geq 5\), the argument is essentially the same as in Theorem 2.74.
## 3. Consequences in string theory
There are a few different uses of bordism groups in theories of quantum gravity. In this section, we discuss applications and questions raised by the computations in the previous section. Though we stay mostly mathematical, some of what we state in this section is only known at a physical level of rigor.
### The cobordism conjecture
As part of the Swampland program in quantum gravity, McNamara-Vafa [21] made the following conjecture, a consequence of the generally believed fact that theories of quantum gravity should not have global symmetries:
**Conjecture 3.1** (McNamara-Vafa cobordism conjecture [21]).: Suppose we have a consistent \(n\)-dimensional theory of quantum gravity in which the spacetime backgrounds that are summed over carry a \(\xi\)-structure. Then, for \(3\leq k\leq n-1\), \(\Omega_{k}^{\xi}=0\).
The key here is the meaning of "the spacetime backgrounds carry a \(\xi\)-structure" -- we do not mean just that one could sum over \(\xi\)-manifolds, but that \(\xi\) is in some to-be-specified sense the maximally general structure for which the theory makes sense. String theorists often work with singular manifolds and even Deligne-Mumford stacks on _Man_[22, 23, 24, 25, 17], and the notion of \(\xi\)-bordism appearing in Conjecture 3.1 is expected to take this into account, as some sort of bordism theory of generalized manifolds.
The tangential structures \(\xi\) currently known for various theories of quantum gravity do not satisfy the vanishing criterion in Conjecture 3.1, so there must be additional data or conditions on these theories' backgrounds modifying \(\xi\) so as to kill its bordism groups. These modifications often take the form of additional extended objects in the theory. This leads to a common application of the cobordism conjecture: compute the bordism groups for the tangential structure \(\xi\) as we currently understand it, and use any nonvanishing groups as beacons illuminating novel objects in the theory, which one then studies. This idea has been applied in [10, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30];21 in this subsection, we will use our computations from §2 and see what we can learn about the \(\mathrm{E}_{8}\times\mathrm{E}_{8}\) heterotic string and the CHL string.
Footnote 21: Despite all of this work, there are still plenty of already-worked-out computations of bordism groups relevant to various string and supergravity theories whose corresponding defects have not been determined. This includes \(\Omega^{\mathrm{Spin}}_{*}(B\mathrm{E}_{8})\)[20, 21], applicable to the \(\mathrm{E}_{8}\times\mathrm{E}_{8}\) heterotic string in the absence of the \(\mathbb{Z}/2\) swapping symmetry; \(\Omega^{\mathrm{Spin}}_{*}\)[20, Appendices E, F], relevant for type I string theory; and \(\Omega^{\mathfrak{m}_{c}}_{*}\)[20], useful for the low-energy limit of M-theory.
Despite the \(k\geq 3\) bound in Conjecture 3.1, modifying \(\xi\) to kill classes in \(\Omega^{\xi}_{1}\) and \(\Omega^{\xi}_{2}\) is often physically meaningful, and can predict useful new objects in the theory. This is a common technique in the study of the cobordism conjecture, and we will do this too.
#### 3.1.1. The \(\mathrm{E}_{8}\times\mathrm{E}_{8}\) heterotic string
McNamara-Vafa [10, §4.5] discussed predictions of their conjecture for the \(\mathrm{E}_{8}\times\mathrm{E}_{8}\) heterotic string theory, but after making the simplifying assumption that the gauge \((\mathrm{E}_{8}\times\mathrm{E}_{8})\rtimes\mathbb{Z}/2\)-bundle is trivial; the corresponding tangential structure is then \(B\mathrm{String}\). For example, their conjecture must account for \(\Omega^{\mathrm{String}}_{3}\cong\mathbb{Z}/24\), generated by \(S^{3}\) with its Lie group framing, and they explain how this is trivialized by taking into account the NS5-brane.
With \(\Omega^{\xi^{\mathrm{het}}}_{*}\) in hand, we can predict more objects. Recall the generators we found for \(\xi^{\mathrm{het}}\)-bordism groups, and our notation for them, from §2.2.1.
**Example 3.2**.: \(\Omega^{\xi^{\mathrm{het}}}_{1}\cong\mathbb{Z}/2\oplus\mathbb{Z}/2\), with generators \(S^{1}_{nb}\) and \(\mathbb{RP}^{1}\). McNamara-Vafa already considered \(S^{1}_{nb}\), but the latter is new. If one allows manifolds with singularities, \(\mathbb{RP}^{1}\) bounds \(D^{2}/(\mathbb{Z}/2)\), i.e. the disc with a principal \(\mathbb{Z}/2\)-bundle that is singular at the origin, inflated to a singular \(\mathbb{G}^{\mathrm{het}}\)-bundle via the inclusion \(\mathbb{Z}/2\hookrightarrow\mathbb{G}^{\mathrm{het}}\).
This class corresponds to a 7-brane in the \(\mathrm{E}_{8}\times\mathrm{E}_{8}\) heterotic string. The worldvolume of this brane is eight-dimensional, so the link around it in ten-dimensional spacetime is a circle. The monodromy around this circle exchanges the two \(\mathrm{E}_{8}\)-bundles. This is exactly the non-supersymmetric 7-brane recently introduced and discussed by Kaidi-Ohmori-Tachikawa-Yonekura [21].
Related 7-branes in different theories are studied by Distler-Freed-Moore [15] and Dierigl-Heckman-Montero-Torres [15]; the latter study a 7-brane in type IIB string theory, called an R7-brane, which in the cobordism conjecture corresponds to \([\mathbb{RP}^{1}]\in\Omega^{\mathrm{Spin-GL}^{+}_{1}(\mathbb{Z})}_{1}\).
As a way of better understanding Kaidi-Ohmori-Tachikawa-Yonekura's 7-brane, we can try to identify where it is sent under dualities between different string theories. For example, Hořava-Witten [16, 17, 18] identified (a certain limit of) \(\mathrm{E}_{8}\times\mathrm{E}_{8}\) heterotic string theory with a theory predicted to be the low-energy limit of a compactification of M-theory on the unit interval. Under this identification, the Kaidi-Ohmori-Tachikawa-Yonekura 7-brane ought to correspond to a defect in M-theory associated to a 2-dimensional bordism class by the cobordism conjecture. Because the passage from M-theory to heterotic string theory requires compactifying on the interval, which is a manifold with boundary, one should use a theory of bordism of compact manifolds
which are not necessarily closed.22 The bordism class should be represented by an interval bundle over \(\mathbb{RP}^{1}\), so we conjecture that the bordism class of the Möbius strip corresponds to the avatar of this brane in M-theory. As a check, M-theory compactified on a Möbius strip is expected to coincide with \(\operatorname{E}_{8}\times\operatorname{E}_{8}\) heterotic string theory compactified on \(\mathbb{RP}^{1}\) -- they are both predicted to be the CHL string, as we discussed in §1.3, though as usual only a statement about low-energy supergravity limits is known. We will not attempt to fully resolve this question in this paper: among other things, this would require finding "the right" notion of bordism for manifolds with boundary for this application.
Footnote 22: McNamara-Vafa [11, §5] hint at this generalization, though from the perspective of manifolds with singularities rather than manifolds with boundary.
Before we leave heterotic/M-duality behind, we point out a notion of bordism of manifolds with boundary, due to Conner-Floyd [10, §16], for which the Möbius strip is nonbounding; we optimistically conjecture that this is the correct kind of bordism of manifolds with boundary for applications to the cobordism conjecture.
**Definition 3.3**.: Let \(\xi_{1}\colon B_{1}\to B\text{O}\) and \(\xi_{2}\colon B_{2}\to B\text{O}\) be tangential structures and \(\eta\colon B_{1}\to B_{2}\) be a _map of tangential structures_, i.e. \(\eta\) commutes with the maps \(\xi_{i}\). A \(\xi_{2}/\xi_{1}\)_-manifold_ is a compact manifold \(M\) with \(\xi_{2}\)-structure together with
1. a \(\xi_{1}\)-structure \(\mathfrak{r}\) on \(\partial M\), and
2. an identification of the \(\xi_{2}\)-structure \(\eta(\mathfrak{r})\) on \(\partial M\) with the \(\xi_{2}\)-structure induced by taking the boundary on \(M\).
Conner-Floyd [10, §16] introduce a notion of bordism for \(\xi_{2}/\xi_{1}\)-manifolds,23 which we write \(\Omega_{\ast}^{\xi_{2}/\xi_{1}}\), such that the Thom spectrum corresponding to this notion of bordism is \(\mathit{MT}\xi_{2}/\mathit{MT}\xi_{1}\), the cofiber of \(\eta\colon\mathit{MT}\xi_{1}\to\mathit{MT}\xi_{2}\). This implies the existence of a long exact sequence
Footnote 23: Conner-Floyd only consider a few examples of \(\xi_{1}\) and \(\xi_{2}\). The works [12, 13, 14, 15] consider some more tangential structures.
\[\cdots\longrightarrow\Omega_{k}^{\xi_{1}}\overset{\eta}{\longrightarrow}\Omega_{k}^{\xi_{2}}\overset{j}{\longrightarrow}\Omega_{k}^{\xi_{2}/\xi_{1}}\overset{\partial}{\longrightarrow}\Omega_{k-1}^{\xi_{1}}\longrightarrow\dots \tag{3.4}\]
where \(j\) regards a \(\xi_{2}\)-manifold as a \(\xi_{2}/\xi_{1}\)-manifold with empty boundary, and \(\partial\) sends a \(\xi_{2}/\xi_{1}\)-manifold to its boundary with its \(\xi_{1}\)-structure.
**Lemma 3.5**.: _The class of the Möbius strip \(M\) is nonzero in \(\Omega_{2}^{\operatorname{Pin}^{+}/\operatorname{Spin}}\).24_
Footnote 24: Strictly speaking, \(\operatorname{Pin}^{+}/\operatorname{Spin}\) is not the correct tangential structure: one should replace \(\operatorname{Pin}^{+}\) with something like \(\mathfrak{m}_{c}\)[11, 12, 13], and should replace \(\operatorname{Spin}\) with something like \(\xi^{\text{het}}\), though \(\mathfrak{m}_{c}\)- and \(\operatorname{pin}^{+}\) structures on \(2\)-manifolds are equivalent data [12, §8.5.1].
Proof.: By (3.4), it suffices to prove that \([\partial M]\neq 0\) in \(\Omega_{1}^{\operatorname{Spin}}\). The boundary of the Möbius strip is a circle, and for any \(\operatorname{pin}^{+}\) structure on \(M\), the boundary circle has the nonbounding spin structure, i.e. is nonzero in \(\Omega_{1}^{\operatorname{Spin}}\). This is because if \(\partial M\) had the bounding spin structure, one could glue the disc with its standard \(\operatorname{pin}^{+}\) structure to \(M\) along \(\partial M\) and thereby obtain a \(\operatorname{pin}^{+}\) structure on \(\mathbb{RP}^{2}\), but \(\mathbb{RP}^{2}\) does not admit a \(\operatorname{pin}^{+}\) structure.
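(For the last claim, recall that a \(\operatorname{pin}^{+}\) structure exists if and only if \(w_{2}\) vanishes, and \(w(\mathbb{RP}^{2})=(1+a)^{3}=1+a+a^{2}\), where \(a\) generates \(H^{1}(\mathbb{RP}^{2};\mathbb{Z}/2)\); thus \(w_{2}(\mathbb{RP}^{2})=a^{2}\neq 0\).)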
Lemma 3.5 suggests that Conner-Floyd's notion of bordism of manifolds with boundary could be the correct one for our application in heterotic/M-theory duality.
**Example 3.6**.: Moving onto higher-codimension objects predicted by higher-dimensional bordism groups, \(\Omega_{2}^{\xi^{\text{het}}}\) is nonzero, but can be generated by products of \(S^{1}_{nb}\) and \(\mathbb{RP}^{1}\). This means that if we trivialize \([\mathbb{RP}^{1}],[S^{1}_{nb}]\in\Omega_{1}^{\xi^{\text{het}}}\) in the sense above, namely by allowing \(\operatorname{E}_{8}\times\operatorname{E}_{8}\) heterotic string theory to be defined on singular manifolds whose boundaries are \(\mathbb{RP}^{1}\) and \(S^{1}_{nb}\), then we can realize our
chosen generators of \(\Omega_{2}^{\xi^{\text{het}}}\) as boundaries of singular 3-manifolds: for example, we used \(D^{2}/(\mathbb{Z}/2)\) to realize \(\mathbb{RP}^{1}\) as a boundary, so we can use \(S^{1}_{nb}\times D^{2}/(\mathbb{Z}/2)\) to realize \(S^{1}_{nb}\times\mathbb{RP}^{1}\) as a boundary. Thus accounting for \(\Omega_{2}^{\xi^{\text{het}}}\) does not require adding any new kinds of defects or singularities beyond what we used for \(\Omega_{1}^{\xi^{\text{het}}}\).
**Example 3.7**.: \(\Omega_{3}^{\xi^{\text{het}}}\cong\mathbb{Z}/8\), generated by \(\mathbb{RP}^{3}\). As in Example 3.2, we can bound \(\mathbb{RP}^{3}\) by \(B^{4}/(\mathbb{Z}/2)\) by allowing a singularity at the origin. This bordism class should correspond to a 5-brane distinct from the NS5-brane.
**Example 3.8**.: \(\Omega_{4}^{\xi^{\text{het}}}\cong\mathbb{Z}\oplus\mathbb{Z}/2\). The \(\mathbb{Z}/2\) summand is generated by \(S^{3}\times\mathbb{RP}^{1}\), where \(S^{3}\) carries the Lie group framing, so its bordism class can be trivialized using the objects we have already discussed, like in Example 3.6. By Remark 2.64, because \(S^{3}\times S^{1}\) is bordant as \(\xi^{\text{het}}\)-manifolds to \(S^{4}\) with trivial \(\mathbb{Z}/2\)-bundle and \(\operatorname{E}_{8}\)-bundles with characteristic classes \(\pm 1\in H^{4}(S^{4};\mathbb{Z})\cong\mathbb{Z}\), this bordism class corresponds to the 4-brane recently found by Kaidi-Ohmori-Tachikawa-Yonekura [15].
The \(\mathbb{Z}\) summand in \(\Omega_{4}^{\xi^{\text{het}}}\) is new to us. It is generated by the K3 surface with trivial \(\mathbb{Z}/2\)-bundle; one \(\operatorname{E}_{8}\)-bundle is trivial, and the other cancels \(\lambda(\text{K3})\). McNamara-Vafa [16, §4.2.1] address the K3 surface without data of \(\operatorname{E}_{8}\)-bundles or a nontrivial B-field, using it to exhibit a higher-form \(\mathbb{T}\)-symmetry. Our K3 surface corresponds to a different bordism class, but McNamara-Vafa's argument still applies: as the K3 surface is believed to be a valid background for \(\operatorname{E}_{8}\times\operatorname{E}_{8}\) heterotic string theory, this higher-form \(\mathbb{T}\)-symmetry must be broken or gauged in some way. We do not know what this would look like.
\(\Omega_{5}^{\xi^{\text{het}}}\) vanishes and \(\Omega_{6}^{\xi^{\text{het}}}\cong\mathbb{Z}/2\) is generated by \(\mathbb{RP}^{3}\times S^{3}\), so as in Example 3.6 we can realize it as a boundary without introducing any new kinds of singularities.
**Example 3.9**.: \(\Omega_{7}^{\xi^{\text{het}}}\cong\mathbb{Z}/16\), generated by \(\mathbb{RP}^{7}\). This bordism class is closely analogous to Examples 3.2 and 3.7; this time, we have a 1-brane, i.e. a string.
_Remark 3.10_ (Relating bordism classes by compactification25).: For the cobordism conjecture for type IIB string theory considered on spin-\(\operatorname{GL}_{2}^{+}(\mathbb{Z})\) manifolds, \([\mathbb{RP}^{k}]\in\Omega_{k}^{\text{Spin-GL}_{2}^{+}(\mathbb{Z})}\) is nonzero for \(k=1\), 3, and 7 [13, §14.3.2], so we would expect these classes to correspond to three different kinds of extended objects, akin to Examples 3.2, 3.7, and 3.9. However, in [13, §7], it is shown that the two higher-codimension objects can be expressed as compactifications of the R7-brane corresponding to \(\mathbb{RP}^{1}\), so there is really only one novel object. We suspect something similar happens here: that in \(\operatorname{E}_{8}\times\operatorname{E}_{8}\) heterotic string theory, the extended objects corresponding to \(\mathbb{RP}^{3}\) and \(\mathbb{RP}^{7}\) can be accounted for using previously known branes and Kaidi-Ohmori-Tachikawa-Yonekura's 7-brane from Example 3.2 corresponding to \(\mathbb{RP}^{1}\).
Footnote 25: We thank Markus Dierigl for pointing this out to us.
From a bordism point of view, we are saying that if we allow singular \(\xi^{\text{het}}\)-manifolds which locally look like \(\mathbb{R}^{k}\times D^{2}/(\mathbb{Z}/2)\), it should be possible to not just bound \(\mathbb{RP}^{1}\), but also to bound \(\mathbb{RP}^{3}\) and \(\mathbb{RP}^{7}\). We leave this as a conjecture.
**Example 3.11**.: \(\Omega_{8}^{\xi^{\text{het}}}\), which corresponds to codimension-9 objects, is isomorphic to either \(\mathbb{Z}^{3}\oplus\mathbb{Z}/2\) or \(\mathbb{Z}^{3}\oplus(\mathbb{Z}/2)^{\oplus 2}\), depending on the fate of the differential (D3). The generators of these four or five summands that we found are:
* \(\mathbb{HP}^{2}\) with two different \(\xi^{\text{het}}\)-structures, giving two \(\mathbb{Z}\) summands;
* the Bott manifold, generating another free summand;
* \(\mathbb{RP}^{7}\times S^{1}_{nb}\) generating the \(\mathbb{Z}/2\) summand that is present even if (D3) does not vanish; and
* the manifold \(X_{8}\) that we discussed in §2.2.2, an \(S^{3}\times S^{3}\)-bundle over \(\mathbb{RP}^{2}\). If the differential (D3) is nonzero, then \(X_{8}\) bounds as a \(\xi^{\text{het}}\)-manifold.
\(\mathbb{RP}^{7}\times S^{1}_{nb}\) is already accounted for in the sense of Example 3.6, so we focus on the other generators.
Both \(B\) and \(\mathbb{HP}^{2}\) are nonbounding in the bordism group \(\Omega^{\text{Spin-Mp}_{2}(\mathbb{Z})}_{8}\), which appears in the study of the cobordism conjecture for type IIB string theory; see [15, §6.9] for a discussion of defects in type IIB corresponding to these bordism classes. Like in Example 3.8, the story in \(\mathrm{E}_{8}\times\mathrm{E}_{8}\) heterotic string theory is presumably not exactly the same, but it may be analogous.
Finally, \(X_{8}\). Following the arguments in [14, §4.5] and [15, §7.6, §7.8], the description of \(X_{8}\) as a fiber bundle over \(\mathbb{RP}^{2}\) with fiber \(S^{3}\times S^{3}\) suggests the following string-theoretic construction: use the singular manifold corresponding to the NS5-brane to bound the first \(S^{3}\), compactify on the second \(S^{3}\), and then fiber over \(D^{3}/(\mathbb{Z}/2)\) to make \(X_{8}\) a boundary of a singular manifold. We do not know whether this is a valid background for the \(\mathrm{E}_{8}\times\mathrm{E}_{8}\) heterotic string; an argument for or against it could provide an example of a use of Conjecture 3.1 to make a mathematical conjecture for the fate of \(X_{8}\) based on string-theoretic predictions.
**Example 3.12**.: \(\Omega^{\xi^{\text{het}}}_{9}\) corresponds to zero-dimensional objects, i.e. point defects, and is isomorphic to either \((\mathbb{Z}/2)^{\oplus 4}\) or \((\mathbb{Z}/2)^{\oplus 6}\), depending on the fate of (D3). Three of the generators we found in §2.2.1 are of the form \(S^{1}_{nb}\) times a \(\xi^{\text{het}}\)-manifold, so have already been accounted for in the sense of Example 3.6. The fourth generator is \(B\times\mathbb{RP}^{1}\), so it is also already accounted for.
The remaining two manifolds that might or might not be necessary are \(X_{8}\times S^{1}_{nb}\), which as usual is already taken care of, and a manifold \(X_{9}\) which we did not determine.
#### 3.1.2. The CHL string
In Theorems 2.90 and 2.92, we saw that \(\Omega^{\xi^{\text{CHL}}}_{*}\) is abstractly isomorphic to \(\Omega^{\text{String}}_{*}(B\mathrm{E}_{8})\). Thus there is a summand corresponding to \(\Omega^{\text{String}}_{*}(\text{pt})\), and as we saw above, these classes can be represented by string manifolds with trivial \(\mathrm{E}_{8}\)-bundle. Some of these manifolds were accounted for by McNamara-Vafa [14, §4.5] in heterotic string theory, e.g. killing \(S^{3}\) with its nonbounding framing using the fivebrane, and presumably a similar defect is present in the CHL string. McNamara-Vafa leave plenty of string bordism classes' interpretations in terms of defects open to address, and this would be interesting to understand more in the setting of the CHL string.
We also found a few more classes in \(\Omega^{\xi^{\text{CHL}}}_{*}\). For example, \(\Omega^{\xi^{\text{CHL}}}_{4}\cong\mathbb{Z}\), generated by a K3 surface with \(\mathrm{E}_{8}\)-bundle chosen to satisfy the Bianchi identity. Like in Example 3.8, this corresponds to some codimension-4 object, though we do not know what it will look like.
### Is the \(\mathbb{Z}/2\) symmetry on the \(\mathrm{E}_{8}\times\mathrm{E}_{8}\) heterotic string anomalous?
Quantum field theories can come with the data of an _anomaly_, a mild inconsistency in which key quantities in the field theory are not defined absolutely without fixing additional data. For example, one wants the partition function of a QFT on a manifold \(M\) to be a complex number, but an anomaly signals that the partition function is only an element of a complex line which has not been trivialized. The process of resolving this inconsistency, when necessary, is called _anomaly cancellation_.
Freed-Teleman [17] describe anomaly cancellation for a broad class of quantum field theories as follows: an \(n\)-dimensional quantum field theory \(Z\) lives at the boundary of an \((n+1)\)-dimensional invertible field theory \(\alpha\), called the _anomaly field theory_ of \(Z\). The tangential structures of \(Z\) and \(\alpha\) match. Anomaly cancellation is the procedure of trivializing \(\alpha\), i.e. establishing an isomorphism from \(\alpha\) to the trivial theory.
We think of this through Atiyah-Segal's approach [11, 20]: field theories are symmetric monoidal functors from (potentially geometric) bordism categories into categories such as \(\mathrm{Vect}_{\mathbb{C}}\). The perspective of extended field theory means these are often \((\infty,n)\)-categories. If \(\mathcal{C}\) and \(\mathcal{D}\) are two symmetric monoidal \((\infty,n)\)-categories, the \((\infty,n)\)-category of symmetric monoidal functors \(F\colon\mathcal{C}\to\mathcal{D}\) acquires the symmetric monoidal structure of "pointwise tensor product," specified by the formula
\[(F_{1}\otimes F_{2})(x)\coloneqq F_{1}(x)\otimes_{\mathcal{D}}F_{2}(x), \tag{3.13}\]
where \(x\) is an object, morphism, higher morphism, etc.
**Definition 3.14** (Freed-Moore [14, Definition 5.7]).: Let \(\mathcal{C}\) be a symmetric monoidal \((\infty,n)\)-category. An _invertible field theory_ is a field theory \(\alpha\colon\mathcal{B}\mathit{ord}_{n}^{\xi}\to\mathcal{C}\) such that there is another field theory \(\alpha^{-1}\colon\mathcal{B}\mathit{ord}_{n}^{\xi}\to\mathcal{C}\) such that \(\alpha\otimes\alpha^{-1}\simeq\mathbf{1}\), the trivial theory.
The trivial theory \(\mathbf{1}\colon\mathcal{B}\mathit{ord}_{n}^{\xi}\to\mathcal{C}\) is defined to send all objects to the monoidal unit in \(\mathcal{C}\) and all morphisms and higher morphisms to identity morphisms, resp. identity higher morphisms.
Therefore the classification of anomalies follows from the classification of invertible field theories, and anomaly cancellation is an isomorphism from an invertible field theory to \(\mathbf{1}\). Freed-Hopkins-Teleman [14] classify invertible _topological_ field theories using stable homotopy theory, and Grady-Pavlov [15, §5] generalize this in the nontopological setting.
In most cases, including the supergravity theories studied in this paper, the QFT under study is unitary, so its anomaly theory has the Wick-rotated analogue of unitarity, _reflection positivity_. Freed-Hopkins [14] classify reflection-positive invertible field theories.
Let \(I_{\mathbb{Z}}\) denote the _Anderson dual of the sphere spectrum_[1, 20].
**Theorem 3.15** (Freed-Hopkins [14, Theorem 1.1]).: _Let \(\xi\) be a tangential structure. There is a natural isomorphism from the group of deformation classes of \((n+1)\)-dimensional reflection-positive invertible topological field theories on \(\xi\)-manifolds to the torsion subgroup of \([\mathit{MT}\xi,\Sigma^{n+2}I_{\mathbb{Z}}]\)._
Freed-Hopkins then conjecture (_ibid._, Conjecture 8.37) that the entire group classifies all reflection-positive invertible field theories, topological or not.
\(I_{\mathbb{Z}}\) satisfies a universal property which leads to the existence of a natural short exact sequence
\[0\longrightarrow\operatorname{Ext}^{1}_{\mathbb{Z}}(\Omega^{\xi}_{n+1},\mathbb{Z})\longrightarrow[\mathit{MT}\xi,\Sigma^{n+2}I_{\mathbb{Z}}]\longrightarrow\operatorname{Hom}(\Omega^{\xi}_{n+2},\mathbb{Z})\longrightarrow 0, \tag{3.16}\]
and this sequence carries physical meaning for the classification of possible anomalies for an \(n\)-dimensional QFT \(Z\). For example, \(\operatorname{Hom}(\Omega_{n+2}^{\xi},\mathbb{Z})\) is a group of \(\mathbb{Z}\)-valued degree-\((n+2)\) characteristic classes of \(\xi\)-manifolds, and the quotient map in (3.16) sends the anomaly field theory of \(Z\) to its _anomaly polynomial_. This data can often be computed using perturbative techniques for \(Z\), and is referred to as the _local anomaly_. Consequently, one can use bordism computations to assess what the group of possible anomalies of a QFT is, and whether a specific anomaly field theory is trivializable; see [14, 15, 16, 17, 18, 19, 20] for recent anomaly cancellation theorems in string and supergravity theories using this technique.
#### 3.2.1. Anomalies for the \(\mathrm{E}_{8}\times\mathrm{E}_{8}\) heterotic string
For the \(\mathrm{E}_{8}\times\mathrm{E}_{8}\) heterotic string, the anomaly field theory is an element of the group \([\mathit{MT}\xi^{\mathrm{het}},\Sigma^{12}I_{\mathbb{Z}}]\): the free part is noncanonically isomorphic to the free part of \(\Omega_{12}^{\xi^{\mathrm{het}}}\), and the torsion part is noncanonically isomorphic to the torsion subgroup of \(\Omega_{11}^{\xi^{\mathrm{het}}}\). Though we have not completely determined these groups, \(\Omega_{11}^{\xi^{\mathrm{het}}}\) is nonzero, as we showed
in Theorem 2.62, so there is the possibility of a nontrivial anomaly to cancel. One generally expects that the anomaly field theory itself is trivial, because physicists have undertaken many consistency checks on \(\mathrm{E}_{8}\times\mathrm{E}_{8}\) heterotic string theory, but sometimes there is a surprise: in joint work with Dierigl, Heckman, and Montero [6], we found that the anomaly theory for the duality symmetry in type IIB string theory is nonzero, and requires a modification of the theory to be trivialized.
For the \(\mathrm{E}_{8}\times\mathrm{E}_{8}\) heterotic string, there has been a fair amount of work already cancelling the anomaly in special cases, but for the full tangential structure \(\xi^{\mathrm{het}}\), the question of anomaly cancellation is open. The original work of Green-Schwarz [10] shows that the anomaly polynomial vanishes, so by (3.16), we only need to look at bordism invariants out of \(\Omega^{\xi^{\mathrm{het}}}_{11}\). If one ignores the \(\mathbb{Z}/2\) swapping symmetry, the anomaly is known to be trivial: Witten [11, §4] showed that the global anomaly is classified by a bordism invariant \(\Omega^{\mathrm{Spin}}_{11}(B\mathrm{E}_{8})\to\mathbb{C}^{\times}\), and Stong [12] showed that \(\Omega^{\mathrm{Spin}}_{11}(B\mathrm{E}_{8})=0\) (see Remark 2.34). Sati [10] studies a closely related question in terms of \(\Omega^{\mathrm{String}}_{11}(B\mathrm{E}_{8})\).
Recent work of Tachikawa-Yamashita [13] (see also Tachikawa [14] and Yonekura [15, §4]) cancels anomalies in a large class of compactifications of heterotic string theory using an ingenious _TMF_-based argument. Their work does not take into account the \(\mathbb{Z}/2\) swapping symmetry. It would be interesting to address the full anomaly on \(\xi^{\mathrm{het}}\)-manifolds, either by directly computing it on generators of \(\Omega^{\xi^{\mathrm{het}}}_{11}\) or by adapting Tachikawa-Yonekura's argument. If this symmetry does have a nontrivial anomaly, this would have consequences for the CHL string, either requiring a modification of the theory or showing that it is inconsistent.
#### 3.2.2. Anomalies for the CHL string
Anomaly cancellation for the CHL string has been studied less. In Theorems 2.90 and 2.92, we saw that \(\Omega^{\xi^{\mathrm{CHL}}}_{11}\) is torsion, so the anomaly polynomial vanishes; and we saw \(\Omega^{\xi^{\mathrm{CHL}}}_{10}\cong\mathbb{Z}/2\oplus\mathbb{Z}/2\), so there is a potential for the anomaly field theory to be nontrivial, which would be interesting to check.
# Courant-Nijenhuis algebroids

Henrique Bursztyn, Thiago Drummond, Clarice Netto
###### Abstract.
We introduce Courant 1-derivations, which describe a compatibility between Courant algebroids and linear (1,1)-tensor fields and lead to the notion of Courant-Nijenhuis algebroids. We provide examples of Courant 1-derivations on exact Courant algebroids and show that holomorphic Courant algebroids can be viewed as special types of Courant-Nijenhuis algebroids. By considering Dirac structures, one recovers the Dirac-Nijenhuis structures of [5] (in the special case of the standard Courant algebroid) and obtains an equivalent description of Lie-Nijenhuis bialgebroids [9] via Manin triples.
###### Contents
* 1 Introduction
* 2 1-Derivations on vector bundles
* 2.1 Equivalence with linear (1,1)-tensor fields
* 2.2 The Nijenhuis condition
* 2.3 Duality
* 2.4 Compatibility with (pre-)Lie algebroids
* 3 1-Derivations on Courant algebroids
* 3.1 Courant 1-derivations and Courant-Nijenhuis algebroids
* 3.2 Invariant Dirac structures
* 4 Courant 1-derivations on \(TM\oplus T^{*}M\)
* 4.1 Courant 1-derivations from pseudo-Riemannian metrics
* 4.2 Nijenhuis 1-derivations and the Kähler condition
* 4.3 B-field transformations
* 5 Holomorphic Courant algebroids
* 6 Lagrangian splittings and doubles
* 6.1 Lagrangian splittings of Courant algebroids
* 6.2 Lagrangian splittings of Courant 1-derivations
## 1. Introduction
A 1-derivation on a vector bundle \(E\to M\) is a connection-like object that codifies a linear (1,1)-tensor field on the total space of \(E\) in the same way that usual derivations of vector bundles correspond to linear vector fields. As it turns out, many examples of compatibility conditions involving structures of interest in Poisson geometry can be conveniently expressed in terms of such 1-derivations; in this context, a special role is played by "Nijenhuis 1-derivations", i.e., 1-derivations whose corresponding linear (1,1)-tensor fields have vanishing Nijenhuis torsion.
A motivating example is that of Poisson-Nijenhuis structures [21], originally formulated in terms of an intricate notion of compatibility involving Poisson structures and Nijenhuis operators (that includes the vanishing of the so-called Magri-Morosi concomitant). From the recent
viewpoint of [4, 9], this is understood as follows: a Nijenhuis operator \(r\) on a manifold \(M\) canonically gives rise to a 1-derivation on \(TM\) and a dual one on \(T^{*}M\), and the compatibility of \(r\) with a Poisson bivector field \(\pi\) is simply that \(\pi^{\sharp}:TM\to T^{*}M\) intertwines these 1-derivations. This perspective leads the way to different generalizations, such as
1. Dirac-Nijenhuis structures [5],
2. Lie-Nijenhuis bialgebroids [9].
Just as Poisson structures are particular examples of both Dirac structures and Lie bialgebroids [8, 19], Poisson-Nijenhuis structures are special cases of the objects in (a) and (b). In this paper, we take a step further and introduce the closely related notion of _Courant-Nijenhuis algebroid_.
Dirac-Nijenhuis structures and Lie-Nijenhuis bialgebroids are motivated by the theory of Lie groupoids, in that they arise as infinitesimal counterparts of presymplectic-Nijenhuis and Poisson-Nijenhuis groupoids, respectively, generalizing the correspondence between Poisson-Nijenhuis structures and symplectic-Nijenhuis groupoids from [27], see [5, 9]. An important class of examples is given by holomorphic structures. Any holomorphic vector bundle can be seen as a smooth real vector bundle equipped with a special type of Nijenhuis 1-derivation, that we call a "Dolbeault 1-derivation". In this particular context, (a) and (b) recover holomorphic Dirac structures and holomorphic Lie bialgebroids, respectively, and their integrations correspond to holomorphic presymplectic and holomorphic Poisson groupoids.
Courant algebroids are central ingredients in the theory of Dirac structures and Lie bialgebroids. The main object of study in this paper is a notion of compatibility between 1-derivations and Courant algebroids, described in Definition 3.3 by what we call a _Courant 1-derivation_. A Courant algebroid equipped with a Courant 1-derivation that is also Nijenhuis is called a _Courant-Nijenhuis algebroid_. Dirac structures therein generalize the Dirac-Nijenhuis structures of [5] and provide an alternative approach to the Lie-Nijenhuis bialgebroids of [9] via Manin triples.
The paper is structured as follows. In § 2 we recall 1-derivations on vector bundles, their main examples and properties, including the notion of duality, the Nijenhuis condition, and their compatibility with (pre-)Lie algebroid structures. Courant 1-derivations and Courant-Nijenhuis algebroids are introduced in § 3, along with their Dirac structures. As a basic example, we show that any (1,1)-tensor field on a manifold \(M\) canonically defines a Courant 1-derivation on the standard Courant algebroid \(\mathbb{T}M=TM\oplus T^{*}M\) that underlies the Dirac-Nijenhuis structures studied in [5]. In § 4 we consider more general Courant 1-derivations on \(\mathbb{T}M\), including a class of examples obtained through the additional choice of a (pseudo-)Riemannian metric; in this case, the Nijenhuis condition is shown to be related to the Kähler compatibility condition (Prop. 4.5). In § 5 we show (Theorem 5.3) that Courant-Nijenhuis algebroids defined by Dolbeault 1-derivations coincide with holomorphic Courant algebroids. In § 6 we consider lagrangian splittings of Courant algebroids equipped with Courant 1-derivations (Theorem 6.1). We show in particular that the Drinfeld double of a Lie-Nijenhuis bialgebroid is a Courant-Nijenhuis algebroid, thereby establishing an equivalence between Lie-Nijenhuis bialgebroids and Courant-Nijenhuis algebroids equipped with splittings by Dirac-Nijenhuis structures.
**Remark 1.1**.: Other works in the literature involve Nijenhuis structures and Courant algebroids but follow a different direction. Nijenhuis tensors on Courant algebroids have been considered by various authors [7, 10, 15], with applications to deformations, hierarchies and compatibilities of geometric structures, see e.g. [1, 2]. The fundamental objects of interest in these papers are vector bundle endomorphisms \(N:E\to E\) with vanishing Nijenhuis torsion, where \(E\) is a Courant algebroid and Nijenhuis torsion is defined with respect to the Courant bracket. In contrast, the present paper studies a notion of compatibility between a (ordinary) linear Nijenhuis operator \(K:TE\to TE\) on the total space of \(E\) and the Courant structure on \(E\).
**Acknowledgments**. This project was partially supported by CNPq (National Council for Scientific and Technological Development), FAPERJ (Rio de Janeiro State Research Foundation) and by grant \(\sharp\) 2022/06205-2 of FAPESP (São Paulo State Research Foundation).
## 2. 1-Derivations on vector bundles
A central role in this paper is played by 1-derivations on vector bundles, so we start by briefly recalling their definition and basic properties.
Let \(p:E\to M\) be a smooth, real vector bundle. A _1-derivation_ on \(E\to M\) is a triple \(\mathcal{D}=(D,l,r)\), where \(r:TM\to TM\) and \(l:E\to E\) are vector-bundle maps covering the identity, and \(D:\Gamma(E)\to\Gamma(T^{*}M\otimes E)\) is an \(\mathbb{R}\)-linear map satisfying the following Leibniz-type condition:
\[D_{X}(f\sigma)=fD_{X}(\sigma)+(\mathcal{L}_{X}f)\,l(\sigma)-(\mathcal{L}_{r(X) }f)\,\sigma, \tag{2.1}\]
where \(X\in\mathfrak{X}(M)\), \(\sigma\in\Gamma(E)\) and \(f\in C^{\infty}(M)\), and we use the notation
\[D_{X}:\Gamma(E)\to\Gamma(E),\qquad D_{X}(\sigma)=i_{X}(D(\sigma)).\]
Note that any linear combination of 1-derivations (defined componentwise) is again a 1-derivation.
Let \(F\subseteq E\) be a subbundle (over the same base, for simplicity).
**Definition 2.1**.: _We say that \(F\) is \(\mathcal{D}\)-invariant if_
* \(l(F)\subseteq F\)_,_
* \(D_{X}(\Gamma(F))\subseteq\Gamma(F),\qquad\forall X\in\mathfrak{X}(M)\)_._
When \(F\) is \(\mathcal{D}\)-invariant, \(\mathcal{D}\) restricts to a 1-derivation on \(F\),
\[(D|_{\Gamma(F)},l|_{F},r).\]
### Equivalence with linear (1,1)-tensor fields
Just as usual derivations on a vector bundle \(E\to M\) are equivalent to linear vector fields on \(E\) (see e.g. [20]), 1-derivations are in bijective correspondence with linear (1,1)-tensor fields, see [4, § 6.1] and [5, § 2.1].
Let \(K\in\Omega^{1}(E,TE)\), i.e. \(K\) is a (1,1)-tensor field on the total space \(E\), that we view as a vector-bundle morphism \(K:TE\to TE\). We say that \(K\) is _linear_ if \(K:TE\to TE\) is also a vector-bundle morphism from the tangent prolongation bundle \(TE\to TM\) to itself (not necessarily covering the identity on \(TM\)). Linear (1,1)-tensor fields form a linear subspace of \(\Omega^{1}(E,TE)\). The reader can find a detailed treatment of linear tensors in [4], where the results stated in this section can be found.
Any linear (1,1)-tensor field \(K:TE\to TE\) gives rise to a 1-derivation \(\mathcal{D}=(D,l,r)\) on \(E\) as follows: \(r:TM\to TM\) is the restriction of \(K|_{M}\) to vectors tangent to the zero section, \(l:E\to E\) is the restriction of \(K|_{M}\) to vectors tangent to the \(p\)-fibers, and \(D:\Gamma(E)\to\Gamma(T^{*}M\otimes E)\) is the \(\mathbb{R}\)-linear map defined by
\[D_{X}(\sigma)=(\mathcal{L}_{\sigma^{\uparrow}}K)(X),\qquad\sigma\in\Gamma(E), \,X\in TM,\]
where \(\sigma^{\uparrow}\in\mathfrak{X}(E)\) is the _vertical lift_ of \(\sigma\in\Gamma(E)\), given by \(\sigma^{\uparrow}(e)=\left.\frac{d}{dt}\right|_{t=0}\big(e+t\,\sigma(p(e))\big)\). This assignment establishes a linear bijection between linear (1,1)-tensor fields and 1-derivations on \(E\) with natural functorial properties, see [5, Thm. 2.1]. A direct consequence is that a subbundle \((F\to M)\subset(E\to M)\) is \(\mathcal{D}\)-invariant if and only if it is preserved by the corresponding linear (1,1)-tensor field \(K\),
\[K(TF)\subseteq TF.\]
Let us give some examples.
**Example 2.2** (Connections).: A connection \(\nabla\) on \(E\) defines a 1-derivation with \(r=0\), \(l=\mathrm{id}_{E}\) and \(D=\nabla\). The corresponding linear (1,1)-tensor field \(K:TE\to TE\) is the projection operator on the vertical bundle \(\ker(Tp)\subseteq TE\) along the horizontal bundle defined by \(\nabla\).
**Example 2.3** (Tangent and cotangent lifts).: Given a (1,1)-tensor on \(M\), \(r:TM\to TM\), its tangent lift \(r^{tg}:T(TM)\to T(TM)\) and cotangent lift \(r^{ctg}:T(T^{*}M)\to T(T^{*}M)\) are linear (1,1)-tensor fields on \(TM\) and \(T^{*}M\), respectively, defined as follows:
\[r^{tg}=\Theta\circ Tr\circ\Theta,\qquad(\omega_{can})^{\flat}\circ r^{ctg}=( \varphi_{r}^{*}\,\omega_{can})^{\flat},\]
where \(\Theta:T(TM)\to T(TM)\) is the canonical involution of the double tangent bundle \(T(TM)\), \(\omega_{can}\) is the canonical symplectic form on \(T^{*}M\), and \(\varphi_{r}:T^{*}M\to T^{*}M\) is just \(r^{*}\) seen as a smooth map from \(T^{*}M\) to itself. The linearity of \(r^{tg}\) and \(r^{ctg}\) was proved in [9, Thm. 3.4], along with the identification of their corresponding 1-derivations as
\[\mathcal{D}^{r}=(D^{r},r,r),\quad\text{ and }\quad\mathcal{D}^{r,*}=(D^{r,*},r^{ *},r),\]
where, for \(X\in\mathfrak{X}(M)\), \(D^{r}_{X}:\mathfrak{X}(M)\to\mathfrak{X}(M)\) and \(D^{r,*}_{X}:\Omega^{1}(M)\to\Omega^{1}(M)\) are given by
\[D^{r}_{X}(Y) =(\mathcal{L}_{Y}r)(X)=[Y,r(X)]-r([Y,X]), \tag{2.2}\] \[D^{r,*}_{X}(\alpha) =\mathcal{L}_{X}(r^{*}\alpha)-\mathcal{L}_{r(X)}\alpha. \tag{2.3}\]
\(\diamond\)
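Since (2.2) and (2.3) are used throughout the paper, it may help to see the Leibniz-type rule (2.1) verified in coordinates. The following sympy sketch is an editorial illustration (not part of the original arguments): it checks on \(\mathbb{R}^{2}\) that \(D^{r}\) satisfies (2.1) with \(l=r\), for arbitrary test choices of \(r\), \(X\), \(Y\) and \(f\).

```python
# Symbolic check on R^2 that D^r_X(Y) = [Y, r(X)] - r([Y, X]) from (2.2)
# satisfies the Leibniz rule (2.1) with l = r; all test data are arbitrary.
import sympy as sp

x, y = sp.symbols('x y')
coords = (x, y)

def bracket(X, Y):
    # Lie bracket of vector fields: [X, Y]^i = X^j d_j Y^i - Y^j d_j X^i
    return [sum(X[j]*sp.diff(Y[i], coords[j]) - Y[j]*sp.diff(X[i], coords[j])
                for j in range(2)) for i in range(2)]

def apply_r(r, X):
    return [sum(r[i][j]*X[j] for j in range(2)) for i in range(2)]

def D_r(r, X, Y):
    # D^r_X(Y) = (L_Y r)(X) = [Y, r(X)] - r([Y, X])
    return [u - v for u, v in zip(bracket(Y, apply_r(r, X)),
                                  apply_r(r, bracket(Y, X)))]

r = [[x*y, sp.sin(x)], [y**2, x + y]]         # arbitrary (1,1)-tensor field
X, Y, f = [x**2, y], [sp.exp(y), x*y], x*y**2

LXf  = sum(X[j]*sp.diff(f, coords[j]) for j in range(2))              # L_X f
LrXf = sum(apply_r(r, X)[j]*sp.diff(f, coords[j]) for j in range(2))  # L_{r(X)} f

lhs = D_r(r, X, [f*Yi for Yi in Y])
rhs = [f*u + LXf*v - LrXf*w for u, v, w in zip(D_r(r, X, Y), apply_r(r, Y), Y)]
print([sp.simplify(a - b) for a, b in zip(lhs, rhs)])                 # [0, 0]
```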
**Example 2.4** (Holomorphic structures and Dolbeault 1-derivations).: Let \(r:TM\to TM\) be a complex structure on \(M\). A holomorphic vector bundle \(\mathcal{E}\to M\) can be regarded as a real vector bundle \(E\to M\) equipped with a fibrewise complex structure \(l\in\operatorname{End}(E)\), \(l^{2}=-\mathrm{Id}\), and a flat \(T^{0,1}\)-connection \(\overline{\partial}\) on the complex vector bundle \((E,l)\)[22]. One can equivalently express the holomorphic structure on \(E\) determined by \(r\), \(l\) and \(\overline{\partial}\) as a 1-derivation \(\mathcal{D}^{Dolb}=(D,l,r)\), where
\[D_{X}(\sigma)=l(\overline{\partial}_{X+\mathbf{i}\,r(X)}\sigma). \tag{2.4}\]
Such 1-derivations arising from holomorphic structures will be referred to as _Dolbeault 1-derivations_, and they are characterized by the fact that the corresponding linear (1,1)-tensor fields \(K:TE\to TE\) are complex structures on the total space of \(E\) (see Example 2.5 below). Subbundles of \(E\) which are \(\mathcal{D}^{Dolb}\)-invariant (in the sense of Def. 2.1) are holomorphic subbundles.
As special cases, the holomorphic structures on \(TM\) and \(T^{*}M\) induced by a complex structure \(r:TM\to TM\) are given by the 1-derivations \(\mathcal{D}^{r}\) and \(\mathcal{D}^{r,*}\) from the previous example, with corresponding linear complex structures \(r^{tg}\) and \(r^{ctg}\), see [9, § 5.1]. \(\diamond\)
### The Nijenhuis condition
Let \(T:TN\to TN\) be a (1,1)-tensor field on a manifold \(N\). The Nijenhuis torsion of \(T\) is \(\mathcal{N}_{T}\in\Omega^{2}(N,TN)\) given by
\[\mathcal{N}_{T}(X_{1},X_{2})=[T(X_{1}),T(X_{2})]-T([X_{1},X_{2}]_{T}),\ \ X_{1},\,X_{2}\in\mathfrak{X}(N),\]
where \([X_{1},X_{2}]_{T}=[T(X_{1}),X_{2}]+[X_{1},T(X_{2})]-T([X_{1},X_{2}])\) is the deformed bracket. If \(\mathcal{N}_{T}=0\), \(T\) is called a _Nijenhuis operator_.
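As an editorial illustration of this definition (not part of the original text), the sympy sketch below computes \(\mathcal{N}_{T}\) on \(\mathbb{R}^{2}\): a constant complex structure has vanishing torsion, while a generic (1,1)-tensor field does not. All test data are arbitrary choices.

```python
# sympy computation of the Nijenhuis torsion N_T(X1, X2) on R^2;
# the tensors and vector fields below are arbitrary illustrative choices.
import sympy as sp

x, y = sp.symbols('x y')
coords = (x, y)

def bracket(X, Y):
    return [sum(X[j]*sp.diff(Y[i], coords[j]) - Y[j]*sp.diff(X[i], coords[j])
                for j in range(2)) for i in range(2)]

def apply_T(T, X):
    return [sum(T[i][j]*X[j] for j in range(2)) for i in range(2)]

def nijenhuis_torsion(T, X1, X2):
    # deformed bracket [X1, X2]_T = [T X1, X2] + [X1, T X2] - T([X1, X2])
    deformed = [u + v - w for u, v, w in zip(bracket(apply_T(T, X1), X2),
                                             bracket(X1, apply_T(T, X2)),
                                             apply_T(T, bracket(X1, X2)))]
    # N_T(X1, X2) = [T X1, T X2] - T([X1, X2]_T)
    return [sp.simplify(u - v)
            for u, v in zip(bracket(apply_T(T, X1), apply_T(T, X2)),
                            apply_T(T, deformed))]

a, b = sp.Function('a')(x, y), sp.Function('b')(x, y)
J = [[0, -1], [1, 0]]                          # constant complex structure
print(nijenhuis_torsion(J, [a, 0], [0, b]))    # [0, 0]: J is Nijenhuis
T = [[0, x], [1, 0]]
print(nijenhuis_torsion(T, [1, 0], [0, 1]))    # [0, -1]: T is not Nijenhuis
```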
By the equivalence between 1-derivations and linear (1,1)-tensor fields, properties of the latter can be expressed in terms of the former. Let \(\mathcal{D}=(D,l,r)\) be a 1-derivation with corresponding linear (1,1)-tensor field \(K\) on \(E\). It is proven in [4, § 6] that
\[\mathcal{N}_{K}=0\Longleftrightarrow\left\{\begin{array}{l}\mathcal{N}_{r} =0,\\ D_{X}(l(\sigma))-l(D_{X}(\sigma))=0,\\ l(D_{[X,Y]}(\sigma))-[D_{X},D_{Y}](\sigma)-D_{[X,Y]_{r}}(\sigma)=0,\end{array}\right. \tag{2.5}\]
where \([D_{X},D_{Y}]\) is the commutator of operators on \(\Gamma(E)\). We refer to the equations on the right hand side as the _Nijenhuis equations_ for the 1-derivation \((D,l,r)\), and 1-derivations satisfying them will be called _Nijenhuis 1-derivations_.
We will also consider the algebraic condition \(K^{2}=-\mathrm{id}_{TE}\), saying that \(K\) is an almost complex structure on the total space of \(E\). In this case, it is also proven in [4, Cor. 6.2] that
\[K^{2}=-\mathrm{id}_{TE}\Longleftrightarrow\left\{\begin{array}{l}r^{2}=- \mathrm{id}_{TM},\\ l^{2}=-\mathrm{id}_{E},\\ D_{r(X)}(\sigma)+l(D_{X}(\sigma))=0.\end{array}\right. \tag{2.6}\]
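For the 1-derivation \(\mathcal{D}^{r}=(D^{r},r,r)\) of Example 2.3, the third equation in (2.6) reduces to the identity \(D^{r}_{r(X)}(Y)+r(D^{r}_{X}(Y))=[Y,r^{2}(X)]-r^{2}([Y,X])\), which vanishes identically once \(r^{2}=-\mathrm{id}_{TM}\) (cf. Example 3.6 below). The following sympy sketch is an editorial check of this, with a non-constant \(r\) squaring to \(-\mathrm{id}\) chosen arbitrarily.

```python
# sympy check on R^2: for D^r as in (2.2),
#   D^r_{r(X)}(Y) + r(D^r_X(Y)) = [Y, r^2(X)] - r^2([Y, X]),
# so the third equation of (2.6) holds automatically when r^2 = -id.
import sympy as sp

x, y = sp.symbols('x y')
coords = (x, y)

def bracket(X, Y):
    return [sum(X[j]*sp.diff(Y[i], coords[j]) - Y[j]*sp.diff(X[i], coords[j])
                for j in range(2)) for i in range(2)]

def apply_r(r, X):
    return [sum(r[i][j]*X[j] for j in range(2)) for i in range(2)]

def D_r(r, X, Y):
    return [u - v for u, v in zip(bracket(Y, apply_r(r, X)),
                                  apply_r(r, bracket(Y, X)))]

# trace 0 and determinant 1 force r^2 = -id by Cayley-Hamilton
r = [[x, -(1 + x**2)], [1, -x]]
X, Y = [x*y, sp.cos(y)], [y, x]

lhs = [u + v for u, v in zip(D_r(r, apply_r(r, X), Y),
                             apply_r(r, D_r(r, X, Y)))]
print([sp.simplify(t) for t in lhs])   # [0, 0]
```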
**Example 2.5** (Dolbeault 1-derivations revisited).: Following Example 2.4, a Dolbeault 1-derivation on \(E\to M\) is a 1-derivation \((D,l,r)\) on \(E\) satisfying both the Nijenhuis equations (2.5) and the almost complex equations (2.6); note that these conditions exactly say that \((M,r)\) is a complex manifold and
\[\overline{\partial}_{X+\mathbf{i}\,r(X)}:=-l(D_{X}) \tag{2.7}\]
is a flat \(T^{0,1}\)-connection on the complex vector bundle \((E,l)\). We will think of a holomorphic vector bundle \(\mathcal{E}\) as a pair
\[\mathcal{E}=(E,\mathcal{D}^{Dolb})\]
given by a real vector bundle \(E\to M\) equipped with a Dolbeault 1-derivation \(\mathcal{D}^{Dolb}\). \(\diamond\)
### Duality
Just as usual connections, 1-derivations on a vector bundle \(E\to M\) possess a duality operation that establishes a bijection between 1-derivations on \(E\) and \(E^{*}\). Given a 1-derivation \(\mathcal{D}=(D,l,r)\) on \(E\), its dual 1-derivation is \(\mathcal{D}^{*}=(D^{*},l^{*},r)\), where \(D^{*}:\Gamma(E^{*})\to\Gamma(T^{*}M\otimes E^{*})\) is characterized by
\[\langle D^{*}_{X}(\mu),\sigma\rangle=\mathcal{L}_{X}\langle\mu,l(\sigma) \rangle-\mathcal{L}_{r(X)}\langle\mu,\sigma\rangle-\langle\mu,D_{X}(\sigma)\rangle. \tag{2.8}\]
The corresponding linear (1,1)-tensor on \(E^{*}\) is denoted by
\[K^{\top}:T(E^{*})\to T(E^{*})\]
and admits the following characterization. If \(\langle\!\langle\cdot,\cdot\rangle\!\rangle:TE\times_{TM}T(E^{*})\to TM\times \mathbb{R}\) is the non-degenerate, symmetric, bilinear pairing obtained from differentiation of the natural pairing \(\langle\cdot,\cdot\rangle:E\times_{M}E^{*}\to M\times\mathbb{R}\), then
\[\langle\!\langle K(U),K^{\top}(V)\rangle\!\rangle=\langle\!\langle Tl(U),V \rangle\!\rangle,\qquad\forall\,(U,V)\in TE\times_{TM}T(E^{*}).\]
Applying duality in Examples 2.2, 2.3 and 2.4, one obtains the following (see [9, § 2.1.2] for details):
* For the 1-derivation associated to a connection \(\nabla\), the dual 1-derivation corresponds to the dual connection \(\nabla^{*}\) on \(E^{*}\).
* For \(r:TM\to TM\), the 1-derivations \(\mathcal{D}^{r}\) and \(\mathcal{D}^{r,*}\) (corresponding to \(r^{tg}\) and \(r^{ctg}\)) are dual to each other; see the sketch after this list.
* A 1-derivation \(\mathcal{D}\) is Nijenhuis if and only if so is \(\mathcal{D}^{*}\), see [9]; in this case, \(\mathcal{D}\) satisfies the almost complex conditions if and only if so does \(\mathcal{D}^{*}\). As a consequence, \(\mathcal{D}\) is a Dolbeault 1-derivation if and only if so is \(\mathcal{D}^{*}\). For a holomorphic vector bundle \(\mathcal{E}\), the dual to its Dolbeault 1-derivation is the Dolbeault 1-derivation corresponding to the holomorphic structure of the dual \(\mathcal{E}^{*}\), see Example 2.7 below.
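As a concrete instance of (2.8), the duality between \(D^{r}\) and \(D^{r,*}\) amounts to the identity \(\langle D^{r,*}_{X}(\alpha),Y\rangle=\mathcal{L}_{X}\langle\alpha,r(Y)\rangle-\mathcal{L}_{r(X)}\langle\alpha,Y\rangle-\langle\alpha,D^{r}_{X}(Y)\rangle\). The sympy sketch below (an editorial illustration with arbitrary test data on \(\mathbb{R}^{2}\)) verifies it symbolically.

```python
# sympy check on R^2 of the duality formula (2.8) for the pair (D^r, D^{r,*});
# r, X, Y and the 1-form al are arbitrary test data.
import sympy as sp

x, y = sp.symbols('x y')
coords = (x, y)

def lie_fun(X, f):
    return sum(X[j]*sp.diff(f, coords[j]) for j in range(2))

def bracket(X, Y):
    return [sum(X[j]*sp.diff(Y[i], coords[j]) - Y[j]*sp.diff(X[i], coords[j])
                for j in range(2)) for i in range(2)]

def apply_r(r, X):
    return [sum(r[i][j]*X[j] for j in range(2)) for i in range(2)]

def D_r(r, X, Y):
    return [u - v for u, v in zip(bracket(Y, apply_r(r, X)),
                                  apply_r(r, bracket(Y, X)))]

def r_star(r, al):
    # (r^* al)_j = al_i r^i_j
    return [sum(al[i]*r[i][j] for i in range(2)) for j in range(2)]

def lie_form(X, al):
    # (L_X al)_i = X^j d_j al_i + al_j d_i X^j
    return [sum(X[j]*sp.diff(al[i], coords[j]) + al[j]*sp.diff(X[j], coords[i])
                for j in range(2)) for i in range(2)]

def pair(al, Y):
    return sum(al[i]*Y[i] for i in range(2))

r = [[x, y**2], [1, sp.sin(y)]]
X, Y, al = [y, x**2], [x + y, 1], [x*y, sp.exp(x)]

Dstar = [u - v for u, v in zip(lie_form(X, r_star(r, al)),
                               lie_form(apply_r(r, X), al))]  # D^{r,*}_X(al), cf. (2.3)
lhs = pair(Dstar, Y)
rhs = (lie_fun(X, pair(al, apply_r(r, Y)))
       - lie_fun(apply_r(r, X), pair(al, Y)) - pair(al, D_r(r, X, Y)))
print(sp.simplify(lhs - rhs))   # 0
```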
It will be useful to extend, for each \(X\in\mathfrak{X}(M)\), the operator \(D^{*}_{X}\) to \(\Gamma(\wedge^{m}E^{*})\) as follows:
\[D^{*}_{X}(f)= -\mathcal{L}_{r(X)}f, \tag{2.9}\] \[D^{*}_{X}(\mu)(\sigma_{1},\ldots,\sigma_{m})= \mathcal{L}_{X}\mu(l(\sigma_{1}),\ldots,\sigma_{m})-\mathcal{L}_{r(X)}\mu(\sigma_{1},\ldots,\sigma_{m})\] \[-\sum_{k=1}^{m}(-1)^{k-1}\mu(D_{X}(\sigma_{k}),\sigma_{1},\ldots,\widehat{\sigma_{k}},\ldots,\sigma_{m}), \tag{2.10}\]
where \(f\in C^{\infty}(M)\), and \(m\geq 1\). Note that \(D^{*}_{X}(\mu)\) does not define an element of \(\Gamma(\wedge^{m}E^{*})\) in general, since it is not \(C^{\infty}(M)\)-multilinear. In general, \(D^{*}_{X}(\mu)(\sigma_{1},\cdot)\) is skew-symmetric as a map from \((m-1)\) copies of \(\Gamma(E)\) to \(C^{\infty}(M)\), and it satisfies
\[D^{*}_{X}(\mu)(f\sigma_{1},\cdot)=fD^{*}_{X}(\mu)(\sigma_{1}, \cdot),\] \[D^{*}_{X}(\mu)(\sigma_{1},f\sigma_{2},\cdot)=fD^{*}_{X}(\mu)( \sigma_{1},\sigma_{2},\cdot)+(\mathcal{L}_{X}f)(\mu(l(\sigma_{1}),\sigma_{2}, \cdot)-\mu(\sigma_{1},l(\sigma_{2}),\cdot)).\]
Let us consider
\[\Gamma_{l}(\wedge^{m}E^{*})=\{\mu\in\Gamma(\wedge^{m}E^{*})\ |\ \mu(l(\sigma_{1}), \sigma_{2},\cdot)=\mu(\sigma_{1},l(\sigma_{2}),\cdot),\ \forall\,\sigma_{1},\,\sigma_{2}\in\Gamma(E)\}.\]
The next result follows immediately.
**Proposition 2.6**.: _For \(\mu\in\Gamma(\wedge^{m}E^{*})\), \(D^{*}_{X}(\mu)\in\Gamma(\wedge^{m}E^{*})\) if and only if \(\mu\in\Gamma_{l}(\wedge^{m}E^{*})\). Moreover, if \([D_{X},l]=D_{X}\circ l-l\circ D_{X}=0\), then \(D^{*}_{X}(\Gamma_{l}(\wedge^{\bullet}E^{*}))\subset\Gamma_{l}(\wedge^{\bullet}E^{*})\)._
Recall from (2.5) that condition \([D_{X},l]=0\) holds if \(\mathcal{D}\) is Nijenhuis.
In the following, for \(\mu\in\Gamma(\wedge^{m}E^{*})\), we shall denote by \(\mu_{l}\) the element of \(\Gamma(E^{*}\otimes\wedge^{m-1}E^{*})\) given by \(\mu_{l}=0\) if \(m=0\), and
\[\mu_{l}(\sigma;\sigma_{1},\ldots,\sigma_{m-1})=\mu(l(\sigma),\sigma_{1},\ldots, \sigma_{m-1}),\]
for \(m\geq 1\), in such a way that \(\mu\in\Gamma_{l}(\wedge^{m}E^{*})\) if and only if \(\mu_{l}\in\Gamma(\wedge^{m}E^{*})\).
**Example 2.7**.: For a holomorphic vector bundle \(\mathcal{E}=(E,\mathcal{D}^{Dolb})\), with \(\mathcal{D}^{Dolb}=(D,l,r)\), the dual \(1\)-derivation \(\mathcal{D}^{Dolb,*}=(D^{*},l^{*},r)\) is a Dolbeault \(1\)-derivation on \(E^{*}\) corresponding to the holomorphic structure on the complex vector bundle \((E^{*},l^{*})\) given by the \(T^{0,1}\)-connection \(\overline{\partial}^{*}\) dual to \(\overline{\partial}\), see (2.7). Note also that \((E^{*},l^{*})\) is identified with the bundle of complex-linear functionals \((E,l)\to\mathbb{C}\) via
\[\mu\mapsto\mu-\mathbf{i}\,\mu_{l}. \tag{2.11}\]
Let us denote the complex exterior algebra bundle of \((E^{*},l^{*})\) by \(\wedge_{\mathbb{C}}E^{*}\), so that \(\wedge^{m}_{\mathbb{C}}E^{*}\) is the bundle of complex-multilinear alternating \(m\)-forms on \((E,l)\), whose space of sections \(\Gamma(\wedge^{m}_{\mathbb{C}}E^{*})\) carries a natural extension of \(\overline{\partial}^{*}\). The map (2.11) extends to an isomorphism
\[\Phi:\Gamma_{l}(\wedge^{m}E^{*})\to\Gamma(\wedge^{m}_{\mathbb{C}}E^{*}),\qquad \Phi(\mu)=\mu-\mathbf{i}\,\mu_{l},\]
of \(C^{\infty}(M,\mathbb{C})\)-modules (multiplication by \(\mathbf{i}\) on \(\Gamma_{l}(\wedge^{m}E^{*})\) corresponds to the operation \(\mu\mapsto\mu_{l}\)). This isomorphism satisfies
\[\Phi(D^{*}_{X}(\mu))=\mathbf{i}\,\overline{\partial}^{*}_{X+\mathbf{i}\,r(X)} \Phi(\mu)\]
showing that the extension of \(D^{*}\) (as in (2.9) and (2.10)) matches that of \(\overline{\partial}^{*}\). In particular, \(\mu\in\Gamma_{l}(\wedge^{m}E^{*})\) satisfies \(D^{*}(\mu)=0\) if and only if \(\mu-\mathbf{i}\,\mu_{l}\in\Gamma(\wedge^{m}_{\mathbb{C}}E^{*})\) is a holomorphic section. Hence, for each open subset \(U\subseteq M\), the elements of \(\Gamma_{l}(\wedge^{m}E^{*}|_{U})\) in the kernel of \(D^{*}\) characterize the holomorphic sections of \(\wedge^{m}_{\mathbb{C}}E^{*}|_{U}\) in terms of their real parts.
\(\diamond\)
### Compatibility with (pre-)Lie algebroids
The discussion in this subsection will be used later in the context of Dirac structures and lagrangian splittings of Courant algebroids.
Let us consider a vector bundle \(A\to M\) equipped with an anchor map \(\rho:A\to TM\) (i.e., a vector bundle map over the identity map on \(M\)) and an \(\mathbb{R}\)-bilinear, skew-symmetric bracket \([\cdot,\cdot]\) on \(\Gamma(A)\) such that
\[[a,fb]=f[a,b]+(\mathcal{L}_{\rho(a)}f)b,\]
for \(a,b\in\Gamma(A)\) and \(f\in C^{\infty}(M)\). The triple \((A,\rho,[\cdot,\cdot])\) is called a _pre-Lie algebroid_[11]. A Lie algebroid is a pre-Lie algebroid such that \([\cdot,\cdot]\) satisfies the Jacobi identity.
Just as commonly done for Lie algebroids (see e.g. [20]), on a pre-Lie algebroid the anchor \(\rho\) and bracket \([\cdot,\cdot]\) can be encoded in a degree-\(1\) derivation \(d_{A}\) of \(\Gamma(\wedge A^{*})\), or, alternatively, in a
linear bivector field \(\pi_{A}\) on the total space of \(A^{*}\to M\). In terms of \(d_{A}\) and \(\pi_{A}\), Lie algebroids are characterized by the further conditions that \(d_{A}^{2}=0\) or that \(\pi_{A}\) is a Poisson structure.
**Definition 2.8**.: _A 1-derivation \(\mathcal{D}=(D,l,r)\) on a vector bundle \(A\to M\) is compatible with a pre-Lie algebroid structure (\(\rho,[\cdot,\cdot]\)) if the following equations hold:_
(IM1) \[\rho\circ l =r\circ\rho\] (IM2) \[\rho(D_{X}(a)) =D_{X}^{r}(\rho(a))\] (IM3) \[l([a,b]) =[a,l(b)]-D_{\rho(b)}(a)\] (IM4) \[D_{X}([a,b]) =[a,D_{X}(b)]+[D_{X}(a),b]+D_{[\rho(b),X]}(a)-D_{[\rho(a),X]}(b),\]
_for all \(X\in\mathfrak{X}(M)\) and \(a,b\in\Gamma(A)\)._
These compatibility equations have the following geometric interpretations. Let \(K:TA\to TA\) be the linear (1,1)-tensor field corresponding to \(\mathcal{D}\).
* The conditions in Def. 2.8 hold if and only if the bivector field \(\pi_{A}\) on \(A^{*}\) is compatible with the (1,1)-tensor field \(K^{\top}:TA^{*}\to TA^{*}\) in the sense of Magri-Morosi [21], see [9, § 4.3] and Example 3.10 below. In particular, when \(A\) is a Lie algebroid and \(\mathcal{D}\) is a Nijenhuis 1-derivation, they hold if and only if the pair \((\pi_{A},K^{\top})\) is a Poisson-Nijenhuis structure.
* When \(A\) is a Lie algebroid, the compatibility in Def. 2.8 says that \(K:TA\to TA\) is a Lie algebroid morphism with respect to the tangent prolongation Lie algebroid \(TA\to TM\)[4, § 6]. Hence 1-derivations compatible with a Lie algebroid are the infinitesimal counterparts of multiplicative (1,1)-tensor fields on Lie groupoids; for this reason (IM1)-(IM4) above are called _IM equations_ (where IM stands for "infinitesimally multiplicative"), and 1-derivations satisfying them are also referred to as _IM (1,1)-tensors_ on Lie algebroids.
The compatibility of a 1-derivation \(\mathcal{D}\) with a pre-Lie algebroid structure on \(A\) can be encoded using the dual 1-derivation \(\mathcal{D}^{*}\) and the operator \(d_{A}\) on \(\Gamma(\wedge A^{*})\) as follows.
**Proposition 2.9**.: _A 1-derivation \(\mathcal{D}=(D,l,r)\) is compatible with a pre-Lie algebroid structure \((\rho,[\cdot,\cdot])\) on a vector bundle \(A\to M\) if and only if, for all \(\mu\in\Gamma_{l}(\wedge^{m}A^{*})\), the following holds:_
\[D_{\rho(a)}^{*}(\mu) =i_{a}d_{A}(\mu_{l})-i_{l(a)}d_{A}\mu, \tag{2.12}\] \[i_{a}d_{A}D_{X}^{*}(\mu) =D_{X}^{*}(d_{A}\mu)(a;\,\cdot)+\mathcal{R}_{X}(\mu)(a;\,\cdot), \tag{2.13}\]
_where \(\mathcal{R}_{X}(\mu)(a;\cdot)\) is the \(\mathbb{R}\)-multilinear skew-symmetric map from \(m\)-copies of \(\Gamma(A)\) to \(C^{\infty}(M)\) defined by_
\[\mathcal{R}_{X}(\mu)(a;\sigma_{1},\ldots,\sigma_{m})= \mathcal{L}_{X}(D_{\rho(a)}^{*}(\mu)(\sigma_{1},\ldots,\sigma_{m }))+D_{[\rho(a),X]}^{*}(\mu)(\sigma_{1},\ldots,\sigma_{m})\] \[-\sum_{i=1}^{m}(-1)^{i+1}D_{[\rho(\sigma_{i}),X]}^{*}(\mu)(a, \sigma_{1},\ldots,\widehat{\sigma_{i}},\ldots,\sigma_{m}).\]
Proof.: For \(\mu=f\in C^{\infty}(M)\), making use of (2.3) and (2.9), one can check that (2.12) and (2.13) are equivalent to
\[\mathcal{L}_{l(\rho(a))-\rho(r(a))}f=0\quad\text{and}\quad D_{X}^{*}(d_{A}f)= \rho^{*}D_{X}^{r,*}(df),\]
respectively. Using (2.8), one can now check that (2.12) and (2.13) in degree 0 are equivalent to (IM1) and (IM2). Similarly, for \(\mu\in\Gamma(A)\) and assuming that (IM1) holds, one can check that (2.12) is equivalent to (IM3). Finally, under the assumption that (IM1), (IM2) and (IM3) hold, one verifies that (2.13) is equivalent to (IM4). For higher degrees, (2.12) and (2.13) follow directly from the IM equations.
Define
\[\Gamma_{\mathcal{D}}(\wedge^{m}A^{*})=\{\mu\in\Gamma_{l}(\wedge^{m}A^{*})\ |\ D^{*}_{X}(\mu)=0,\ \forall\,X\in\mathfrak{X}(M)\}.\]
A direct consequence of Proposition 2.9 is that, when \(\mathcal{D}\) is compatible with a pre-Lie algebroid structure on \(A\),
\[d_{A}(\Gamma_{\mathcal{D}}(\wedge^{m}A^{*}))\subseteq\Gamma_{\mathcal{D}}(\wedge^{m+1}A^{*}),\]
so \((\Gamma_{\mathcal{D}}(\wedge^{\bullet}A^{*}),d_{A})\) is a subcomplex of \((\Gamma(\wedge^{\bullet}A^{*}),d_{A})\).
**Example 2.10** (Holomorphic Lie algebroids).: Consider a Dolbeault \(1\)-derivation \(\mathcal{D}^{Dolb}\) on a vector bundle \(A\to M\), so that \(\mathcal{A}=(A,\mathcal{D}^{Dolb})\) is a holomorphic vector bundle. It is shown in [4, § 6.4] that a Lie algebroid structure on \(A\to M\) compatible with \(\mathcal{D}^{Dolb}\) (in the sense of Def. 2.8) is equivalent to a holomorphic Lie algebroid structure on \(\mathcal{A}\) (in the sense of [17, § 3.1]). Under this correspondence, following Example 2.7, we see that for each open subset \(U\subseteq M\), the complex \((\Gamma_{\mathcal{D}}(\wedge^{\bullet}A^{*}|_{U}),d_{A})\) is identified with the holomorphic Lie algebroid complex (see [17, § 4.4]) of \(\mathcal{A}\) over \(U\). \(\diamond\)
## 3. \(1\)-Derivations on Courant algebroids
In this section we introduce a notion of compatibility between \(1\)-derivations and Courant algebroids that is the main object of study in this paper.
### Courant \(1\)-derivations and Courant-Nijenhuis algebroids
We start by recalling Courant algebroids [19, 23].
**Definition 3.1**.: A _Courant algebroid_ over a manifold \(M\) is a vector bundle \(E\to M\) together with a bundle map \(\mathfrak{a}:E\to TM\) (called the _anchor_), a pseudo-euclidean metric \(\langle\cdot,\cdot\rangle\) (i.e., a fibrewise nondegenerate symmetric bilinear form), and an \(\mathbb{R}\)-bilinear bracket \([\![\cdot,\cdot]\!]:\Gamma(E)\times\Gamma(E)\to\Gamma(E)\) such that, for all \(\sigma_{1},\sigma_{2},\sigma_{3}\in\Gamma(E)\) and \(f\in C^{\infty}(M)\), the following hold:
(C1) \[[\![\sigma_{1},[\![\sigma_{2},\sigma_{3}]\!]]=[\![[\sigma_{1}, \sigma_{2}]\!],\sigma_{3}\!]+[\![\sigma_{2},[\![\sigma_{1},\sigma_{3}]\!]]\] (C2) \[\mathfrak{a}([\![\sigma_{1},\sigma_{2}]\!])=[\mathfrak{a}(\sigma _{1}),\mathfrak{a}(\sigma_{2})]\] (C3) \[[\![\sigma_{1},f\sigma_{2}]\!]=f\,[\![\sigma_{1},\sigma_{2}]\!]+( \mathcal{L}_{\mathfrak{a}(\sigma_{1})}f)\,\sigma_{2}\] (C4) \[[\![\sigma_{1},\sigma_{2}]\!]+[\![\sigma_{2},\sigma_{1}]\!]= \mathfrak{a}^{*}(d\langle\sigma_{1},\sigma_{2}\rangle)\] (C5) \[\mathcal{L}_{\mathfrak{a}(\sigma_{1})}\langle\sigma_{2},\sigma_{ 3}\rangle=\langle[\![\sigma_{1},\sigma_{2}]\!],\sigma_{3}\rangle+\langle \sigma_{2},[\![\sigma_{1},\sigma_{3}]\!]\rangle\]
where \(\mathfrak{a}^{*}:T^{*}M\to E^{*}\simeq E\) is the map dual to the anchor \(\mathfrak{a}\), and the isomorphism \(E^{*}\simeq E\) is given by \(\langle\cdot,\cdot\rangle.\) We refer to \([\![\cdot,\cdot]\!]\) as the _Courant bracket_.
The following is an important class of examples [25, 26].
**Example 3.2** (Exact Courant algebroids).: Any closed \(3\)-form \(H\in\Omega^{3}(M)\) defines a Courant algebroid structure on \(E=\mathbb{T}M:=TM\oplus T^{*}M\), with anchor map \(\mathfrak{a}=\mathrm{pr}_{TM}\), symmetric pairing
\[\langle(X,\alpha),(Y,\beta)\rangle=\beta(X)+\alpha(Y),\]
and the _\(H\)-twisted Courant bracket_
\[[\![(X,\alpha),(Y,\beta)]\!]=([X,Y],\mathcal{L}_{X}\beta-i_{Y}d\alpha+i_{Y}i_{X}H).\]
When \(H=0\), one refers to this structure as the _standard Courant algebroid_ on \(\mathbb{T}M\). \(\diamond\)
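The axioms of Definition 3.1 can be tested directly in this example. The sympy sketch below is an editorial illustration: it models the \(H\)-twisted bracket on \(\mathbb{T}\mathbb{R}^{3}\) and checks axiom (C4), namely that the symmetrized bracket equals \(\mathfrak{a}^{*}(d\langle\sigma_{1},\sigma_{2}\rangle)=(0,d\langle\sigma_{1},\sigma_{2}\rangle)\); all test data are arbitrary.

```python
# sympy model of the H-twisted Courant bracket on T R^3 + T* R^3 and a check of
# axiom (C4); sections are pairs (X, al) of component lists, and H = h dx^dy^dz
# with h an arbitrary function (every 3-form on R^3 is closed).
import sympy as sp

x, y, z = sp.symbols('x y z')
coords, n = (x, y, z), 3

def lie_vec(X, Y):
    return [sum(X[j]*sp.diff(Y[i], coords[j]) - Y[j]*sp.diff(X[i], coords[j])
                for j in range(n)) for i in range(n)]

def lie_form(X, al):
    return [sum(X[j]*sp.diff(al[i], coords[j]) + al[j]*sp.diff(X[j], coords[i])
                for j in range(n)) for i in range(n)]

def iY_dal(al, Y):
    # (i_Y d al)_i = Y^j (d_j al_i - d_i al_j)
    return [sum(Y[j]*(sp.diff(al[i], coords[j]) - sp.diff(al[j], coords[i]))
                for j in range(n)) for i in range(n)]

def iYiX_H(h, X, Y):
    # for H = h dx^dy^dz: (i_Y i_X H)_k = h (X x Y)_k  (cross product)
    return [h*(X[1]*Y[2] - X[2]*Y[1]), h*(X[2]*Y[0] - X[0]*Y[2]),
            h*(X[0]*Y[1] - X[1]*Y[0])]

def courant(s1, s2, h):
    (X, al), (Y, be) = s1, s2
    return (lie_vec(X, Y),
            [u - v + w for u, v, w in zip(lie_form(X, be), iY_dal(al, Y),
                                          iYiX_H(h, X, Y))])

def pairing(s1, s2):
    (X, al), (Y, be) = s1, s2
    return sum(be[i]*X[i] + al[i]*Y[i] for i in range(n))

h = x*y + z
s1 = ([y, z, x], [x*z, 0, y])
s2 = ([1, x*y, 0], [z, y, x**2])

v1, f1 = courant(s1, s2, h)
v2, f2 = courant(s2, s1, h)
df = [sp.diff(pairing(s1, s2), c) for c in coords]
print([sp.simplify(a + b) for a, b in zip(v1, v2)])             # [0, 0, 0]
print([sp.simplify(a + b - c) for a, b, c in zip(f1, f2, df)])  # [0, 0, 0]
```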
Let \((E,\mathfrak{a},\langle\cdot,\cdot\rangle,[\![\cdot,\cdot]\!])\) be a Courant algebroid.
**Definition 3.3**.: A _Courant 1-derivation_ is a 1-derivation \(\mathcal{D}=(D,l,r)\) on the vector bundle \(E\to M\) such that \(\mathcal{D}^{*}=\mathcal{D}\) (under the identification \(E\cong E^{*}\) given by the pairing) and the following compatibility equations are satisfied:
(CN1) \[\mathfrak{a}\circ l=r\circ\mathfrak{a},\] (CN2) \[\mathfrak{a}(D_{X}(\sigma))=D_{X}^{r}(\mathfrak{a}(\sigma)),\] (CN3) \[l([\![\sigma_{1},\sigma_{2}]\!])=[\![\sigma_{1},l(\sigma_{2})]\!]-D_{\mathfrak{a}(\sigma_{2})}(\sigma_{1})-\mathfrak{a}^{*}(C(\sigma_{1},\sigma_{2})),\] (CN4) \[D_{X}([\![\sigma_{1},\sigma_{2}]\!])=[\![\sigma_{1},D_{X}(\sigma_{2})]\!]-[\![\sigma_{2},D_{X}(\sigma_{1})]\!]+D_{[\mathfrak{a}(\sigma_{2}),X]}(\sigma_{1})\] \[\qquad\qquad\qquad\qquad-D_{[\mathfrak{a}(\sigma_{1}),X]}(\sigma_{2})-\mathfrak{a}^{*}(i_{X}\,dC(\sigma_{1},\sigma_{2})),\]
for all \(\sigma_{1},\sigma_{2}\in\Gamma(E)\) and \(X\in\mathfrak{X}(M)\), where \(C(\sigma_{1},\sigma_{2}):=\langle D_{(\cdot)}(\sigma_{1}),\sigma_{2}\rangle \in\Omega^{1}(M)\).
Equations (CN1)-(CN4) are called _Courant compatibility equations_ for \(\mathcal{D}\). Note that they impose a linear condition on 1-derivations, so linear combinations of Courant 1-derivations are still Courant 1-derivations.
**Definition 3.4**.: A _Courant-Nijenhuis_ 1-derivation is a Courant 1-derivation that is also a Nijenhuis 1-derivation, i.e., satisfies the Nijenhuis equations (2.5). A Courant algebroid equipped with a Courant-Nijenhuis 1-derivation is a _Courant-Nijenhuis algebroid_.
**Example 3.5**.: Consider a 1-derivation \(\mathcal{D}\) defined by a connection \(\nabla\) on \(E\to M\), as in Example 2.2. If it is a Courant 1-derivation, then the anchor \(\mathfrak{a}\) must be trivial (by (CN1)), so that, as a Courant algebroid, \(E\) is a bundle of quadratic Lie algebras. In this case, \(\nabla\) is a Courant 1-derivation if and only if it is symmetric (i.e., \(\nabla=\nabla^{*}\) with respect to the pseudo-euclidean metric) as well as compatible with the metric and fibrewise Lie bracket (by (CN4)):
\[\mathcal{L}_{X}\langle\sigma_{1},\sigma_{2}\rangle=\langle\nabla_{X}\sigma_{ 1},\sigma_{2}\rangle+\langle\sigma_{1},\nabla_{X}\sigma_{2}\rangle,\qquad \nabla_{X}([\![\sigma_{1},\sigma_{2}]\!])=[\![\nabla_{X}\sigma_{1},\sigma_{2} ]\!]+[\![\sigma_{1},\nabla_{X}\sigma_{2}]\!]\,,\]
for \(X\in\mathfrak{X}(M)\).
The Nijenhuis condition on \(\mathcal{D}\) amounts to the flatness of \(\nabla\). In this case, assuming that \(M\) is connected and viewing its universal cover \(\widetilde{M}\) as a \(\pi_{1}(M)\)-principal bundle over \(M\), \(E\) is of the form \((\widetilde{M}\times\mathfrak{d})/\pi_{1}(M)\), where \(\mathfrak{d}\) is a quadratic Lie algebra equipped with a representation of \(\pi_{1}(M)\) that preserves bracket and pairing. \(\diamond\)
The next example from [5] is the original motivation for the Courant compatibility equations (CN1)-(CN4).
**Example 3.6**.: For a (1,1)-tensor field \(r:TM\to TM\), consider the 1-derivations \(\mathcal{D}^{r}\) and \(\mathcal{D}^{r,*}\) from (2.2) and (2.3). Then, setting
\[\mathbb{D}^{r}:=(D^{r},D^{r,*}),\]
we obtain a 1-derivation
\[\boldsymbol{\mathcal{D}}^{r}=(\mathbb{D}^{r},(r,r^{*}),r) \tag{3.1}\]
on \(\mathbb{T}M\) that is a Courant 1-derivation with respect to the standard Courant algebroid structure (this is verified in the proof of [5, Lem. 6.1]). More generally, one can check that \(\boldsymbol{\mathcal{D}}^{r}\) is a Courant 1-derivation with respect to an \(H\)-twisted Courant bracket as long as the closed 3-form \(H\in\Omega^{3}(M)\) is compatible with \(r\) in the sense of [5, Def. 5.1], i.e.,
* the tensor field \(H_{r}\in\Gamma(T^{*}M\otimes\wedge^{2}T^{*}M)\), \(H_{r}(X_{1};X_{2},X_{3}):=H(r(X_{1}),X_{2},X_{3})\), is skew-symmetric, that is, \(H_{r}\in\Omega^{3}(M)\);
* \(dH_{r}=0\).
Moreover, \(r\) is a Nijenhuis operator if and only if \(\boldsymbol{\mathcal{D}}^{r}\) is a Nijenhuis 1-derivation [5, Lem. 6.1], in which case it defines a Courant-Nijenhuis structure on \(\mathbb{T}M\). One can also directly verify that \(r^{2}=-\mathrm{Id}_{TM}\) if and only if \(\boldsymbol{\mathcal{D}}^{r}\) satisfies the almost complex equations in (2.6). When \(r\) is a
complex structure on \(M\), \(\operatorname{\boldsymbol{\mathcal{D}}}^{r}\) is the Dolbeault 1-derivation codifying the holomorphic structure on \(\mathbb{T}M\to M\) (see Example 2.4), with corresponding linear complex structure given by
\[K_{r}:=(r^{tg},r^{ctg}):T(\mathbb{T}M)\to T(\mathbb{T}M). \tag{3.2}\]
\(\diamond\)
We will describe in § 4 modifications of the last example yielding more general Courant 1-derivations on \(\mathbb{T}M\).
As we will see in § 5, for Dolbeault 1-derivations (see Example 2.4), the compatibility with Courant structures yields _holomorphic Courant algebroids_.
**Remark 3.7** (On the compatibility equations).: A natural issue concerning Courant 1-derivations is whether the Courant compatibility equations for \(\mathcal{D}\) in Def. 3.3 admit a geometric interpretation in terms of the corresponding linear (1,1)-tensor field \(K\) (cf. the discussion after Def. 2.8). Although a satisfactory answer does not seem evident (in contrast with Lie algebroids, one can check that, in general, \(K\) does not define a Courant morphism [6] of the tangent Courant algebroids [3]), some key properties of \(K\) will be presented in Prop. 3.11 below. From another perspective, it would be interesting to see if the super-geometric viewpoint on Courant algebroids [23] can shed light on the Courant compatibility equations. \(\diamond\)
### Invariant Dirac structures
Much of the importance of Courant algebroids lies in their Dirac structures. We now consider Dirac structures invariant by Courant 1-derivations.
Let \(\mathcal{D}=(D,l,r)\) be a Courant 1-derivation on a Courant algebroid \(E\to M\). We will be concerned with lagrangian subbundles \((L\to M)\subseteq(E\to M)\) (i.e., \(L=L^{\perp}\)) that are \(\mathcal{D}\)-invariant in the sense of Def. 2.1:
* \(l(L)\subset L\), and
* \(D_{X}(\Gamma(L))\subset\Gamma(L)\), for all \(X\in\mathfrak{X}(M)\).
For lagrangian subbundles, these conditions can be equivalently expressed as follows. Consider the symmetric 2-form \(S\in\Gamma(S^{2}E^{*})\),
\[S(\sigma_{1},\sigma_{2})=\langle l(\sigma_{1}),\sigma_{2}\rangle.\]
(Recall that \(l=l^{*}\) since \(\mathcal{D}^{*}=\mathcal{D}\).) Let \(C:\Gamma(E)\times\Gamma(E)\to\Omega^{1}(M)\) be the map that appeared in the definition of Courant 1-derivations,
\[C(\sigma_{1},\sigma_{2})=\langle D_{(\cdot)}(\sigma_{1}),\sigma_{2}\rangle.\]
For a lagrangian subbundle \(L\subset E\), the restriction of \(S\) to \(L\) defines an element \(S_{L}\in\Gamma(S^{2}L^{*})\) with the property that \(l(L)\subseteq L\) if and only if \(S_{L}=0\). Assuming that this holds, the restriction of \(C\) to sections of \(L\) is tensorial, i.e., it defines an element
\[C_{L}\in\Gamma(\wedge^{2}L^{*}\otimes T^{*}M)\]
called the _concomitant of \(L\) and \(\mathcal{D}\)_. The following result can be directly verified.
**Lemma 3.8**.: _For a Courant 1-derivation \(\mathcal{D}\), a lagrangian subbundle \(L\subseteq E\) is \(\mathcal{D}\)-invariant if and only if_
\[S_{L}=0,\ \text{ and }\ C_{L}=0.\]
Suppose that \(\mathcal{D}\) is a Courant-Nijenhuis 1-derivation on \(E\), so that \((E,\mathcal{D})\) is a Courant-Nijenhuis algebroid.
**Definition 3.9**.: A Dirac structure \(L\subset E\) that is \(\mathcal{D}\)-invariant (equivalently, such that \(S_{L}=0\) and \(C_{L}=0\)) is called a _Dirac-Nijenhuis structure_.
**Example 3.10** (Poisson-Nijenhuis structures).: Given a (1,1)-tensor field \(r:TM\to TM\), consider the associated Courant 1-derivation on \(\mathbb{T}M\) (with the standard Courant bracket) given by \(\boldsymbol{\mathcal{D}}^{r}\), as in Example 3.6. Its invariant lagrangian subbundles are precisely those considered in [5, § 3.3]. In particular, a lagrangian subbundle given by the graph of a bivector field \(\pi\in\mathfrak{X}^{2}(M)\),
\[L_{\pi}=\{(\pi^{\sharp}(\xi),\xi)\in\mathbb{T}M\ |\ \xi\in T^{*}M\},\]
where \(\pi^{\sharp}:T^{*}M\to TM\) is the map obtained via contraction, is \(\boldsymbol{\mathcal{D}}^{r}\)-invariant if and only if \(\pi\) and \(r\) satisfy
\[r\circ\pi^{\sharp}=\pi^{\sharp}\circ r^{*},\ \ \text{and}\ \ \pi^{\sharp}\circ D ^{r,*}_{X}(\alpha)-D^{r}_{X}\circ\pi^{\sharp}(\alpha)=\pi^{\sharp}(\mathcal{ L}_{X}r^{*}\alpha-\mathcal{L}_{r(X)}\alpha)-(\mathcal{L}_{\pi^{\sharp}(\alpha)}r)(X)=0 \tag{3.3}\]
for all \(X\in\mathfrak{X}(M)\), \(\alpha\in\Omega^{1}(M)\). The expression in the second condition is known as the _Magri-Morosi concomitant_ of \(r\) and \(\pi\) [21], see [5, § 3.1]. From the viewpoint of Lemma 3.8, using the natural identification \(L_{\pi}\cong T^{*}M\), we have that
\[S_{L_{\pi}}(\alpha,\beta)=\pi(r^{*}\alpha,\beta)-\pi(\alpha,r^{*}\beta),\ \ \ \ \text{and}\ \ \ \ C_{L_{\pi}}(\alpha,\beta)=\langle\beta,\pi^{\sharp}\circ D^{r,*}_{(\cdot)} (\alpha)-D^{r}_{(\cdot)}\circ\pi^{\sharp}(\alpha)\rangle.\]
(The alternative formulation of the vanishing of the Magri-Morosi concomitant in terms of \(C_{L_{\pi}}\) goes back to [16] and is now more frequent in the literature.)
Recall that \(\pi\) is Poisson if and only if \(L_{\pi}\) is a Dirac structure, and \(r\) is a Nijenhuis operator if and only if \(\boldsymbol{\mathcal{D}}^{r}\) is a Courant-Nijenhuis 1-derivation, so \(L_{\pi}\) is a Dirac-Nijenhuis structure in \((\mathbb{T}M,\boldsymbol{\mathcal{D}}^{r})\) if and only if \((\pi,r)\) is a Poisson-Nijenhuis structure [5, Ex. 3.11]. \(\diamond\)
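To make (3.3) concrete, the following sympy sketch (an editorial illustration) verifies both conditions for the simple Poisson-Nijenhuis pair \(\pi=\partial_{x}\wedge\partial_{y}\), \(r=x\,\mathrm{Id}\) on \(\mathbb{R}^{2}\), with \(\pi^{\sharp}\) represented as a matrix in the frame \((dx,dy)\).

```python
# sympy verification of the two conditions in (3.3) for pi = d/dx ^ d/dy and
# r = x*Id on R^2, a simple Poisson-Nijenhuis pair chosen for illustration.
import sympy as sp

x, y = sp.symbols('x y')
coords = (x, y)

def bracket(X, Y):
    return [sum(X[j]*sp.diff(Y[i], coords[j]) - Y[j]*sp.diff(X[i], coords[j])
                for j in range(2)) for i in range(2)]

def apply_mat(A, v):
    return [sum(A[i][j]*v[j] for j in range(2)) for i in range(2)]

def D_r(r, X, Y):
    return [u - v for u, v in zip(bracket(Y, apply_mat(r, X)),
                                  apply_mat(r, bracket(Y, X)))]

def lie_form(X, al):
    return [sum(X[j]*sp.diff(al[i], coords[j]) + al[j]*sp.diff(X[j], coords[i])
                for j in range(2)) for i in range(2)]

def D_r_star(r, X, al):
    rT = [[r[j][i] for j in range(2)] for i in range(2)]   # r^* on 1-forms
    return [u - v for u, v in zip(lie_form(X, apply_mat(rT, al)),
                                  lie_form(apply_mat(r, X), al))]

pi = [[0, 1], [-1, 0]]        # matrix of pi^sharp in the frame (dx, dy)
r = [[x, 0], [0, x]]
rT = [[r[j][i] for j in range(2)] for i in range(2)]

# first condition: r o pi^sharp = pi^sharp o r^*
print(sp.simplify(sp.Matrix(r)*sp.Matrix(pi) - sp.Matrix(pi)*sp.Matrix(rT)))

# second condition: pi^sharp(D^{r,*}_X(al)) - D^r_X(pi^sharp(al)) = 0
al = [sp.Function('a1')(x, y), sp.Function('a2')(x, y)]
X = [sp.Function('X1')(x, y), sp.Function('X2')(x, y)]
C = [u - v for u, v in zip(apply_mat(pi, D_r_star(r, X, al)),
                           D_r(r, X, apply_mat(pi, al)))]
print([sp.simplify(t) for t in C])   # [0, 0]
```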
We refer to [5] for more on Dirac-Nijenhuis structures in the specific Courant-Nijenhuis algebroid \(\mathbb{T}M\) of the previous example.
It is well known that if \(L\) is a Dirac structure in a Courant algebroid \(E\), then the restrictions of the anchor and Courant bracket make \(L\) into a Lie algebroid. In the presence of a Courant 1-derivation \(\mathcal{D}\) for which \(L\) is \(\mathcal{D}\)-invariant, it is a straightforward verification that the 1-derivation on \(L\) obtained by restriction of \(\mathcal{D}\) is compatible with its Lie algebroid structure, in the sense of Def. 2.8. Following the discussion in § 2.4, in terms of linear (1,1)-tensor fields we have
**Proposition 3.11**.: _Consider a Courant 1-derivation \(\mathcal{D}\) on a Courant algebroid \(E\), and let \(K:TE\to TE\) be the corresponding linear (1,1)-tensor field. Then a lagrangian subbundle \(L\subset E\) is \(\mathcal{D}\)-invariant if and only if \(K(TL)\subset TL\). If \(L\) is, in addition, a Dirac structure then \(K|_{TL}:TL\to TL\) is a Lie algebroid morphism, where \(TL\to TM\) is the tangent prolongation Lie algebroid._
Proof.: As recalled in § 2.1, the fact that \(L\) is \(\mathcal{D}\)-invariant is equivalent to \(K(TL)\subset TL\), see [5, Thm. 2.1]. When \(L\) is a Dirac structure, the restricted 1-derivation satisfies the IM equations from Def. 2.8, which are equivalent to \(K|_{TL}:TL\to TL\) being a Lie algebroid morphism [4, § 5.2].
## 4. Courant 1-derivations on \(TM\oplus T^{*}M\)
We now present other examples of Courant 1-derivations on \(\mathbb{T}M=TM\oplus T^{*}M\) with respect to twisted Courant brackets, extending Example 3.6.
### Courant 1-derivations from pseudo-Riemannian metrics
A direct calculation shows that a 1-derivation \(\mathcal{D}=(D,l,r)\) on \(\mathbb{T}M\) satisfying \(\mathcal{D}^{*}=\mathcal{D}\) and equations (CN1) and (CN2) must have the form
\[l=(r,r^{*}+g^{\flat}),\ \ D=(D^{r},D^{r,*}+\Sigma), \tag{4.1}\]
where \(g^{\flat}:TM\to T^{*}M\), \(g^{\flat}(X)=i_{X}g\), is the map obtained by contraction of a symmetric bilinear form \(g\), and \(\Sigma:\Gamma(TM)\to\Gamma(T^{*}M\otimes T^{*}M)\) is an \(\mathbb{R}\)-linear map satisfying
\[\Sigma_{X}(fY) =f\,\Sigma_{X}(Y)+(\mathcal{L}_{X}f)\,g^{\flat}(Y), \tag{4.2}\] \[\mathcal{L}_{X}g(Y,Z) =\langle\Sigma_{X}(Y),Z\rangle+\langle Y,\Sigma_{X}(Z)\rangle, \tag{4.3}\]
where \(f\in C^{\infty}(M)\), \(X,Y,Z\in\mathfrak{X}(M)\), and \(\Sigma_{X}:\Gamma(TM)\to\Gamma(T^{*}M)\) is given by \(\Sigma_{X}(Y)=i_{X}(\Sigma(Y))\). In particular, any Courant 1-derivation \(\mathcal{D}=(D,l,r)\) on \(\mathbb{T}M\) is determined by the data \(r\), \(g\) and \(\Sigma\) as above, via (4.1). When \(g\) is non-degenerate, i.e., when \(g\) is a _pseudo-Riemannian metric_, note that \(\nabla_{X}=(g^{\flat})^{-1}\circ\Sigma_{X}\) defines a metric connection on \(M\) (i.e., \(\nabla g=0\)).
In the following, we shall assume that \(g\) is a pseudo-Riemannian metric and \(\nabla\) is its Levi-Civita connection. In this case \(g\) and \(\nabla\) give rise to a 1-derivation \(\boldsymbol{\mathcal{D}}^{g}=(\mathbb{D}^{g},(0,g^{\flat}),0)\) of \(\mathbb{T}M\), where
\[\mathbb{D}^{g}_{X}((Y,\beta))=(0,g^{\flat}(\nabla_{X}Y)),\]
by setting \(r=0\) in (4.1). The corresponding linear (1,1)-tensor field on \(q:\mathbb{T}M\to M\),
\[K_{g}:T(\mathbb{T}M)\to T(\mathbb{T}M), \tag{4.4}\]
is described as follows. The connection \(\nabla\) defines a horizontal distribution \(\text{Hor}\subset T(\mathbb{T}M)\), complementary to the vertical distribution \(\text{Ver}=\ker(Tq)=q^{*}\mathbb{T}M\). With respect to this splitting of \(T(\mathbb{T}M)\), \(K_{g}\) vanishes on \(\text{Hor}\) and acts as \((0,g^{\flat})\) on \(\text{Ver}\).
**Lemma 4.1**.: _The 1-derivation \(\boldsymbol{\mathcal{D}}^{g}\) is a Courant 1-derivation of \(\mathbb{T}M\) for any \(H\)-twisted Courant bracket._
Proof.: Since \(\boldsymbol{\mathcal{D}}^{g}\) is symmetric, it remains to show that the Courant equations (CN1)-(CN4) are satisfied. The only non-trivial equations to check are (CN3) and (CN4). Proving (CN3) amounts to verifying that
\[g^{\flat}([X,Y])=\mathcal{L}_{X}g^{\flat}(Y)-g^{\flat}(\nabla_{Y}X)-g(\nabla_ {(\cdot)}X,Y),\qquad\forall\,X,Y\in\mathfrak{X}(M). \tag{4.5}\]
This holds because \(\nabla\) is Levi-Civita: since \(\nabla\) is metric and has zero torsion, we have
\[i_{Z}\mathcal{L}_{X}g^{\flat}(Y) =\mathcal{L}_{X}g(Y,Z)-g(Y,[X,Z])=g(\nabla_{Y}X+[X,Y],Z)+g(Y, \nabla_{Z}X)\] \[=i_{Z}\left(g^{\flat}(\nabla_{Y}X)+g^{\flat}([X,Y])+g(Y,\nabla_{( \cdot)}X)\right).\]
Condition (CN4), in turn, follows from the first Bianchi identity. Indeed, one must show that
\[g(\nabla_{Z}[X,Y],W)= \langle\mathcal{L}_{X}g^{\flat}(\nabla_{Z}Y),W\rangle-\langle \mathcal{L}_{Y}g^{\flat}(\nabla_{Z}X),W\rangle+g(\nabla_{[Y,Z]}X,W)-g(\nabla _{[X,Z]}Y,W)\] \[-i_{W}i_{Z}dg(\nabla_{(\cdot)}X,Y)\]
Let us denote the right-hand side of this last equation by \(\Upsilon\). Using that
\[i_{W}i_{Z}dg(\nabla_{(\cdot)}X,Y)=\mathcal{L}_{Z}g(\nabla_{W}X,Y)-\mathcal{L }_{W}g(\nabla_{Z}X,Y)-g(\nabla_{[Z,W]}X,Y)\]
and, once again, the fact that \(\nabla\) is metric and has zero torsion, one obtains that
\[\Upsilon =g(R(X,Z)(Y),W)-g(R(Y,Z)(X),W)-g(R(Z,W)(X),Y)+g(\nabla_{Z}\nabla_{ X}Y-\nabla_{Z}\nabla_{Y}X,W)\] \[=\underbrace{g(R(X,Z)(Y),W)+g(R(Z,Y)(X),W)+g(R(Y,X)(Z),W)}_{=0\ \text{(by the first Bianchi identity)}}+g(\nabla_{Z}[X,Y],W),\]
where \(R\) is the curvature tensor, and we used its symmetries in the last equality. This concludes the proof.
Now let
\[\boldsymbol{\mathcal{D}}^{r,g}=(\mathbb{D}^{r,g},(r,r^{*}+g^{\flat}),r)\]
be the 1-derivation defined by (4.1) with \(\Sigma_{X}=g^{\flat}\circ\nabla_{X}\), i.e., the sum of \(\boldsymbol{\mathcal{D}}^{r}\) (see Example 3.6) with the 1-derivation \(\boldsymbol{\mathcal{D}}^{g}\).
**Proposition 4.2**.: _The 1-derivation \(\boldsymbol{\mathcal{D}}^{r,g}\) is a Courant 1-derivation on \(\mathbb{T}M\) for any \(H\)-twisted Courant bracket such that \(H\) is compatible with \(r\)._
(The compatibility of \(H\) and \(r\) is recalled in Example 3.6).
Proof.: The result follows from the previous lemma, the property that the sum of Courant 1-derivations is a Courant 1-derivation, and the fact that \(\boldsymbol{\mathcal{D}}^{r}\) is a Courant 1-derivation with respect to any \(H\)-twisted Courant bracket such that \(H\) is compatible with \(r\).
**Remark 4.3**.: More generally, one can replace the Levi-Civita connection by any other metric connection \(\nabla\) in the definition of \(\mathbb{D}^{r,g}\). In this case, equations (CN3) and (CN4) are equivalent to the torsion \(T\) of \(\nabla\) being skew-symmetric and closed (i.e., \(\varphi(X,Y,Z):=g(T(X,Y),Z)\) defines a closed 3-form). \(\diamond\)
**Example 4.4**.: The lagrangian subbundles \(L\subset\mathbb{T}M\) which are \(\boldsymbol{\mathcal{D}}^{g}\)-invariant are characterized, following Lemma 3.8, by the vanishing of \(S_{L}\in\Gamma(S^{2}L^{*})\) and \(C_{L}\in\Gamma(\wedge^{2}L^{*}\otimes T^{*}M)\) given by
\[S_{L}(\sigma_{1},\sigma_{2})=g(\operatorname{pr}_{TM}(\sigma_{1}), \operatorname{pr}_{TM}(\sigma_{2}))\ \ \text{and}\ \ C_{L}(\sigma_{1},\sigma_{2},Z)=g(\nabla_{Z} \operatorname{pr}_{TM}(\sigma_{1}),\sigma_{2}).\]
The vanishing of \(S_{L}\) is equivalent to the presymplectic distribution \(\operatorname{pr}_{TM}(L)\subseteq TM\) of \(L\) being isotropic with respect to \(g\). In particular, the only 2-form whose graph is \(\boldsymbol{\mathcal{D}}^{g}\)-invariant is the zero 2-form. Under the assumption that \(S_{L}=0\), a sufficient condition for the vanishing of \(C_{L}\) is the invariance of \(\operatorname{pr}_{TM}(L)\) with respect to the Levi-Civita connection (and these conditions are equivalent when \(\operatorname{pr}_{TM}(L)\) is maximally isotropic). \(\diamond\)
### Nijenhuis 1-derivations and the Kähler condition
Let \(r:TM\to TM\) be a (1,1)-tensor field, and let \(g\) be a pseudo-Riemannian metric with Levi-Civita connection \(\nabla\). We now give conditions on the pair \((r,g)\) ensuring that the 1-derivation \(\boldsymbol{\mathcal{D}}^{r,g}\) satisfies the Nijenhuis equations, thereby defining a Courant-Nijenhuis 1-derivation (by Prop. 4.2). We will denote the corresponding linear (1,1)-tensor field on \(\mathbb{T}M\) by
\[K_{r,g}=K_{r}+K_{g},\]
see (3.2) and (4.4).
As recalled in Example 3.6, the 1-derivation \(\mathcal{D}^{r}\) satisfies the Nijenhuis equations if and only if \(r\) is a Nijenhuis operator (\(\mathcal{N}_{r}=0\)). On the other hand, it is a simple verification that \(\boldsymbol{\mathcal{D}}^{g}\) is always a Nijenhuis 1-derivation. More generally, we have
**Proposition 4.5**.: _For a pair \((r,g)\), suppose that_
* \(r\) _is a Nijenhuis operator,_
* \(g^{\flat}\circ r=-r^{*}\circ g^{\flat}\)_,_
* \(\nabla r=0\)_._
_Then the 1-derivation \(\boldsymbol{\mathcal{D}}^{r,g}\) satisfies the Nijenhuis equations (equivalently, \(K_{r,g}\) has vanishing Nijenhuis torsion)._
Proof.: Since \(\mathcal{N}_{r}=0\) by assumption, one only has to check the remaining two Nijenhuis equations in (2.5). Using that \(\mathcal{D}^{r}\) is already a Courant-Nijenhuis 1-derivation on \(\mathbb{T}M\) (see Example 3.6),
it suffices to show that
\[g^{\flat}(D^{r}_{Z}(X))-D^{r,*}_{Z}(g^{\flat}(X))=g^{\flat}(\nabla_{Z}\,r(X))-r^{*}(g^{\flat}(\nabla_{Z}X)), \tag{4.6}\] \[D^{r,*}_{Y}(g^{\flat}(\nabla_{X}Z))-D^{r,*}_{X}(g^{\flat}(\nabla_{Y}Z))+r^{*}g^{\flat}(\nabla_{[X,Y]}Z)=g^{\flat}\left(\nabla_{X}D^{r}_{Y}(Z)-\nabla_{Y}D^{r}_{X}(Z)-D^{r}_{[X,Y]}(Z)-\nabla_{[X,Y]_{r}}Z\right). \tag{4.7}\]
Using (2.2) and (2.3), the compatibility \(g^{\flat}\circ r=-r^{*}\circ g^{\flat}\) and that \(\nabla\) is Levi-Civita, we can prove that (4.6) is equivalent to
\[g^{\flat}((\nabla_{X}r)(Z))+g\big((\nabla_{(\cdot)}r)(Z),X\big)=0,\]
which holds since \(\nabla r=0\).
Regarding (4.7), we can use that \(\nabla r=0\) and that \(\nabla\) is Levi-Civita to rewrite some of its terms as follows:
\[\nabla_{X}D^{r}_{Y}(Z) =\nabla_{X}\,r(\nabla_{Y}Z)-\nabla_{X}\nabla_{r(Y)}Z,\] \[D^{r}_{[X,Y]}(Z) =r(\nabla_{[X,Y]}Z)-\nabla_{r([X,Y])}Z,\] \[D^{r,*}_{X}(g^{\flat}(\nabla_{Y}Z)) =-g^{\flat}(\nabla_{X}\,r(\nabla_{Y}Z))-g^{\flat}(\nabla_{r(X)}\nabla_{Y}Z).\]
Similar formulas hold for \(\nabla_{Y}D^{r}_{X}(Z)\) and \(D^{r,*}_{Y}(g^{\flat}(\nabla_{X}Z))\). After some cancellations and re-groupings, one obtains that (4.7) is equivalent to
\[g(R(Z,\cdot)(r(X))-r(R(Z,\cdot)(X)),Y)=0,\]
where we have used the symmetries of the curvature tensor \(R\). Therefore (4.7) holds, since \([R,r]=0\). This concludes the proof.
A pair \((r,g)\), where \(r\) is a (1,1)-tensor field and \(g\) is a pseudo-Riemannian metric on \(M\), defines a _(pseudo-)Kähler structure_ if \(r\) and \(g\) satisfy the three conditions in Prop. 4.5 with the additional requirement that \(r^{2}=-\mathrm{Id}_{TM}\) (so that \(r\) is a complex structure). Pseudo-Kähler structures admit the following characterization in terms of 1-derivations.
**Proposition 4.6**.: _A pair \((r,g)\) defines a pseudo-Kähler structure if and only if the 1-derivation \(\boldsymbol{\mathcal{D}}^{r,g}\) satisfies the almost-complex equations in (2.6) (equivalently, \(K^{2}_{r,g}=-\mathrm{Id}\))._
Proof.: Recall that the almost-complex equations are
\[r^{2}=-\mathrm{id}_{TM},\quad(r,r^{*}+g^{\flat})^{2}=-\mathrm{id}_{T(\mathbb{ T}M)},\quad\mathbb{D}^{r,g}_{r(X)}(\sigma)+(r,r^{*}+g^{\flat})(\mathbb{D}^{r,g} _{X}(\sigma))=0\]
By the first equation, \(r\) is an almost complex structure. The second equation is equivalent to \(r^{*}\circ g^{\flat}=-g^{\flat}\circ r\) (i.e., \((g,r)\) is almost Hermitian). Since \(r^{2}=-\mathrm{id}_{TM}\), the 1-derivation \((\mathbb{D}^{r},(r,r^{*}),r)\) satisfies the almost complex equations (see Example 3.6), and using this fact one verifies that
\[\mathbb{D}^{r,g}_{r(X)}(Y,\beta)+(r,r^{*}+g^{\flat})(\mathbb{D}^{r,g}_{X}(Y, \beta))=(0,\langle\beta,\mathcal{N}_{r}(X,\cdot)\rangle+g^{\flat}((\nabla_{Y} r)(X))\,).\]
By setting \(Y=0\) or \(\beta=0\) above, we see that the third almost-complex equation is equivalent to \(\mathcal{N}_{r}=0\) and \(\nabla r=0\).
It follows from Propositions 4.5 and 4.6 that, for a pair \((r,g)\), if \(K_{r,g}\) is an almost complex structure then it is automatically integrable.
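A standard example: the upper half-plane with the hyperbolic metric \(g=(dx^{2}+dy^{2})/y^{2}\) and the constant complex structure \(r=J\) is Kähler. The sympy sketch below (an editorial illustration) verifies the hypotheses of Prop. 4.5 and the almost complex condition: \(\mathcal{N}_{J}=0\) holds since \(J\) has constant coefficients, and \(g^{\flat}\circ r=-r^{*}\circ g^{\flat}\), \(\nabla r=0\) and \(r^{2}=-\mathrm{Id}\) are checked via the Christoffel symbols.

```python
# sympy check that (r, g) = (J, hyperbolic metric) on the upper half-plane
# satisfies the hypotheses of Prop. 4.5 together with r^2 = -Id (Kahler).
import sympy as sp

x, y = sp.symbols('x y', positive=True)
coords = (x, y)

g = sp.Matrix([[1/y**2, 0], [0, 1/y**2]])
ginv = g.inv()
J = sp.Matrix([[0, -1], [1, 0]])

# Christoffel symbols of the Levi-Civita connection:
# Gamma^i_{jk} = (1/2) g^{im} (d_j g_{mk} + d_k g_{mj} - d_m g_{jk})
Gamma = [[[sum(sp.Rational(1, 2)*ginv[i, m]*(sp.diff(g[m, k], coords[j])
                                             + sp.diff(g[m, j], coords[k])
                                             - sp.diff(g[j, k], coords[m]))
               for m in range(2)) for k in range(2)] for j in range(2)]
         for i in range(2)]

# (nabla_k J)^i_j = d_k J^i_j + Gamma^i_{km} J^m_j - Gamma^m_{kj} J^i_m
nabla_J = [sp.simplify(sp.diff(J[i, j], coords[k])
                       + sum(Gamma[i][k][m]*J[m, j] for m in range(2))
                       - sum(Gamma[m][k][j]*J[i, m] for m in range(2)))
           for k in range(2) for i in range(2) for j in range(2)]
print(nabla_J)                          # all zeros: nabla r = 0
print(sp.simplify(g*J + J.T*g))         # zero matrix: g-flat o r = -r^* o g-flat
print(sp.simplify(J*J + sp.eye(2)))     # zero matrix: r^2 = -Id
```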
**Corollary 4.7**.: _A pseudo-Kähler metric \(g\) on a complex manifold \((M,r)\) defines a holomorphic vector bundle structure on \(\mathbb{T}M\to M\) with Dolbeault 1-derivation \(\boldsymbol{\mathcal{D}}^{r,g}\) and linear complex structure on the total space \(\mathbb{T}M\) given by \(K_{r,g}\). Moreover, for any holomorphic 3-form \(H\), \(\boldsymbol{\mathcal{D}}^{r,g}\) is a Courant 1-derivation with respect to the \(H\)-twisted Courant bracket on \(\mathbb{T}M\)._
The last assertion follows from Prop. 4.2.
For a pseudo-Kähler structure \((r,g)\), we will see below that there is an explicit isomorphism relating the holomorphic structures on \(\mathbb{T}M\to M\) defined by \(\boldsymbol{\mathcal{D}}^{r}\) and \(\boldsymbol{\mathcal{D}}^{r,g}\).
### B-field transformations
Given a closed 2-form \(B\in\Omega^{2}(M)\), the map
\[\tau_{B}:\mathbb{T}M\to\mathbb{T}M,\qquad\tau_{B}(X,\alpha)=(X,\alpha+i_{X}B),\]
is a Courant automorphism of \(\mathbb{T}M\) for any \(H\)-twisted Courant bracket (if \(B\) is not closed, then \(\tau_{B}\) intertwines Courant brackets twisted by \(H\) and \((H-dB)\)). Such \(\tau_{B}\) is called a _gauge transformation_[26], or a _\(B\)-field transform_[12].
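These properties are easy to test symbolically. The sympy sketch below (an editorial illustration, with \(H=0\) and arbitrary test data on \(\mathbb{R}^{2}\), where every 2-form is automatically closed) checks that \(\tau_{B}\) preserves the pairing and intertwines the standard Courant bracket with itself.

```python
# sympy check on R^2 (standard Courant bracket, H = 0): the B-field transform
# tau_B, with B = b dx^dy closed (automatic on R^2), preserves the pairing and
# the bracket; b and the sections below are arbitrary test data.
import sympy as sp

x, y = sp.symbols('x y')
coords, n = (x, y), 2

def lie_vec(X, Y):
    return [sum(X[j]*sp.diff(Y[i], coords[j]) - Y[j]*sp.diff(X[i], coords[j])
                for j in range(n)) for i in range(n)]

def lie_form(X, al):
    return [sum(X[j]*sp.diff(al[i], coords[j]) + al[j]*sp.diff(X[j], coords[i])
                for j in range(n)) for i in range(n)]

def iY_dal(al, Y):
    return [sum(Y[j]*(sp.diff(al[i], coords[j]) - sp.diff(al[j], coords[i]))
                for j in range(n)) for i in range(n)]

def courant(s1, s2):
    (X, al), (Y, be) = s1, s2
    return (lie_vec(X, Y), [u - v for u, v in zip(lie_form(X, be), iY_dal(al, Y))])

def pairing(s1, s2):
    (X, al), (Y, be) = s1, s2
    return sum(be[i]*X[i] + al[i]*Y[i] for i in range(n))

b = sp.Function('b')(x, y)
def tau_B(s):
    X, al = s
    # i_X B for B = b dx^dy has components (-b X^2, b X^1)
    return (X, [al[0] - b*X[1], al[1] + b*X[0]])

s1 = ([y, x**2], [sp.sin(x), y])
s2 = ([1, x*y], [x, sp.exp(y)])

print(sp.simplify(pairing(tau_B(s1), tau_B(s2)) - pairing(s1, s2)))  # 0
v1, f1 = courant(tau_B(s1), tau_B(s2))
v2, f2 = tau_B(courant(s1, s2))
print([sp.simplify(a - c) for a, c in zip(v1, v2)])                  # [0, 0]
print([sp.simplify(a - c) for a, c in zip(f1, f2)])                  # [0, 0]
```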
We say that two Courant 1-derivations \(\mathcal{D}_{1}=(D_{1},l_{1},r_{1})\) and \(\mathcal{D}_{2}=(D_{2},l_{2},r_{2})\) on \(\mathbb{T}M\) are _gauge equivalent_ if \(r_{1}=r_{2}\) and there exists a closed 2-form \(B\) such that
\[D_{2}=\tau_{B}\circ D_{1}\circ\tau_{B}^{-1}\text{ and }l_{2}=\tau_{B}\circ l_{1} \circ\tau_{B}^{-1}.\]
The Nijenhuis and almost complex equations (2.5) and (2.6), respectively, are invariant by gauge equivalence.
Consider a Courant 1-derivation \(\mathcal{D}\) determined by \(r\), \(g\) and \(\Sigma\) via (4.1); _here we no longer assume that the symmetric bilinear form \(g\) is nondegenerate._ Then \(\mathcal{D}\) is gauge equivalent to \(\boldsymbol{\mathcal{D}}^{r}\) if and only if there exists a closed 2-form \(B\) such that
\[g^{\flat}=B^{\flat}\circ r-r^{*}\circ B^{\flat},\qquad\Sigma_{X}=B^{\flat} \circ D_{X}^{r}-D_{X}^{r,*}\circ B^{\flat}. \tag{4.8}\]
In particular, \(\tau_{B}\) preserves the 1-derivation \(\boldsymbol{\mathcal{D}}^{r}\) (i.e., \(g=0\) and \(\Sigma=0\)) if and only if \(B\) and \(r\) are compatible in the sense of Magri-Morosi, see [5, § 5.2] (cf. Example 3.6).
Suppose that \(r\) is a complex structure. Since \(\boldsymbol{\mathcal{D}}^{r}\) satisfies the almost complex equations (2.6), any Courant 1-derivation \(\mathcal{D}\) gauge equivalent to it must satisfy these equations as well. In this case, if \(\mathcal{D}\) is determined by \(r\), \(g\) and \(\Sigma\) (as in (4.1)), then necessarily \(g^{\flat}\circ r=-r^{*}\circ g^{\flat}\) (see the proof of Proposition 4.6). So the pair \((r,g)\) gives rise to a 2-form \(\omega\in\Omega^{2}(M)\) such that \(\omega^{\flat}=g^{\flat}\circ r\).
We will now describe a natural class of Courant 1-derivations on \(\mathbb{T}M\) that are gauge equivalent to \(\boldsymbol{\mathcal{D}}^{r}\).
**Proposition 4.8**.: _Suppose that \(\mathcal{D}\) is a Courant-Nijenhuis 1-derivation on \(\mathbb{T}M\) determined by \(r\), \(g\) and \(\Sigma\) (as in (4.1)) satisfying the almost complex equations. If the 2-form \(\omega\) defined by \(\omega^{\flat}=g^{\flat}\circ r\) is closed, then \(\mathcal{D}\) is gauge equivalent to \(\boldsymbol{\mathcal{D}}^{r}\)._
Proof.: Let \(B=\frac{1}{2}\omega\), and let \(\mathcal{D}^{B}\) be the Courant-Nijenhuis 1-derivation obtained by conjugating \(\mathcal{D}\) with the \(B\)-field transform \(\tau_{B}\). Denote by \(r\), \(g^{B}\) and \(\Sigma^{B}\) the data that determine \(\mathcal{D}^{B}\). Then it is straightforward to check that
\[g^{B}=0\quad\text{and}\quad\Sigma^{B}=\Sigma+(B^{\flat}(D^{r})-D^{r,*}(B^{\flat})).\]
Using (4.2) and (4.3) for \(\mathcal{D}^{B}\), we see that \(\Sigma^{B}\) defines an element \(H\in\Gamma(\wedge^{2}T^{*}M\otimes T^{*}M)\) by
\[H(X,Y;Z)=\langle\Sigma^{B}_{X}(Y),Z\rangle.\]
As a consequence of (CN3), \(H\) is totally skew-symmetric, i.e. \(H\in\Omega^{3}(M)\) (see Footnote 1). It now follows directly from the almost complex equations and Nijenhuis equations that \(H\) must satisfy
Footnote 1: Although not needed in the proof, one can check that (CN4) is equivalent to \(dH=0\). In fact, Courant 1-derivations \(\mathcal{D}\) determined by \(r\), \(g\) and \(\Sigma\) on \(\mathbb{T}M\) with \(g=0\) are in bijective correspondence with closed 3-forms.
\[H(r(X),Y,Z)=-H(X,r(Y),Z)\quad\text{and}\quad H(r(X),Y,Z)=H(X,r(Y),Z).\]
Hence \(H=0\), which means that \(\Sigma^{B}=0\). Therefore \(\mathcal{D}^{B}=\boldsymbol{\mathcal{D}}^{r}\), as we wanted to prove.
**Corollary 4.9**.: _For a pseudo-Kähler structure \((r,g)\), the Courant 1-derivations \(\boldsymbol{\mathcal{D}}^{r}\) and \(\boldsymbol{\mathcal{D}}^{r,g}\) are gauge equivalent._
It follows that the usual holomorphic structure on \(\mathbb{T}M\to M\) (defined by \(\boldsymbol{\mathcal{D}}^{r}\), see Example 3.6) and the one modified by the metric \(g\) (defined by \(\boldsymbol{\mathcal{D}}^{r,g}\), see Cor. 4.7) are isomorphic through a gauge transformation \(\tau_{B}\). Since \(\tau_{B}\) establishes a bijective correspondence between the sets of \(\boldsymbol{\mathcal{D}}^{r}\)-invariant and \(\boldsymbol{\mathcal{D}}^{r,g}\)-invariant lagrangian subbundles, it is clear that a lagrangian subbundle \(L\subset\mathbb{T}M\) is holomorphic with respect to the holomorphic structure modified by \(g\) if and only if \(\tau_{B}(L)\) is holomorphic in the usual sense. (In particular, this shows that \(\boldsymbol{\mathcal{D}}^{r,g}\)-invariant Dirac structures are much less restrictive than those for \(\boldsymbol{\mathcal{D}}^{g}\), cf. Example 4.4).
## 5. Holomorphic Courant algebroids
In this section we show that holomorphic Courant algebroids can be seen as special cases of Courant-Nijenhuis structures.
For a complex manifold \((M,r)\) and a holomorphic vector bundle \(\mathcal{E}\to M\), we denote by \(\mathcal{O}\) the sheaf of holomorphic functions on \(M\) and by \(\Gamma_{\mathcal{E}}\) the sheaf of holomorphic sections of \(\mathcal{E}\).
**Definition 5.1**.: _A holomorphic Courant algebroid is a holomorphic vector bundle \(\mathcal{E}\to M\) endowed with a holomorphic non-degenerate symmetric \(\mathcal{O}\)-bilinear pairing \(\langle\cdot,\cdot\rangle:\Gamma_{\mathcal{E}}\times\Gamma_{\mathcal{E}}\to\mathcal{O}\), a holomorphic vector bundle map \(\mathfrak{p}:\mathcal{E}\to T^{1,0}M\) and a \(\mathbb{C}\)-bilinear bracket \([\![\cdot,\cdot]\!]:\Gamma_{\mathcal{E}}\times\Gamma_{\mathcal{E}}\to\Gamma_{\mathcal{E}}\) satisfying axioms (C1),..., (C5) in Definition 3.1 with \(\mathfrak{p}\) in place of \(\mathfrak{a}\), \(f\) a local holomorphic function, and \(\sigma_{1},\,\sigma_{2}\) and \(\sigma_{3}\) local holomorphic sections._
For the sake of completeness, we start by recalling that there exists a natural (real) smooth Courant algebroid underlying a holomorphic one (see e.g. [13]). For a holomorphic vector bundle \(\mathcal{E}\), we denote its underlying real, smooth vector bundle by \(E\).
**Proposition 5.2**.: _Let \((\mathcal{E},\mathfrak{p},[\![\cdot,\cdot]\!]\,,\langle\cdot,\cdot\rangle)\) be a holomorphic Courant algebroid. There exists a unique (real) smooth Courant algebroid structure \((\mathfrak{a},\langle\cdot,\cdot\rangle_{C^{\infty}},[\![\cdot,\cdot]\!]_{C^{ \infty}})\) on \(E\) such that_
1. \(\mathfrak{p}(\sigma)=\frac{1}{2}(\mathfrak{a}(\sigma)-\mathbf{i}\,r(\mathfrak{a}(\sigma)))\)__
2. \([\![\sigma_{1},\sigma_{2}]\!]_{C^{\infty}}=[\![\sigma_{1},\sigma_{2}]\!]\)__
3. \(\langle\sigma_{1},\sigma_{2}\rangle_{C^{\infty}}=\operatorname{Re}(\langle \sigma_{1},\sigma_{2}\rangle)\)__
_for all local holomorphic sections \(\sigma,\sigma_{1},\sigma_{2}\in\Gamma_{\mathcal{E}}(U)\)._
Proof.: We outline the proof, split in two parts: uniqueness and existence.
**Uniqueness.** We will show that the restriction of the real smooth Courant algebroid structure (satisfying (1), (2) and (3)) to holomorphic sections is sufficient to completely determine it. Let \(U\subset M\) be an open subset for which there exists a frame of holomorphic sections \(\{\sigma_{1},\ldots,\sigma_{n}\}\subset\Gamma_{\mathcal{E}}(U)\). Any smooth section \(\sigma\) of \(E\) over \(U\) can be expressed uniquely as
\[\sigma=\sum_{k=1}^{n}f_{k}\sigma_{k}+g_{k}\,l(\sigma_{k}), \tag{5.1}\]
for \(f_{k},\,g_{k}\in C^{\infty}(U)\), where \(l:E\to E\) is the fibrewise complex structure on \(E\). From the \(C^{\infty}(U)\)-linearity of \(\mathfrak{a}\), we see that \(\mathfrak{a}\) is completely characterized by \(\mathfrak{p}(\sigma_{k})\) (note that the \(\mathbb{C}\)-linearity of \(\mathfrak{p}\) implies that \(\mathfrak{a}\circ l=r\circ\mathfrak{a}\)). Also, the \(C^{\infty}(U)\)-bilinearity of \(\langle\cdot,\cdot\rangle_{C^{\infty}}\) implies that it is determined by \(\langle\sigma_{j},\sigma_{k}\rangle\), noticing that
\[\langle\sigma_{j},l(\sigma_{k})\rangle_{C^{\infty}}=-\mathrm{Im}(\langle\sigma _{j},\sigma_{k}\rangle). \tag{5.2}\]
Finally, the Leibniz equation (C3) implies that \([\![\cdot,\cdot]\!]_{C^{\infty}}\) is completely determined by \([\![\sigma_{j},\sigma_{k}]\!]\) (note that \([\![\sigma_{j},l(\sigma_{k})]\!]_{C^{\infty}}=l([\![\sigma_{j},\sigma_{k}]\!])\)).
**Existence.** By uniqueness, it suffices to describe the real smooth Courant algebroid structure locally. We will use the isomorphism of \(C^{\infty}(U)\)-modules
\[C^{\infty}(U,\mathbb{C})\otimes_{\mathcal{O}(U)}\Gamma_{\mathcal{E}}(U)\ni(f+ \mathbf{i}\,g)\otimes\sigma\ \mapsto\ f\sigma+g\,l(\sigma)\in\Gamma(U,E)\]
to construct the Courant algebroid structure locally. The anchor and the pairing are given by
\[\mathfrak{a}(\psi\otimes\sigma)=\operatorname{Re}(\psi)\mathfrak{a}(\sigma)+ \operatorname{Im}(\psi)r(\mathfrak{a}(\sigma)),\quad\langle\psi_{1}\otimes \sigma_{1},\psi_{2}\otimes\sigma_{2}\rangle_{{}^{C^{\infty}}}=\operatorname{ Re}(\psi_{1}\psi_{2}\langle\sigma_{1},\sigma_{2}\rangle).\]
Note that \(\langle\cdot,\cdot\rangle_{{}^{C^{\infty}}}\) is non-degenerate and, for \(h\in\mathcal{O}(U)\),
\[\mathcal{L}_{\mathfrak{a}(\psi\otimes\sigma)}h=\psi\,\mathcal{L}_{\mathfrak{ p}(\sigma)}h. \tag{5.3}\]
We can now define \(\mathfrak{a}^{*}:\Omega^{1}(U)\to C^{\infty}(U,\mathbb{C})\otimes_{\mathcal{O}(U)}\Gamma_{\mathcal{E}}(U)\) naturally as
\[\langle\mathfrak{a}^{*}(\alpha),\psi\otimes\sigma\rangle_{{}^{C^{\infty}}}=i_ {\mathfrak{a}(\psi\otimes\sigma)}\,\alpha.\]
The Courant bracket on \(C^{\infty}(U,\mathbb{C})\otimes_{\mathcal{O}(U)}\Gamma_{\mathcal{E}}(U)\) is given by
\[\llbracket\psi_{1}\otimes\sigma_{1},\psi_{2}\otimes\sigma_{2} \rrbracket_{{}^{C^{\infty}}}= \psi_{1}\psi_{2}\otimes\llbracket\sigma_{1},\sigma_{2}\rrbracket+ \mathcal{L}_{\mathfrak{a}(\psi_{1}\otimes\sigma_{1})}(\psi_{2})\otimes \sigma_{2}-\mathcal{L}_{\mathfrak{a}(\psi_{2}\otimes\sigma_{2})}(\psi_{1}) \otimes\sigma_{1}\] \[+\operatorname{Re}(\psi_{2}\langle\sigma_{1},\sigma_{2}\rangle) \mathfrak{a}^{*}d\operatorname{Re}(\psi_{1})-\operatorname{Im}(\psi_{2} \langle\sigma_{1},\sigma_{2}\rangle)\mathfrak{a}^{*}d\operatorname{Im}(\psi_{ 1}). \tag{5.4}\]
One can check that \(\mathfrak{a}\), \(\langle\cdot,\cdot\rangle_{{}^{C^{\infty}}}\) and \(\llbracket\cdot,\cdot\rrbracket_{{}^{C^{\infty}}}\) are well-defined in the sense that their definitions agree on \((h\psi)\otimes\sigma\) and \(\psi\otimes(h\sigma)\), for \(h\in\mathcal{O}(U)\). The axioms (C1)-(C5) follow by direct inspection.
Following Examples 2.4 and 2.5, we regard a holomorphic vector bundle \(\mathcal{E}\) as a pair \((E,\mathcal{D}^{\operatorname{Dolb}})\), where \(E\to M\) is a real, smooth vector bundle endowed with a Dolbeault 1-derivation \(\mathcal{D}^{\operatorname{Dolb}}\) defining its holomorphic structure. Our main result in this section is an equivalence between holomorphic Courant algebroid structures on \(\mathcal{E}\) and real Courant algebroid structures on \(E\) for which \(\mathcal{D}^{\operatorname{Dolb}}\) is a Courant 1-derivation.
**Theorem 5.3**.: _Consider a holomorphic vector bundle \(\mathcal{E}=(E,\mathcal{D}^{\operatorname{Dolb}})\)._
1. _If_ \((\mathfrak{p},\langle\cdot,\cdot\rangle,\llbracket\cdot,\cdot\rrbracket)\) _is a holomorphic Courant algebroid structure on_ \(\mathcal{E}\)_, then_ \(\mathcal{D}^{\operatorname{Dolb}}\) _is a Courant 1-derivation of the underlying real Courant algebroid structure on_ \(E\) _(so_ \(\mathcal{D}^{\operatorname{Dolb}}\) _makes_ \(E\) _into a Courant-Nijenhuis algebroid)._
2. _If_ \((\mathfrak{a},\langle\cdot,\cdot\rangle,\llbracket\cdot,\cdot\rrbracket)\) _is a Courant algebroid structure on_ \(E\) _such that_ \(\mathcal{D}^{\operatorname{Dolb}}\) _is a Courant 1-derivation, then_ \(\llbracket\cdot,\cdot\rrbracket\) _restricts to a_ \(\mathbb{C}\)_-bilinear bracket on_ \(\Gamma_{\mathcal{E}}\) _in such a way that it defines a holomorphic Courant algebroid structure on_ \(\mathcal{E}\) _together with_ \[\mathfrak{p}(\sigma)=\frac{1}{2}(\mathfrak{a}(\sigma)-\mathbf{i}\,r(\mathfrak{a }(\sigma))),\quad\langle\sigma_{1},\sigma_{2}\rangle_{\operatorname{hol}}= \langle\sigma_{1},\sigma_{2}\rangle-\mathbf{i}\,\langle\sigma_{1},l(\sigma_{2} )\rangle.\]
The constructions in (a) and (b) are inverses of one another.
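As a quick consistency check on the formulas in (b) (a sketch, using only \(\mathfrak{a}\circ l=r\circ\mathfrak{a}\) from (CN1) and \(r^{2}=-\operatorname{id}\)): the map \(\mathfrak{p}\) is indeed \(\mathbb{C}\)-linear, since

\[\mathfrak{p}(l(\sigma))=\frac{1}{2}\big{(}r(\mathfrak{a}(\sigma))-\mathbf{i}\,r^{2}(\mathfrak{a}(\sigma))\big{)}=\frac{1}{2}\big{(}r(\mathfrak{a}(\sigma))+\mathbf{i}\,\mathfrak{a}(\sigma)\big{)}=\mathbf{i}\,\mathfrak{p}(\sigma).\]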
Proof.: To prove part (a), we must show that \(\mathcal{D}^{\operatorname{Dolb}}=(D,l,r)\) is a Courant 1-derivation, i.e., that it satisfies (CN1)-(CN4) in Def. 3.3 with respect to the real Courant algebroid \(E\). The \(\mathbb{C}\)-linearity of \(\mathfrak{p}\) implies that \(\mathfrak{a}\circ l=r\circ\mathfrak{a}\), so (CN1) holds. To verify conditions (CN2), (CN3) and (CN4), we must check, for each open subset \(U\subset M\), the vanishing of the following expressions:
\[W_{2}(X,\sigma_{1})=\mathfrak{a}(D_{X}(\sigma_{1}))-D_{X}^{r}( \mathfrak{a}(\sigma_{1})),\] \[W_{3}(\sigma_{1},\sigma_{2})=l(\llbracket\sigma_{1},\sigma_{2} \rrbracket_{{}^{C^{\infty}}})-\left(\llbracket\sigma_{1},l(\sigma_{2}) \rrbracket_{{}^{C^{\infty}}}-D_{\mathfrak{a}(\sigma_{2})}(\sigma_{1})- \mathfrak{a}^{*}(C(\sigma_{1},\sigma_{2}))\,\right),\] \[W_{4}(X,\sigma_{1},\sigma_{2})=D_{X}(\llbracket\sigma_{1},\sigma_{2 }\rrbracket_{{}^{C^{\infty}}})-\left(\llbracket\sigma_{1},D_{X}(\sigma_{2}) \rrbracket_{{}^{C^{\infty}}}-\llbracket\sigma_{2},D_{X}(\sigma_{1})\rrbracket_{{} ^{C^{\infty}}}+D_{[\mathfrak{a}(\sigma_{2}),X]}(\sigma_{1})\right.\] \[\qquad\qquad\qquad\qquad\left.-D_{[\mathfrak{a}(\sigma_{1}),X]}( \sigma_{2})-\mathfrak{a}^{*}(i_{X}\,dC(\sigma_{1},\sigma_{2}))\,\right),\]
for \(X\in\mathfrak{X}(U)\) and \(\sigma_{1},\sigma_{2}\) smooth sections of \(E\) over \(U\).
The vanishing of \(W_{2}\) follows from the fact that, since \(\mathfrak{a}\) is holomorphic, it must intertwine \(\mathcal{D}^{\operatorname{Dolb}}\) and \(\mathcal{D}^{r}\) (recalling that \(\mathcal{D}^{r}\) is the Dolbeault 1-derivation encoding the holomorphic structure on \(TM\), see Example 2.4), so (CN2) holds.
Using that \(\mathfrak{a}\circ l=r\circ\mathfrak{a}\) and \(l=l^{*}\) (by (5.2)), one can show that \(W_{3}\) is \(C^{\infty}(U)\)-linear in each entry, whereas \(W_{4}\) is \(C^{\infty}(U)\)-linear in the first entry and satisfies
\[W_{4}(X,\sigma_{1},f\sigma_{2})-fW_{4}(X,\sigma_{1},\sigma_{2}) =(\mathcal{L}_{X}f)\,W_{3}(\sigma_{1},\sigma_{2})-(\mathcal{L}_{ W_{2}(X,\sigma_{1})}f)\,\sigma_{2}\] \[=(\mathcal{L}_{X}f)\,W_{3}(\sigma_{1},\sigma_{2}),\] \[W_{4}(X,f\sigma_{1},\sigma_{2})-fW_{4}(X,\sigma_{1},\sigma_{2}) =-(\mathcal{L}_{X}f)W_{3}(\sigma_{2},\sigma_{1})-W_{1}(X,\sigma_{1 },\sigma_{2})\mathfrak{a}^{*}df\]
where
\[W_{1}(X,\sigma_{1},\sigma_{2})=\langle D_{X}(\sigma_{1}),\sigma_{2}\rangle_{ C^{\infty}}+\langle\sigma_{1},D_{X}(\sigma_{2})\rangle_{C^{\infty}}-\mathcal{L}_ {X}\langle\sigma_{1},l(\sigma_{2})\rangle_{C^{\infty}}+\mathcal{L}_{r(X)} \langle\sigma_{1},\sigma_{2}\rangle_{C^{\infty}}.\]
Note that \(W_{1}\) is \(C^{\infty}(U)\)-linear in each entry due to (2.1).
We claim that \(W_{1}\), \(W_{3}\) and \(W_{4}\) vanish on holomorphic sections. Indeed, for \(W_{3}\) and \(W_{4}\) this follows from the facts that \(D_{X}\) is zero on holomorphic sections (by (2.4)) and that \([\![\cdot,\cdot]\!]_{C^{\infty}}\) restricts to the \(\mathbb{C}\)-bilinear holomorphic Courant bracket on holomorphic sections. For \(W_{1}\), we use the additional fact that
\[\mathcal{L}_{X}\langle\sigma_{1},l(\sigma_{2})\rangle_{C^{\infty}}-\mathcal{L}_{r(X)}\langle\sigma_{1},\sigma_{2}\rangle_{C^{\infty}}=-\mathrm{Im}(\mathcal{L}_{X+\mathbf{i}\,r(X)}\langle\sigma_{1},\sigma_{2}\rangle)=0,\]
for holomorphic \(\sigma_{1},\sigma_{2}\in\Gamma_{\mathcal{E}}(U)\), since \(\langle\sigma_{1},\sigma_{2}\rangle\in\mathcal{O}(U)\).
Since smooth sections can be locally expressed by means of a frame of holomorphic sections as in (5.1), the \(C^{\infty}(U)\)-multilinearity of \(W_{1}\) and \(W_{3}\) implies that they vanish. This in turn implies that \(W_{4}\) is also multilinear over \(C^{\infty}(U)\), and hence also vanishes. This concludes the proof of (a).
Let us prove part (b). Since \(D\) vanishes on holomorphic sections, (CN4) implies that \([\![\Gamma_{\mathcal{E}}(U),\Gamma_{\mathcal{E}}(U)]\!]\subset\Gamma_{ \mathcal{E}}(U)\), and it follows from (CN3) that the restricted bracket on holomorphic sections is \(\mathbb{C}\)-bilinear. Regarding the anchor, (CN1) implies that \(\mathfrak{p}\) is \(\mathbb{C}\)-linear, and (CN2) says that it is holomorphic. It remains to check that \(\langle\Gamma_{\mathcal{E}}(U),\Gamma_{\mathcal{E}}(U)\rangle\subset \mathcal{O}(U)\). This is a consequence of the fact that, for local holomorphic sections \(\sigma_{1},\sigma_{2}\), the duality equation (2.8) implies that
\[\mathcal{L}_{X+\mathbf{i}\,r(X)}\langle\sigma_{1},\sigma_{2}\rangle_{\mathrm{hol}}=\langle\overline{\partial}_{X+\mathbf{i}\,r(X)}\,\sigma_{1},\sigma_{2}\rangle_{\mathrm{hol}}+\langle\sigma_{1},\overline{\partial}_{X+\mathbf{i}\,r(X)}\,\sigma_{2}\rangle_{\mathrm{hol}}=0,\]
where \(\overline{\partial}\) is the flat \(T^{0,1}\)-connection (2.4). Therefore \(\langle\sigma_{1},\sigma_{2}\rangle_{\mathrm{hol}}\) is holomorphic, thus concluding the proof.
**Example 5.4**.: Let \((M,r)\) be a complex manifold and \(H\) a closed holomorphic \(3\)-form. Then \(\mathbb{T}M\), viewed as a holomorphic vector bundle, carries an \(H\)-twisted holomorphic Courant algebroid structure (analogous to Example 3.2). From the perspective of Theorem 5.3, this corresponds to the fact that \(\boldsymbol{\mathcal{D}}^{r}\) is a Courant \(1\)-derivation of the \(H\)-twisted Courant bracket on \(\mathbb{T}M\), viewed as a real vector bundle (see Example 3.6). \(\diamond\)
**Remark 5.5**.: Any holomorphic Courant algebroid whose underlying real Courant algebroid is \(\mathbb{T}M\) with the \(H\)-twisted Courant bracket, for a closed \(3\)-form \(H\), is completely characterized by a Courant-Nijenhuis \(1\)-derivation \(\mathcal{D}\) on \((\mathbb{T}M,H)\) determined by the data \(r\), \(g\) and \(\Sigma\) (as in (4.1)), where \(r\) is the complex structure on \(M\). Then \(g\) and \(\Sigma\) must satisfy the equations corresponding to the fact that \(\mathcal{D}\) is Courant-Nijenhuis and almost complex. If the \(2\)-form \(\omega\) defined by \(\omega^{\flat}=g^{\flat}\circ r\) is closed, Proposition 4.8 implies that the holomorphic Courant algebroid determined by \(\mathcal{D}\) is isomorphic to the one determined by \(\boldsymbol{\mathcal{D}}^{r}\). In general this is not the case, but one can always use \(\omega\) to gauge away \(g\), leaving the construction of more general holomorphic Courant algebroids as a problem of choosing \(\Sigma\) suitably; this provides a different approach to the classification of holomorphic Courant algebroids in [13, Prop. 1.3].
## 6. Lagrangian splittings and doubles
### Lagrangian splittings of Courant algebroids
Let \((E,\langle\cdot,\cdot\rangle,\mathfrak{a},[\![\cdot,\cdot]\!])\) be a Courant algebroid. By a _lagrangian splitting_ of \(E\) we mean a decomposition \(E=A\oplus B\), where \(A\) and \(B\) are lagrangian subbundles. In this case, there is an isomorphism \(B\cong A^{*}\) via the pairing that yields an identification \(E=A\oplus A^{*}\) as pseudo-euclidean vector bundles, where \(A\oplus A^{*}\) is equipped with its canonical pairing
\[\langle(a,\alpha),(b,\beta)\rangle:=\beta(a)+\alpha(b).\]
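Note, for instance, that both summands are isotropic for this pairing,

\[\langle(a,0),(b,0)\rangle=0,\qquad\langle(0,\alpha),(0,\beta)\rangle=0,\]

so \(A\) and \(A^{*}\) are indeed lagrangian in \(A\oplus A^{*}\).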
Let \(p_{A}:E\to A\) and \(p_{A^{*}}:E\to A^{*}\) denote the natural projections onto \(A\) and \(A^{*}\), respectively. The anchor and bracket on \(E\) induce the following structures on \(A\) and \(A^{*}\): an anchor \(\rho:=\mathfrak{a}|_{A}:A\to TM\), along with an \(\mathbb{R}\)-bilinear bracket \([\cdot,\cdot]\) on \(\Gamma(A)\) and an element \(\varphi\in\Gamma(\wedge^{3}A^{*})\) given by
\[[a,b]:=p_{A}([\![(a,0),(b,0)]\!]),\qquad i_{b}i_{a}\varphi:=p_{A^{*}}([\![(a,0),(b,0)]\!]),\]
and, similarly, an anchor \(\rho_{*}:A^{*}\to TM\), bracket \([\cdot,\cdot]_{*}\) on \(\Gamma(A^{*})\) and element \(\chi\in\Gamma(\wedge^{3}A)\). Note that \(A\) and \(A^{*}\), endowed with their anchors and brackets, become pre-Lie algebroids (see § 2.4). We denote the corresponding operators by
\[d_{A}:\Gamma(\wedge^{\bullet}A^{*})\to\Gamma(\wedge^{\bullet+1}A^{*}),\qquad d _{A^{*}}:\Gamma(\wedge^{\bullet}A)\to\Gamma(\wedge^{\bullet+1}A).\]
With respect to these structures, the Courant bracket on \(E=A\oplus A^{*}\) is given by
\[[\![(a,\alpha),(b,\beta)]\!]=([a,b]+\mathcal{L}_{\alpha}b-i_{\beta}\,d_{A^{*}}a+i_{\beta}i_{\alpha}\chi,\,[\alpha,\beta]_{*}+\mathcal{L}_{a}\beta-i_{b}\,d_{A}\alpha+i_{b}i_{a}\varphi), \tag{6.1}\]
where \(\mathcal{L}_{\alpha}=i_{\alpha}d_{A^{*}}+d_{A^{*}}i_{\alpha}\), similarly for \(\mathcal{L}_{a}\).
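For instance, taking \(A=TM\) with its Lie bracket and anchor \(\rho=\operatorname{id}\), \(A^{*}=T^{*}M\) with zero anchor and zero bracket, and \(\varphi=\chi=0\), one has \(d_{A}=d\) and \(d_{A^{*}}=0\), and (6.1) reduces to the standard Courant bracket on \(\mathbb{T}M\):

\[[\![(X,\xi),(Y,\eta)]\!]=([X,Y],\,\mathcal{L}_{X}\eta-i_{Y}\,d\xi).\]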
Let \((A,\rho,[\cdot,\cdot])\) and \((A^{*},\rho_{*},[\cdot,\cdot]_{*})\) be pre-Lie algebroids in duality, further equipped with sections \(\varphi\in\Gamma(\wedge^{3}A^{*})\) and \(\chi\in\Gamma(\wedge^{3}A)\). The pair \((A,A^{*})\) is called a _proto bialgebroid_ when the anchors, brackets and \(3\)-sections satisfy compatibility conditions (spelled out in [24], see also [14]) saying that the bracket (6.1) on \(\Gamma(A\oplus A^{*})\) makes \(A\oplus A^{*}\) into a Courant algebroid with anchor \(\mathfrak{a}=\rho+\rho_{*}\) and canonical pairing, called the _double_ of \((A,A^{*})\).
We therefore obtain the following equivalence: any Courant algebroid equipped with a lagrangian splitting yields a proto bialgebroid, and any proto bialgebroid gives rise, by means of its double, to a Courant algebroid endowed with a lagrangian splitting.
The following are special cases of interest of this correspondence.
* When \(\varphi=0\) and \(\chi=0\), a proto bialgebroid \((A,A^{*})\) is a Lie bialgebroid, i.e., \((A,A^{*})\) is a pair of Lie algebroids \((A,\rho,[\cdot,\cdot])\) and \((A^{*},\rho_{*},[\cdot,\cdot]_{*})\) in duality such that the Lie-algebroid differential \(d_{A^{*}}\) and the Schouten bracket \([\cdot,\cdot]\) on \(\Gamma(\wedge A)\) satisfy (6.2) \[d_{A^{*}}[a,b]=[d_{A^{*}}a,b]+[a,d_{A^{*}}b],\qquad\forall\,a,b\in\Gamma(A).\] Lie bialgebroids are in correspondence with Courant algebroids equipped with a lagrangian splitting by Dirac structures [19], known as _Manin triples_.
* When \(\varphi=0\), a proto bialgebroid \((A,A^{*})\) is a Lie quasi-bialgebroid [24], in which case \(A\) is a Lie algebroid, \(d_{A^{*}}\) satisfies (6.2), \(d_{A^{*}}^{2}=[\chi,\cdot]\), and \(d_{A^{*}}\chi=0\). Lie quasi-bialgebroids correspond to Courant algebroids equipped with a splitting given by a Dirac structure and a lagrangian complement, known as _Manin quasi-triples_.
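For example, over a point \(M=\{*\}\), a Lie bialgebroid is just a Lie bialgebra \((\mathfrak{g},\mathfrak{g}^{*})\), and its double is the classical Drinfeld double \(\mathfrak{d}=\mathfrak{g}\oplus\mathfrak{g}^{*}\): unwinding (6.1) with \(\varphi=\chi=0\) gives the familiar bracket

\[[\![(a,\alpha),(b,\beta)]\!]=\big{(}[a,b]+\operatorname{ad}^{*}_{\alpha}b-\operatorname{ad}^{*}_{\beta}a,\;[\alpha,\beta]_{*}+\operatorname{ad}^{*}_{a}\beta-\operatorname{ad}^{*}_{b}\alpha\big{)}.\]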
For a Courant algebroid \(E\to M\), a lagrangian splitting \(E=A\oplus A^{*}\) induces a bivector field \(\pi\) on \(M\) via
\[\pi^{\sharp}=\rho_{*}\circ\rho^{*}:T^{*}M\to TM \tag{6.3}\]
that satisfies
\[\frac{1}{2}[\pi,\pi]=\rho(\chi)+\rho_{*}(\varphi),\]
see [18, §§ 3.2 and 3.4]. In particular, \(\pi\) is a Poisson structure when \((A,A^{*})\) is a Lie bialgebroid.
### Lagrangian splittings of Courant 1-derivations
Consider a Courant algebroid \(E\) equipped with a lagrangian splitting, that we write as \(E=A\oplus A^{*}\).
**Assumption**. _We assume throughout this subsection that_
\[\boldsymbol{\mathcal{D}}=(\mathbb{D},\ell,r)\]
_is a 1-derivation on the vector bundle \(E\) that keeps the splitting invariant, i.e., the subbundles \(A\) and \(A^{*}\) are \(\boldsymbol{\mathcal{D}}\)-invariant (as in Def. 2.1). We will further assume that \(\boldsymbol{\mathcal{D}}\) is symmetric (\(\boldsymbol{\mathcal{D}}=\boldsymbol{\mathcal{D}}^{*}\)), in which case the restricted 1-derivations on \(A\) and \(A^{*}\) are dual to one another._
We denote the 1-derivation on \(A\) by \(\mathcal{D}=(D,l,r)\), so that the 1-derivation on \(A^{*}\) is \(\mathcal{D}^{*}=(D^{*},l^{*},r)\) and
\[\mathbb{D}=(D,D^{*}),\qquad\ell=(l,l^{*}).\]
We keep the notation from § 6.1 for the pre-Lie algebroids \((A,\rho,[\cdot,\cdot])\) and \((A^{*},\rho_{*},[\cdot,\cdot]_{*})\), with 3-sections \(\chi\in\Gamma(\wedge^{3}A)\) and \(\varphi\in\Gamma(\wedge^{3}A^{*})\).
**Theorem 6.1**.: _The 1-derivation \(\boldsymbol{\mathcal{D}}=(\mathbb{D},\ell,r)\) is a Courant 1-derivation of \(E=A\oplus A^{*}\) if and only if the following conditions hold:_
* \(\mathcal{D}\) _is compatible with the pre-Lie algebroid_ \((A,\rho,[\cdot,\cdot])\)_, and_ \[\varphi\in\Gamma_{l}(\wedge^{3}A^{*}),\qquad D^{*}(\varphi)=0;\]
* \(\mathcal{D}^{*}\) _is compatible with the pre-Lie algebroid_ \((A^{*},\rho_{*},[\cdot,\cdot]_{*})\)_, and_ \[\chi\in\Gamma_{l^{*}}(\wedge^{3}A),\qquad D(\chi)=0.\]
_Moreover, \(\boldsymbol{\mathcal{D}}\) is Nijenhuis (resp. Dolbeault) if and only if so is \(\mathcal{D}\)._
Proof.: We must show the equivalence between conditions (CN1)-(CN4) (in Definition 3.3) for \(\boldsymbol{\mathcal{D}}\) and (IM1)-(IM4) (in Definition 2.8) for both \(\mathcal{D}\) and \(\mathcal{D}^{*}\) along with the conditions on \(\varphi\) and \(\chi\) in the statement.
It directly follows from \(\mathfrak{a}=\rho+\rho_{*}\) and \(\ell=(l,l^{*})\) that (CN1) for \(\boldsymbol{\mathcal{D}}\) is equivalent to (IM1) for \(\mathcal{D}\) and \(\mathcal{D}^{*}\). Similarly, the fact that \(\mathbb{D}=(D,D^{*})\) implies that (CN2) for \(\boldsymbol{\mathcal{D}}\) is equivalent to (IM2) for \(\mathcal{D}\) and \(\mathcal{D}^{*}\).
_Claim 1. Assume that (IM1) holds for \(\mathcal{D}\) and \(\mathcal{D}^{*}\). Then condition (CN3) for \(\boldsymbol{\mathcal{D}}\) is equivalent to (IM3) for \(\mathcal{D}\) and \(\mathcal{D}^{*}\), as well as \(\varphi\in\Gamma_{l}(\wedge^{3}A^{*})\) and \(\chi\in\Gamma_{l^{*}}(\wedge^{3}A)\)._
Let us verify the claim. For sections of type \(\sigma_{1}=(a,0)\), \(\sigma_{2}=(b,0)\), (CN3) becomes
\[\ell([a,b]+i_{b}i_{a}\varphi)=[a,l(b)]+i_{l(b)}i_{a}\varphi-D_{\rho(b)}(a),\]
which splits into
\[l([a,b])=[a,l(b)]-D_{\rho(b)}(a)\quad\text{and}\quad l^{*}(i_{b}i_{a}\varphi) =i_{l(b)}i_{a}\varphi.\]
These conditions hold for all \(a,b\in\Gamma(A)\) if and only if (IM3) holds for \(\mathcal{D}\) and \(\varphi\in\Gamma_{l}(\wedge^{3}A^{*})\). Similarly, (CN3) for sections of type \(\sigma_{1}=(0,\alpha)\), \(\sigma_{2}=(0,\beta)\) is equivalent to (IM3) for \(\mathcal{D}^{*}\) and \(\chi\in\Gamma_{l^{*}}(\wedge^{3}A)\). For sections of type \(\sigma_{1}=(0,\alpha)\), \(\sigma_{2}=(b,0)\), (CN3) amounts to two equations:
\[l^{*}(i_{b}d_{A}\alpha)=i_{l(b)}d_{A}\alpha+D^{*}_{\rho(b)}(\alpha)+\langle D^{*}_{\rho(\cdot)}(\alpha),b\rangle, \tag{6.4}\] \[l(\mathcal{L}_{\alpha}b)=\mathcal{L}_{\alpha}l(b)-\langle D^{*}_{\rho_{*}(\cdot)}(\alpha),b\rangle. \tag{6.5}\]
We will see that (6.4) (resp. (6.5)) follows directly from (2.12) for \(\mathcal{D}\) (resp. \(\mathcal{D}^{*}\)) and \(m=1\), which is equivalent to (IM3) under the assumption that (IM1) holds. Indeed, note that (2.12) for \(m=1\) has the following alternative formulations
\[\langle D^{*}_{\rho(\cdot)}(\alpha),b\rangle=l^{*}(i_{b}d_{A}\alpha)-i_{b}d_{A}(l^{*}(\alpha))\ \ \text{(similarly, $\langle\alpha,D_{\rho_{*}(\cdot)}(b)\rangle=l(i_{\alpha}d_{A^{*}}b)-i_{\alpha}d_{A^{*}}(l(b))$.)} \tag{6.6}\]
So (6.4) is obtained from adding up (2.12) and (6.6). The second equation (6.5) follows from the Cartan formula \(\mathcal{L}_{\alpha}=i_{\alpha}d_{A^{*}}+d_{A^{*}}i_{\alpha}\) together with (2.8) and (6.6). The equations corresponding to (CN3) for sections of type \(\sigma_{1}=(a,0)\), \(\sigma_{2}=(0,\beta)\) are entirely analogous to (6.4) and (6.5), and hold for similar reasons. This proves the claim.
_Claim 2_.: _Assume that (IM1), (IM2) and (IM3) hold for \(\mathcal{D}\) and \(\mathcal{D}^{*}\). Then condition (CN4) for \(\boldsymbol{\mathcal{D}}\) is equivalent to (IM4) for \(\mathcal{D}\) and \(\mathcal{D}^{*}\), as well as \(D^{*}(\varphi)=0\) and \(D(\chi)=0\)._
To verify the claim, note that for sections of type \(\sigma_{1}=(a,0)\), \(\sigma_{2}=(b,0)\), (CN4) takes the form
\[D_{X}([a,b])+D_{X}^{*}(i_{b}i_{a}\varphi)= [a,D_{X}(b)]+i_{D_{X}(b)}i_{a}\varphi-[b,D_{X}(a)]-i_{D_{X}(a)}i_{b}\varphi\] \[+D_{[\rho(b),X]}(a)-D_{[\rho(a),X]}(b).\]
So in this case (CN4) amounts to (IM4) for \(\mathcal{D}\) together with the following condition on \(\varphi\) (using (2.8)):
\[\mathcal{L}_{X}\varphi(a,b,l(c))-\mathcal{L}_{r(X)}\varphi(a,b,c)-\varphi(D_{ X}a,b,c)-\varphi(a,D_{X}b,c)-\varphi(a,b,D_{X}c)=0,\]
for all \(a,b,c\in\Gamma(A)\). This last condition is the same as \(D^{*}(\varphi)=0\) when \(\varphi\in\Gamma_{l}(\wedge^{3}A^{*})\) (see (2.10)). Similarly, (CN4) holds for sections of type \(\sigma_{1}=(0,\alpha)\), \(\sigma_{2}=(0,\beta)\) if and only if \(\mathcal{D}^{*}\) satisfies (IM4) and \(D(\chi)=0\).
For sections of type \(\sigma_{1}=(0,\alpha)\), \(\sigma_{2}=(b,0)\), (CN4) is equivalent to
\[D_{X}(\mathcal{L}_{\alpha}b) =\mathcal{L}_{\alpha}(D_{X}b)+i_{D_{X}^{*}(\alpha)}d_{A^{*}}b-D_{[\rho_{*}(\alpha),X]}b-(\rho_{*})^{*}i_{X}d\langle D_{(\cdot)}^{*}\alpha,b\rangle, \tag{6.7}\] \[D_{X}^{*}(i_{b}d_{A}\alpha) =i_{D_{X}(b)}d_{A}\alpha+\mathcal{L}_{b}D_{X}^{*}(\alpha)-D_{[\rho(b),X]}^{*}\alpha+\rho^{*}i_{X}d\langle D_{(\cdot)}^{*}\alpha,b\rangle. \tag{6.8}\]
These equations follow directly from (2.13) for \(D\) and \(D^{*}\) and \(m=1\), which is equivalent to (IM4) under the assumption that (IM1), (IM2) and (IM3) hold. Indeed, first notice that using (2.8) and \(\rho_{*}^{*}D_{X}^{r,*}(d\langle\alpha,b\rangle)=D_{X}(d_{A^{*}}\langle\alpha,b\rangle)\) together with the Cartan formula \(\mathcal{L}_{\alpha}=i_{\alpha}d_{A^{*}}+d_{A^{*}}i_{\alpha}\), one can see that both equations are exactly the same under the change \(\alpha\leftrightarrow b\). Now, using that
\[\langle D_{X}^{*}(i_{b}d_{A}\alpha)-i_{D_{X}(b)}d_{A}\alpha,a\rangle =-D_{X}^{*}(d_{A}\alpha)(a,b)\] \[\langle\mathcal{L}_{b}D_{X}^{*}(\alpha)+\rho^{*}i_{X}d\,\langle D _{(\cdot)}^{*}\alpha,b\rangle,a\rangle =-d_{A}D_{X}^{*}(\alpha)(a,b)+\mathcal{L}_{X}\langle D_{\rho(a)} ^{*}(\alpha),b\rangle+\langle D_{[\rho(a),X]}^{*}(\alpha),b\rangle,\]
one can check that (6.8) is exactly (2.13) in degree 1.
The situation for sections of type \(\sigma_{1}=(a,0)\), \(\sigma_{2}=(0,\beta)\) is entirely analogous. This proves claim 2 and concludes the proof of the first part of the theorem.
The assertion about the Nijenhuis condition follows from the decompositions \(\mathbb{D}=(D,D^{*})\) and \(\ell=(l,l^{*})\), and the fact that \(D\) is Nijenhuis (resp. Dolbeault) if and only if so is \(D^{*}\), see [9, Thm. 2.11].
When \(\boldsymbol{\mathcal{D}}=(\mathbb{D},\ell,r)\) is a Courant 1-derivation, there is also a compatibility with the bivector field \(\pi\in\mathfrak{X}^{2}(M)\) in (6.3), defined by the lagrangian splitting \(E=A\oplus A^{*}\), see [9, Prop. 4.6 (i)].
**Corollary 6.2**.: _The pair \((\pi,r)\) is compatible in the sense of (3.3). In particular, if the lagrangian splitting is by Dirac structures and \(\boldsymbol{\mathcal{D}}\) is Nijenhuis, then \((\pi,r)\) defines a Poisson-Nijenhuis structure._
Proof.: Using that \(\pi^{\sharp}=\rho_{*}\circ\rho^{*}\), \(\mathbb{D}=(D,D^{*})\) and \(\ell=(l,l^{*})\), the first condition in (3.3) follows from condition (IM1) for \(\mathcal{D}\) and \(\mathcal{D}^{*}\), while the second condition in (3.3) follows from (IM2) for \(\mathcal{D}\) and \(\mathcal{D}^{*}\).
Recall from [9] that a _Lie-Nijenhuis bialgebroid_ is a triple \((A,A^{*},\mathcal{D})\), where \((A,A^{*})\) is a Lie bialgebroid and \(\mathcal{D}\) is a Nijenhuis 1-derivation on \(A\) such that \(\mathcal{D}\) is compatible with the Lie algebroid structure on \(A\), and \(\mathcal{D}^{*}\) is compatible with the Lie algebroid structure on \(A^{*}\) (in
the sense of Def. 2.8). These are the infinitesimal objects corresponding to Poisson-Nijenhuis groupoids, see [9, § 4.3].
By Theorem 6.1 we have the following enhancement of the known correspondence between Lie bialgebroids and Manin triples.
**Corollary 6.3**.: _Lie-Nijenhuis bialgebroids are equivalent to Courant-Nijenhuis algebroids equipped with a splitting by Dirac-Nijenhuis structures._
When \(\boldsymbol{\mathcal{D}}\) is a Dolbeault \(1\)-derivation, Theorem 6.1 gives the known correspondence between Lie (quasi-)bialgebroids and Manin (quasi-)triples in the holomorphic category.
|
2307.02456 | Derived Categories of Derived Grassmannians | This paper establishes semiorthogonal decompositions for derived
Grassmannians of perfect complexes with Tor-amplitude in $[0,1]$. This result
verifies the author's Quot formula conjecture [J21a] and generalizes and
strengthens Toda's result in [Tod23].
We give applications of this result to various classical situations such as
blowups of determinantal ideals, reducible schemes, and varieties of linear
series on curves.
Our approach utilizes the framework of derived algebraic geometry, allowing
us to work over arbitrary base spaces over $\mathbb{Q}$. It also provides
concrete descriptions of Fourier-Mukai kernels in terms of derived Schur
functors. | Qingyuan Jiang | 2023-07-05T17:31:44Z | http://arxiv.org/abs/2307.02456v1 | # Derived categories of derived Grassmannians
###### Abstract.
This paper establishes semiorthogonal decompositions for derived Grassmannians of perfect complexes with Tor-amplitude in \([0,1]\). This result verifies the author's Quot formula conjecture [10] and generalizes and strengthens Toda's result in [12].
We give applications of this result to various classical situations such as blowups of determinantal ideals, reducible schemes, and varieties of linear series on curves.
Our approach utilizes the framework of derived algebraic geometry, allowing us to work over arbitrary base spaces over \(\mathbb{Q}\). It also provides concrete descriptions of Fourier-Mukai kernels in terms of derived Schur functors.
## 1. Introduction
This paper establishes semiorthogonal decompositions for a broad class of maps \(\operatorname{\mathrm{Grass}}_{X}(\mathscr{E};d)\to X\), where \(\operatorname{\mathrm{Grass}}_{X}(\mathscr{E};d)\) is the relative Grassmannian of a complex \(\mathscr{E}\) over \(X\) ([10]):
**Theorem** (Theorem 3.2).: _Let \(d\in\mathbb{Z}_{>0}\). For any scheme (or more generally, prestack) \(X\) over \(\mathbb{Q}\), any perfect complex \(\mathscr{E}\) of Tor-amplitude in \([0,1]\) and rank \(r\geq 0\) on \(X\), and any type of derived category \(\mathrm{D}\in\{\mathrm{D}_{\mathrm{qc}},\mathrm{D}_{\mathrm{coh}}^{-}, \mathrm{D}_{\mathrm{coh}}^{\mathrm{b}},\mathrm{D}^{\mathrm{perf}}\}\), there is a semiorthogonal decomposition_
\[\mathrm{D}(\operatorname{\mathrm{Grass}}_{X}(\mathscr{E};d))=\left\langle \binom{r}{i}\text{ copies of }\mathrm{D}(\operatorname{\mathrm{Grass}}_{X}(\mathscr{E}^{\vee}[1];d-i)) \right\rangle_{0\leq i\leq\min\{r,d\}}. \tag{1.1}\]
_This semiorthogonal decomposition is induced by fully faithful functors_ \(\Phi^{(i,\lambda)}\) (Notation 2.14) _that are explicitly expressed in terms of derived Schur functors applied to universal perfect complexes on the incidence loci, parametrized by Young diagrams \(\lambda\) of height \(\leq(r-i)\) and width \(\leq i\)._
This result verifies and generalizes the author's Quot formula conjecture [10, Conj. A.5].
Yukinobu Toda [12] has established a version of this theorem\({}^{1}\) using a different method, the categorified Hall product. His theorem applies to any smooth quasi-projective complex variety \(X\). This paper extends and strengthens Toda's result by removing the assumptions of smoothness and quasi-projectivity on the base \(X\), providing explicit descriptions of the Fourier-Mukai kernels, and including the cases for \(\mathrm{D}=\mathrm{D}_{\mathrm{qc}},\mathrm{D}_{\mathrm{coh}}^{-}\), and \(\mathrm{D}^{\mathrm{perf}}\).
Footnote 1: The semiorthogonal decompositions in these two papers have different semiorthogonal orders, but we expect that they differ by a sequence of mutations.
Our theorem both unifies and generalizes the following important results:
* Orlov's projective bundle formula [11].
* Kapranov's exceptional collections for Grassmannians and the generalization to Grassmannian bundles [13, 14, 15].
* Orlov's blowup formula [12].
* The semiorthogonal decompositions for standard flips [1, 12, 13].
* Orlov's universal hyperplane section formula [12].
* The embedding of derived categories for Grassmannian flips.
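As a sanity check of how (1.1) specializes, suppose \(\mathscr{E}\) is a vector bundle of rank \(r\) (Tor-amplitude in \([0,0]\)) and \(d\leq r\). Then \(\mathscr{E}^{\vee}[1]\) has vanishing \(\pi_{0}\), so it admits no rank-\(j\) quotient bundles for \(j\geq 1\) and \(\operatorname{Grass}_{X}(\mathscr{E}^{\vee}[1];j)=\emptyset\); the formula (1.1) collapses to

\[\operatorname{D}(\operatorname{Grass}_{X}(\mathscr{E};d))=\left\langle\binom{r}{d}\text{ copies of }\operatorname{D}(X)\right\rangle,\]

which is Kapranov's decomposition for a genuine Grassmannian bundle (and, for \(d=1\), Orlov's projective bundle formula). The remaining components of (1.1) become relevant precisely when \(\mathscr{E}\) is not a bundle. Concretely, suppose \(\mathscr{E}\) is presented by a two-term complex of vector bundles \([\mathscr{O}_{X}^{m}\xrightarrow{\sigma}\mathscr{O}_{X}^{n}]\), so that \(r=n-m\). Then: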
* The map \(\operatorname{Grass}_{X}(\mathscr{E};d)\to X\) is a _stratified_ Grassmannian bundle. The general fibers are Grassmannian varieties \(\mathbb{G}_{d}(r)\), but the fiber dimension jumps over the degeneracy loci \(X_{j}=D_{m-j}(\sigma)\) where the map \(\sigma\) has rank \(\leq m-j\), for \(j\geq 1\).
* The maps \(\operatorname{Grass}_{X}(\mathscr{E}^{\vee}[1];j)\to X\) on the right-hand side of (1.1) are (derived) partial resolutions of the degeneracy loci \(X_{j}=D_{m-j}(\sigma)\), \(j\geq 1\). Consequently, their derived categories provide noncommutative partial resolutions of the \(X_{j}\)'s.
Therefore, in this case, our theorem extends Kapranov's result to stratified Grassmannian bundles. The formula (1.1) implies that \(\operatorname{D}(\operatorname{Grass}_{X}(\mathscr{E};d))\) contains \(\binom{r}{d}\) copies of \(\operatorname{D}(X)\), which corresponds to a family version of Kapranov's exceptional collections for a genuine \(\mathbb{G}_{d}(r)\)-bundle. The "corrections" are given by the noncommutative partial resolutions \(\operatorname{D}(\operatorname{Grass}_{X}(\mathscr{E}^{\vee}[1];j))\) of the degeneracy loci \(X_{j}\), \(j=1,2,\dots,d\), capturing the contributions arising from the fiber-dimension-jumping behavior of \(\operatorname{Grass}_{X}(\mathscr{E};d)\) over \(X_{j}\).
Similarly, the case \(d=r\) of the theorem extends Orlov's blowup formula [11] for blowups along locally complete intersection (l.c.i.) subschemes to non-l.c.i. cases (see Corollary 4.1).
In the case where \(d>r\), \(\operatorname{D}(\operatorname{Grass}_{X}(\mathscr{E};d))\) and \(\operatorname{D}(\operatorname{Grass}_{X}(\mathscr{E}^{\vee}[1];d-r))\) are both noncommutative partial resolutions of the degeneracy locus \(X_{d-r}\). We should view \(\operatorname{Grass}_{X}(\mathscr{E};d)\dashrightarrow\operatorname{Grass}_{X}(\mathscr{E}^{\vee}[1];d-r)\) as a derived generalization of a Grassmannian flip. The formula (1.1) recovers the embedding of derived categories for this flip, with the orthogonal complement given by noncommutative partial resolutions of the higher degeneracy loci \(X_{d-r+j}\), \(j=1,2,\dots,r\).
**Remark 1.1** (Characteristic-zero assumption).: Notice that in our proof, the characteristic-zero assumption is only required in Lemma 3.13.(2). Consequently, if we consider cases where the involved Lascoux-type complexes of Lemma 3.13.(2) are characteristic-free (e.g., when \(d=1\)), our theorem's result is characteristic-free.
### Classical Applications
Due to the complex behavior of the map \(\operatorname{Grass}_{X}(\mathscr{E};d)\to X\), our theorem finds applications in various interesting classical situations:
* (_Blowup formula for blowups of determinantal ideals §4.1_). Let \(\operatorname{Bl}_{Z}(X)\to X\) be the blowup of a scheme \(X\) along a determinantal subscheme of codimension \((r+1)\) considered in §4.1. Then we obtain a semiorthogonal decomposition \[\operatorname{D}\left(\operatorname{Bl}_{Z}(X)\right)=\left\langle\left\langle\binom{r}{j}\text{ copies of }\operatorname{D}(\widetilde{X_{j}})\right\rangle_{1\leq j\leq r},\operatorname{D}(X)\right\rangle,\] where \(\widetilde{X_{j}}\) are (possibly derived) partial resolutions of the determinantal loci \(X_{j}\), \(j=1,\dots,r\); see Corollary 4.1.
* (_Derived categories for reducible schemes §4.2_). In [16, Examples 7.22], the projectivization formula was used to obtain the following formula for attaching a rational tail \(\mathbb{P}^{1}\) to a smooth point \(p\) of a complex curve \(C\): \[\operatorname{D}\left(C\bigsqcup_{p}\mathbb{P}^{1}\right)=\big{\langle}\operatorname{D}(\operatorname{Spec}\mathbb{C}[\varepsilon_{1}]),\operatorname{D}(C)\big{\rangle},\] where \(\mathbb{C}[\varepsilon_{1}]\) is the derived ring of dual numbers with \(\deg(\varepsilon_{1})=1\) and \(\varepsilon_{1}^{2}=0\); see also [11, Proposition 6.15]. This paper greatly generalizes this result to a large class of reducible schemes (see Corollary 4.3), which includes the central fibers of the deformation-to-normal-cone construction as special cases (Remark 4.4).
* (_Varieties of Linear Series on Curves_). Consider the varieties \(G_{d}^{r}(C)\) parametrizing linear series of degree \(d\) and dimension \(r\) on a smooth complex projective curve of genus \(g\geq 1\) (see [1, Chapters IV, V]). Our theorem implies that \(G_{d}^{r}(C)\) have natural derived enhancements \(\mathbf{G}_{d}^{r}(C)\), for which there is a semiorthogonal decomposition \[\operatorname{D}(\mathbf{G}_{d}^{r}(C))=\left\langle\binom{1-g+d}{i}\text{ copies of }\operatorname{D}(\mathbf{G}_{2g-2-d}^{r-i}(C))\right\rangle_{0\leq i\leq\min\{1-g+d,r+1\}}\]
provided that \(d\geq g-1\) and \(r\geq-1\); see Corollary 4.5 and [13, Corollary 1.6]. This result extends Toda's result for symmetric products of curves in [13] (see also [14, 15]) and the author's result [16] for the case when \(r=1\).
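For instance, in the case \(r=0\) the derived scheme \(\mathbf{G}_{d}^{0}(C)\) is the classical symmetric product \(\operatorname{Sym}^{d}(C)\) (a \(g^{0}_{d}\) is just an effective divisor, and \(\operatorname{Sym}^{d}(C)\) is smooth of the expected dimension \(d\)); interpreting \(\mathbf{G}^{-1}_{e}(C)\) as \(\operatorname{Pic}^{e}(C)\) (no vanishing conditions imposed), the above formula specializes, for \(g\leq d\leq 2g-2\), to

\[\operatorname{D}(\operatorname{Sym}^{d}C)=\big{\langle}\operatorname{D}(\operatorname{Sym}^{2g-2-d}C),\ (d-g+1)\text{ copies of }\operatorname{D}(\operatorname{Pic}^{2g-2-d}(C))\big{\rangle},\]

recovering Toda's semiorthogonal decomposition for symmetric products of curves.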
For special curves, the above Corollary 4.5 gives rise to examples of flips of classical threefolds, where the semiorthogonal decomposition contains components given by nonclassical derived schemes (see Example 4.6). It also produces examples of derived equivalences for threefold flops induced by nonclassical derived incidence schemes (see Example 4.7).
The framework presented in this paper allows us to extend the above Corollary 4.5 to families of singular integral curves \(\mathscr{C}/S\), with the role of \(\operatorname{Pic}^{d}(C)\) replaced by the compactified Jacobians \(\overline{\operatorname{Jac}}^{d}_{\mathscr{C}/S}\); see Remark 4.8.
### A Categorified Decomposition Theorem
For a proper map \(Y\to X\) between complex algebraic varieties, the Beilinson-Bernstein-Deligne (BBD) decomposition theorem (see [1]) provides a decomposition for intersection cohomologies
\[\operatorname{IH}^{k}(Y)\simeq\bigoplus_{i}\operatorname{IH}^{k-d_{i}}( \overline{X_{i}},L_{i}),\]
where \(X_{i}\subseteq X\) are strata for the map \(Y\to X\), \(L_{i}\) are local systems on \(X_{i}\), and \(d_{i}\in\mathbb{Z}\).
A fundamental question is in which situations we can "categorify" this result, in the sense of finding a semiorthogonal decomposition of the derived category \(\operatorname{D}(Y)\) of \(Y\), with pieces given by derived categories of spaces supported over the closures \(\overline{X_{i}}\) of the strata \(X_{i}\). For instance, such a categorified decomposition is not possible for \(K\)-trivial contractions \(Y\to X\).
As the spaces on the right-hand side of the formula (1.1) are derived partial resolutions of the closed strata for the map \(Y=\operatorname{Grass}_{X}(\mathscr{E};d)\to X\), we could regard our main Theorem 3.2 as such a categorification for a broad class of maps.
### Derived Algebraic Geometry
This paper uses the framework of derived algebraic geometry (DAG), developed by Lurie, Toën, Vezzosi and many others ([11, 12, 13]). DAG plays a crucial role in this paper in the following aspects:
1. (_Generality and compatibility with base change_). The theorem applies to any prestack \(X\) over \(\mathbb{Q}\), including all (derived) schemes and stacks as special cases. Moreover, the formulation of the formula (1.1) commutes with arbitrary base change.
2. (_Fourier-Mukai kernels via derived Schur functors_). The theorem provides explicit descriptions of the Fourier-Mukai kernels involved in terms of derived Schur functors. The derived Schur functors are non-abelian derived functors (in the sense of Quillen [14] and Lurie [11]; or equivalently, animations in the sense of [10, §5.1]) of classical Schur module functors. The theory of derived Schur functors has been studied in [16, §3]. Importantly, they are highly _computable_: using the generalized Illusie isomorphisms [16, Proposition 3.34], the derived Schur functors appearing in the theorem can be computed using Akin, Buchsbaum, and Weyman's theory of Schur complexes [1, 2].
3. (_Derived incidence correspondence schemes_). The Fourier-Mukai kernels of the theorem are supported on certain universal incidence loci (Definition 2.7). These incidence loci generally possess non-trivial derived structures, even in the cases where all involved spaces in the formula (1.1) are classical (see Example 4.7). They are the derived zero loci of cosections of the form \[\mathscr{Q}_{+}^{\vee}\boxtimes_{X}\mathscr{Q}_{-}^{\vee}[1]\xrightarrow{\rho_{+}^{\vee}\boxtimes\rho_{-}^{\vee}[1]}\mathscr{E}^{\vee}\boxtimes\mathscr{E}\xrightarrow{\operatorname{ev}}\mathscr{O},\] where \(\mathscr{Q}_{\pm}\) are the universal quotient bundles. This incidence relation can be seen as a higher-rank and shifted version of the universal quadric incidence relation studied in homological projective duality ([15, 16, 17]).
### Other Related Works
The Chow-theoretical version of this paper's main theorem has been established by the author in [11, 12].
In Koseki's paper [13], Theorem 3.2 (in the smooth case, for \(\mathrm{D}=\mathrm{D}^{\mathrm{b}}_{\mathrm{coh}}\)) is used to prove a categorical blow-up formula for Hilbert schemes of points:
\[\mathrm{D}(\mathrm{Hilb}_{n}(\widehat{S}))=\left\langle p(j)\text{ copies of }\mathrm{D}(\mathrm{Hilb}_{n-j}(S))\right\rangle_{j=0,1,\ldots,n},\]
where \(\widehat{S}\) is the blowup of a smooth complex surface \(S\) at a point and \(p(j)\) is the number of partitions of \(j\). Koseki [13] also considered the cases of higher rank sheaves on del Pezzo, K3, or abelian surfaces. For general surfaces, the moduli spaces of higher rank sheaves are highly singular. We expect our Theorem 3.2 to be helpful in generalizing the above results to these situations and addressing open questions (1) & (2) of [13, §1.4].
The flag correspondences of relative Grassmannians (see Notation 2.16) have also been studied by Hsu in [10]. We expect the results and methods presented in this paper to be beneficial for investigating the categorical actions explored in _loc. cit._.
In the case where \(d=r\) in Theorem 3.2, the map \(\mathrm{Grass}(\mathscr{E};r)\to X\) should be regarded as a derived version of blowup, and we expect it to be closely related to the concept of derived blowups studied by Hekking, Khan and Rydh (see [16, 17]).
### Notation and Convention
We will use the framework of \(\infty\)-categories developed by Lurie in [15]. Our notations and terminologies will mostly follow those of [12, 11]. Here, we list the notations and conventions that are frequently used in this paper:
* (_\(\infty\)-categories of spaces_). We let \(\mathcal{S}\) denote the _\(\infty\)-category of spaces_ (or equivalently, the \(\infty\)-category of \(\infty\)-groupoids). For a pair of objects \(C,D\) in an \(\infty\)-category \(\mathcal{C}\), we let \(\mathrm{Map}_{\mathcal{C}}(C,D)\in\mathcal{S}\) denote their _mapping space_. We let \(\mathcal{C}^{\simeq}\) denote the _core_ of \(\mathcal{C}\), that is, the \(\infty\)-category obtained from \(\mathcal{C}\) by discarding all non-invertible morphisms. For a pair of \(\infty\)-categories \(\mathcal{C}\) and \(\mathcal{D}\), we let \(\mathrm{Fun}(\mathcal{C},\mathcal{D})\) denote the \(\infty\)-category of functors from \(\mathcal{C}\) to \(\mathcal{D}\).
* (_Simplicial commutative rings_). We let \(\mathrm{CAlg}^{\Delta}\) denote the \(\infty\)-category of "derived rings", that is, _simplicial commutative rings_ (see [15, Definition 25.1.1.1]; or equivalently, _animated commutative rings_ in the sense of [13, §5.1]).
* (_Prestacks_). A _prestack_ is a functor \(X\colon\,\mathrm{CAlg}^{\Delta}\to\mathcal{S}\). A map between prestacks \(X,Y\colon\,\mathrm{CAlg}^{\Delta}\to\mathcal{S}\) is a natural transformation \(f\colon X\to Y\) of the functors. The notion of a prestack is probably the most general concept of a space in algebraic geometry ([14, Chapter 2 §0.1]), and includes all derived schemes and derived higher stacks as special cases.
* (_Partitions_). We let \(B_{\ell,d}\) denote the set of _partitions_ of height \(\leq\ell\) and width \(\leq d\), i.e., partitions \(\lambda=(\lambda_{1},\lambda_{2},\ldots,\lambda_{\ell})\) such that \(d\geq\lambda_{1}\geq\lambda_{2}\geq\ldots\geq\lambda_{\ell}\geq 0\). For a partition \(\lambda\in B_{\ell,d}\), we let \(|\lambda|=\sum_{i=1}^{\ell}\lambda_{i}\). We denote its _transpose_ by \(\lambda^{t}=(\lambda_{1}^{t},\lambda_{2}^{t},\ldots,\lambda_{s}^{t})\), i.e., for any \(i\in\mathbb{Z}_{>0}\), \(\lambda_{i}^{t}\) is the number of \(j\)'s such that \(\lambda_{j}\geq i\). By convention, if one of \(\ell\) and \(d\) is zero, we set \(B_{\ell,d}\) to be the singleton of the zero partition \((0)\); we let \(B_{\ell,d}=\emptyset\) if \(\ell<0\) or \(d<0\).
* (_Notations for Derived categories_). In this paper, we will use the symbol \(\mathrm{D}\) to represent one of the following derived _\(\infty\)-categories_: \(\mathrm{D}_{\mathrm{qc}}\), \(\mathrm{D}^{-}_{\mathrm{coh}}\), \(\mathrm{D}^{\mathrm{b}}_{\mathrm{coh}}\), or \(\mathrm{D}^{\mathrm{perf}}\). Specifically, for a prestack \(X\), this paper will consider the following derived \(\infty\)-categories \(\mathrm{D}(X)\):
* (\(\mathrm{D}=\mathrm{D}_{\mathrm{qc}}\)). We let \(\mathrm{D}_{\mathrm{qc}}(X)\) denote the \(\infty\)-category of quasi-coherent complexes on \(X\) ([12, §3.2]) and let \(\mathrm{D}_{\mathrm{qc}}(X)^{\leq 0}\) denote the full subcategory spanned by complexes \(\mathscr{E}\) such that \(\mathscr{H}^{i}(\mathscr{E}):=\pi_{-i}(\mathscr{E})=0\) for \(i>0\). If \(X\) is a quasi-compact, quasi-separated scheme, the homotopy category of \(\mathrm{D}_{\mathrm{qc}}(X)\) is equivalent to the triangulated derived category of unbounded complexes of \(\mathscr{O}_{X}\)-modules with quasi-coherent cohomologies.
* (\(\mathrm{D}=\mathrm{D}^{-}_{\mathrm{coh}},\mathrm{D}^{\mathrm{b}}_{\mathrm{coh}}\)). We let \(\mathrm{D}^{-}_{\mathrm{coh}}(X)\) (resp. \(\mathrm{D}^{\mathrm{b}}_{\mathrm{coh}}(X)\)) denote the full subcategory of \(\mathrm{D}_{\mathrm{qc}}(X)\) spanned by almost perfect complexes (resp. locally truncated almost perfect complexes); see [15, Proposition 6.2.5.2] (resp., cf. [15, Notation 6.4.1.1]).
For a Noetherian scheme \(X\), the homotopy category of \(\operatorname{D}^{-}_{\operatorname{coh}}(X)\) (resp. \(\operatorname{D}^{\operatorname{b}}_{\operatorname{coh}}(X)\)) corresponds to the triangulated derived category \(\operatorname{D}^{-}(\operatorname{coh}(X))\) (resp. \(\operatorname{D}^{\operatorname{b}}(\operatorname{coh}(X))\)) of right-bounded (resp. bounded) complexes of coherent sheaves on \(X\), justifying the notations.
* (\(\operatorname{D}=\operatorname{D}^{\operatorname{perf}}\)). We let \(\operatorname{D}^{\operatorname{perf}}(X)\) denote the \(\infty\)-category of perfect complexes on \(X\). Then \(\operatorname{D}^{\operatorname{perf}}(X)\) is equivalent to the full subcategory of \(\operatorname{D}_{\operatorname{qc}}(X)\) spanned by dualizable objects.
* (_Tor-amplitude_). A quasi-coherent \(R\)-complex \(M\), where \(R\in\operatorname{CAlg}^{\Delta}\), is said to have Tor-amplitude in \([0,1]\) if, for any discrete \(R\)-module \(N\), \(\pi_{i}(M\otimes_{R}N)=0\) for \(i\not\in[0,1]\). A quasi-coherent complex \(\mathscr{E}\) over \(X\) is said to have Tor-amplitude in \([0,1]\) if, for any \(\eta\colon\operatorname{Spec}R\to X\), where \(R\in\operatorname{CAlg}^{\Delta}\), \(\eta^{*}(\mathscr{E})\) has Tor-amplitude in \([0,1]\) as an \(R\)-complex.
* (_Derived convention_). All the functors are assumed to be _derived_. For example, if \(f\colon X\to Y\) is a map between schemes, \(\mathscr{E}\) is a sheaf on \(X\), then \(f_{*}(\mathscr{E})\) corresponds to the _derived_ pushforward \(\mathbb{R}f_{*}(\mathscr{E})\) in the classical convention.
* (_Grothendieck's convention_). We will use Grothendieck's convention for projectivizations \(\mathbb{P}(\mathscr{E})\), Grassmannians \(\operatorname{Grass}(\mathscr{E};d)\) and flags \(\operatorname{Flag}(\mathscr{E};\mathbf{d})\), so that they parametrize _quotients_ rather than sub-objects. For example, the projectivization \(\mathbb{P}_{X}(\mathscr{E})\) parametrizes line bundle quotients of \(\mathscr{E}\) over \(X\).
### Acknowledgment
The author would like to thank Arend Bayer for numerous helpful discussions and suggestions throughout this project, Richard Thomas for many valuable suggestions on the paper and helpful discussions on relative Grassmannians and degeneracy loci, and Yukinobu Toda for fruitful discussions related to the Quot formula conjecture and helpful comments on an earlier draft of this paper. This project originated when the author was a member at IAS, and he would like to thank János Kollár and Mikhail Kapranov for inspiring discussions during that period. The author is supported by the Engineering and Physical Sciences Research Council [EP/R034826/1] and by the ERC Consolidator grant WallCrossAG, no. 819864.
## 2. Derived Grassmannians and Incidence Correspondences
### Derived Grassmannians and Derived Schur Functors
This subsection briefly reviews the theory of derived Grassmannians and of derived Schur functors developed in [16]. We let \(X\) be a prestack and let \(\mathscr{E}\in\operatorname{D}_{\operatorname{qc}}(X)^{\leq 0}\).
#### 2.1.1. Derived Grassmannians and Derived Flag Schemes
Let \(\mathbf{d}=(0\leq d_{1}<\ldots<d_{k})\) be an increasing sequence of integers, where \(k\geq 1\). The _derived flag scheme of \(\mathscr{E}\) of type \(\mathbf{d}\)_ ([16, Definition 4.28]) is the prestack over \(X\), denoted by
\[\operatorname{pr}_{\operatorname{Flag}(\mathscr{E};\mathbf{d})}\colon \operatorname{Flag}_{X}(\mathscr{E};\mathbf{d})=\operatorname{Flag}(\mathscr{E };\mathbf{d})\to X,\]
which carries each \(\eta\colon T=\operatorname{Spec}A\to X\), where \(A\in\operatorname{CAlg}^{\Delta}\), to the full sub-Kan complex of \(\operatorname{Fun}(\Delta^{k},\operatorname{D}_{\operatorname{qc}}(T)^{\leq 0})^{\simeq}\) spanned by those elements
\[\zeta_{T}=(\eta^{*}\mathscr{E}\xrightarrow{\varphi_{k,k+1}}\mathscr{P}_{k} \xrightarrow{\varphi_{k-1,k}}\mathscr{P}_{k-1}\to\cdots\xrightarrow{\varphi_{1,2}}\mathscr{P}_{1}),\]
where each \(\varphi_{i,i+1}\) is surjective on \(\pi_{0}\), and \(\mathscr{P}_{i}\) is a vector bundle over \(T\) of rank \(d_{i}\), \(1\leq i\leq k\).
Derived flag schemes are derived extensions of Grothendieck's classical flag schemes ([16, Proposition 4.37]). The natural projection \(\operatorname{Flag}(\mathscr{E};\mathbf{d})\to X\) is a relative derived scheme ([16, Proposition 4.38]). The formation of \(\operatorname{Flag}_{X}(\mathscr{E};\mathbf{d})\) commutes with arbitrary derived base change \(X^{\prime}\to X\) ([16, Proposition 4.32]). If \(\mathscr{E}\) is a perfect complex of Tor-amplitude in \([0,1]\), then the projection \(\operatorname{Flag}(\mathscr{E};\mathbf{d})\to X\) is a proper, quasi-smooth relative derived scheme, with an invertible relative dualizing complex (see [16, Corollary 4.47]). We refer to [16, §4.3] for more details on their properties.
There are two important special cases of derived flag schemes:
**Example 2.1** (Derived Grassmannians; [14, Definition 4.3]).: If \(k=1\) and \(\mathbf{d}=(d)\), then we denote the projection \(\operatorname{Flag}_{X}(\mathscr{E};\mathbf{d})\to X\) by
\[\operatorname{pr}_{\operatorname{Grass}(\mathscr{E};d)}\colon\operatorname{ Grass}_{X}(\mathscr{E};d)=\operatorname{Grass}(\mathscr{E};d)\to X,\]
and refer to it as the _rank \(d\) derived Grassmannian of \(\mathscr{E}\)_. It is by definition an element of \(\operatorname{Fun}(\operatorname{CAlg}^{\Delta},\mathcal{S})_{/X}\) which carries each \(\eta\colon T\to X\), where \(T=\operatorname{Spec}A\) for \(A\in\operatorname{CAlg}^{\Delta}\), to the space of morphisms
\[\{u\colon\eta^{*}\mathscr{E}\to\mathscr{P}\mid u\text{ is surjective on }\pi_{0}\text{ and }\mathscr{P}\text{ is a vector bundle of rank }d\text{ on }T\}^{\simeq}\,.\]
We will denote the universal fiber sequence on \(\operatorname{Grass}(\mathscr{E};d)\) by
\[\mathscr{R}_{\operatorname{Grass}(\mathscr{E};d)}\to\operatorname{pr}_{ \operatorname{Grass}(\mathscr{E};d)}^{*}(\mathscr{E})\xrightarrow{\rho}\mathscr{ Q}_{\operatorname{Grass}(\mathscr{E};d)},\]
where \(\mathscr{Q}_{\operatorname{Grass}(\mathscr{E};d)}\) is the universal quotient bundle of rank \(d\).
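For instance, when \(\mathscr{E}=\mathscr{V}\) is a vector bundle (concentrated in degree \(0\)), surjectivity on \(\pi_{0}\) is ordinary surjectivity, and \(\operatorname{Grass}(\mathscr{V};d)\to X\) is Grothendieck's classical Grassmannian bundle of rank-\(d\) quotients of \(\mathscr{V}\); in particular, \(\operatorname{Grass}(\mathscr{V};1)=\mathbb{P}_{X}(\mathscr{V})\) in Grothendieck's convention.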
**Example 2.2** (Derived Complete Flag Schemes; [14, Example 4.31]).: Let \(n\geq 1\) be a positive integer, let \(\mathbf{d}=\underline{n}:=(1,2,3,\cdots,n)\). We will refer to
\[\operatorname{pr}_{\operatorname{Flag}(\mathscr{E};\underline{n})}\colon \operatorname{Flag}_{X}(\mathscr{E};\underline{n})=\operatorname{Flag}( \mathscr{E};\underline{n})\to X.\]
as the _derived complete flag scheme of type \(n\)_. We denote the universal quotient sequence by
\[\operatorname{pr}_{\operatorname{Flag}(\mathscr{E};\underline{n})}^{*}( \mathscr{E})\xrightarrow{\phi_{n,n+1}}\mathscr{Q}_{n}\xrightarrow{\phi_{n-1,n} }\mathscr{Q}_{n-1}\to\cdots\xrightarrow{\phi_{1,2}}\mathscr{Q}_{1},\]
where \(\mathscr{Q}_{i}\) is the universal quotient bundle of rank \(i\), and let
\[\mathscr{L}_{i}:=\operatorname{fib}(\phi_{i-1,i}\colon\mathscr{Q}_{i}\twoheadrightarrow \mathscr{Q}_{i-1})\]
denote the universal line bundle on \(\operatorname{Flag}(\mathscr{E};\underline{n})\), where we set \(\mathscr{Q}_{0}=0\) by convention. Consequently, for each \(1\leq i\leq n\), we have \(\mathscr{L}_{1}\otimes\mathscr{L}_{2}\otimes\cdots\otimes\mathscr{L}_{i}\simeq \det\mathscr{Q}_{i}\). For any sequence of integers \(\lambda=(\lambda_{1},\lambda_{2},\ldots,\lambda_{n})\in\mathbb{Z}^{n}\), we define a line bundle \(\mathscr{L}(\lambda)\) on \(\operatorname{Flag}(\mathscr{E};\underline{n})\) by the formula:
\[\mathscr{L}(\lambda):=\mathscr{L}_{1}^{\otimes\lambda_{1}}\otimes\mathscr{L} _{2}^{\otimes\lambda_{2}}\otimes\cdots\otimes\mathscr{L}_{n}^{\otimes\lambda_ {n}}.\]
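For instance, if \(\lambda=(1,\ldots,1,0,\ldots,0)\) has exactly \(i\) leading entries equal to \(1\), then

\[\mathscr{L}(\lambda)=\mathscr{L}_{1}\otimes\mathscr{L}_{2}\otimes\cdots\otimes\mathscr{L}_{i}\simeq\det\mathscr{Q}_{i}.\]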
If \(\mathscr{E}=\mathscr{V}\) is a vector bundle of rank \(n\), then the morphism \(\phi_{n,n+1}\colon\operatorname{pr}_{\operatorname{Flag}(\mathscr{E}; \underline{n})}^{*}(\mathscr{V})\to\mathscr{Q}_{n}\) is an equivalence, and the forgetful map induces an equivalence \(\operatorname{Flag}(\mathscr{V};\underline{n})\xrightarrow{\sim} \operatorname{Flag}(\mathscr{V};\underline{n-1})\).
#### 2.1.2. Forgetful Maps Between Derived Flag Schemes
If \(\mathbf{d}^{\prime}=(d_{i_{j}})_{1\leq j\leq\ell}\) is a subsequence of \(\mathbf{d}=(d_{i})_{1\leq i\leq k}\), where \((i_{1}<i_{2}<\ldots<i_{\ell})\) is a subsequence of \(\underline{k}=(1,2,3,\cdots,k)\), then there is a natural forgetful map ([14, §4.3.2])
\[\pi_{\mathbf{d}^{\prime},\mathbf{d}}\colon\operatorname{Flag}(\mathscr{E}; \mathbf{d})\to\operatorname{Flag}(\mathscr{E};\mathbf{d}^{\prime}).\]
We will need the following proposition, which is a special case of [14, Corollary 4.34]:
**Proposition 2.3** ([14, Corollary 4.34]).: _Let \(k\geq 2\), let \(i\) be an integer such that \(1\leq i\leq k-1\), and let \(\mathbf{d}^{\prime}=(d_{1},\cdots,d_{i})\) and \(\mathbf{d}^{\prime\prime}=(d_{i+1},\cdots,d_{k})\) so that \(\mathbf{d}=(\mathbf{d}^{\prime},\mathbf{d}^{\prime\prime})\)._
1. _The forgetful map_ \(\pi_{\mathbf{d}^{\prime\prime},\mathbf{d}}\colon\operatorname{Flag}_{X}( \mathscr{E};\mathbf{d})\to\operatorname{Flag}_{X}(\mathscr{E};\mathbf{d}^{ \prime\prime})\) _identifies_ \(\operatorname{Flag}_{X}(\mathscr{E};\mathbf{d})\) _as the derived flag scheme_ \(\operatorname{Flag}\left(\mathscr{Q}_{d_{i+1}}^{(\mathbf{d}^{\prime\prime})}; \mathbf{d}^{\prime}\right)\) _over_ \(\operatorname{Flag}_{X}(\mathscr{E};\mathbf{d}^{\prime\prime})\)_, where_ \(\mathscr{Q}_{d_{i+1}}^{(\mathbf{d}^{\prime\prime})}\) _is the universal quotient bundle on_ \(\operatorname{Flag}_{X}(\mathscr{E};\mathbf{d}^{\prime\prime})\) _of rank_ \(d_{i+1}\)_._
2. _The forgetful map_ \(\pi_{\mathbf{d}^{\prime},\mathbf{d}}\colon\operatorname{Flag}_{X}(\mathscr{E};\mathbf{d})\to\operatorname{Flag}_{X}(\mathscr{E};\mathbf{d}^{\prime})\) _identifies_ \(\operatorname{Flag}_{X}(\mathscr{E};\mathbf{d})\) _as the derived flag scheme_ \(\operatorname{Flag}\left(\operatorname{fib}(\varphi_{i}^{(\mathbf{d}^{\prime})});d_{i+1}-d_{i},\ldots,d_{k}-d_{i}\right)\) _over_ \(\operatorname{Flag}_{X}(\mathscr{E};\mathbf{d}^{\prime})\)_, where_ \(\mathscr{Q}_{d_{i}}^{(\mathbf{d}^{\prime})}\) _is the universal rank_ \(d_{i}\) _quotient bundle on_ \(\operatorname{Flag}_{X}(\mathscr{E};\mathbf{d}^{\prime})\) _and_ \(\varphi_{i}^{(\mathbf{d}^{\prime})}\colon\operatorname{pr}_{\operatorname{Flag}(\mathscr{E};\mathbf{d}^{\prime})}^{*}(\mathscr{E})\to\mathscr{Q}_{d_{i}}^{(\mathbf{d}^{\prime})}\) _is the universal quotient map._
**Remark 2.4**.: As a direct consequence of Proposition 2.3, if we assume \(k\geq 3\), let \(i,j\) be integers such that \(1\leq i<j<k\) and let \(\mathbf{d}\) be written as:
\[\mathbf{d}=(\underbrace{d_{1},\cdots,d_{i}}_{\mathbf{d}^{(1)}};\underbrace{d_{ i+1},\cdots,d_{j}}_{\mathbf{d}^{(2)}};\underbrace{d_{j+1},\cdots,d_{k}}_{\mathbf{d}^{(3)}}),\]
then the natural commutative square of forgetful maps
\[\begin{CD}\operatorname{Flag}_{X}(\mathscr{E};\mathbf{d})@>{\pi_{(\mathbf{d}^{(2)},\mathbf{d}^{(3)}),\mathbf{d}}}>{}>\operatorname{Flag}_{X}(\mathscr{E};\mathbf{d}^{(2)},\mathbf{d}^{(3)})\\ @V{}V{\pi_{(\mathbf{d}^{(1)},\mathbf{d}^{(2)}),\mathbf{d}}}V@V{}V{\pi_{\mathbf{d}^{(2)},(\mathbf{d}^{(2)},\mathbf{d}^{(3)})}}V\\ \operatorname{Flag}_{X}(\mathscr{E};\mathbf{d}^{(1)},\mathbf{d}^{(2)})@>{\pi_{\mathbf{d}^{(2)},(\mathbf{d}^{(1)},\mathbf{d}^{(2)})}}>{}>\operatorname{Flag}_{X}(\mathscr{E};\mathbf{d}^{(2)})\end{CD}\]
is a pullback square (i.e., it is a derived fiber product square).
#### 2.1.3. Derived Schur Functors
Derived Schur functors, studied in [14], are non-abelian derived functors (in the sense of Dold-Puppe [10], Quillen [15] and Lurie [16], or equivalently, animations in the sense of Česnavičius and Scholze [17, §5.1]) of the classical Schur module functors. Specifically, for a prestack \(Y\) and a partition \(\lambda\), the _derived Schur functor associated with \(\lambda\)_, as defined in [14, Definition 3.5, §3.5.2], is an endofunctor of the derived \(\infty\)-category of quasi-coherent complexes denoted by
\[\mathbb{S}^{\lambda}\colon\operatorname{D}_{\operatorname{qc}}(Y)^{\leq 0} \to\operatorname{D}_{\operatorname{qc}}(Y)^{\leq 0}.\]
This functor extends the classical Schur module functors of vector bundles and preserves sifted colimits. In the particular case where \(\lambda=(n)\) (resp. \(\lambda=\underbrace{(1,\ldots,1)}_{n\text{ terms}}\)) for some integer \(n\geq 0\), the derived Schur functor \(\mathbb{S}^{(n)}=\operatorname{Sym}^{n}\) (resp. \(\mathbb{S}^{(1,\ldots,1)}=\bigwedge^{n}\)) corresponds to the _\(n\)th derived symmetric power_ (resp. _\(n\)th derived exterior power_) functor studied by Dold-Puppe [10], Illusie [11] and Lurie [16].
The derived Schur functors possess many desirable functorial properties, such as their compatibility with arbitrary base change, and they satisfy derived generalizations of classical formulae like Cauchy's decomposition formula and the Littlewood-Richardson rule. We refer the readers to [14, §3] for a more comprehensive discussion and detailed explanations.
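To illustrate the computability: if \(\mathscr{E}\) is presented (locally) by a two-term complex of vector bundles \([V_{1}\xrightarrow{\varphi}V_{0}]\) with \(V_{0}\) in degree \(0\), then, for example, \(\operatorname{Sym}^{2}(\mathscr{E})\) is computed by the classical Schur complex of \(\varphi\) associated with \(\lambda=(2)\),

\[0\to\wedge^{2}V_{1}\to V_{1}\otimes V_{0}\to\operatorname{Sym}^{2}V_{0}\to 0,\]

with \(\operatorname{Sym}^{2}V_{0}\) placed in degree \(0\).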
#### 2.1.4. Derived Borel-Weil-Bott Theorem
The theories of derived flag schemes and derived Schur functors are connected via a derived generalization of the Borel-Weil-Bott theorem.
**Theorem 2.5** (Borel-Weil-Bott Theorem for Derived Complete Flag Schemes; [14, Theorems 5.26, 5.35]).: _Consider the situation described in Example 2.2, assume that \(\mathscr{E}\) is a perfect complex of rank \(n\) and Tor-amplitude in \([0,1]\) over \(X\), and let \(\lambda=(\lambda_{1},\ldots,\lambda_{n})\in\mathbb{N}^{n}\). Then:_
1. _If_ \(\lambda\) _is a partition, we have a canonical equivalence_ \((\operatorname{pr}_{\operatorname{Flag}(\mathscr{E};\underline{n})})_{*}(\mathscr{L}(\lambda))\simeq\mathbb{S}^{\lambda}(\mathscr{E})\)_, where_ \(\mathbb{S}^{\lambda}(\mathscr{E})\) _is the derived Schur functor applied to the perfect complex_ \(\mathscr{E}\)_._
2. _If_ \(X\) _is defined over_ \(\mathbb{Q}\)_, one of the following two mutually exclusive cases occurs:_ 1. _There exists a pair of integers_ \(1\leq i<j\leq n\) _such that_ \(\lambda_{i}-\lambda_{j}=i-j\)_. In this case,_ \((\operatorname{pr}_{\operatorname{Flag}(\mathscr{E};\underline{n})})_{*}(\mathscr{L}(\lambda))\simeq 0\)_._ 2. _There exists a unique permutation_ \(w\in\mathfrak{S}_{n}\) _such that_ \(w\centerdot\lambda\) _is non-increasing. In this case, there is a canonical equivalence_ \((\operatorname{pr}_{\operatorname{Flag}(\mathscr{E};\underline{n})})_{*}(\mathscr{L}(\lambda))\simeq\mathbb{S}^{w\centerdot\lambda}(\mathscr{E})[-\ell(w)]\)_. Here,_ \(w\centerdot\lambda=w(\lambda+\rho)-\rho\) _denotes the dot action, and_ \(\ell(w)\) _is the length of_ \(w\)_._
In the special case where \(\mathscr{E}\) is a vector bundle, the above results reduce to the familiar Borel-Weil-Bott theorem for vector bundles; see [10] and [11, Theorems 4.1.4 & 4.1.10].
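As a small worked example of the dot action (with the standard choice \(\rho=(n-1,n-2,\ldots,0)\); any shift of \(\rho\) by a constant vector gives the same answer): let \(n=2\) and \(\lambda=(0,2)\). Then \(\lambda+\rho=(1,2)\) is sorted by the transposition \(w=s_{1}\), with \(\ell(w)=1\) and \(w\centerdot\lambda=(2,1)-(1,0)=(1,1)\), so

\[(\operatorname{pr}_{\operatorname{Flag}(\mathscr{E};\underline{2})})_{*}(\mathscr{L}(0,2))\simeq\mathbb{S}^{(1,1)}(\mathscr{E})[-1]=\wedge^{2}\mathscr{E}[-1];\]

by contrast, \(\lambda=(0,1)\) falls into case (a) (\(\lambda_{1}-\lambda_{2}=-1=1-2\)), so the pushforward vanishes.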
**Remark 2.6** ([14, Corollaries 5.30 & 5.39]).: The above theorem implies the corresponding Borel-Weil-Bott theorem for the derived Grassmannians \(\operatorname{Grass}(\mathscr{E};d)\) studied in Example 2.1. Specifically, let \(d\) be an integer such that \(1\leq d\leq n\). Let \(\alpha=(\alpha_{1},\ldots,\alpha_{d})\) and \(\beta=(\beta_{1},\ldots,\beta_{n-d})\) be two partitions and let \(\lambda=(\alpha,\beta)\) be their concatenation. Then \(\mathscr{V}(\alpha,\beta):=(\pi_{(d),\underline{n}})_{*}(\mathscr{L}(\lambda))\simeq\mathbb{S}^{\alpha}(\mathscr{Q}_{\operatorname{Grass}(\mathscr{E};d)})\otimes\mathbb{S}^{\beta}(\mathscr{R}_{\operatorname{Grass}(\mathscr{E};d)})\). Consequently, the theorem implies the following:
1. If \(\lambda\) is a partition, \((\operatorname{pr}_{\operatorname{Grass}(\mathscr{E};d)})_{*}(\mathscr{V}( \alpha,\beta))\simeq\mathbb{S}^{\lambda}(\mathscr{E})\).
2. If \(X\) is defined over \(\mathbb{Q}\), then one of the following two mutually exclusive cases occurs:
1. There exists a pair of integers \(1\leq i<j\leq n\) such that \(\lambda_{i}-\lambda_{j}=i-j\). In this case, \((\operatorname{pr}_{\operatorname{Grass}(\mathscr{E};d)})_{*}(\mathscr{V}( \alpha,\beta))\simeq 0\).
2. There exists a unique permutation \(w\in\mathfrak{S}_{n}\) such that \(w\centerdot\lambda\) is non-increasing. In this case, there is a canonical equivalence \((\operatorname{pr}_{\operatorname{Grass}(\mathscr{E};d)})_{*}(\mathscr{V}( \alpha,\beta))\simeq\mathbb{S}^{w\centerdot\lambda}(\mathscr{E})[-\ell(w)]\).
### Incidence Correspondences
This subsection studies the incidence correspondences between derived Grassmannians, generalizing the incidence correspondences of the projectivization case [10, §7.1]. Throughout this subsection, we let \(X\) be a prestack and assume that \(\mathscr{E}\) is a perfect complex over \(X\) of Tor-amplitude in \([0,1]\) and rank \(r\geq 0\). Notice that the shifted dual \(\mathscr{E}^{\vee}[1]\) is also a perfect complex of Tor-amplitude in \([0,1]\), but has rank \((-r)\).
**Definition 2.7** (Incidence Correspondences).: Let \((d_{+},d_{-})\in\mathbb{N}^{2}\) be a pair of integers and consider the derived Grassmannians (Example 2.1):
\[\operatorname{pr}_{+}\colon\operatorname{Grass}(\mathscr{E};d_{+})\to X\quad \text{and}\quad\operatorname{pr}_{-}\colon\operatorname{Grass}(\mathscr{E}^{ \vee}[1];d_{-})\to X,\]
with tautological fiber sequences \(\mathscr{R}_{+}\to\operatorname{pr}_{+}^{*}(\mathscr{E})\xrightarrow{\rho_{+}}\mathscr{Q}_{+}\) and \(\mathscr{R}_{-}\to\operatorname{pr}_{-}^{*}(\mathscr{E}^{\vee}[1])\xrightarrow{\rho_{-}}\mathscr{Q}_{-}\), where \(\mathscr{Q}_{\pm}\) are universal quotient bundles of rank \(d_{\pm}\), respectively. We define the _universal incidence locus_ \(\mathfrak{Incid}_{(d_{+},d_{-})}(\mathscr{E})\) to be the derived zero locus of the cosection of the perfect complex \(\mathscr{Q}_{+}^{\vee}\boxtimes_{X}\mathscr{Q}_{-}^{\vee}[1]\) over \(\operatorname{Grass}(\mathscr{E};d_{+})\times_{X}\operatorname{Grass}(\mathscr{E}^{\vee}[1];d_{-})\) defined by the composition
\[\operatorname{pr}_{2}^{*}(\mathscr{Q}_{-}^{\vee})[1]\xrightarrow{\ \operatorname{pr}_{2}^{*}(\rho_{-}^{\vee}[1])\ }\operatorname{pr}^{*}(\mathscr{E})\xrightarrow{\ \operatorname{pr}_{1}^{*}(\rho_{+})\ }\operatorname{pr}_{1}^{*}(\mathscr{Q}_{+}),\]
regarded as a map \(\mathscr{Q}_{+}^{\vee}\boxtimes_{X}\mathscr{Q}_{-}^{\vee}[1]\to\mathscr{O}\), where \(\operatorname{pr}\) denotes the projection to \(X\).
(Here, for complexes \(\mathscr{M}_{+}\) on \(\operatorname{Grass}(\mathscr{E};d_{+})\) and \(\mathscr{M}_{-}\) on \(\operatorname{Grass}(\mathscr{E}^{\vee}[1];d_{-})\), we use \(\mathscr{M}_{+}\boxtimes_{X}\mathscr{M}_{-}\) to denote the external tensor product \(\operatorname{pr}_{1}^{*}\mathscr{M}_{+}\otimes\operatorname{pr}_{2}^{*}\mathscr{M}_{-}\), where \(\operatorname{pr}_{i}\) are the projections from \(\operatorname{Grass}(\mathscr{E};d_{+})\times_{X}\operatorname{Grass}(\mathscr{E}^{\vee}[1];d_{-})\) to its \(i\)th factor.) We will refer to the commutative diagram
\[\begin{array}{ccc}
\mathfrak{Incid}_{(d_{+},d_{-})}(\mathscr{E}) & \xrightarrow{\;r_{+}\;} & \operatorname{Grass}(\mathscr{E};d_{+})\\
\big\downarrow{\scriptstyle r_{-}} & \searrow{\scriptstyle\widehat{\operatorname{pr}}} & \big\downarrow{\scriptstyle\operatorname{pr}_{+}}\\
\operatorname{Grass}(\mathscr{E}^{\vee}[1];d_{-}) & \xrightarrow{\;\operatorname{pr}_{-}\;} & X
\end{array}\tag{2.1}\]
as the _incidence diagram_; here \(r_{\pm}\) are induced by the two projections of the fiber product, and \(\widehat{\operatorname{pr}}:=\operatorname{pr}_{+}\circ r_{+}\simeq\operatorname{pr}_{-}\circ r_{-}\). By construction, the composition
\[r_{-}^{*}(\mathscr{Q}_{-}^{\vee})[1]\xrightarrow{\ r_{-}^{*}(\rho_{-}^{\vee}[1])\ }\widehat{\operatorname{pr}}^{*}(\mathscr{E})\xrightarrow{\ r_{+}^{*}(\rho_{+})\ }r_{+}^{*}(\mathscr{Q}_{+})\]
admits a canonical null-homotopy in \(\operatorname{D}^{\operatorname{perf}}(\mathfrak{Incid}_{(d_{+},d_{-})}(\mathscr{E}))\), so that \(r_{-}^{*}(\rho_{-}^{\vee}[1])\) factors canonically through \(\operatorname{fib}\big(\widehat{\operatorname{pr}}^{*}(\mathscr{E})\xrightarrow{r_{+}^{*}(\rho_{+})}r_{+}^{*}(\mathscr{Q}_{+})\big)\). We consider the following perfect complex
\[\mathscr{E}^{\operatorname{univ}}_{(d_{+},d_{-})}:=\operatorname{cofib} \left(r_{-}^{*}(\mathscr{Q}_{-}^{\vee})[1]\to\operatorname{fib}\left(\widehat {\operatorname{pr}}^{*}(\mathscr{E})\xrightarrow{r_{+}^{*}(\rho_{+})}r_{+}^{*}( \mathscr{Q}_{+})\right)\right),\]
and refer to it as the _universal perfect complex_ on \(\mathfrak{Incid}_{(d_{+},d_{-})}(\mathscr{E})\).
**Example 2.8**.:
1. If \(d_{-}=0\), \(\mathfrak{Incid}_{(d_{+},0)}(\mathscr{E})=\operatorname{Grass}(\mathscr{E};d_ {+})\) and \(\mathscr{E}^{\operatorname{univ}}_{(d_{+},0)}=\mathscr{R}_{\operatorname{ Grass}(\mathscr{E};d_{+})}\).
2. In the universal local situation of Notation 3.6 (where \(X=\underline{\operatorname{Hom}}_{\Bbbk}(\Bbbk^{m},\Bbbk^{n})\) and \(\mathscr{E}=[\mathscr{O}_{X}^{m}\xrightarrow{\tau}\mathscr{O}_{X}^{n}]\) is the tautological map), the perfect complex \(\mathscr{E}^{\operatorname{univ}}_{(d_{+},d_{-})}\) is canonically represented by a universal two-term complex of vector bundles \(\left[r_{-}^{*}(R_{\mathbb{G}_{d_{-}}^{-}}^{\vee})\to r_{+}^{*}(R_{\mathbb{G}_{d_{+}}^{+}})\right]\).
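As a quick consistency check of (2), the ranks of the two terms add up to the rank predicted by Lemma 2.9.(2) below:
\[\operatorname{rank}\Big[r_{-}^{*}\big(R_{\mathbb{G}_{d_{-}}^{-}}^{\vee}\big)\to r_{+}^{*}\big(R_{\mathbb{G}_{d_{+}}^{+}}\big)\Big]=(n-d_{+})-(m-d_{-})=r-d_{+}+d_{-}.\]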
**Lemma 2.9**.: _In the situation of Definition 2.7, we have:_
1. _There is a canonical equivalence_ \[\mathscr{E}^{\operatorname{univ}}_{(d_{+},d_{-})}\simeq\operatorname{fib}\left(\operatorname{cofib}\left(r_{-}^{*}(\mathscr{Q}_{-}^{\vee})[1]\xrightarrow{r_{-}^{*}(\rho_{-}^{\vee}[1])}\widehat{\operatorname{pr}}^{*}(\mathscr{E})\right)\to r_{+}^{*}(\mathscr{Q}_{+})\right).\]
2. _If_ \(\mathscr{E}\) _has rank_ \(r\) _(and Tor-amplitude in_ \([0,1]\)_), then_ \(\mathscr{E}^{\mathrm{univ}}_{(d_{+},d_{-})}\) _is a perfect complex over_ \(\mathfrak{Incid}_{(d_{+},d_{-})}(\mathscr{E})\) _of Tor-amplitude in_ \([0,1]\) _and rank_ \((r-d_{+}+d_{-})\)_._
Proof.: To prove assertion (1), consider the following induced commutative diagram
where all three squares are pushouts, hence bi-Cartesian as \(\mathrm{D}_{\mathrm{qc}}(\mathfrak{Incid}_{(d_{+},d_{-})}(\mathscr{E}))\) is a stable \(\infty\)-category. This proves (1). Since \(\operatorname{cofib}(r^{*}_{-}(\rho^{\vee}_{-}[1]))\) has Tor-amplitude in \([0,1]\) and rank \((r+d_{-})\), \(r^{*}_{+}(\mathscr{Q}_{+})\) is a vector bundle of rank \(d_{+}\), and the natural map \(\operatorname{cofib}(r^{*}_{-}(\rho^{\vee}_{-}[1]))\to r^{*}_{+}(\mathscr{Q}_{+})\) is surjective on \(\pi_{0}\), assertion (2) follows from (1) (see [11, Proposition 7.2.4.23.2]).
**Lemma 2.10**.: _In the situation of Definition 2.7, we have:_
1. _The projection_ \(r_{+}\) _of (2.1) identifies_ \(\mathfrak{Incid}_{(d_{+},d_{-})}(\mathscr{E})\) _as the rank_ \(d_{-}\) _derived Grassmannian of the perfect complex_ \(\operatorname{cofib}\big(\rho_{+}^{\vee}[1]\colon\mathscr{Q}_{+}^{\vee}[1]\to\operatorname{pr}_{+}^{*}(\mathscr{E}^{\vee}[1])\big)\) _over_ \(\operatorname{Grass}(\mathscr{E};d_{+})\)_._
2. _The projection_ \(r_{-}\) _of (2.1) identifies_ \(\mathfrak{Incid}_{(d_{+},d_{-})}(\mathscr{E})\) _as the rank_ \(d_{+}\) _derived Grassmannian of the perfect complex_ \(\operatorname{cofib}\big(\rho_{-}^{\vee}[1]\colon\mathscr{Q}_{-}^{\vee}[1]\to\operatorname{pr}_{-}^{*}(\mathscr{E})\big)\) _over_ \(\operatorname{Grass}(\mathscr{E}^{\vee}[1];d_{-})\)_._
_Consequently, the projections \(r_{\pm}\) are both proper, quasi-smooth relative derived schemes._
Proof.: Similar to the projectivization case [10, Lemma 7.3], assertions (1) and (2) follow from the characterizations of closed immersions of the form \(\mathrm{Grass}(\mathscr{F}^{\prime\prime};d)\to\mathrm{Grass}(\mathscr{F};d)\) between derived Grassmannians induced by cofiber sequences \(\mathscr{F}^{\prime}\to\mathscr{F}\to\mathscr{F}^{\prime\prime}\) of connective complexes (see [10, Proposition 4.19]). As a result, the assertion about the properness and quasi-smoothness of the maps \(r_{\pm}\) follows from [10, Corollary 4.28].
**Remark 2.11** (Expected Dimensions and Classical Criteria).: In the situation of Lemma 2.10, assume \(X\) has constant dimension \(\dim X\); then the quasi-smooth relative derived schemes \(\mathrm{Grass}(\mathscr{E};d_{+})\), \(\mathrm{Grass}(\mathscr{E}^{\vee}[1];d_{-})\) and \(\mathfrak{Incid}_{(d_{+},d_{-})}(\mathscr{E})\) over \(X\) have virtual dimensions
\[\dim X+d_{+}(r-d_{+}),\quad\dim X-d_{-}(r+d_{-})\quad\text{and}\quad\dim X+(d_{+}-d_{-})\,r+d_{+}d_{-}-d_{+}^{2}-d_{-}^{2},\]
respectively. If \(X\) is a Cohen-Macaulay scheme, then \(\mathrm{Grass}(\mathscr{E};d_{+})\) (resp. \(\mathrm{Grass}(\mathscr{E}^{\vee}[1];d_{-})\), \(\mathfrak{Incid}_{(d_{+},d_{-})}(\mathscr{E})\)) is classical if and only if its underlying classical scheme has dimension equal to its virtual dimension (see the proof of [10, Lemma 6.7]). Moreover, if all these schemes are classical, then \(\mathfrak{Incid}_{(d_{+},d_{-})}(\mathscr{E})\) is canonically isomorphic to the classical fiber product of \(\mathrm{Grass}(\mathscr{E};d_{+})\) and \(\mathrm{Grass}(\mathscr{E}^{\vee}[1];d_{-})\) over \(X\).
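For example, in the projectivization case \(d_{+}=d_{-}=1\), these three virtual dimensions specialize to
\[\dim X+(r-1),\qquad\dim X-(r+1),\qquad\dim X+(1-1)\,r+1\cdot 1-1^{2}-1^{2}=\dim X-1.\]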
### Compatibility of Incidence and Flag Correspondences
**Proposition 2.12**.: _In the situation of Definition 2.7, we let \(d^{\prime}_{+}>d_{+}\) be another integer, then there is a canonical forgetful map_
\[\operatorname{forg}\colon\,\operatorname{Flag}(\mathscr{E};d_{+},d_{+}+1,\ldots,d^{\prime}_{+})\times_{\operatorname{Grass}(\mathscr{E};d^{\prime}_{+})}\mathfrak{Incid}_{(d^{\prime}_{+},d_{-})}(\mathscr{E})\to\mathfrak{Incid}_{(d_{+},d_{-})}(\mathscr{E})\]
_which identifies the domain of \(\mathrm{forg}\) with the derived flag scheme_
\[\operatorname{Flag}_{\mathfrak{Incid}_{(d_{+},d_{-})}(\mathscr{E})}\left(\mathscr{E}^{\mathrm{univ}}_{(d_{+},d_{-})};1,2,\ldots,d^{\prime}_{+}-d_{+}\right)\]
_of the universal perfect complex \(\mathscr{E}^{\mathrm{univ}}_{(d_{+},d_{-})}\) over \(\mathfrak{Incid}_{(d_{+},d_{-})}(\mathscr{E})\)._
Proof.: Let \(\mathscr{F}:=\operatorname{cofib}\left(\mathscr{Q}^{\vee}_{-}[1]\xrightarrow{\rho_{-}^{\vee}[1]}\operatorname{pr}_{-}^{\ast}(\mathscr{E})\right)\) over \(Y:=\operatorname{Grass}(\mathscr{E}^{\vee}[1];d_{-})\), and consider the following commutative diagram:
where \(Z:=\operatorname{Flag}_{Y}(\mathscr{F};d_{+},d_{+}+1,\ldots,d_{+}^{\prime})\), the maps \(\pi,\pi^{\prime}\) are the natural forgetful maps between derived flag schemes (§2.1.2), and \(\operatorname{pr},\operatorname{pr}^{\prime}\) are the natural projections. By virtue of Lemma 2.10, there are canonical equivalences
\[\operatorname{Grass}_{Y}(\mathscr{F};d_{+}^{\prime})\simeq\mathfrak{Incid}_{(d_{+}^{\prime},d_{-})}(\mathscr{E})\quad\text{and}\quad\operatorname{Grass}_{Y}(\mathscr{F};d_{+})\simeq\mathfrak{Incid}_{(d_{+},d_{-})}(\mathscr{E}).\]
Let \(r_{+}\colon\mathfrak{Incid}_{(d_{+},d_{-})}(\mathscr{E})\to\operatorname{ Grass}(\mathscr{E};d_{+})\) (resp. \(r_{+}^{\prime}\colon\mathfrak{Incid}_{(d_{+}^{\prime},d_{-})}(\mathscr{E})\to \operatorname{Grass}(\mathscr{E};d_{+}^{\prime})\)) denote the natural projection, and \(\mathscr{Q}_{+}\) (resp. \(\mathscr{Q}_{+}^{\prime}\)) the tautological quotient bundle of rank \(d_{+}\) (resp. \(d_{+}^{\prime}\)) over \(\operatorname{Grass}(\mathscr{E};d_{+})\) (resp. \(\operatorname{Grass}(\mathscr{E};d_{+}^{\prime})\)). By virtue of Proposition 2.3.(1), the forgetful map \(\pi^{\prime}\) identifies \(Z\) as the derived flag scheme
\[Z\simeq\operatorname{Flag}_{\mathfrak{Incid}_{(d_{+}^{\prime},d_{-})}(\mathscr{ E})}\left(r_{+}^{\prime\ast}(\mathscr{Q}_{+}^{\prime});d_{+},d_{+}+1,\ldots,d_{+}^{ \prime}\right).\]
Since Proposition 2.3.(1) also implies that the forgetful map \(\operatorname{Flag}(\mathscr{E};d_{+},d_{+}+1,\ldots,d_{+}^{\prime})\to \operatorname{Grass}(\mathscr{E};d_{+}^{\prime})\) is equivalent to the derived flag bundle of \(\mathscr{Q}_{+}^{\prime}\) of type \((d_{+},d_{+}+1,\ldots,d_{+}^{\prime})\) over \(\operatorname{Grass}(\mathscr{E};d_{+}^{\prime})\), we obtain that \(\pi^{\prime}\) identifies \(Z\) with the domain of the map \(\operatorname{forg}\). On the other hand, by virtue of Proposition 2.3.(2) and the equivalence \(\mathscr{E}^{\operatorname{univ}}_{(d_{+},d_{-})}\simeq\operatorname{fib}\left(\operatorname{pr}^{\ast}(\mathscr{F})\to r_{+}^{\ast}(\mathscr{Q}_{+})\right)\) of Lemma 2.9.(1), the forgetful map \(\pi\) identifies \(Z\) as the derived flag scheme of \(\mathscr{E}^{\operatorname{univ}}_{(d_{+},d_{-})}\) of type \((1,2,\ldots,d_{+}^{\prime}-d_{+})\) over the incidence space \(\mathfrak{Incid}_{(d_{+},d_{-})}(\mathscr{E})\). Hence the proposition is proved.
**Corollary 2.13**.: _Let \(\mathscr{E}\) be a perfect complex of rank \(r\geq 0\) and Tor-amplitude in \([0,1]\), let \(d\geq 0\) and \(0\leq i\leq\min\{d,r\}\) be integers, and let \(\lambda=(\lambda_{1}\geq\ldots\geq\lambda_{r-i})\) be a partition. Then there is a natural forgetful map_
\[\operatorname{forg}\colon\operatorname{Flag}(\mathscr{E};d,d+1,\ldots,d+r-i)\times_{\operatorname{Grass}(\mathscr{E};d+r-i)}\mathfrak{Incid}_{(d+r-i,d-i)}(\mathscr{E})\to\mathfrak{Incid}_{(d,d-i)}(\mathscr{E}),\]
_which is a proper, quasi-smooth relative derived scheme and induces a canonical equivalence_
\[\operatorname{forg}_{\ast}\bigl{(}\mathscr{L}^{\lambda_{1}}_{d+1}\otimes \mathscr{L}^{\lambda_{2}}_{d+2}\otimes\ldots\otimes\mathscr{L}^{\lambda_{r- i}}_{d+r-i}\bigr{)}\simeq\operatorname{\mathbb{S}}^{\lambda}(\mathscr{E}^{ \operatorname{univ}}_{(d,d-i)}),\]
_where the \(\mathscr{Q}_{j}\) are universal quotient bundles of rank \(j\) (for \(d\leq j\leq d+r-i\)) and \(\mathscr{L}_{j}=\operatorname{fib}(\mathscr{Q}_{j}\to\mathscr{Q}_{j-1})\) are the associated universal line bundles (for \(d+1\leq j\leq d+r-i\))._
Proof.: Apply Proposition 2.12 and Theorem 2.5.(1) to the case where \((d_{+},d_{-})=(d,d-i)\) and \(d_{+}^{\prime}=d+r-i\). In this case, \(\mathscr{E}^{\operatorname{univ}}_{(d,d-i)}\) is a perfect complex of \(\operatorname{Tor}\)-amplitude in \([0,1]\) and rank \((r-i)\) (Lemma 2.9.(2)), and the properness and quasi-smoothness of the map \(\operatorname{forg}\) follow from [10, Corollary 4.28].
The above relationship between moduli prestacks yields a compatibility result for the induced Fourier-Mukai functors, which we will now investigate.
**Notation 2.14**.: Assume that we are in the situation of Definition 2.7 and let maps \(r_{\pm}\) be defined as in diagram (2.1). Assume that \(r-d_{+}+d_{-}\geq 0\), and let \(\lambda=(\lambda_{1}\geq\cdots\geq\lambda_{r-d_{+}+d_{-}})\) be a partition and \(i\in\mathbb{Z}\). We consider Fourier-Mukai functors:
\[\Phi^{\lambda}_{(d_{+},d_{-})}=r_{+\ast}\bigl{(}r_{-}^{\ast}( \underline{\phantom{-}})\otimes\operatorname{\mathbb{S}}^{\lambda}(\mathscr{E}^ {\operatorname{univ}}_{(d_{+},d_{-})})\bigr{)}\colon \operatorname{D}(\operatorname{Grass}(\mathscr{E}^{\vee}[1];d_{-}))\to \operatorname{D}(\operatorname{Grass}(\mathscr{E};d_{+})).\] \[\Phi^{(i,\lambda)}_{(d_{+},d_{-})}=\Phi^{\lambda}_{(d_{+},d_{-}) }(\underline{\phantom{-}})\otimes\det(\mathscr{Q}_{+})^{i}\colon \operatorname{D}(\operatorname{Grass}(\mathscr{E}^{\vee}[1];d_{-}))\to \operatorname{D}(\operatorname{Grass}(\mathscr{E};d_{+})).\]
We will omit the subindex \((d_{+},d_{-})\) and write \(\Phi^{\lambda}\) and \(\Phi^{(i,\lambda)}\) instead when there is no confusion. Here, we use the symbol \(\operatorname{D}\) to denote any of the following derived \(\infty\)-categories: \(\operatorname{D}_{\operatorname{qc}}\), \(\operatorname{D}_{\operatorname{coh}}^{-}\), \(\operatorname{D}_{\operatorname{coh}}^{\operatorname{b}}\) or \(\operatorname{D}^{\operatorname{perf}}\). This definition will be justified by the following lemma.
**Lemma 2.15**.: _In the situation of Notation 2.14, let \(\mathrm{D}=\mathrm{D}_{\mathrm{qc}}\), then the functor \(\Phi^{\lambda}\) (resp. \(\Phi^{(i,\lambda)}\)) admits both a left adjoint \((\Phi^{\lambda})^{L}\) (resp. \((\Phi^{(i,\lambda)})^{L}\)) and a right adjoint \((\Phi^{\lambda})^{R}\) (resp. \((\Phi^{(i,\lambda)})^{R}\)). Furthermore, all these functors preserve (almost) perfect complexes and locally truncated almost perfect complexes, and commute with arbitrary base change \(X^{\prime}\to X\)._
Proof.: The left and right adjoints of \(\Phi^{\lambda}\) can be given explicitly by the formula
\[(\Phi^{\lambda})^{L} =r_{-\,!}\left(r_{+}^{*}(\underline{\phantom{-}})\otimes\mathbb{S}^{\lambda}(\mathscr{E}^{\mathrm{univ}}_{(d_{+},d_{-})})^{\vee}\right)\colon \mathrm{D}(\mathrm{Grass}(\mathscr{E};d_{+}))\to\mathrm{D}(\mathrm{Grass}(\mathscr{E}^{\vee}[1];d_{-})),\] \[(\Phi^{\lambda})^{R} =r_{-\,*}\left(r_{+}^{!}(\underline{\phantom{-}})\otimes\mathbb{S}^{\lambda}(\mathscr{E}^{\mathrm{univ}}_{(d_{+},d_{-})})^{\vee}\right)\colon \mathrm{D}(\mathrm{Grass}(\mathscr{E};d_{+}))\to\mathrm{D}(\mathrm{Grass}(\mathscr{E}^{\vee}[1];d_{-})).\]
Here, \(r_{-\,!}\) denotes the left adjoint of \(r_{-}^{*}\), and \(r_{+}^{!}\) denotes the right adjoint of \(r_{+\,*}\). Since \(r_{\pm}\) are proper and quasi-smooth (Lemma 2.10), the desired assertions follow from Lipman-Neeman-Lurie's version of Grothendieck duality (see [10, Theorem 3.7.(3)]).
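Concretely, in the extreme case \(d_{-}=0\) (using \(\operatorname{Grass}(\mathscr{E}^{\vee}[1];0)\simeq X\) and Example 2.8.(1), so that \(r_{+}=\operatorname{id}\) and \(r_{-}=\operatorname{pr}_{\operatorname{Grass}(\mathscr{E};d_{+})}\)), the Fourier-Mukai functors reduce to twisted pullbacks:
\[\Phi^{\lambda}_{(d_{+},0)}\simeq\operatorname{pr}_{\operatorname{Grass}(\mathscr{E};d_{+})}^{*}(\underline{\phantom{-}})\otimes\mathbb{S}^{\lambda}\big(\mathscr{R}_{\operatorname{Grass}(\mathscr{E};d_{+})}\big),\qquad\Phi^{(i,\lambda)}_{(d_{+},0)}\simeq\Phi^{\lambda}_{(d_{+},0)}(\underline{\phantom{-}})\otimes\det(\mathscr{Q}_{+})^{i}.\]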
**Notation 2.16**.: Let \(\mathscr{E}\) be a perfect complex of Tor-amplitude in \([0,1]\) and rank \(r\geq 1\), and let \(d\) be an integer \(0\leq d\leq r-1\). We consider the following commutative diagram
\[\begin{array}{ccc}
 & \operatorname{Flag}(\mathscr{E};d,d+1) & \\
 {\scriptstyle p_{+}}\swarrow & & \searrow{\scriptstyle p_{-}}\\
 \operatorname{Grass}(\mathscr{E};d) & & \operatorname{Grass}(\mathscr{E};d+1)
\end{array}\tag{2.2}\]
where \(p_{\pm}\) are the natural forgetful maps (§2.1.2). Let \(\mathrm{D}\) denote \(\mathrm{D}_{\mathrm{qc}}\), \(\mathrm{D}_{\mathrm{coh}}^{-}\), \(\mathrm{D}_{\mathrm{coh}}^{\mathrm{b}}\) or \(\mathrm{D}^{\mathrm{perf}}\). We consider Fourier-Mukai functors:
\[\Psi_{\mathscr{L}^{k}_{d+1}}=p_{+\,*}(p_{-}^{*}(\underline{ \phantom{-}})\otimes\mathscr{L}^{k}_{d+1})\colon \mathrm{D}(\mathrm{Grass}(\mathscr{E};d+1))\to\mathrm{D}(\mathrm{ Grass}(\mathscr{E};d)).\] \[\Psi_{k}=p_{+\,*}\,p_{-}^{*}(\underline{\phantom{-}})\otimes \det(\mathscr{Q}_{d})^{\otimes k}\colon \mathrm{D}(\mathrm{Grass}(\mathscr{E};d+1))\to\mathrm{D}(\mathrm{ Grass}(\mathscr{E};d)).\]
Here \(\mathscr{Q}_{i}\)'s are universal quotient bundles of rank \(i\) for \(i=d,d+1\), and \(\mathscr{L}_{d+1}=\mathrm{fib}(\mathscr{Q}_{d+1}\to\mathscr{Q}_{d})\).
**Remark 2.17**.: Proposition 2.3 implies that the projection \(p_{+}\) of diagram (2.2) identifies \(\mathrm{Flag}(\mathscr{E};d,d+1)\) as the derived projectivization of the perfect complex \(\mathscr{R}_{\mathrm{Grass}(\mathscr{E};d)}\) over \(\mathrm{Grass}(\mathscr{E};d)\) with \(\mathscr{O}_{p_{+}}(1)\simeq\mathscr{L}_{d+1}\), where \(\mathscr{R}_{\mathrm{Grass}(\mathscr{E};d)}=\mathrm{fib}\left(\mathrm{pr}_{ \mathrm{Grass}(\mathscr{E};d)}^{*}(\mathscr{E})\to\mathscr{Q}_{\mathrm{ Grass}(\mathscr{E};d)}\right)\) has Tor-amplitude in \([0,1]\) and rank \((r-d)\). The projection \(p_{-}\) identifies \(\mathrm{Flag}(\mathscr{E};d,d+1)\) as the derived projectivization \(\mathbb{P}(\mathscr{Q}^{\vee}_{\mathrm{Grass}(\mathscr{E};d+1)})\) of the rank \((d+1)\) vector bundle \(\mathscr{Q}^{\vee}_{\mathrm{Grass}(\mathscr{E};d+1)}\) over \(\mathrm{Grass}(\mathscr{E};d+1)\), with \(\mathscr{O}_{p_{-}}(1)\simeq\mathscr{L}^{\vee}_{d+1}\); or equivalently, as the derived Grassmannian \(\mathrm{Grass}(\mathscr{Q}_{\mathrm{Grass}(\mathscr{E};d+1)};d)\) of \(\mathscr{Q}_{\mathrm{Grass}(\mathscr{E};d+1)}\) over \(\mathrm{Grass}(\mathscr{E};d+1)\), with universal quotient bundle \(\mathscr{Q}_{d}\).
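For orientation, consider the boundary case \(d=0\) (assuming \(r\geq 1\)): then \(\operatorname{Grass}(\mathscr{E};0)\simeq X\), \(\mathscr{Q}_{0}=0\), and by Remark 2.17 the map \(p_{-}\) is an equivalence \(\operatorname{Flag}(\mathscr{E};0,1)\simeq\mathbb{P}(\mathscr{E})\) with \(\mathscr{L}_{1}\simeq\mathscr{O}_{p_{+}}(1)\). Writing \(\pi=p_{+}\colon\mathbb{P}(\mathscr{E})\to X\), the functors of Notation 2.16 become
\[\Psi_{\mathscr{L}_{1}^{k}}\simeq\pi_{*}\big(\underline{\phantom{-}}\otimes\mathscr{O}_{p_{+}}(k)\big)\qquad\text{and}\qquad\Psi_{k}\simeq\pi_{*}\quad(\text{since }\det(\mathscr{Q}_{0})\simeq\mathscr{O}).\]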
As a consequence of Corollary 2.13, we have the following compatibility result for the Fourier-Mukai functors considered in Notations 2.14 and 2.16:
**Corollary 2.18**.: _In the situation of Corollary 2.13, let \(\mathrm{D}\) denote \(\mathrm{D}_{\mathrm{qc}}\), \(\mathrm{D}_{\mathrm{coh}}^{-}\), \(\mathrm{D}_{\mathrm{coh}}^{\mathrm{b}}\) or \(\mathrm{D}^{\mathrm{perf}}\), assume that \(\lambda=(i\geq\lambda_{1}\geq\ldots\geq\lambda_{r-i}\geq 0)\in B_{r-i,i}\), and let_
\[\Phi^{(i,\lambda)}_{(d,d-i)}\colon\,\mathrm{D}(\mathrm{Grass}(\mathscr{E}^{ \vee}[1];d-i))\to\mathrm{D}(\mathrm{Grass}(\mathscr{E};d))\]
_denote the functor defined in Notation 2.14 in the case \((d_{+},d_{-})=(d,d-i)\). Let \(\Psi_{k}\)'s be defined as in Notation 2.16. Then there is a canonical equivalence of functors:_
\[\Phi^{(i,\lambda)}_{(d,d-i)}\simeq\Psi_{i-\lambda_{1}}\circ\cdots\circ\Psi_{\lambda_{r-i-1}-\lambda_{r-i}}\circ\big(\otimes\det(\mathscr{Q}_{d+r-i})^{\lambda_{r-i}}\big)\circ\Phi^{(0)}_{(d+r-i,d-i)}.\]
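Before turning to the proof, note the simplest instance \(r-i=1\), where the chain of \(\Psi\)'s collapses to a single functor: for \(\lambda=(\lambda_{1})\) with \(0\leq\lambda_{1}\leq i\),
\[\Phi^{(i,(\lambda_{1}))}_{(d,d-i)}\simeq\Psi_{i-\lambda_{1}}\circ\big(\otimes\det(\mathscr{Q}_{d+1})^{\lambda_{1}}\big)\circ\Phi^{(0)}_{(d+1,d-i)}.\]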
Proof.: For each \(d+1\leq k\leq d+r-i\), we have \(\det(\mathscr{Q}_{k})\simeq\mathscr{L}_{k}\otimes\det(\mathscr{Q}_{k-1})\). By induction, we obtain a canonical equivalence of functors from \(\mathrm{D}(\mathrm{Grass}(\mathscr{E};d+r-i))\) to \(\mathrm{D}(\mathrm{Grass}(\mathscr{E};d))\):
\[(\otimes\det(\mathscr{Q}_{d})^{i})\circ\Psi_{\mathscr{L}^{\lambda_{1}}_{d+1}}\circ\cdots\circ\Psi_{\mathscr{L}^{\lambda_{r-i}}_{d+r-i}}\simeq\Psi_{i-\lambda_{1}}\circ\cdots\circ\Psi_{\lambda_{r-i-1}-\lambda_{r-i}}\circ(\otimes\det(\mathscr{Q}_{d+r-i})^{\lambda_{r-i}}),\]
where \(\Psi_{\mathscr{L}^{k}_{d+1}}\)'s are defined in Notation 2.16. Consequently, it suffices to prove that there is a canonical equivalence of functors
\[\Phi^{\lambda}_{(d,d-i)}\simeq\Psi_{\mathscr{L}^{\lambda_{1}}_{d+1}}\circ\cdots \circ\Psi_{\mathscr{L}^{\lambda_{r-i}}_{d+r-i}}\circ\Phi^{(0)}_{(d+r-i,d-i)} \colon\operatorname{D}(\operatorname{Grass}(\mathscr{E}^{\vee}[1];d-i)) \to\operatorname{D}(\operatorname{Grass}(\mathscr{E};d)).\]
Let \(Z:=\operatorname{Flag}(\mathscr{E};d,d+1,\ldots,d+r-i)\times_{\operatorname{Grass}(\mathscr{E};d+r-i)}\mathfrak{Incid}_{(d+r-i,d-i)}(\mathscr{E})\) denote the domain of the forgetful map \(\operatorname{forg}\) of Corollary 2.13, let \(r_{\pm}\) be the projection maps of the incidence diagram (2.1) for \(\mathfrak{Incid}_{(d,d-i)}(\mathscr{E})\), and set \(\pi_{\pm}:=r_{\pm}\circ\operatorname{forg}\). By repeated use of Remark 2.4, we see that the composite functor \(\Psi_{\mathscr{L}^{\lambda_{1}}_{d+1}}\circ\cdots\circ\Psi_{\mathscr{L}^{\lambda_{r-i}}_{d+r-i}}\circ\Phi^{(0)}_{(d+r-i,d-i)}\) is equivalent to the functor
\[\begin{split}&\pi_{+\,*}\left(\pi_{-}^{*}(\underline{\phantom{*}})\otimes(\mathscr{L}^{\lambda_{1}}_{d+1}\otimes\mathscr{L}^{\lambda_{2}}_{d+2}\otimes\ldots\otimes\mathscr{L}^{\lambda_{r-i}}_{d+r-i})\right)\\
&\simeq r_{+\,*}\circ\operatorname{forg}_{*}\left(\operatorname{forg}^{*}\circ r_{-}^{*}(\underline{\phantom{*}})\otimes(\mathscr{L}^{\lambda_{1}}_{d+1}\otimes\mathscr{L}^{\lambda_{2}}_{d+2}\otimes\ldots\otimes\mathscr{L}^{\lambda_{r-i}}_{d+r-i})\right)\\
&\simeq r_{+\,*}\left(r_{-}^{*}(\underline{\phantom{*}})\otimes\operatorname{forg}_{*}(\mathscr{L}^{\lambda_{1}}_{d+1}\otimes\mathscr{L}^{\lambda_{2}}_{d+2}\otimes\ldots\otimes\mathscr{L}^{\lambda_{r-i}}_{d+r-i})\right)\\
&\simeq r_{+\,*}\left(r_{-}^{*}(\underline{\phantom{*}})\otimes\mathbb{S}^{\lambda}\big(\mathscr{E}^{\operatorname{univ}}_{(d,d-i)}\big)\right)=\Phi^{\lambda}_{(d,d-i)},\end{split}\]
where the second equivalence follows from the projection formula and the last one from Corollary 2.13. This establishes the desired equivalence of functors.
_Then \(\Phi^{(i,\lambda)}\) is fully faithful. Moreover, these functors \(\Phi^{(i,\lambda)}\), where \(0\leq i\leq\min\{r,d\}\) and \(\lambda\in B_{r-i,i}\), induce a semiorthogonal decomposition_
\[\mathrm{D}(\mathrm{Grass}(\mathscr{E};d))=\Big{\langle}\operatorname{Im}\big{(} \Phi^{(i,\lambda)}\big{)}\mid 0\leq i\leq\min\{r,d\},\,\lambda\in B_{r-i,i}\Big{\rangle},\]
_with semiorthogonal order given by the total order \(<\) defined in Notation 3.1. Specifically, \(\operatorname{Map}\big{(}\operatorname{Im}(\Phi^{(i,\lambda)}),\operatorname{ Im}(\Phi^{(j,\mu)})\big{)}\simeq 0\) if \((j,\mu)<(i,\lambda)\)._
Notice that when fixing \(d\), the subindices "\((d,d-i)\)" of \(\Phi^{(i,\lambda)}_{(d,d-i)}\) are uniquely determined by the superscripts "\((i,\lambda)\)". Therefore, there is no ambiguity in writing \(\Phi^{(i,\lambda)}\) for \(\Phi^{(i,\lambda)}_{(d,d-i)}\).
**Example 3.3**.: If \(r=4\) and \(d\geq 4\), we have a semiorthogonal decomposition
\[\mathrm{D}(\mathrm{Grass}(\mathscr{E};d))=\Big{\langle} \operatorname{Im}\big{(}\Phi^{(0,(0))}\big{)},\operatorname{Im}\big{(}\Phi^{( 1,(1,1,1))}\big{)},\operatorname{Im}\big{(}\Phi^{(1,(1,1))}\big{)}, \operatorname{Im}\big{(}\Phi^{(2,(2,2))}\big{)},\] \[\operatorname{Im}\big{(}\Phi^{(1,(1))}\big{)},\operatorname{Im} \big{(}\Phi^{(2,(2,1))}\big{)},\operatorname{Im}\big{(}\Phi^{(2,(2))}\big{)}, \operatorname{Im}\big{(}\Phi^{(3,(3))}\big{)},\operatorname{Im}\big{(}\Phi^{( 1,(0))}\big{)},\operatorname{Im}\big{(}\Phi^{(2,(1,1))}\big{)},\] \[\operatorname{Im}\big{(}\Phi^{(2,(1))}\big{)},\operatorname{Im} \big{(}\Phi^{(3,(2))}\big{)},\operatorname{Im}\big{(}\Phi^{(2,(0))}\big{)}, \operatorname{Im}\big{(}\Phi^{(3,(1))}\big{)},\operatorname{Im}\big{(}\Phi^{( 3,(0))}\big{)},\operatorname{Im}\big{(}\Phi^{(4,(0))}\big{)}\Big{\rangle}.\]
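As a sanity check, the number of components agrees with the count \(N\) appearing in the proof below: since \(|B_{r-i,i}|=\binom{r}{i}\),
\[N=\sum_{i=0}^{\min\{r,d\}}\binom{r}{i}=\binom{4}{0}+\binom{4}{1}+\binom{4}{2}+\binom{4}{3}+\binom{4}{4}=2^{4}=16,\]
which is exactly the number of components listed above.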
Proof of Theorem 3.2, Part 1.: Let \(\Phi_{1},\Phi_{2},\ldots,\Phi_{N}\) be all the functors in
\[\{\Phi^{(i,\lambda)}\mid 0\leq i\leq\min\{r,d\},\lambda\in B_{r-i,i}\}\]
listed in ascending order with respect to the total order \(<\) on the superscripts \((i,\lambda)\), where \(N=\sum_{i=0}^{\min\{r,d\}}\binom{r}{i}\). For each \(1\leq j\leq N\), we let \(\mathfrak{R}_{j}\) denote the endofunctor \(\operatorname{fib}(\operatorname{id}\to\Phi_{j}\circ\Phi_{j}^{L})\) of \(\mathrm{D}(\mathrm{Grass}(\mathscr{E};d))\), where \(\Phi_{j}^{L}\) denotes the left adjoint of \(\Phi_{j}\). Consequently, there is a canonical filtered sequence in \(\operatorname{Fun}\big{(}\mathrm{D}(\mathrm{Grass}(\mathscr{E};d)),\mathrm{D}( \mathrm{Grass}(\mathscr{E};d))\big{)}\)
\[\mathfrak{R}_{N}\circ\mathfrak{R}_{N-1}\circ\cdots\circ\mathfrak{R}_{1}\to \mathfrak{R}_{N-1}\circ\cdots\circ\mathfrak{R}_{1}\to\cdots\to\mathfrak{R}_{2} \circ\mathfrak{R}_{1}\to\mathfrak{R}_{1}\to\mathrm{id},\]
where \(\operatorname{cofib}(\mathfrak{R}_{1}\to\operatorname{id})\simeq\Phi_{1}\circ\Phi_{1}^{L}\colon\mathrm{D}(\mathrm{Grass}(\mathscr{E};d))\to\operatorname{Im}(\Phi_{1})\), and for each \(2\leq j\leq N\),
\[\operatorname{pr}_{j}:=\operatorname{cofib}\big{(}\mathfrak{R}_{j}\circ \mathfrak{R}_{j-1}\circ\cdots\circ\mathfrak{R}_{1}\to\mathfrak{R}_{j-1}\circ \cdots\circ\mathfrak{R}_{1}\big{)}\simeq\Phi_{j}\circ\Phi_{j}^{L}\circ( \mathfrak{R}_{j-1}\circ\cdots\circ\mathfrak{R}_{1})\]
defines a functor from \(\mathrm{D}(\mathrm{Grass}(\mathscr{E};d))\) to \(\operatorname{Im}(\Phi_{j})\).
Therefore, to establish the desired semiorthogonal decomposition, it is equivalent to prove the following assertions about the functors \(\Phi_{j}\), \(\Phi_{j}^{L}\) and \(\mathfrak{R}_{j}\) (for the corresponding category \(\mathrm{D}\)):
* (a) Fully-faithfulness: For each \(1\leq j\leq N\), the counit map \(\Phi_{j}^{L}\circ\Phi_{j}\to\operatorname{id}\) is an equivalence.
* (b) Semiorthogonality: For all \(1\leq j<k\leq N\), \(\Phi_{j}^{L}\circ\Phi_{k}\simeq 0\).
* (c) Generation: \(\mathfrak{R}_{N}\circ\mathfrak{R}_{N-1}\circ\cdots\circ\mathfrak{R}_{1}\simeq 0\).
We have the following observations:
* Since the assertions (a), (b) and (c), regarded as properties for the pair \((X,\mathscr{E})\), are local with respect to Zariski topology, we may assume that \(X\) is a derived affine scheme \(\operatorname{Spec}A\), where \(A\in\operatorname{CAlg}^{\Delta}\), and \(\mathscr{E}\) is the cofiber of a map \(\sigma\colon A^{m}\to A^{n}\) between finite local free sheaves, where \(m,n\geq 0\) are integers such that \(n-m=r\).
* Given that the functors \(\Phi_{j},\Phi_{j}^{L}\), and hence \(\mathfrak{R}_{j}\), preserve all small colimits and perfect objects (Lemma 2.15), and that for any quasi-compact, quasi-separated derived scheme \(Y\), we have \(\mathrm{D}_{\mathrm{qc}}(Y)\simeq\operatorname{Ind}(\mathrm{D}^{\mathrm{perf}}(Y))\), to prove the assertions (a), (b) and (c) in the case where \(\mathrm{D}=\mathrm{D}_{\mathrm{qc}}\) and \(X=\operatorname{Spec}A\), it suffices to verify them in the case where \(\mathrm{D}=\mathrm{D}^{\mathrm{perf}}\) and \(X=\operatorname{Spec}A\), and vice versa.
* Since the functors \(\Phi_{j},\Phi_{j}^{L}\), and hence \(\mathfrak{R}_{j}\), preserve almost perfect objects and locally truncated almost perfect objects (Lemma 2.15), the assertions (a), (b) and (c) in the cases \(\mathrm{D}=\mathrm{D}_{\mathrm{coh}}^{-}\) and \(\mathrm{D}=\mathrm{D}_{\mathrm{coh}}^{\mathrm{b}}\) can be deduced from the case \(\mathrm{D}=\mathrm{D}_{\mathrm{qc}}\).
Consequently, it suffices to consider the case where \(\mathrm{D}=\mathrm{D}^{\mathrm{perf}}\), \(X=\operatorname{Spec}A\), where \(A\in\operatorname{CAlg}^{\Delta}\), and \(\mathscr{E}=\operatorname{cofib}(\sigma\colon A^{m}\to A^{n})\), where \(n-m=r\). Moreover, as the formation of the assertions (a), (b) and (c) commutes with arbitrary base change (as discussed in Lemma 2.15), we may
assume that \(X=\underline{\operatorname{Hom}}_{\Bbbk}(\Bbbk^{m},\Bbbk^{n})\) is the total space of \(\Bbbk\)-homomorphisms from \(\Bbbk^{m}\) to \(\Bbbk^{n}\) and \(\mathscr{E}\simeq[\mathscr{O}_{X}^{m}\xrightarrow{\tau}\mathscr{O}_{X}^{n}]\), where \(\Bbbk\) is an ordinary commutative ring and \(\tau\) is the tautological map; in this case, the desired assertions are established in the next subsection (see §3.2.4).
**Remark 3.4** (Dual Exceptional Sequences).: Let \(d<r\) and consider the functors \(\Phi^{(i,\lambda)}\) in the case \(i=d\). Then Theorem 3.2 implies that the collection of objects
\[\{\mathbb{S}^{\lambda}\big{(}\mathscr{R}_{\operatorname{Grass}(\mathscr{E};d) }\big{)}\mid\lambda\in B_{r-d,d}\}\]
forms a relative exceptional sequence ([14, Definition 6.8]) over \(X\) with respect to the opposite lexicographic order. On the other hand, in [14] we show that
\[\{\mathbb{S}^{\alpha}\big{(}\mathscr{Q}_{\operatorname{Grass}(\mathscr{E};d) }\big{)}\mid\alpha\in B_{d,r-d}\}\]
forms a relative exceptional sequence over \(X\) with respect to the colexicographic order. Using derived Borel-Weil-Bott Theorem 2.5 and the filtered sequences associated with Schur complexes, we can show that these two relative exceptional sequences are dual (and mutation-equivalent) to each other. Details will appear in a separate note.
**Example 3.5**.: Let \(X\to S\) be any quasi-smooth map between prestacks that is a relative derived algebraic space of constant dimension \(r\geq 0\). Then the relative cotangent complex \(\mathbb{L}_{X/S}\) has perfect-amplitude in \([0,1]\) and rank \(r\). We let \(\mathbb{T}_{X/S}[1]=\mathbb{L}_{X/S}^{\vee}[1]\) denote the shifted tangent complex. Theorem 3.2 implies semiorthogonal decompositions for all \(d\geq 0\):
\[\operatorname{D}(\operatorname{Grass}_{X}(\mathbb{L}_{X/S};d))=\left\langle \binom{r}{i}\text{ copies of }\operatorname{D}(\operatorname{Grass}_{X}(\mathbb{T}_{X/S}[1];d-i)) \right\rangle_{0\leq i\leq\min\{r,d\}}.\]
In the special case where \(d=r\), the derived relative scheme \(\operatorname{Grass}_{X}(\mathbb{L}_{X/S};r)\to X\) is closely related to the construction of Nash blowups.
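For instance, for \(d=1\) (using \(\operatorname{Grass}(-;1)=\mathbb{P}(-)\) and \(\operatorname{Grass}(-;0)=X\)), the decomposition reads
\[\operatorname{D}(\mathbb{P}_{X}(\mathbb{L}_{X/S}))=\Big\langle\operatorname{D}(\mathbb{P}_{X}(\mathbb{T}_{X/S}[1])),\ \underbrace{\operatorname{D}(X),\ldots,\operatorname{D}(X)}_{r\text{ copies}}\Big\rangle,\]
recovering the shape of the projectivization case.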
### The Universal Local Situation
In this subsection, we let \(\operatorname{D}=\operatorname{D}^{\operatorname{perf}}\).
#### 3.2.1. The Setup for the Universal Local Situation
Now we introduce the basic setup for the universal local situation:
**Notation 3.6**.:
1. Let \(\Bbbk\) be a commutative ring, let \(n,m,d\geq 0\) be integers such that \(n-m=:r\geq 0\), and \(W=\Bbbk^{m}\) and \(V=\Bbbk^{n}\).
2. For a pair of non-negative integers \((d_{+},d_{-})\), we let \[\mathbb{G}_{d_{+}}^{+}:=\operatorname{Grass}_{\operatorname{Spec}\Bbbk}(V;d_ {+})\quad\text{and}\quad\mathbb{G}_{d_{-}}^{-}:=\operatorname{Grass}_{ \operatorname{Spec}\Bbbk}(W^{\vee};d_{-})\] denote the rank \(d_{\pm}\) Grassmannian \(\Bbbk\)-schemes of \(V\) and \(W^{\vee}\), respectively, and let \[R_{\mathbb{G}_{d_{+}}^{+}}\hookrightarrow V\otimes\mathscr{O}_{\mathbb{G}_{d_ {+}}^{+}}\twoheadrightarrow Q_{\mathbb{G}_{d_{+}}^{+}}\quad\text{and}\quad R _{\mathbb{G}_{d_{-}}^{-}}\hookrightarrow W^{\vee}\otimes\mathscr{O}_{ \mathbb{G}_{d_{-}}^{-}}\twoheadrightarrow Q_{\mathbb{G}_{d_{-}}^{-}},\] denote the tautological short exact sequences, where \(Q_{\mathbb{G}_{d_{\pm}}^{\pm}}\) are tautological quotient bundles of ranks \(d_{\pm}\), respectively.
3. Let \(X=\underline{\operatorname{Hom}}_{\Bbbk}(W,V)=\operatorname{Spec}( \operatorname{Sym}_{\Bbbk}^{*}(W\otimes_{\Bbbk}V^{\vee}))\) denote affine \(\Bbbk\)-space parametrizing \(\Bbbk\)-homomorphisms from \(W\) to \(V\), and let \(\tau\colon W\otimes\mathscr{O}_{X}\to V\otimes\mathscr{O}_{X}\) denote the tautological morphism. Let \(\mathscr{E}=[W\otimes\mathscr{O}_{X}\xrightarrow{\tau}V\otimes\mathscr{O}_{X}]\) (with \(V\otimes\mathscr{O}_{X}\) placed in degree \(0\)); then \(\mathscr{E}^{\vee}[1]\simeq[V^{\vee}\otimes\mathscr{O}_{X}\xrightarrow{\tau^ {\vee}}W^{\vee}\otimes\mathscr{O}_{X}]\) (with \(W^{\vee}\otimes\mathscr{O}_{X}\) placed in degree \(0\)).
**Lemma 3.7**.: _In the situation of Notation 3.6, we have canonical identifications:_
\[q_{\mathbb{G}_{d_{+}}^{+}}\colon\;\operatorname{Grass}(\mathscr{E};d_{+})\simeq\operatorname{Spec}\Big(\operatorname{Sym}^{*}_{\mathscr{O}_{\mathbb{G}_{d_{+}}^{+}}}\big(W\otimes R_{\mathbb{G}_{d_{+}}^{+}}^{\vee}\big)\Big)\to\mathbb{G}_{d_{+}}^{+}.\] \[q_{\mathbb{G}_{d_{-}}^{-}}\colon\;\operatorname{Grass}(\mathscr{E}^{\vee}[1];d_{-})\simeq\operatorname{Spec}\Big(\operatorname{Sym}^{*}_{\mathscr{O}_{\mathbb{G}_{d_{-}}^{-}}}\big(R_{\mathbb{G}_{d_{-}}^{-}}^{\vee}\otimes V^{\vee}\big)\Big)\to\mathbb{G}_{d_{-}}^{-}.\] \[q_{\mathfrak{Incid}_{(d_{+},d_{-})}}\colon\;\mathfrak{Incid}_{(d_{+},d_{-})}(\mathscr{E})\simeq\operatorname{Spec}\Big(\operatorname{Sym}^{*}_{\mathscr{O}_{\mathbb{G}_{d_{+}}^{+}\times\mathbb{G}_{d_{-}}^{-}}}\big(R_{\mathbb{G}_{d_{-}}^{-}}^{\vee}\boxtimes R_{\mathbb{G}_{d_{+}}^{+}}^{\vee}\big)\Big)\to\mathbb{G}_{d_{+}}^{+}\times\mathbb{G}_{d_{-}}^{-}.\]
Proof.: In this case, these derived schemes are classical (Remark 2.11). Therefore, the desired result follows easily from definitions (see [1, Lemma 4.1] or [11, Lemma 5.1]).
**Notation 3.8**.: In the situation of Lemma 3.7, to reduce the burden of notations, for objects \(\mathscr{F}_{\pm}\in\mathrm{D}(\mathbb{G}_{d_{\pm}}^{\pm})\), we will simply use the _same_ notations \(\mathscr{F}_{+}=q_{\mathbb{G}_{d_{+}}^{+}}^{*}(\mathscr{F}_{+})\in\mathrm{D}(\mathrm{Grass}(\mathscr{E};d_{+}))\) and \(\mathscr{F}_{-}=q_{\mathbb{G}_{d_{-}}^{-}}^{*}(\mathscr{F}_{-})\in\mathrm{D}(\mathrm{Grass}(\mathscr{E}^{\vee}[1];d_{-}))\) to denote their respective pullbacks.
**Notation 3.9**.: If \(\mathcal{C}\) is an idempotent-complete stable \(\infty\)-category (such as \(\mathrm{D}(\mathrm{Grass}(\mathscr{E};d_{+}))\) and \(\mathrm{D}(\mathrm{Grass}(\mathscr{E}^{\vee}[1];d_{-}))\)), and \(\{C_{i}\}_{i\in I}\) is a collection of objects of \(\mathcal{C}\), we let \(\langle\{C_{i}\}_{i\in I}\rangle\subseteq\mathcal{C}\) denote the stable subcategory thickly generated by \(\{C_{i}\}_{i\in I}\) (i.e., \(\langle\{C_{i}\}_{i\in I}\rangle\) is the smallest idempotent-complete stable \(\infty\)-subcategory of \(\mathcal{C}\) which contains all \(C_{i}\)).
**Lemma 3.10**.: _In the situation of Lemma 3.7 and using Notations 3.8, 3.9, we have_
\[\mathrm{D}(\mathrm{Grass}(\mathscr{E};d_{+})) =\left\langle\big\{\mathbb{S}^{\lambda}\big(R_{\mathbb{G}_{d_{+}}^{+}}^{\vee}\big)\big\}_{\lambda\in B_{n-d_{+},d_{+}}}\right\rangle\] \[\mathrm{D}(\mathrm{Grass}(\mathscr{E}^{\vee}[1];d_{-})) =\left\langle\big\{\mathbb{S}^{\mu}\big(R_{\mathbb{G}_{d_{-}}^{-}}\big)\big\}_{\mu\in B_{m-d_{-},d_{-}}}\right\rangle.\]
Proof.: This follows from Kapranov's exceptional collections for \(\mathrm{D}(\mathbb{G}_{d_{\pm}}^{\pm})\) ([13]; see also [1, 10] for the characteristic-free version) and the fact that the natural projections \(q_{\mathbb{G}_{d_{\pm}}^{\pm}}\) are relative affine spaces.
#### 3.2.2. Incidence Correspondences in the Universal Local Situation
This subsection considers the incidence diagram (2.1) of Definition 2.7 in the universal local situation §3.2.1. We assume that \(d\geq r=n-m\) and consider the incidence diagram (2.1) in the case where \((d_{+},d_{-})=(d,d-r)\). Let \(\Phi=\Phi_{(d,d-r)}^{(0)}\) be the functor of Notation 2.14 in the case where \(\lambda=(0)\), and let \(\Phi^{L}\) be its left adjoint functor, that is:
\[\Phi =r_{+\,*}\,r_{-}^{*}\colon\mathrm{D}(\mathrm{Grass}(\mathscr{E}^{\vee}[1];d-r))\to\mathrm{D}(\mathrm{Grass}(\mathscr{E};d)),\] \[\Phi^{L} =r_{-\,!}\,r_{+}^{*}\colon\mathrm{D}(\mathrm{Grass}(\mathscr{E};d))\to\mathrm{D}(\mathrm{Grass}(\mathscr{E}^{\vee}[1];d-r)).\]
**Lemma 3.11**.: _In the above situation, we have canonical equivalences_
\[\Phi\Big(\mathbb{S}^{\lambda}\big(R_{\mathbb{G}_{d-r}^{-}}\big)\Big) \simeq\mathbb{S}^{\lambda}\big(R_{\mathbb{G}_{d}^{+}}^{\vee}\big)\quad\text{for all}\quad\lambda\in B_{n-d,d-r},\] \[\Phi^{L}\Big(\mathbb{S}^{\lambda}\big(R_{\mathbb{G}_{d}^{+}}^{\vee}\big)\Big) \simeq\mathbb{S}^{\lambda}\big(R_{\mathbb{G}_{d-r}^{-}}\big)\quad\text{for all}\quad\lambda\in B_{n-d,d}.\]
Proof.: This is a special case of the key lemma [11, Lemma 5.6]; we present here a characteristic-free proof for the reader's convenience. We only prove the first equivalence; the other case is similar. The projection \(r_{+}\) factors through a composite map ([11, Proposition 4.19])
\[\mathfrak{Incid}_{(d,d-r)}(\mathscr{E})\xrightarrow{\iota}\mathbb{G}_{d-r}^{-}\times_{\Bbbk}\operatorname{Grass}(\mathscr{E};d)\xrightarrow{\operatorname{pr}}\operatorname{Grass}(\mathscr{E};d),\]
where \(\iota\) is a closed immersion induced by a regular section of the vector bundle \(Q_{\mathbb{G}_{d-r}^{-}}\boxtimes R_{\mathbb{G}_{d}^{+}}\), and \(\mathrm{pr}\) is the canonical projection. Therefore, we have a canonical equivalence
\[\Phi\Big(\mathbb{S}^{\lambda}\big(R_{\mathbb{G}_{d-r}^{-}}\big)\Big)\simeq\operatorname{pr}_{*}\Big(\mathbb{S}^{\lambda}\big(R_{\mathbb{G}_{d-r}^{-}}\big)\otimes\iota_{*}(\mathscr{O}_{\mathfrak{Incid}_{(d,d-r)}(\mathscr{E})})\Big),\]
where \(\iota_{*}(\mathscr{O}_{\mathfrak{Incid}_{(d,d-r)}(\mathscr{E})})\) is resolved by a Koszul complex whose \(\ell\)th terms are given by
\[\bigwedge^{\ell}\Big{(}Q_{\mathbb{G}_{d-r}^{-}}^{\vee}\boxtimes R_{\mathbb{G}_ {d}^{+}}^{\vee}\Big{)}\quad\text{where}\quad 0\leq\ell\leq(n-d)(d-r).\]
By Cauchy's decomposition formula ([1, Theorem III.1.4], [13, Proposition 2.3]), there is a canonical filtration of \(\bigwedge^{\ell}\big(Q_{\mathbb{G}_{d-r}^{-}}^{\vee}\boxtimes R_{\mathbb{G}_{d}^{+}}^{\vee}\big)\) whose associated graded is given by
\(\mathbb{S}^{\mu^{t}}\big(Q^{\vee}_{\mathbb{G}^{-}_{d-r}}\big)\boxtimes\mathbb{S}^{\mu}\big(R^{\vee}_{\mathbb{G}^{+}_{d}}\big)\), where \(\mu\) runs through all elements of \(B_{n-d,d-r}\) such that \(|\mu|=\ell\). Consequently, it suffices to prove the following
\[\operatorname{Hom}_{\operatorname{D}(\mathbb{G}^{-}_{d-r})}\left(\mathbb{S}^{\mu^{t}}\big(Q_{\mathbb{G}^{-}_{d-r}}\big),\ \mathbb{S}^{\lambda}\big(R_{\mathbb{G}^{-}_{d-r}}\big)[|\lambda|]\right)\simeq\delta_{\mu,\lambda}\cdot\Bbbk,\]
where \(\delta_{\mu,\lambda}=1\) if \(\mu=\lambda\) and \(\delta_{\mu,\lambda}=0\) if \(\mu\neq\lambda\). This follows from the fact that \(\big\{\mathbb{S}^{\mu^{t}}\big(Q_{\mathbb{G}^{-}_{d-r}}\big)\big\}_{\mu\in B_{n-d,d-r}}\) and \(\big\{\mathbb{S}^{\lambda}\big(R_{\mathbb{G}^{-}_{d-r}}\big)[|\lambda|]\big\}_{\lambda\in B_{n-d,d-r}}\) are dual full exceptional collections of \(\operatorname{D}(\mathbb{G}^{-}_{d-r})\); see [1, Theorem 7.5] and [1, Theorem 1.6].
**Corollary 3.12**.: _In the situation of Lemma 3.11, the functor \(\Phi\) is fully faithful, with essential image \(\operatorname{Im}\Phi=\Big\langle\big\{\mathbb{S}^{\mu}\big(R^{\vee}_{\mathbb{G}^{+}_{d}}\big)\big\}_{\mu\in B_{n-d,d-r}}\Big\rangle\subseteq\operatorname{D}(\operatorname{Grass}(\mathscr{E};d))\)._
Proof.: Lemma 3.11 implies that the counit map \(\Phi^{L}\circ\Phi\to\operatorname{id}\) is an equivalence when evaluated at the generators \(\mathbb{S}^{\lambda}\big(R_{\mathbb{G}^{-}_{d-r}}\big)\) of \(\operatorname{D}(\operatorname{Grass}(\mathscr{E}^{\vee}[1];d-r))\) described in Lemma 3.10, where \(\lambda\in B_{n-d,d-r}\). Since the collection of objects \(\mathscr{F}\), for which the counit map \(\Phi^{L}\circ\Phi(\mathscr{F})\to\mathscr{F}\) is an equivalence, forms an idempotent-complete stable \(\infty\)-subcategory of \(\operatorname{D}(\operatorname{Grass}(\mathscr{E}^{\vee}[1];d-r))\), it follows that the counit map \(\Phi^{L}\circ\Phi\to\operatorname{id}\) is an equivalence. Hence the corollary follows.
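In the boundary case \(d=r\) (so that \(d-r=0\) and \(\operatorname{Grass}(\mathscr{E}^{\vee}[1];0)\simeq X\)), Example 2.8.(1) gives \(\mathfrak{Incid}_{(r,0)}(\mathscr{E})=\operatorname{Grass}(\mathscr{E};r)\), so \(\Phi\simeq\operatorname{pr}^{*}\), and Corollary 3.12 specializes to the statement that
\[\operatorname{pr}^{*}\colon\operatorname{D}(X)\hookrightarrow\operatorname{D}(\operatorname{Grass}(\mathscr{E};r))\]
is fully faithful with essential image \(\langle\mathscr{O}_{\operatorname{Grass}(\mathscr{E};r)}\rangle\), since \(B_{n-d,d-r}=B_{n-r,0}=\{(0)\}\).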
#### 3.2.3. Flag Correspondences in the Universal Local Situation
Now we consider flag correspondences (2.2) in the universal local situation of §3.2.1. Let \(d\) be an integer such that \(0\leq d\leq n-1\), and let \(\Psi=\Psi_{0}\) be the functor defined in Notation 2.16 and let \(\Psi^{L}\) be its left adjoint; that is:
\[\Psi =p_{+\,*}\,p_{-}^{*}\colon\operatorname{D}(\operatorname{Grass}(\mathscr{E};d+1))\to\operatorname{D}(\operatorname{Grass}(\mathscr{E};d)),\] \[\Psi^{L} =p_{-\,!}\,p_{+}^{*}\colon\operatorname{D}(\operatorname{Grass}(\mathscr{E};d))\to\operatorname{D}(\operatorname{Grass}(\mathscr{E};d+1)),\]
where \(p_{\pm}\) are defined as in (2.2), and \(p_{-!}\) denotes the left adjoint of \(p_{-}^{\ast}\).
The following is analogous to [11, Lemma 5.6] in the case where \(\ell_{+}-\ell_{-}=1\); the combinatorics of the Lascoux-type complexes \(F_{\ast}\) in this case are also similar to that of the staircase complexes studied in [13, 15].
**Lemma 3.13**.: _In the above situation, we have:_
1. _For any_ \(\lambda\in B_{n-d,d}\)_, there is a canonical equivalence_ \[\Psi^{L}\Big{(}\mathbb{S}^{\lambda}\big{(}R^{\vee}_{\mathbb{G}^{+}_{d}}\big{)} \Big{)}\simeq\begin{cases}\mathbb{S}^{\lambda}\big{(}R^{\vee}_{\mathbb{G}^{+}_ {d+1}}\big{)}&\text{if}\quad\lambda\in B_{n-d-1,d}\subseteq B_{n-d,d};\\ 0&\text{if}\quad\lambda\in B_{n-d,d}\setminus B_{n-d-1,d}.\end{cases}\]
2. _If_ \(\Bbbk\) _is a_ \(\mathbb{Q}\)_-algebra, then for any_ \(\lambda\in B_{n-d-1,d}\) _with_ \(\lambda_{1}=k\)_, where_ \(\max\{0,d-r+1\}\leq k\leq d\)_, the image_ \(\Psi\big(\mathbb{S}^{\lambda}\big(R^{\vee}_{\mathbb{G}^{+}_{d+1}}\big)\big)\) _admits a resolution by vector bundles_ \[\Psi\Big(\mathbb{S}^{\lambda}\big(R^{\vee}_{\mathbb{G}^{+}_{d+1}}\big)\Big)\simeq F_{\ast}=\big[0\to F_{k}\to\dots\to F_{1}\to F_{0}\big],\] _where_ \(F_{0}=\mathbb{S}^{\lambda}\big(R^{\vee}_{\mathbb{G}^{+}_{d}}\big)\) _and_ \(F_{i}=\mathbb{S}^{\lambda^{(i)}}(R^{\vee}_{\mathbb{G}^{+}_{d}})\otimes\bigwedge^{|\lambda^{(i)}|-|\lambda|}(W)\) _for_ \(1\leq i\leq k\)_. Here, for any given_ \(1\leq i\leq k\)_, let_ \(1\leq j\leq n-d-1\) _be such that_ \(\lambda_{j}\geq i\geq\lambda_{j+1}+1\)_; then_ (3.1) \[\lambda^{(i)}=(\lambda_{1},\lambda_{2},\dots,\lambda_{j},i,\lambda_{j+1}+1,\dots,\lambda_{n-d-1}+1)\in B_{n-d,k}\setminus B_{n-d-1,k}.\]
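To illustrate (3.1), consider a small instance (values chosen for illustration, assuming \(\max\{0,d-r+1\}\leq 2\leq d\)): for \(n-d-1=2\) and \(\lambda=(2,1)\) (so \(k=\lambda_{1}=2\)), one finds \(\lambda^{(1)}=(2,1,1)\) and \(\lambda^{(2)}=(2,2,2)\), and the resolution reads
\[\Psi\Big(\mathbb{S}^{(2,1)}\big(R^{\vee}_{\mathbb{G}^{+}_{d+1}}\big)\Big)\simeq\Big[0\to\mathbb{S}^{(2,2,2)}\big(R^{\vee}_{\mathbb{G}^{+}_{d}}\big)\otimes\textstyle\bigwedge^{3}W\to\mathbb{S}^{(2,1,1)}\big(R^{\vee}_{\mathbb{G}^{+}_{d}}\big)\otimes W\to\mathbb{S}^{(2,1)}\big(R^{\vee}_{\mathbb{G}^{+}_{d}}\big)\Big].\]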
Proof.: First, we prove assertion (1). Using the Notation(s) 2.16 (and 3.8), there is a short exact sequence of vector bundles on \(\operatorname{Flag}(\mathscr{E};d,d+1)\):
\[\mathscr{L}^{\vee}_{d+1}\hookrightarrow p_{+}^{\ast}\big{(}R^{\vee}_{\mathbb{G} ^{+}_{d}}\big{)}\twoheadrightarrow p_{-}^{\ast}\big{(}R^{\vee}_{\mathbb{G}^{+}_ {d+1}}\big{)},\]
where \(\mathscr{L}_{d+1}=\operatorname{Ker}(\mathscr{Q}_{d+1}\to\mathscr{Q}_{d})\), and \(p_{\pm}\) are defined as in (2.2). Let \(\lambda\in B_{n-d,d}\); then from the direct-sum decomposition formula [13, Theorem 2.5 (b)] (see also [11, Theorem
2.12]), there is a filtration on \(\mathbb{S}^{\lambda}\Big(p_{+}^{*}\big(R_{\mathbb{G}_{d}^{+}}^{\vee}\big)\Big)\simeq p_{+}^{*}\Big(\mathbb{S}^{\lambda}\big(R_{\mathbb{G}_{d}^{+}}^{\vee}\big)\Big)\) whose associated graded is
\[\bigoplus_{\nu=(\nu_{1},\ldots,\nu_{n-d-1})\leq\lambda=(\lambda_{1},\ldots,\lambda_{n-d})}\mathbb{S}^{\nu}\Big(p_{-}^{*}\big(R_{\mathbb{G}_{d+1}^{+}}^{\vee}\big)\Big)\otimes\mathbb{S}^{\lambda/\nu}(\mathscr{L}_{d+1}^{\vee}).\]
(Notice that we switch the roles of \(N\) and \(L\) in [Kou91, Theorem 1.4 (b)]; our version follows from [Kou91, Theorem 2.5 (b)] by applying the duality \(\mathbb{W}^{\lambda/\mu}(\mathscr{U}^{\vee})^{\vee}\simeq\mathbb{S}^{\lambda/\mu}(\mathscr{U})\) for vector bundles \(\mathscr{U}\).)
The skew Schur module \(\mathbb{S}^{\lambda/\nu}(\mathscr{L}_{d+1}^{\vee})\) is a quotient of tensor products \(\bigotimes_{i=1}^{\lambda_{1}}\bigwedge^{\lambda_{i}^{t}-\nu_{i}^{t}}(\mathscr{L}_{d+1}^{\vee})\) (see [Jia22b, Notation 2.3] or [Wey03, §2.1]), where \(\lambda^{t}=(\lambda_{1}^{t},\lambda_{2}^{t},\ldots)\) and \(\nu^{t}=(\nu_{1}^{t},\nu_{2}^{t},\ldots)\) are transposes of \(\lambda\) and \(\nu\), respectively. As a result, \(\mathbb{S}^{\lambda/\nu}(\mathscr{L}_{d+1}^{\vee})\) is zero unless
\[0\leq\lambda_{n-d}\leq\nu_{n-d-1}\leq\lambda_{n-d-1}\leq\ldots\leq\nu_{2}\leq \lambda_{2}\leq\nu_{1}\leq\lambda_{1}\leq d, \tag{3.2}\]
in which case the corresponding summand of the associated graded is equivalent to
\[\mathbb{S}^{\nu}\Big{(}p_{-}^{*}\big{(}R_{\mathbb{G}_{d+1}^{+}}^{\vee}\big{)} \Big{)}\otimes(\mathscr{L}_{d+1}^{\vee})^{|\lambda|-|\nu|}\simeq p_{-}^{*} \Big{(}\mathbb{S}^{\nu}\big{(}R_{\mathbb{G}_{d+1}^{+}}^{\vee}\big{)}\Big{)} \otimes(\mathscr{L}_{d+1}^{\vee})^{|\lambda|-|\nu|}.\]
If \(\lambda\in B_{n-d-1,d}\), the partitions \(\nu\) appearing in (3.2) can be classified into two cases:
* Case \(\nu=\lambda\): In this case, the corresponding summand is equivalent to \(p_{-}^{*}\Big{(}\mathbb{S}^{\nu}\big{(}R_{\mathbb{G}_{d+1}^{+}}^{\vee}\big{)} \Big{)}\).
* Case \(\nu\neq\lambda\): In this case, we have \(1\leq|\lambda|-|\nu|\leq\lambda_{1}\leq d\). For such cases, Serre's vanishing theorem (Remark 2.17, [Jia22a, Theorem 5.2]) implies that \(p_{-\,!}((\mathscr{L}_{d+1}^{\vee})^{|\lambda|-|\nu|})\simeq 0\).
Consequently, we obtain that \(p_{-\,!}\,p_{+}^{*}\Big(\mathbb{S}^{\lambda}\big(R_{\mathbb{G}_{d}^{+}}^{\vee}\big)\Big)\simeq\mathbb{S}^{\lambda}\big(R_{\mathbb{G}_{d+1}^{+}}^{\vee}\big)\) as claimed.
If \(\lambda\in B_{n-d,d}\setminus B_{n-d-1,d}\), which means that \(\lambda_{n-d}\geq 1\) and \(\lambda_{1}\leq d\), then we have \(1\leq|\lambda|-|\nu|\leq\lambda_{1}\leq d\) for all partitions \(\nu\) satisfying (3.2). By applying Serre's vanishing theorem once again, we conclude that \(p_{-\,!}\,p_{+}^{*}\Big(\mathbb{S}^{\lambda}\big(R_{\mathbb{G}_{d}^{+}}^{\vee}\big)\Big)\simeq 0\) as desired.
Next, we prove assertion (2). As in the proof of Lemma 3.11, [Jia22b, Proposition 4.19] implies that the projection \(p_{+}\) factors through a composite map
\[\operatorname{Flag}(\mathscr{E};d,d+1)\xrightarrow{\iota}\mathbb{P}_{ \operatorname{Grass}(\mathscr{E};d)}(R_{\mathbb{G}_{d}^{+}})\xrightarrow{\text{ pr}}\operatorname{Grass}(\mathscr{E};d)\]
where \(\iota\) is a closed immersion induced by a regular section of the vector bundle \(W^{\vee}\otimes\mathscr{L}_{d+1}\), and \(\operatorname{pr}\) is the canonical projection. Therefore, we have a canonical equivalence
\[\Psi\Big{(}\mathbb{S}^{\lambda}\big{(}R_{\mathbb{G}_{d+1}^{+}}^{\vee}\big{)} \Big{)}\simeq\operatorname{pr}_{*}\Big{(}\mathbb{S}^{\lambda}\big{(}R_{ \mathbb{G}_{d+1}^{+}}^{\vee}\big{)}\otimes\iota_{*}(\mathscr{O}_{\operatorname {Flag}(\mathscr{E};d,d+1)})\Big{)},\]
where \(\iota_{*}(\mathscr{O}_{\operatorname{Flag}(\mathscr{E};d,d+1)})\) is resolved by a Koszul complex whose \(\ell\)th terms are given by
\[\big{(}\bigwedge^{\ell}W\big{)}\otimes\mathscr{L}_{d+1}^{-\ell}\quad\text{ where}\quad 0\leq\ell\leq m.\]
Since \(\bigwedge^{\ell}W\simeq 0\) if \(\ell>m\), we may assume \(0\leq\ell\leq n+k-d-1\). By considering the spectral sequence which computes the above higher direct image \(\operatorname{pr}_{*}\Big{(}\mathbb{S}^{\lambda}\big{(}R_{\mathbb{G}_{d+1}^{+}}^{ \vee}\big{)}\otimes\iota_{*}\big{(}\mathscr{O}_{\operatorname{Flag}(\mathscr{E} ;d,d+1)}\big{)}\Big{)}\) (see [Laz17, Lemma B.1.5]), it suffices to compute derived pushforwards of the form
\[\operatorname{pr}_{*}\Big{(}\mathbb{S}^{\lambda}\big{(}R_{\mathbb{G}_{d+1}^{+}}^{ \vee}\big{)}\otimes\mathscr{L}_{d+1}^{-\ell}\Big{)}\otimes\big{(}\bigwedge^{ \ell}W\big{)}[\ell]\qquad 0\leq\ell\leq n+k-d-1. \tag{3.3}\]
Using the equivalence \(\mathbb{P}(R_{\mathbb{G}_{d}^{+}})\simeq\operatorname{Grass}(R_{\mathbb{G}_{d}^{+ }}^{\vee};n-d-1)\) and Theorem 2.5.(1), we have
\[\operatorname{pr}_{*}\Big{(}\mathbb{S}^{\lambda}\big{(}R_{\mathbb{G}_{d+1}^{+}}^{ \vee}\big{)}\otimes\mathscr{L}_{d+1}^{-\ell}\Big{)}\simeq\pi_{*}\big{(} \mathscr{L}(\lambda,\ell)\big{)},\]
where \(\pi\colon\operatorname{Flag}(R^{\vee}_{\mathbb{G}^{+}_{d}};\underline{n-d})\to\operatorname{Grass}(\mathscr{E};d)\) is the complete flag bundle of \(R^{\vee}_{\mathbb{G}^{+}_{d}}\) over \(\operatorname{Grass}(\mathscr{E};d)\), and \(\mathscr{L}(\lambda,\ell)\) is the line bundle associated with the sequence \((\lambda,\ell)=(\lambda_{1},\ldots,\lambda_{n-d-1},\ell)\). According to the Borel-Weil-Bott theorem, with \(\rho=(n-d-1,n-d-2,\ldots,2,1,0)\), to compute the derived pushforward \(\pi_{*}\big(\mathscr{L}(\lambda,\ell)\big)\) it suffices to analyze the sequence
\[(\lambda,\ell)+\rho=(\lambda_{1}+n-d-1,\lambda_{2}+n-d-2,\ldots,\lambda_{n-d-1 }+1,\ell). \tag{3.4}\]
First, we consider the case \(\ell=0\). In this case, (3.4) is a partition, and the Borel-Weil theorem (see [23, Theorem 4.1.4] or Theorem 2.5.(1)) implies that (3.3) is equivalent to
\[\operatorname{pr}_{*}\Big{(}\mathbb{S}^{\lambda}\big{(}R^{\vee}_{\mathbb{G}^ {+}_{d+1}}\big{)}\Big{)}\simeq\mathbb{S}^{\lambda}\big{(}R^{\vee}_{\mathbb{G} ^{+}_{d}}\big{)}.\]
Next, we consider the case where \(1\leq\ell\leq n+k-d-1\). From the Borel-Weil-Bott theorem (see [10], [23, Theorem 4.1.10] or Theorem 2.5.(2)), we obtain that (3.3) is nonzero only if the entries of (3.4) are pairwise distinct. There are precisely \((n+k-d-1)-(n-d-1)=k\) such choices for \(\ell\), all of the form \(\lambda_{j}+n-d-j>\ell>\lambda_{j+1}+n-d-(j+1)\), where \(1\leq j\leq n-d-1\). For each such \(\ell\) and \(j\), a minimal number of \((n-d-1-j)\) adjacent transpositions of the entries of (3.4) is required so that the resulting sequence
\[(\lambda_{1}+n-d-1,\ldots,\lambda_{j}+n-d-j,\ \ell,\ \lambda_{j+1}+n-d-(j+1), \ldots,\lambda_{n-d-1}+1)\]
is strictly decreasing. Subtracting \(\rho\) from the above sequence, we obtain a partition \((\lambda_{1},\ldots,\lambda_{j},\ \ell+j-(n-d-1),\ \lambda_{j+1}+1,\ldots, \lambda_{n-d-1}+1)\), which precisely corresponds to the partition \(\lambda^{(i)}\) of (3.1), where \(i=\ell+j-(n-d-1)\).
Conversely, for any \(1\leq i\leq k\), we let \(1\leq j\leq n-d-1\) be such that \(\lambda_{j}\geq i\geq\lambda_{j+1}+1\). In this case, \(\ell:=|\lambda^{(i)}|-|\lambda|\) is the unique integer in \([1,n+k-d-1]\) such that \(\lambda_{j}+n-d-j>\ell>\lambda_{j+1}+n-d-(j+1)\). For each such \(i\) and \(\ell\), the Borel-Weil-Bott theorem implies that (3.3) is canonically equivalent to
\[\mathbb{S}^{\lambda^{(i)}}\big{(}R^{\vee}_{\mathbb{G}^{+}_{d}}\big{)}[\ell-( n-d-1-j)]\otimes\big{(}\bigwedge^{\ell}W\big{)}=\mathbb{S}^{\lambda^{(i)}} \big{(}R^{\vee}_{\mathbb{G}^{+}_{d}}\big{)}\otimes\big{(}\bigwedge^{|\lambda^ {(i)}|-|\lambda|}W\big{)}[i].\]
Hence the lemma is proved.
Notice that Lemma 3.13.(2) is the only part of the proof of the main theorem in this paper where the characteristic-zero assumption is required.
**Corollary 3.14**.: _Assume we are in the same situation as Lemma 3.13.(2) and let \(k\) be an integer such that \(\max\{0,d-r+1\}\leq k\leq d\). We let \(\Psi_{i}\) be defined as in Notation 2.16; that is, \(\Psi_{i}=\Psi(\underline{\cdot})\otimes\det(\mathscr{Q}_{\operatorname{Grass}(\mathscr{E};d)})^{\otimes i}\simeq\Psi(\underline{\cdot})\otimes\det(R^{\vee}_{\mathbb{G}^{+}_{d}})^{\otimes i}\). For any \(\ell,d^{\prime}\geq 0\), we define_
\[\mathcal{B}_{\ell,d^{\prime}}=\Big\langle\big\{\mathbb{S}^{\lambda}\big(R^{\vee}_{\mathbb{G}^{+}_{n-\ell}}\big)\big\}_{\lambda\in B_{\ell,d^{\prime}}}\Big\rangle\subseteq\operatorname{D}(\operatorname{Grass}(\mathscr{E};n-\ell)).\]
_Then for each integer \(0\leq i\leq k-\max\{0,d-r+1\}\), the restriction of the functor \(\Psi_{i}\),_
\[\Psi_{i}|_{\mathcal{B}_{n-d-1,k-i}}\colon\mathcal{B}_{n-d-1,k-i}\to \operatorname{D}(\operatorname{Grass}(\mathscr{E};d)),\]
_is fully faithful, with essential image contained in \(\mathcal{B}_{n-d,k}\). Moreover, these functors \(\Psi_{i}|_{\mathcal{B}_{n-d-1,k-i}}\), for \(0\leq i\leq k-\max\{0,d-r+1\}\), induce a semiorthogonal decomposition_
\[\mathcal{B}_{n-d,k}=\Big\langle\big\langle\Psi_{i}(\mathcal{B}_{n-d-1,k-i})\big\rangle_{i\in[0,\,k-\max\{0,d-r+1\}]}\,,\ \Psi^{0}_{k-d+r}(\mathcal{B}_{n-d,d-r})\Big\rangle, \tag{3.5}\]
_where \(\Psi^{0}_{k-d+r}\) denotes the functor \(\otimes\det(\mathscr{Q}_{\operatorname{Grass}(\mathscr{E};d)})^{\otimes(k-d+r)}\), the last component is understood as empty if \(d<r\), and the semiorthogonal order of the first part is given by the usual order \(<\) of the integers \(i\in[0,\,k-\max\{0,d-r+1\}]\), that is: for all \(0\leq j<i\leq k-\max\{0,d-r+1\}\), \(\operatorname{Map}(\Psi_{i}(\mathcal{B}_{n-d-1,k-i}),\Psi_{j}(\mathcal{B}_{n-d-1,k-j}))\simeq 0\)._
Proof.: We will only prove the case where \(d\geq r\); the other case where \(d<r\) is similar and simpler. Notice that Lemma 3.13.(2) implies that \(\Psi_{i}\big(\mathbb{S}^{\lambda}(R^{\vee}_{\mathbb{G}^{+}_{d+1}})\big)\in\big\langle\{\mathbb{S}^{\mu}(R^{\vee}_{\mathbb{G}^{+}_{d}})\}_{\mu\in B_{n-d,k}}\big\rangle\) for all \(0\leq i\leq k-d+r-1\) and \(\lambda\in B_{n-d-1,k-i}\). This proves the assertion that the essential image of \(\Psi_{i}|_{\mathcal{B}_{n-d-1,k-i}}\) is contained in \(\mathcal{B}_{n-d,k}\).
To establish the desired semiorthogonal decomposition (3.5), it suffices to prove:
1. Fully-faithfulness: The counit map \(\Psi_{i}^{L}\Psi_{i}\to\mathrm{id}\) is an equivalence when restricted to the subcategory \(\mathcal{B}_{n-d-1,k-i}\), where \(0\leq i\leq k-d+r-1\). As with Corollary 3.12 and from the definition of \(\mathcal{B}_{n-d-1,k-i}\), it suffices to prove that \(\Psi_{i}^{L}\Psi_{i}\big{(}\mathbb{S}^{\lambda}\big{(}R^{\vee}_{\mathbb{G}^{ +}_{d+1}}\big{)}\big{)}\to\mathbb{S}^{\lambda}(R^{\vee}_{\mathbb{G}^{+}_{d+1}})\) is an equivalence for all \(\lambda\in B_{n-d-1,k-i}\). This follows from Lemma 3.13.(1)-(2).
2. Semiorthogonality: 1. For all \(0\leq j<i\leq k-d+r-1\), \(\Psi_{j}^{L}\Psi_{i}\simeq 0\) when restricted to the subcategory \(\mathcal{B}_{n-d-1,k-i}\). As before, it suffices to prove that \(\Psi_{j}^{L}\Psi_{i}\Big(\mathbb{S}^{\lambda}\big(R^{\vee}_{\mathbb{G}^{+}_{d+1}}\big)\Big)\simeq 0\) for all \(\lambda\in B_{n-d-1,k-i}\). This is again a direct consequence of Lemma 3.13.(1)-(2). 2. For all \(0\leq i\leq k-d+r-1\), the restriction of the functor \(\Psi_{i}^{L}\) to \(\Psi^{0}_{k-d+r}(\mathcal{B}_{n-d,d-r})\) is equivalent to zero. Once again, it suffices to prove that for any \(\alpha\in B_{n-d,d-r}\), \(\Psi_{i}^{L}\Big(\mathbb{S}^{\alpha}\big(R^{\vee}_{\mathbb{G}^{+}_{d}}\big)\otimes\det(R^{\vee}_{\mathbb{G}^{+}_{d}})^{(k-d+r)}\Big)\simeq 0\). Since \(d\geq r\) and \(0\leq i\leq k-d+r-1\), we have \(1\leq k-d+r-i\leq d\). Hence \(\mathbb{S}^{\alpha}\big(R^{\vee}_{\mathbb{G}^{+}_{d}}\big)\otimes\det\big(R^{\vee}_{\mathbb{G}^{+}_{d}}\big)^{(k-d+r-i)}\simeq\mathbb{S}^{(\alpha_{1}+k-d+r-i,\ldots,\alpha_{n-d}+k-d+r-i)}\big(R^{\vee}_{\mathbb{G}^{+}_{d}}\big)\), whose underlying partition lies in \(B_{n-d,d}\setminus B_{n-d-1,d}\), and the desired result follows from Lemma 3.13.(1).
3. Generation: To complete the proof, we will show that any element \(\mathbb{S}^{\alpha}(R^{\vee}_{\mathbb{G}^{+}_{d}})\), where \(\alpha\in B_{n-d,k}\), belongs to the right-hand side of (3.5). We will establish this result using induction. Let us introduce the following notations: for any \(\nu\in B_{n-d,w}\) and \(i\in\mathbb{Z}\), where \(w\geq 0\) is an integer, we let \(\nu(i)=(\nu_{1}+i,\dots,\nu_{n-d}+i)\). Let \(B_{n-d,w}(i):=\{\nu(i)\mid\nu\in B_{n-d,w}\}\). Using these notations, we can express a disjoint union decomposition as follows: \[B_{n-d,k}=B_{n-d,d-r}(k-d+r)\sqcup\bigsqcup_{i=0}^{k-d+r-1}B_{n-d-1,k-i}(i).\] It is clear that if \(\alpha\in B_{n-d,d-r}(k-d+r)\), meaning that \(\alpha=\nu(k-d+r)\) for some \(\nu\in B_{n-d,d-r}\), then \(\mathbb{S}^{\alpha}(R^{\vee}_{\mathbb{G}^{+}_{d}})=\mathbb{S}^{\nu}(R^{\vee}_{\mathbb{G}^{+}_{d}})\otimes\det(R^{\vee}_{\mathbb{G}^{+}_{d}})^{(k-d+r)}\) belongs to the right-hand side of (3.5). Now we assume that \(\alpha\in B_{n-d-1,k-i}(i)\), that is, \(\alpha=\nu(i)\) for some \(\nu\in B_{n-d-1,k-i}\). According to Lemma 3.13.(2), there is a canonical map \(\mathbb{S}^{\alpha}(R^{\vee}_{\mathbb{G}^{+}_{d}})\to\Psi_{i}\big(\mathbb{S}^{\nu}(R^{\vee}_{\mathbb{G}^{+}_{d+1}})\big)\), and the cone of this map is given by iterated extensions of elements of the form \(\mathbb{S}^{\beta}(R^{\vee}_{\mathbb{G}^{+}_{d}})\otimes K\), where \(\beta\in B_{n-d,d-r}(k-d+r)\sqcup\bigsqcup_{j=i+1}^{k-d+r-1}B_{n-d-1,k-j}(j)\) and \(K\) is a finite free \(\Bbbk\)-module. Consequently, the desired result regarding generation follows from induction.
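As a concrete sanity check of the disjoint-union decomposition used in step (3) (values chosen for illustration), take \(n-d=2\), \(k=2\) and \(d-r=1\), so that \(k-d+r=1\):
\[B_{2,2}=B_{2,1}(1)\sqcup B_{1,2}(0)=\{(1,1),(2,1),(2,2)\}\sqcup\{(0,0),(1,0),(2,0)\},\]
which indeed exhausts the \(\binom{4}{2}=6\) partitions in the \(2\times 2\) box, matching the two components \(\Psi_{0}(\mathcal{B}_{1,2})\) and \(\Psi^{0}_{1}(\mathcal{B}_{2,1})\) of (3.5).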
#### 3.2.4. Proof of Theorem 3.2, Part 2
We now complete the proof of Theorem 3.2 by establishing the theorem in the universal local situation §3.2.1 and when \(\mathrm{D}=\mathrm{D}^{\mathrm{perf}}\), using the preparations made in the preceding subsections.
If \(d=0\) or \(r=0\), the desired result follows directly from Corollary 3.12. Therefore, we may assume \(d\) and \(r\) are both greater than zero.
We now generate a semiorthogonal decomposition of \(\mathrm{D}(\mathrm{Grass}(\mathscr{E};d))\) by iteratively applying Corollary 3.14. Let us describe the process:
(*) Starting with the case where \(k=d\), we apply Corollary 3.14 and obtain a semiorthogonal decomposition of \(\mathrm{D}(\mathrm{Grass}(\mathscr{E};d))=\mathcal{B}_{n-d,d}\). This decomposition takes the form (3.5) and its components are given by the images \(\Psi_{i}(\mathcal{B}_{n-d-1,d-i})\) for \(0\leq i\leq d-\max\{0,d-r+1\}\), and \(\Psi_{r}^{0}(\mathcal{B}_{n-d,d-r})\) (if \(d\geq r\)). Notably, the appearing subcategories \(\mathcal{B}_{a,b}\) as the domains of \(\Psi_{i}\) or \(\Psi_{r}^{0}\) satisfy the condition \(n-r\leq a+b\leq n-1\).
For any subcategory \(\mathcal{B}_{a,b}\) appearing in the above decomposition with \(a+b>n-r\) (implying \(\mathcal{B}_{a,b}=\mathcal{B}_{n-d-1,d-j}\) for some \(j\geq 0\)), we further decompose \(\mathcal{B}_{a,b}\) by applying Corollary 3.14 again. The involved subcategories \(\mathcal{B}_{a^{\prime},b^{\prime}}\) appearing in this decomposition again satisfy \(n-r\leq a^{\prime}+b^{\prime}\leq a+b-1\leq n-2\). We continue this process, applying Corollary 3.14 to each involved subcategory \(\mathcal{B}_{a^{\prime},b^{\prime}}\) such that \(a^{\prime}+b^{\prime}>n-r\), until all the subcategories are of the form \(\mathcal{B}_{a^{\prime\prime},b^{\prime\prime}}\), where \(a^{\prime\prime}+b^{\prime\prime}=n-r\).
The above process \((*)\) clearly terminates in a finite number of steps. At the end, we obtain a semiorthogonal decomposition of \(\mathrm{D}(\mathrm{Grass}(\mathscr{E};d))\) whose components are given by fully faithful images of subcategories of the form \(\mathcal{B}_{n-d-r+i,d-i}\), where \(0\leq i\leq\min\{r,d\}\). Each such category \(\mathcal{B}_{n-d-r+i,d-i}\) is embedded via a functor of the form:
\[\Psi_{a_{1}}\circ\Psi_{a_{2}}\circ\cdots\circ\Psi_{a_{r-i}}\circ\Psi^{0}_{i- \sum a_{j}}\colon\mathcal{B}_{n-d-r+i,d-i}\to\mathrm{D}(\mathrm{Grass}(\mathscr{ E};d)), \tag{3.6}\]
where \(\Psi^{0}_{i-\sum a_{j}}=\otimes\det(\mathscr{Q}_{\mathrm{Grass}(\mathscr{E};d+ r-i)})^{i-\sum a_{j}}\); the notation \(\Psi^{0}_{i-\sum a_{j}}\) indicates that it is a "zero-times composition of \(\Psi\)'s, further twisted by a line bundle of degree \((i-\sum a_{j})\)". Here, \(a_{1},\ldots,a_{r-i}\geq 0\) is a (possibly empty) sequence of integers with \(\sum a_{j}\leq i\). If \(i=r\), we understand \(a_{1},\ldots,a_{0}\) as the empty sequence and (3.6) as the functor \(\Psi^{0}_{r}=\otimes\det(\mathscr{Q}_{\mathrm{Grass}(\mathscr{E};d)})^{r}\).
Conversely, for any (possibly empty) sequence of integers \(a_{1},\ldots,a_{r-i}\geq 0\) with \(\sum a_{j}\leq i\), there is precisely one copy of \(\mathcal{B}_{n-d-r+i,d-i}\) embedded as the image of the functor (3.6) in the semiorthogonal decomposition obtained through the above process \((*)\). Moreover, for any given \(0\leq i\leq\min\{r,d\}\), such a (possibly empty) sequence \(a_{1},\ldots,a_{r-i}\) is in one-to-one correspondence with a (possibly zero) partition \(\lambda\in B_{r-i,i}\) via the formula
\[a_{1}=i-\lambda_{1},\quad a_{2}=\lambda_{1}-\lambda_{2},\quad\ldots,\quad a_{ r-i}=\lambda_{r-i-1}-\lambda_{r-i}. \tag{3.7}\]
Here, if \(r=i\), the empty sequence \(a_{1},\ldots,a_{0}\) corresponds to the zero partition \((0)\in B_{0,r}\).
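As a concrete illustration of (3.7), take \(r=3\) and \(i=1\), so that \(r-i=2\). The admissible sequences \((a_{1},a_{2})\) with \(a_{1},a_{2}\geq 0\) and \(a_{1}+a_{2}\leq 1\) are \((0,0)\), \((1,0)\) and \((0,1)\); under (3.7) they correspond, respectively, to the partitions \(\lambda=(1,1)\), \((0,0)\) and \((1,0)\) in \(B_{2,1}\), since \(a_{1}=1-\lambda_{1}\) and \(a_{2}=\lambda_{1}-\lambda_{2}\).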
For each such (possibly empty) sequence \(a_{1},\ldots,a_{r-i}\) (or equivalently, for each partition \(\lambda\in B_{r-i,i}\), in view of (3.7)), composing (3.6) with the equivalence of Corollary 3.12,
\[\Phi^{(0)}_{(d+r-i,d-i)}\colon\mathrm{D}(\mathrm{Grass}(\mathscr{E}^{\vee}[1] ;d-i))\xrightarrow{\simeq}\mathcal{B}_{n-d-r+i,d-i},\]
we obtain precisely one copy of \(\mathrm{D}(\mathrm{Grass}(\mathscr{E}^{\vee}[1];d-i))\) embedded into \(\mathrm{D}(\mathrm{Grass}(\mathscr{E};d))\) via the fully faithful functor
\[\Psi_{a_{1}}\circ\cdots\circ\Psi_{a_{r-i}}\circ(\otimes\det(\mathscr{Q}_{ \mathrm{Grass}(\mathscr{E};d+r-i)})^{i-\sum a_{j}})\circ\Phi^{(0)}_{(d+r-i,d-i )}\simeq\Phi^{(i,\lambda)}_{(d,d-i)},\]
where the last equivalence follows from Corollary 2.18 and (3.7).
To summarize, for each \(0\leq i\leq\min\{r,d\}\) and each partition \(\lambda\in B_{r-i,i}\), we obtain an embedding of \(\mathrm{D}(\mathrm{Grass}(\mathscr{E}^{\vee}[1],d-i))\) into \(\mathrm{D}(\mathrm{Grass}(\mathscr{E},d))\) via the fully faithful functor \(\Phi^{(i,\lambda)}=\Phi^{(i,\lambda)}_{(d,d-i)}\). All the components produced in the process \((*)\) can be expressed in this form in a unique way. Therefore, we have obtained the desired semiorthogonal decomposition.
Furthermore, it is clear from Corollary 3.14 and the process \((*)\) that the resulting semiorthogonal decomposition has the semiorthogonal order given by the lexicographic order \(<_{\mathrm{lex}}\) of the sequences \((a_{1},a_{2},\ldots,a_{r-i},0,0,\ldots)\) indexing the components embedded via the functors (3.6), where the empty sequence represents the largest element. In view of (3.7), this is equivalent to the order \(<_{\mathrm{diff}}\) on the pairs \((i,\lambda)\) defined in Notation 3.1.
This concludes the proof of Theorem 3.2 in the universal local situation. By combining it with the argument presented in §3.1, we have completed the proof of Theorem 3.2.
## 4. Applications
In this section, we explore some of the applications of Theorem 3.2 in classical scenarios. We will fix a \(\mathbb{Q}\)-algebra \(\Bbbk\), and consider schemes, morphisms, and classical fiber products within the category of \(\Bbbk\)-schemes. We let the symbol \(\mathrm{D}\) denote \(\mathrm{D}_{\mathrm{qc}}\), \(\mathrm{D}_{\mathrm{coh}}^{-}\), \(\mathrm{D}_{\mathrm{coh}}^{\mathrm{b}}\) or \(\mathrm{D}^{\mathrm{perf}}\).
### Blowups of Determinantal Ideals
We consider a \(\Bbbk\)-scheme \(X\) with \(Z\subseteq X\) a determinantal subscheme of codimension \((r+1)\), where \(r\geq 1\). For simplicity, we define \(Z\) as the zero subscheme of a Fitting ideal \(\operatorname{Fitt}_{r}(\mathscr{H}^{0}(\mathscr{E}))\) (see [13, Tag 0C3C]), where \(\mathscr{E}\) is a perfect complex with Tor-amplitude in \([0,1]\) and rank \(r\) and \(\mathscr{H}^{0}(\mathscr{E})\) is the zeroth sheaf homology of \(\mathscr{E}\).
We consider the projection \(\pi=\operatorname{pr}_{\operatorname{Grass}(\mathscr{E},r)}\colon\operatorname {Grass}_{X}(\mathscr{E};r)\to X\). Assuming that \(\operatorname{Grass}_{X}(\mathscr{E};r)\) is a classical scheme and that \(E:=\pi^{-1}(Z)\subseteq\operatorname{Grass}_{X}(\mathscr{E};r)\) is an effective Cartier divisor, \(\operatorname{Grass}_{X}(\mathscr{E};r)\) is isomorphic to the (classical) blowup \(\pi\colon\operatorname{Bl}_{Z}(X)=\operatorname{Proj}_{X}\bigoplus_{n\geq 0} \mathscr{I}_{Z}^{n}\to X\) of \(X\) along \(Z\), with \(\det(\mathscr{Q}_{\operatorname{Grass}(\mathscr{E},r)})=\mathscr{O}_{ \operatorname{Bl}_{Z}X}(-E)\otimes\det(\mathscr{E})\) (see [16, Lemma 2.24]).
For each \(j\geq 0\), we let \(X_{j}\) (\(=X^{2r+j}(\mathscr{H}^{0}(\mathscr{E}))\) of [16]) be the closed subscheme defined by the Fitting ideal \(\operatorname{Fitt}_{r-1+j}(\mathscr{H}^{0}(\mathscr{E}))\); notice that \(X_{0}=X\) and \(X_{1}=Z\). We write
\[\widetilde{X_{j}}:=\operatorname{Grass}_{X}(\mathscr{E}^{\vee}[1];j)\to X.\]
The underlying classical map of \(\widetilde{X_{j}}\to X\) factorizes through \(X_{j}\), and is an isomorphism over \(X_{j}\setminus X_{j+1}\) (see [13, Tag 05P8] or [16, Corollary 2.8]). Therefore, we can view \(\widetilde{X_{j}}\) as a _(possibly derived) partial desingularization_ of the higher determinantal subscheme \(X_{j}\). For example, in the case where \(X\) is an irreducible Cohen-Macaulay subscheme and \(X_{j}\subseteq X\) have expected codimensions \(j(r+j)\) for all \(j\geq 1\), then \(\widetilde{X_{j}}\) are classical irreducible Cohen-Macaulay schemes and \(\widetilde{X_{j}}\to X_{j}\) are IH-small partial desingularizations (see [16, Theorem 5.2]) for all \(j\geq 1\).
For each \(j\geq 0\), the incidence locus \(\mathfrak{Incid}_{(r,j)}(\mathscr{E})\) is a possibly derived scheme, whose underlying classical scheme is the classical fiber product \(\operatorname{Bl}_{Z}(X)\times_{X}^{\operatorname{cl}}\widetilde{X_{j}}\), and \(\mathscr{E}_{(r,j)}^{\operatorname{univ}}\) is a universal perfect complex of rank \(j\) and Tor-amplitude in \([0,1]\) on \(\mathfrak{Incid}_{(r,j)}(\mathscr{E})\). For each \(j\geq 0\) and \(\lambda\in B_{j,r-j}\), we consider the Fourier-Mukai functors
\[\Omega_{(j,\lambda)}:=r_{+\,*}\left(r_{-}^{*}(\underline{\phantom{-}})\otimes \operatorname{S}^{\lambda}(\mathscr{E}_{(r,j)}^{\operatorname{univ}})\right) \otimes\mathscr{O}_{\operatorname{Bl}_{Z}(X)}(jE)\colon\operatorname{D}( \widetilde{X_{j}})\to\operatorname{D}(\operatorname{Bl}_{Z}(X)),\]
where \(r_{\pm}\) are the natural projection maps in the incidence diagram (2.1),
\[\widetilde{X_{j}}\xleftarrow{r_{-}}\mathfrak{Incid}_{(r,j)}(\mathscr{E}) \xrightarrow{r_{+}}\operatorname{Bl}_{Z}(X).\]
We denote the essential image of \(\Omega_{(j,\lambda)}\) by \(\operatorname{D}(\widetilde{X_{j}})_{(j,\lambda)}\). When \(j=0\), the functor \(\Omega_{(0,(0))}\) is the pullback functor \(\pi^{*}\colon\operatorname{D}(X)\to\operatorname{D}(\operatorname{Bl}_{Z}(X))\), and we denote its essential image by \(\operatorname{D}(X)_{0}\).
As a result of our main Theorem 3.2, we obtain the following corollary:
**Corollary 4.1** (Blowup formula for determinantal ideals).: _In the situation of a determinantal subscheme \(Z\subseteq X\) of codimension \((r+1)\) as described above, where \(r\geq 1\), the functors \(\Omega_{(j,\lambda)}\) are fully faithful for all \(j\geq 0\) and \(\lambda\in B_{j,r-j}\). Moreover, these functors \(\Omega_{(j,\lambda)}\), where \(0\leq j\leq r\) and \(\lambda\in B_{j,r-j}\), induce a semiorthogonal decomposition_
\[\operatorname{D}\left(\operatorname{Bl}_{Z}(X)\right)=\left\langle\Big{\langle} \operatorname{D}(\widetilde{X_{j}})_{(j,\lambda)}\mid 1\leq j\leq r,\,\lambda\in B_{j,r-j}\Big{\rangle}, \,\,\operatorname{D}(X)_{0}\right\rangle,\]
_with semiorthogonal order given as follows: \(\operatorname{Map}\big{(}\operatorname{D}(\widetilde{X_{j}})_{(j,\lambda)}, \operatorname{D}(\widetilde{X_{k}})_{(k,\mu)}\big{)}\simeq 0\) if \((r-k,\mu)<(r-j,\lambda)\), where \(<\) is the total order defined in Notation 3.1._
This result generalizes both Orlov's blowup formula [14] and the formula for blowups of Cohen-Macaulay subschemes of codimension \(2\) ([16, Jia21]). If we base-change the above semiorthogonal decomposition to the Zariski open subset \(X\backslash X_{2}\), we recover Orlov's blowup formula for the local complete intersection (l.c.i.) closed immersion \((Z\backslash X_{2})\subseteq(X\backslash X_{2})\). Therefore, the above formula extends Orlov's to the non-l.c.i. loci of \(Z\subseteq X\). The "corrections" to Orlov's formula in this situation are precisely given by copies of derived categories of the partial resolutions \(\widetilde{X_{j}}\) of the higher determinantal loci \(X_{j}\subseteq Z\) for \(2\leq j\leq r\).
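It may be helpful to count the components of this decomposition: the partitions in \(B_{j,r-j}\) are exactly those fitting inside a box with \(j\) rows and \(r-j\) columns, so \(|B_{j,r-j}|=\binom{r}{j}\), and Corollary 4.1 therefore produces \[\sum_{j=0}^{r}\binom{r}{j}=2^{r}\] components in total. For \(r=1\), that is, for \(Z\subseteq X\) of codimension two, there are only the two components \(\operatorname{D}(\widetilde{X_{1}})_{(1,(0))}\) and \(\operatorname{D}(X)_{0}\), matching the two-term shape of the codimension-\(2\) formula of [16, Jia21] mentioned above.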
**Remark 4.2**.: Even if we do not assume that \(\operatorname{Grass}_{X}(\mathscr{E};r)\) is classical and \(\pi^{-1}(Z)\subseteq\operatorname{Grass}_{X}(\mathscr{E};r)\) is an effective Cartier divisor, the semiorthogonal decomposition described in Corollary 4.1 still applies to \(\operatorname{D}(\operatorname{Grass}_{X}(\mathscr{E};r))\). However, in this situation, \(\operatorname{Grass}_{X}(\mathscr{E};r)\) is no longer isomorphic to the classical blowup \(\operatorname{Bl}_{Z}(X)\). Instead, we should regard \(\operatorname{Grass}_{X}(\mathscr{E};r)\) as a derived version of the blowup of \(X\) along \(Z\). We expect this perspective to be closely related to the concept of a derived blowup of Hekking, Khan and Rydh (see [11, 12]).
### Reducible Schemes
We consider two classes of reducible schemes.
#### 4.2.1.
Let \(X\) be a \(\Bbbk\)-scheme, and let \(Z\subseteq X\) be a regularly immersed closed subscheme of codimension \(r\geq 1\) with normal bundle \(\mathscr{N}_{Z/X}\). For simplicity, we assume that \(Z\) is the zero locus of a regular section \(s\) of a rank \(r\) vector bundle \(\mathscr{V}\) over \(X\). We also consider a line bundle \(\mathscr{L}\) on \(X\), and denote by \(\mathscr{L}_{Z}\) the restriction of \(\mathscr{L}\) to \(Z\). We define a perfect complex \(\mathscr{E}\) of Tor-amplitude in \([0,1]\) and rank \(r\) as follows:
\[\mathscr{E}=\left[\mathscr{O}_{X}\xrightarrow{(s,0)^{T}}\mathscr{V}\oplus \mathscr{L}\right]\quad\text{so that}\quad\mathscr{E}^{\vee}[1]\simeq\left[ \mathscr{V}^{\vee}\oplus\mathscr{L}^{\vee}\xrightarrow{(s^{\vee},0)}\mathscr{ O}_{X}\right].\]
We have the following observations:
1. The derived Grassmannian \(\operatorname{Grass}_{X}(\mathscr{E};r)\) is isomorphic to the classical reducible scheme \[\operatorname{Bl}_{Z}(X)\bigsqcup_{\mathbb{P}_{Z}(\mathscr{N}_{Z/X}^{\vee})} \mathbb{P}_{Z}(\mathscr{N}_{Z/X}^{\vee}\oplus\mathscr{L}_{Z}^{\vee}),\] where \(\mathbb{P}_{Z}(\mathscr{N}_{Z/X}^{\vee})\subseteq\operatorname{Bl}_{Z}(X)\) is the inclusion of the exceptional divisor, and \(\mathbb{P}_{Z}(\mathscr{N}_{Z/X}^{\vee})\subseteq\mathbb{P}_{Z}(\mathscr{N}_{ Z/X}^{\vee}\oplus\mathscr{L}_{Z}^{\vee})\) is the closed immersion induced by \(\mathscr{N}_{Z/X}\subseteq\mathscr{N}_{Z/X}\oplus\mathscr{L}_{Z}\). The scheme structure is described as follows. By working Zariski locally on \(X\), we may assume that \(X=\operatorname{Spec}R\) for some commutative ring \(R\), \(\mathscr{V}=\mathscr{O}_{X}^{r}\) and \(\mathscr{L}=\mathscr{O}_{X}\), and \(s\) is given by a regular sequence \((x_{1},\dots,x_{r})\) of \(R\). Using [10, Proposition 4.19] and the fact that \(s\) is regular, we obtain that the regular closed immersion \[\operatorname{Grass}_{X}(\mathscr{E};r)\hookrightarrow\operatorname{Grass}_{X }(\mathscr{O}_{X}\oplus\mathscr{O}_{X}^{r};r)\simeq\operatorname{Spec}R \times\mathbb{P}^{r}\] is identified with the inclusion of the _classical_ subscheme defined by the equations \[x_{i}X_{j}-x_{j}X_{i}=0\quad\text{for}\quad 1\leq i<j\leq r,\quad\text{and}\quad x _{k}X_{0}=0\quad\text{for}\quad 1\leq k\leq r,\] where \([X_{0}:X_{1}:\dots:X_{r}]\) denotes the homogeneous coordinates of \(\mathbb{P}^{r}\). (In fact, one can work over affine charts of \(\mathbb{P}^{r}\) as follows. For any \(1\leq i\leq r\), let \(U_{i}=\{X_{i}\neq 0\}\simeq\mathbb{A}^{r}\subseteq\mathbb{P}^{r}\), with affine coordinates \((u_{0},\dots,\widehat{u}_{i},\dots,u_{r})\), \(u_{j}=X_{j}/X_{i}\) for \(j\neq i\). In the local chart \(\operatorname{Spec}R\times U_{i}\), \(\operatorname{Grass}_{X}(\mathscr{E};r)\) is defined by the equations \(\{x_{j}=x_{i}u_{j}\mid j\neq 0,i\}\) together with \(u_{0}\cdot x_{i}=0\). The first \((r-1)\) equations \(\{x_{j}=x_{i}u_{j}\}\) are precisely the defining equations for the blowup \(\operatorname{Bl}_{Z}(X)\) in \(X\times\mathbb{A}^{r-1}=\operatorname{Spec}R[u_{1},\dots,\widehat{u}_{i}, \dots,u_{r}]\), which we shall denote as \(\operatorname{Bl}_{Z}(X)_{U_{i}}\). The last equation \(u_{0}\cdot x_{i}=0\) defines a normal crossing divisor in \(\operatorname{Bl}_{Z}(X)_{U_{i}}\times\operatorname{Spec}\Bbbk[u_{0}]\), where the two divisors \(\{u_{0}=0\}\simeq\operatorname{Bl}_{Z}(X)_{U_{i}}\) and \(\{x_{i}=0\}\simeq Z\times U_{i}\) intersect along \(\{u_{0}=x_{i}=0\}\simeq Z\times\{u_{0}=0\}\simeq Z\times\mathbb{A}^{r-1}\).)
2. By virtue of [10, Example 4.35], we have a canonical equivalence \[q_{Z}\colon\mathbb{P}_{X}(\mathscr{E}^{\vee}[1])\simeq\operatorname{Tot}_{Z}( \mathscr{L}_{Z}[-1])\to Z,\] where \(\operatorname{Tot}_{Z}(\mathscr{L}_{Z}[-1])=\operatorname{Spec}\operatorname{ Sym}_{Z}^{*}(\mathscr{L}_{Z}^{\vee}[1])\) denotes the total space of \(\mathscr{L}_{Z}[-1]\).
3. The map \(r_{-}\) exhibits the incidence locus \(\mathfrak{Incid}_{(r,1)}(\mathscr{E})\) as the projective bundle \[r_{-}=q\colon\mathfrak{Incid}_{(r,1)}(\mathscr{E})\simeq \mathbb{P}_{\operatorname{Tot}_{Z}(\mathscr{L}_{Z}[-1])}(p_{Z}^{*}(\mathscr{N}_{ Z/X}^{\vee}\oplus\mathscr{L}_{Z}^{\vee}))\to\operatorname{Tot}_{Z}( \mathscr{L}_{Z}[-1]),\] where \(p_{Z}\colon\operatorname{Tot}_{Z}(\mathscr{L}_{Z}[-1])\to Z\) denotes the natural projection, and the universal perfect complex \(\mathscr{E}_{(r,1)}^{\operatorname{univ}}\) is isomorphic to \(\mathscr{O}_{q}(-1)\) (see Lemma 2.10.(2)). The map \(r_{+}=\iota\) is a closed immersion (see Lemma 2.10.(1)). For \(1\leq j\leq r\), we let \[\Omega_{j}=\iota_{*}\big{(}q^{*}(\underline{\phantom{-}})\otimes\mathscr{O}_{q}(-j) \big{)}\colon\operatorname{D}(\operatorname{Tot}_{Z}(\mathscr{L}_{Z}[-1]))\to \operatorname{D}(\operatorname{Grass}_{X}(\mathscr{E};r)).\]
Therefore, in the above situation, Theorem 3.2 implies that:
**Corollary 4.3**.: _The pullback functors \(\operatorname{pr}_{\operatorname{Grass}(\mathscr{E};r)}^{*}\) and \(\Omega_{j}\) (where \(1\leq j\leq r\)) are fully faithful. Denoting the essential image of \(\operatorname{pr}_{\operatorname{Grass}(\mathscr{E};r)}^{*}\) as \(\operatorname{D}(X)_{0}\) and that of \(\Omega_{j}\) as \(\operatorname{D}(\operatorname{Tot}_{Z}(\mathscr{L}_{Z}[-1]))_{-j}\) (where \(1\leq j\leq r\)), we have a semiorthogonal decomposition:_
\[\operatorname{D}\left(\operatorname{Bl}_{Z}(X)\bigsqcup_{\mathbb{ P}_{Z}(\mathscr{N}_{Z/X}^{\vee})}\mathbb{P}_{Z}(\mathscr{N}_{Z/X}^{\vee} \oplus\mathscr{L}_{Z}^{\vee})\right)\\ =\Big{\langle}\operatorname{D}(\operatorname{Tot}_{Z}(\mathscr{L }_{Z}[-1]))_{-r},\cdots,\operatorname{D}(\operatorname{Tot}_{Z}(\mathscr{L}_{Z }[-1]))_{-1},\ \operatorname{D}(X)_{0}\Big{\rangle}.\]
**Remark 4.4**.: In the special case where \(\mathscr{L}=\mathscr{O}_{X}\) is trivial, \(\operatorname{Tot}_{Z}(\mathscr{L}_{Z}[-1])=Z[\varepsilon_{1}]\), where \(Z[\varepsilon_{1}]\) denotes \(Z\times\operatorname{Spec}(\operatorname{Sym}^{*}(\Bbbk[1]))\), and the above semiorthogonal decomposition reduces to
\[\operatorname{D}\left(\operatorname{Bl}_{Z}(X)\bigsqcup_{\mathbb{P}_{Z}( \mathscr{N}_{Z/X}^{\vee})}\mathbb{P}_{Z}(\mathscr{N}_{Z/X}^{\vee}\oplus \mathscr{O}_{Z})\right)=\Big{\langle}\operatorname{D}(Z[\varepsilon_{1}])_{- r},\cdots,\operatorname{D}(Z[\varepsilon_{1}])_{-1},\ \operatorname{D}(X)_{0}\Big{\rangle}.\]
Here, the scheme appearing on the left-hand side is precisely the central fiber \(\rho^{-1}(0)\) in the deformation-to-normal-cone construction, where \(\rho\) denotes the natural projection \(\operatorname{Bl}_{Z\times\{0\}}(X\times\mathbb{A}^{1})\to\mathbb{A}^{1}\). In this case, the above semiorthogonal decomposition agrees with the derived base-change of Orlov's blowup formula [11] for \(\operatorname{Bl}_{Z\times\{0\}}(X\times\mathbb{A}^{1})\) to the central fiber.
In the special case where \(Z=D\) is an effective divisor and \(\mathscr{L}=\mathscr{N}_{D/X}\), we recover the semiorthogonal decomposition
\[\operatorname{D}\left(X\bigsqcup_{D}\mathbb{P}_{D}^{1}\right)=\big{\langle} \operatorname{D}(\operatorname{Tot}(\mathcal{N}_{D/X}[-1])),\ \operatorname{D}(X)\big{\rangle}\]
of [15, Examples 7.21]. If we furthermore assume that \(X=C\) is a complex curve and \(Z=\{p\}\) is a non-singular closed point, we recover the semiorthogonal decomposition
\[\operatorname{D}\left(C\bigsqcup_{p}\mathbb{P}^{1}\right)=\big{\langle} \operatorname{D}(\operatorname{Spec}\mathbb{C}[\varepsilon_{1}]),\operatorname {D}(C)\big{\rangle}\]
of [15, Examples 7.22] (see also [14, Proposition 6.15]), where \(\mathbb{C}[\varepsilon_{1}]=\operatorname{Sym}_{\mathbb{C}}^{*}(\mathbb{C}[1])\) is the ring of derived dual numbers. Hence the above result is a higher-codimensional generalization of the formula [15, Examples 7.21] for attaching \(\mathbb{P}^{1}\)-bundles to divisors.
### Varieties of Linear Series on Curves
In this subsection, we consider the case where \(\Bbbk=\mathbb{C}\), and study a family of smooth complex projective curves \(\mathscr{C}/S\) of genus \(g\geq 1\). For simplicity, we assume the existence of a section \(\sigma\colon S\to\mathscr{C}\) of \(\mathscr{C}/S\).
We denote the classical (rigidified) relative Picard functor of degree \(d\) by \(\operatorname{Pic}_{\mathscr{C}/S}^{d}\), which assigns to each \(S\)-scheme \(T\) the set of isomorphism classes of pairs \((\mathscr{L}_{T},i)\), where \(\mathscr{L}_{T}\) is a line bundle on \(\mathscr{C}_{T}\) with fiberwise degree \(d\), and \(i\) is an isomorphism \(\sigma^{*}(\mathscr{L}_{T})\xrightarrow{\simeq}\mathscr{O}_{T}\). Under this assumption, the functor \(\operatorname{Pic}_{\mathscr{C}/S}^{d}\) is representable by a locally projective, smooth \(S\)-scheme \(\operatorname{Pic}_{\mathscr{C}/S}^{d}\to S\) of relative dimension \(g\) (see [11, 12]).
Let \(\mathscr{L}_{\operatorname{univ}}\) be the Poincaré line bundle on \(\operatorname{Pic}_{\mathscr{C}/S}^{d}\times_{S}\mathscr{C}\) and \(\operatorname{pr}\colon\operatorname{Pic}_{\mathscr{C}/S}^{d}\times_{S} \mathscr{C}\to\operatorname{Pic}_{\mathscr{C}/S}^{d}\) the natural projection. By applying the argument of [13, §3.1.3], we obtain that \(\mathscr{E}:=(\operatorname{pr}_{*}(\mathscr{L}_{\operatorname{univ}}))^{\vee}\) is a perfect complex on \(\operatorname{Pic}_{\mathscr{C}/S}^{d}\) of Tor-amplitude in \([0,1]\) and rank \((1-g+d)\). For an integer \(r\geq-1\), we define a (possibly derived) scheme
\[\mathbf{G}_{d}^{r}(\mathscr{C}/S):=\operatorname{Grass}_{\operatorname{Pic}_{ \mathscr{C}/S}^{d}}(\mathscr{E};r+1)\to\operatorname{Pic}_{\mathscr{C}/S}^{d}.\]
This derived scheme is proper and quasi-smooth over \(\operatorname{Pic}_{\mathscr{C}/S}^{d}\), and its underlying closed points over a point \(s\in S(\mathbb{C})\) correspond to the \(\mathbb{C}\)-points of the variety \(G_{d}^{r}(\mathscr{C}_{s})\) of linear series \(g_{d}^{r}\) of degree \(d\) and dimension \(r\) on \(\mathscr{C}_{s}\) as studied in [1, Chapter IV]. More specifically, the
closed points of \(\mathbf{G}_{d}^{r}(\mathscr{C}/S)\) over \(s\in S(\mathbb{C})\) are given by the isomorphism classes of pairs \((\mathscr{L}_{s},g_{d}^{r})\), where \(\mathscr{L}_{s}\) is a line bundle on \(\mathscr{C}_{s}\) of degree \(d\), and \(g_{d}^{r}\) is an \(r\)-dimensional linear projective subspace of \(\mathbb{P}^{\mathrm{sub}}(\mathrm{H}^{0}(\mathscr{C}_{s};\mathscr{L}_{s}))\). For any \(0\leq i\leq r+1\), the relative Serre duality implies the isomorphism
\[\mathbf{G}_{2g-2-d}^{r-i}(\mathscr{C}/S)\simeq\operatorname{Grass}_{\operatorname {Pic}_{\mathscr{C}/S}^{d}}(\mathscr{E}^{\vee}[1];r+1-i)\to\operatorname{Pic}_{ \mathscr{C}/S}^{d}\]
whose underlying map carries a pair \((\mathscr{L}_{s},g_{2g-2-d}^{r-i})\) to the line bundle \(\mathscr{L}_{s}^{\vee}\otimes\omega_{\mathscr{C}_{s}}\in\operatorname{Pic}^{2 g-2-d}(\mathscr{C}_{s})\).
In this case, Theorem 3.2 yields the following corollary:
**Corollary 4.5**.: _In the above situation, assuming that \(d\geq g-1\) and \(r\geq-1\), there exists a semiorthogonal decomposition:_
\[\operatorname{D}(\mathbf{G}_{d}^{r}(\mathscr{C}/S))=\left\langle\binom{1-g+d} {i}\text{ copies of }\operatorname{D}(\mathbf{G}_{2g-2-d}^{r-i}(\mathscr{C}/S))\right\rangle_{0 \leq i\leq\min\{1-g+d,r+1\}},\]
_where the Fourier-Mukai functors and semiorthogonal orders are given as in Theorem 3.2._
Now we focus on the case where \(S=\operatorname{Spec}\mathbb{C}\). We denote \(\mathbf{G}_{d}^{r}(C)=\mathbf{G}_{d}^{r}(C/\operatorname{Spec}\mathbb{C})\).
If \(C\) is a _general_ curve, \(\mathbf{G}_{d}^{r}(C)=G_{d}^{r}(C)\) is the classical variety of linear series on \(C\) of degree \(d\) and dimension \(r\) studied in [1]. Similarly, \(\mathbf{G}_{2g-2-d}^{r-i}(C)=G_{2g-2-d}^{r-i}(C)\) for \(0\leq i\leq r+1\). Moreover, these varieties are reduced, smooth, and have expected dimensions ([1, Theorems V.(1.5), V.(1.6)]). They are non-empty precisely when their expected dimensions are non-negative ([1, Theorems V.(1.1), V.(1.5)]). In this case, Corollary 4.5 implies
\[\operatorname{D}(G_{d}^{r}(C))=\left\langle\binom{1-g+d}{i}\text{ copies of }\operatorname{D}(G_{2g-2-d}^{r-i}(C))\right\rangle_{0\leq i\leq\min\{1-g+d,r+1\}}.\]
Additionally, when \(C\) is general, it can be shown, following a similar argument as in [1, Lemmas B.3, B.4], that the incidence schemes \(\mathfrak{Incid}_{(r+1,r+1-i)}(\mathscr{E})\) are isomorphic to the classical fiber products \(G_{d}^{r}(C)\times_{\operatorname{Pic}_{C}^{d}}G_{2g-2-d}^{r-i}(C)\) and have expected dimensions. Furthermore, for a general curve \(C\), Lin and Yu [15] showed that the derived categories \(\operatorname{D}(G_{2g-2-d}^{r-i}(C))\) are indecomposable for all \(0\leq i\leq r+1\).
If \(C\) is special, then Corollary 4.5 still holds and reveals many intriguing phenomena. Here, we only focus on two \(3\)-fold examples:
**Example 4.6**.: If \(C\) is a (non-hyperelliptic) trigonal curve of genus \(5\), then \(W_{5}^{2}(C)\simeq W_{3}^{1}(C)\) consists of a single point. In this case, \(\mathbf{G}_{5}^{1}(C)=G_{5}^{1}(C)\) is a classical irreducible singular threefold, and \(\mathbf{G}_{3}^{0}(C)\simeq C^{(3)}\) is classical and smooth. Corollary 4.5 implies
\[\operatorname{D}(G_{5}^{1}(C))=\big{\langle}\operatorname{D}(\mathbf{G}_{3}^ {1}(C)),\ \operatorname{D}(C^{(3)})\big{\rangle},\]
where \(\mathbf{G}_{3}^{1}(C)\) is a nonclassical derived scheme with virtual dimension \(-1\), and has underlying scheme \(W_{5}^{2}(C)\simeq W_{3}^{1}(C)\), whose support consists of a single point. The birational map \(C^{(3)}\dashrightarrow G_{5}^{1}(C)\) is a threefold flip, and the embedding \(\operatorname{D}(C^{(3)})\hookrightarrow\operatorname{D}(G_{5}^{1}(C))\) is induced by the structure sheaf of the classical reducible scheme \(C^{(3)}\times_{\operatorname{Pic}^{3}(C)}^{\operatorname{cl}}G_{5}^{1}(C)\).
**Example 4.7**.: If \(C\) is a general trigonal curve of genus \(7\), then \(\dim W_{6}^{1}(C)=3\), \(\dim W_{6}^{2}(C)=0\), and they are both nonempty (see [1, Lemma 2.1]). In this case, \(\mathbf{G}_{6}^{1}(C)=G_{6}^{1}(C)\) is a classical, equidimensional scheme of dimension three. Hence Corollary 4.5 implies a derived equivalence \(\operatorname{D}(G_{6}^{1}(C))\xrightarrow{\sim}\operatorname{D}(G_{6}^{1}(C))\) for the threefold flop \(G_{6}^{1}(C)\to W_{6}^{1}(C)\gets G_{6}^{1}(C)\), where the second projection \(G_{6}^{1}(C)\to W_{6}^{1}(C)\subseteq\operatorname{Pic}^{6}(C)\) is given by \((L,g_{6}^{1})\mapsto L^{\vee}\otimes\omega_{C}\). Moreover, the derived equivalence is induced by a nonclassical incidence scheme whose underlying scheme is the classical fiber product \(G_{6}^{1}(C)\times_{\operatorname{Pic}^{6}(C)}^{\operatorname{cl}}G_{6}^{1}(C)\).
The above examples highlight the importance of considering both the derived structures on \(\mathbf{G}_{d}^{r}\)'s as well as derived structures on incidence correspondence schemes when studying semiorthogonal decompositions for varieties of linear series on special curves.
**Remark 4.8**.: The framework presented in this paper allows us to extend Corollary 4.5 to families of singular curves \(\mathscr{C}/S\). For instance, consider a family \(\mathscr{C}/S\) of integral Gorenstein curves with arithmetic genus \(g\geq 1\), and let \(d\geq g-1\) be an integer. In the case of \(r=0\), we obtain a semiorthogonal decomposition:
\[\operatorname{D}(\operatorname{Hilb}^{d}_{\mathscr{C}/S})=\big{\langle} \operatorname{D}(\operatorname{Hilb}^{2g-2-d}_{\mathscr{C}/S}),\;\operatorname {D}(\overline{\operatorname{Jac}}^{d}_{\mathscr{C}/S})_{1},\cdots,\operatorname {D}(\overline{\operatorname{Jac}}^{d}_{\mathscr{C}/S})_{1-g+d}\big{\rangle},\]
where \(\operatorname{Hilb}^{d}_{\mathscr{C}/S}\) and \(\operatorname{Hilb}^{2g-2-d}_{\mathscr{C}/S}\) are derived Hilbert schemes of \(d\) and \((2g-2-d)\) points, respectively, for the family \(\mathscr{C}/S\), and \(\overline{\operatorname{Jac}}^{d}_{\mathscr{C}/S}\) is the compactified Jacobian scheme parametrizing rank one torsion-free sheaves of degree \(d\). Similar generalizations of Corollary 4.5 exist for all \(r\). The details will appear in a forthcoming paper.
|
2301.07878 | Structure and evolution of a tidally heated star | The shearing motion of tidal flows that are excited in non-equilibrium binary
stars transforms kinetic energy into heat via a process referred to as tidal
heating. In this paper we aim to explore the way tidal heating affects the
stellar structure. We used the TIDES code, which solves the equations of motion
of the three-dimensional (3D) grid of volume elements that conform multiple
layers of a rotating binary star to obtain an instantaneous value for the
angular velocity, $\omega''$, as a function of position in the presence of
gravitational, centrifugal, Coriolis, gas pressure, and viscous forces. The
released energy, $\dot{E,}$ was computed using a prescription for turbulent
viscosity that depends on the instantaneous velocity gradients. The $\dot{E}$
values for each radius were injected into a MESA stellar structure calculation.
The method is illustrated for a 1.0+0.8 M$_\odot$ binary system, with an
orbital period of $P$=1.44d and departures from synchronous rotation of 5% and
10%. We find that heated models have a larger radius and surface luminosity, a
smaller surface convection zone, and lower nuclear reaction rates than the
equivalent standard stellar models, and their evolutionary tracks extend to
higher temperatures. The magnitude of these effects depends on the amount of
injected energy, which, for a fixed set of stellar, rotation and orbital
parameters, depends on the perturbed star's density structure and turbulent
viscosity. Tidal heating offers a possible alternative for describing phenomena
such as bloated or overluminous binary components, age discrepancies, and
aspherical mass ejection, as well as the extended main sequence turnoff in
clusters. However, establishing its actual role requires 3D stellar structure
models commensurate with the nonspherically symmetric properties of tidal
perturbations. | Diana Estrella-Trujillo, S. Jane Arthur, Gloria Koenigsberger, Edmundo Moreno | 2023-01-19T04:47:51Z | http://arxiv.org/abs/2301.07878v1 | # Structure and evolution of a tidally heated star
###### Abstract
Context: The shearing motion of tidal flows that are excited in non-equilibrium binary stars transforms kinetic energy into heat via a process referred to as tidal heating.
Aims: We aim to explore the way tidal heating affects the stellar structure.
Methods: We used the TIDES code, which solves the equations of motion of the three-dimensional (3D) grid of volume elements that make up multiple layers of a rotating binary star to obtain an instantaneous value for the angular velocity, \(\omega^{\prime\prime}\), as a function of position in the presence of gravitational, centrifugal, Coriolis, gas pressure, and viscous forces. The released energy, \(\dot{E}\), was computed using a prescription for turbulent viscosity that depends on the instantaneous velocity gradients. The \(\dot{E}\) values for each radius were injected into a MESA stellar structure calculation. The method is illustrated for a 1.0+0.8 M\({}_{\odot}\) binary system, with an orbital period of \(P\)=1.44 d and departures from synchronous rotation of 5% and 10%.
Results: Heated models have a larger radius and surface luminosity, a smaller surface convection zone, and lower nuclear reaction rates than the equivalent standard stellar models, and their evolutionary tracks extend to higher temperatures. The magnitude of these effects depends on the amount of injected energy, which, for a fixed set of stellar, rotation and orbital parameters, depends on the perturbed star's density structure and turbulent viscosity.
Conclusions: Tidal heating offers a possible alternative for describing phenomena such as bloated or overluminous binary components, age discrepancies, and aspherical mass ejection, as well as the extended main sequence turnoff in clusters. However, establishing its actual role requires 3D stellar structure models commensurate with the nonspherically symmetric properties of tidal perturbations.
## 1 Introduction
Binary stars are generally assumed to be in equilibrium, thus allowing them to be modeled as single stars during most of their lifetime. In an equilibrium state, the rotation and orbital rates are equal, the orbit is circular and the orbital and rotation axes are parallel. There are, however, many binary stars that are not in equilibrium, such as those in eccentric orbits as well as those in circular orbits that have either not yet attained equilibrium or have departed from this state due to evolutionary changes. In such cases, the tidal interaction excites shearing flows where kinetic energy is converted into heat.
Kopal (1968) addressed the question of whether the energy dissipation rates, \(\dot{E}\), due to such tidal interactions could affect the internal stellar structure, concluding that if only molecular and radiative viscosities are active in the stellar material, then the energy dissipation rates due to shearing motions are too small to have any effect on the star's structure (see also Kundu 1990). Hence, \(\dot{E}\) is generally neglected in the stellar structure calculations. However, Kopal (1968) also noted that if the so-called "turbulent viscosity," \(\nu_{\rm turb}\), is present, then the internal stellar structure could be affected.
Turbulent viscosity is associated with the idea that turbulence in a fluid dynamical system induces an effective viscosity associated with the size of the eddies (Landau & Lifshitz 1987; Shakura & Sunyaev 1988). Numerical and laboratory experiments have been used to estimate its value in stars (Richard & Zahn 1999; Mathis et al. 2004; Penev et al. 2007; Mathis et al. 2018). In binary stars with external convective layers, \(\nu_{\rm turb}\) is assumed to be related to \(\nu_{\rm cv}\), which is associated with the convective eddies predicted by the mixing-length theory (Spiegel 1971). The scaling between the tidally induced \(\nu_{\rm turb}\) and \(\nu_{\rm cv}\) depends on the largest eddy turnover time and the tidal frequency (Zahn 1966, 1989; Goldreich & Keeley 1977; Goldreich & Nicholson 1977; Duguid et al. 2020). Values of \(\nu_{\rm cv}\) typically lie in the range between \(10^{10}\) - \(10^{13}\) cm\({}^{2}\)s\({}^{-1}\) (Penev et al. 2007). However, determining the actual values of \(\nu_{\rm turb}\), in addition to determining the conditions under which the shear instability is triggered remain challenging issues (see, e.g., Mathis et al. 2004; Garaud & Kulenthirararajh 2016; Mathis et al. 2016, and references therein).
The value of \(\nu_{\rm turb}\) enters into the expression for calculating \(\dot{E}\) and determines the amount of energy released by the motions of the shearing layers. Substantial ongoing efforts are being devoted to determining the impact of \(\nu_{\rm turb}\) and \(\dot{E}\) on the rotation and orbital element evolution (see, e.g., Ogilvie 2014; Terquem
2021; Patel & Penev 2022, and references therein). Much less consideration has been given to evaluating the potential effects of turbulence-induced heating on the stellar structure, although the dissipative heating rate could exceed the luminosity carried by convection and significantly alter the internal dynamics of stars and planets (Currie & Browning 2017). Also, the role of tidal heating is now well accepted as a mechanism responsible for the Jupiter moon Io's volcanism (Segatz et al. 1988) and the subsurface liquid oceans in the icy moons of the Solar System (Greenberg et al. 1998; Pappalardo et al. 1999; Hussmann et al. 2002).
Koenigsberger & Moreno (2016, hereinafter KM2016) explored the possibility that tidally induced energy dissipation could cause a star to become more extended than a single-star counterpart. Focusing on the post-main sequence phases, they found that \(\dot{E}\) could act to rapidly expand the outer layers, resulting in a runaway process with each radius increase leading to a larger value of \(\dot{E}\) and, therefore, an even larger radius increase. However, these results were based on three approximations. The first is that the tidal perturbation amplitudes were computed only for a surface layer, thus neglecting the contribution from deeper stellar regions and relied on a coarse estimate of the stellar density. The second is the use of a constant \(v_{\rm turb}\) value, while in reality it depends strongly on position and is time-variable. The third simplifying approximation is the assumption that the tidal shear energy dissipation leads only to a radius increase, with no other effect on the stellar structure.
In this paper we perform more realistic calculations for which we have implemented an n-layer calculation of the tidal perturbations, including a prescription for the turbulent viscosity allowing it to be computed as a function of the time-varying and location-dependent velocity fields within the perturbed star. In addition, the structure and evolution of the perturbed star are determined by injecting the tidal energy into a standard structure and evolution model.
The question of the degree to which tidal shear energy dissipation may affect the internal structure of asynchronously rotating binary stars and their evolution has implications for several important phenomena, including the onset of mass-transfer and common envelope processes, the subsequent evolutionary path followed by the remnant star and the ejection of circumstellar material. In addition, it could have a bearing on the study of stellar populations if the observable properties of tidally heated stars differ significantly from those of their nonperturbed counterparts.
In Section 2, we describe the tidal shear energy dissipation method and the grid of models. In Section 3, we describe the stellar structure models that were constructed and the effects of injecting tidal shear energy dissipation into such models. In Section 4, we discuss the results and in Section 5, we present our conclusions.
## 2 Tidal shear energy dissipation
### Method
The response of the stellar layers to the external gravitational field of a companion is computed with the n-layer _TIDES_ code1 (Koenigsberger et al. 2021), which is based on the numerical method introduced in Moreno & Koenigsberger (1999) and Toledano et al. (2007), and upgraded in Moreno et al. (2011).
Footnote 1: The _Tidal Interactions with Dissipation of Energy due to Shear_ (TIDES) code in all its versions is available upon request and is easily implemented in any operating system running a Fortran compiler such as GNU Fortran.
The TIDES code can be described as a quasi-hydrodynamic Lagrangian scheme which simultaneously solves the orbital motion of the companion and the equations of motion of a 3D grid of volume elements covering the inner, rigidly rotating "core" of the tidally perturbed primary star. The core is defined as the interior region that is rotating as a solid body and does not necessarily coincide with the nuclear burning region. The equations of motion include the gravitational acceleration of both stars as well as centrifugal, Coriolis, and gas pressure accelerations. The motions of individual elements are coupled to those of neighboring elements and to the core through viscous stresses. The method is fully described in Moreno et al. (2011) and Koenigsberger et al. (2021).
The initial dimensions of the volume elements are (\(\ell_{r}^{0}\), \(\ell_{\varphi}^{0}\), \(\ell_{\theta}^{0}\)), determined by the selected 3D grid size. The mass contained in each volume element remains constant over time and is determined by the polytropic stellar structure that is selected. Once the calculation is initiated, the solution of the equations of motion determines the new location of the center of mass of each volume element, and its distance from neighboring elements determines the new dimensions (\(\ell_{r}\), \(\ell_{\varphi}\), \(\ell_{\theta}\)). These, in turn, are used to compute the new gas pressure within each volume element and the corresponding acceleration term, which is then included in the equations of motion to compute the centers of mass locations in the next integration time step.
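The constant-mass bookkeeping described above can be sketched schematically as follows. This is a minimal illustration with invented names; in particular, the polytropic closure \(P=K\rho^{1+1/n}\) is our assumption for how the new gas pressure could be evaluated from the updated cell geometry, and is not necessarily the relation used inside TIDES.

```python
import numpy as np

def update_density_pressure(mass, l_r, l_phi, l_theta, K, n_poly):
    """Given the fixed element masses and the new cell dimensions derived
    from the updated centers of mass, return the new density and pressure."""
    volume = l_r * l_phi * l_theta               # new volume of each element
    rho = mass / volume                          # element mass is conserved
    pressure = K * rho ** (1.0 + 1.0 / n_poly)   # assumed polytropic closure
    return rho, pressure
```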
The typical depth of the volume elements is \(\Delta R/R_{1}<\)0.1, where \(R_{1}\) is the primary star radius. The equations are solved in the frame of reference with origin in the center of the primary star and that rotates with the binary orbit, using a seventh order Runge-Kutta integrator. The secondary is considered to be a point-mass source and its orbital plane is coplanar with the primary star's equator.
The TIDES calculation captures the effects due to the oscillatory nature of the tidal flows, in addition to those that are due to a differential rotation structure. However, it neglects buoyancy effects and heat and radiation transfer, as well as detailed microphysical processes involving the diffusion and advection of chemical elements which, in most cases, occur on different timescales than the tidal forcing. Hence, the TIDES calculation provides the 3D internal rotation and energy dissipation structures that result from the particular instantaneous dynamical conditions in the system, but the impact of the above processes on the subsequent dynamical evolution cannot be assessed.
The time-marching algorithm is applicable for binary stars with arbitrary rotation velocity and eccentricity, as long as neighboring grid elements retain contact over at least \(\sim\)80% of their surface and the centers of mass of two adjoining grid elements do not overlap. Perturbations that depart from these conditions halt the computation. For these very strong perturbations, a full SPH calculation with a different scheme is required, which goes beyond the scope of our current investigation.
The rate of energy dissipation per unit volume is as given in Moreno et al. (2011):
\[\dot{E}_{\nu}\simeq\nu\rho\left\{\frac{4}{3}\left(\frac{\partial\omega^{ \prime}}{\partial\varphi^{\prime}}\right)^{2}+\left[r^{\prime 2}\left(\frac{\partial\omega^{ \prime}}{\partial r^{\prime}}\right)^{2}+\left(\frac{\partial\omega^{\prime} }{\partial\theta^{\prime}}\right)^{2}\right]\sin^{2}\!\theta^{\prime}\right\}, \tag{1}\]
with \(\nu\) as the kinematical viscosity, \(\rho\) as the mass density, \(\omega^{\prime}\) as the angular velocity, and \(r^{\prime}\), \(\theta^{\prime}\), \(\varphi^{\prime}\) are, respectively, the radius, latitude, and longitude coordinates. The primes indicate that the variables are measured in the noninertial frame of reference that
rotates with the companion star's orbital velocity and with its origin at the center of the perturbed star.
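For concreteness, a schematic finite-difference evaluation of Eq. 1 on a spherical \((r^{\prime},\theta^{\prime},\varphi^{\prime})\) grid could look as follows; this is a minimal sketch with an assumed array layout and naming, not the TIDES implementation.

```python
import numpy as np

def energy_dissipation_rate(omega, r, theta, phi, nu, rho):
    """Schematic evaluation of Eq. (1): energy dissipated per unit volume
    by the shear of the angular velocity field omega(r, theta, phi).

    omega, nu, rho : arrays of shape (Nr, Ntheta, Nphi), cgs units
    r, theta, phi  : 1D coordinate arrays along each axis
    """
    dw_dr = np.gradient(omega, r, axis=0)
    dw_dth = np.gradient(omega, theta, axis=1)
    dw_dphi = np.gradient(omega, phi, axis=2)
    r2 = r[:, None, None] ** 2
    sin2 = np.sin(theta)[None, :, None] ** 2
    return nu * rho * ((4.0 / 3.0) * dw_dphi ** 2
                       + (r2 * dw_dr ** 2 + dw_dth ** 2) * sin2)
```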
For the new version of TIDES used in this paper, we implemented a self-consistent calculation of the turbulent viscosity instead of providing it as a fixed input parameter, as in previous versions. We chose the simplest formulation (Landau & Lifshitz 1987):
\[\nu_{\rm turb}=\lambda\ell_{\rm i}\Delta u_{\rm t}, \tag{2}\]
where \(\ell_{\rm i}\) is the characteristic length of the largest eddies that are associated with the turbulence, \(\Delta u_{\rm t}\) is the typical average velocity variation of the flow over the length, \(\ell_{\rm i}\), and \(\lambda\) is the proportionality parameter that is analogous to \(\alpha\) introduced by Shakura & Sunyaev (1988) in the \(\alpha\)-disk model.
The largest amplitude perturbation caused by the tidal interaction is in the azimuthal direction (Scharlemann 1981; Harrington et al. 2009). Our n-layer numerical simulation considers only the azimuthal motions and their radial gradients. Thus, we set \(\ell_{\rm i}\)=\(\ell_{r}\) and took \(\Delta u_{\rm t}\) as the typical average velocity variation between a given volume element and the medium surrounding it within this distance, \(\ell_{r}\). This corresponds to velocity variations between any individual volume element and the elements above and below it.
In the numerical scheme, the value of \(\Delta u_{\rm t}\) is obtained as follows. Let \(r\) be the radius of a particular element, which we call \(e_{\rm i}\), and \(v_{\rm i}^{\prime}\)=\(\omega_{\rm i}^{\prime}r\sin\theta\) its linear velocity. Here, \(\omega_{\rm i}^{\prime}\) is the angular velocity in the frame of reference, \(S^{\prime}\), that rotates with the binary orbit, and \(\theta\) is the polar angle. The distance from the center of \(e_{\rm i}\) to its top and bottom surfaces is, respectively, \(r+\ell_{r}/2\) and \(r-\ell_{r}/2\). The algorithm averages the angular velocity of the volume elements that lie above and below \(e_{\rm i}\) and then multiplies this average by \((r+\ell_{r}/2)\sin\theta\) (above) and \((r-\ell_{r}/2)\sin\theta\) (below). This gives the typical velocities above and below \(e_{\rm i}\), which we call \(v_{\rm top}\) and \(v_{\rm bottom}\), respectively. Hence, the average velocity of the elements surrounding \(e_{\rm i}\) is \(v_{\rm ave}\)= (\(v_{\rm top}+v_{\rm bottom}\))/2 and the typical average velocity variation of the flow over the length scale \(\ell_{r}\) is \(\Delta u_{\rm t}\)=\(v_{\rm i}^{\prime}-v_{\rm ave}\).
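The procedure just described can be condensed into a short routine. The sketch below follows the averaging steps literally, uses illustrative array names, and is not the TIDES source code; the treatment of the innermost and outermost layers, which have only one radial neighbor, is deliberately left out.

```python
import numpy as np

def turbulent_viscosity(omega, r, theta, l_r, lam, nu_molec):
    """Eq. (2): nu_turb = lam * l_r * |Delta u_t|, evaluated per element.

    omega : angular velocities, shape (Nlayer, Ntheta, Nphi)
    r     : layer-center radii, shape (Nlayer,); l_r : layer thickness
    """
    sin_t = np.sin(theta)[None, :, None]
    v = omega * r[:, None, None] * sin_t          # v'_i = omega' r sin(theta)
    # average angular velocity of the elements directly above and below
    w_bar = 0.5 * (np.roll(omega, -1, axis=0) + np.roll(omega, +1, axis=0))
    v_top = w_bar * (r[:, None, None] + l_r / 2.0) * sin_t
    v_bot = w_bar * (r[:, None, None] - l_r / 2.0) * sin_t
    v_ave = 0.5 * (v_top + v_bot)                 # mean surrounding velocity
    du_t = np.abs(v - v_ave)                      # Delta u_t
    # total viscosity, Sect. 2.1: nu = nu_turb + nu_mode; note that np.roll
    # wraps around, so the first and last layers need separate handling
    return lam * l_r * du_t + nu_molec
```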
The values of \(\ell_{r}\) and \(\Delta u_{\rm t}\) obtained as described above for each volume element in the grid are inserted in Eq. 2. The total viscosity is computed as \(\nu\)=\(\nu_{\rm turb}+\nu_{\rm mode}\), where \(\nu_{\rm mode}\) is the molecular viscosity, which is given as input. This value of \(\nu\) at each volume element is now used by the TIDES algorithm to solve the equations of motion in the next time step of the calculation.
### TIDES input parameters and model runs
The modeled binary star is called the "primary." Its mass is \(m_{1}\) and it has an initial, unperturbed radius, \(R_{1}\). Its initial rotation condition is one of rigid rotation with its rotation velocity given in terms of the synchronicity parameter, \(\beta_{0}\)=\(\omega_{0}\)/\(\Omega_{0}\), where \(\omega_{0}\) is the angular velocity of the inner core (assumed throughout to rotate as a rigid body) and \(\Omega_{0}\) is the orbital angular velocity at a reference point in the orbit, for example, periastron in the case of nonzero eccentricity. The TIDES input parameters are described in Table 1, where we also list the values that were kept constant. The stellar masses and orbital period were chosen to be the same as in KM2016.
In this paper, we mainly consider rotation rates that are close to corotation. Specifically, we analyzed cases with \(\beta_{0}\)=0.95, 1.05 and 1.10. We also probed the effects for stellar radii in the range \(R_{1}\)=0.97-2.25 R\({}_{\odot}\), which correspond to those of a 1 M\({}_{\odot}\) star from the time it reaches the main sequence (MS) until shortly after the terminal age main sequence (TAMS). These radii are smaller than the Roche lobe radius of the primary star (\(R_{\rm RL}\) =2.5 R\({}_{\odot}\)). The radii that were probed and the corresponding surface equatorial velocities \(V_{\rm rot}\) in the unperturbed star are listed in columns 2 and 6, respectively, of Tables 2 and 3.
The tidal shear energy dissipation rates depend on the tidal velocity gradient, the viscosity, and the density, with the latter depending on the assumed polytropic structure. The outer convective layers of low-mass stars are generally represented with an \(n\)=1.5 polytrope. However, the inner layers correspond to polytropes with larger \(n\) values, depending on the evolutionary stage. For the early main sequence models, we chose \(n\)=3, which provides a closer match to the values of the density in the outer layers. For later evolutionary states, we considered polytropic indices that provide the best approximation to the density structure of the MESA model in the outer layers.
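To make the role of the polytropic structure concrete, the normalized density profile, \(\rho/\rho_{c}=\theta^{n}\), for a given index \(n\) follows from the Lane-Emden equation, \(\theta^{\prime\prime}+(2/\xi)\theta^{\prime}+\theta^{n}=0\). The minimal RK4 integrator below is our own illustration, independent of TIDES or MESA; for \(n\)=1.5 and \(n\)=3 the first zero of \(\theta\) lies near \(\xi_{1}\simeq\)3.654 and 6.897, respectively.

```python
import numpy as np

def lane_emden(n, dxi=1e-3):
    """Integrate theta'' + (2/xi)*theta' + theta**n = 0 with RK4, starting
    from a series expansion near xi = 0, up to the first zero of theta.
    Returns (xi, theta); the density profile is rho = rho_c * theta**n."""
    def rhs(xi, y):
        th, dth = y
        return np.array([dth, -max(th, 0.0) ** n - 2.0 * dth / xi])

    xi = dxi
    y = np.array([1.0 - xi ** 2 / 6.0, -xi / 3.0])   # series start
    xis, thetas = [xi], [y[0]]
    while y[0] > 0.0:
        k1 = rhs(xi, y)
        k2 = rhs(xi + dxi / 2.0, y + dxi * k1 / 2.0)
        k3 = rhs(xi + dxi / 2.0, y + dxi * k2 / 2.0)
        k4 = rhs(xi + dxi, y + dxi * k3)
        y = y + dxi * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0
        xi += dxi
        xis.append(xi)
        thetas.append(y[0])
    return np.array(xis), np.array(thetas)
```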
The numerical experiments to determine the tidal shear energy dissipation rates are listed in Table 2 and are divided into three blocks. The aim of each block is to examine the dependence on particular input parameters.
In the first block, we probe the dependence on the stellar radius and \(\lambda\) (the turbulent viscosity coefficient; see Eq. 2) while holding \(\beta_{0}\)=0.95. Calculations were performed for a range in stellar radii \(R_{1}\)/R\({}_{\odot}\)=[0.99, 2.25]. For each radius, three viscosity representations were tested: a constant value as described in Koenigsberger et al. (2021), with the same values that were used in KM2016 and listed here in column 4 of Table 2; values computed with \(\lambda\)=0.1; and values computed with \(\lambda\)=1. The value of \(\lambda\) is listed in column 5 of Table 2. The second block of numerical experiments listed in Table 2 was performed with \(\beta_{0}\)=1.05. This means that the primary star's rotation is slightly super-synchronous. Here, we fixed \(\lambda\)=1 and tested different polytropic indices as well as different stellar radii within the range tested in block 1.
The third block of numerical computations was performed with a slightly greater departure from synchronicity than those in block 2, namely \(\beta_{0}\)=1.10. All other input parameters were kept the same. Table 3 shows results of numerical computations with ten layers, instead of the five that are used in the models listed in Table 2. These models probe the behavior of layers that lie closer to the nuclear burning core. Since we retained the same layer thickness for all computations, adding more layers allows us to probe regions that lie deeper in the star. For example, the
\begin{table}
\begin{tabular}{l l l} \hline \hline Param & Value & Description \\ \hline \(P_{\rm orb}\) & 1.44 & Orbital period (d) \\ \(e\) & 0 & Orbital eccentricity \\ \(m_{1}\) & 1.0 & Primary mass (perturbed star) (M\({}_{\odot}\)) \\ \(m_{2}\) & 0.8 & Secondary mass (point source) (M\({}_{\odot}\)) \\ \(R_{1}\) & Tab.2 & Primary unperturbed radius (R\({}_{\odot}\)) \\ \(\omega_{0}\) &..... & Rotation angular velocity of rigid core \\ \(\Omega_{0}\) &..... & Orbital angular velocity in a circular orbit \\ \(\beta_{0}\) & Tab.2 & Synchronicity parameter \(\beta_{0}\)=\(\omega_{0}\)/\(\Omega_{0}\) \\ \(\nu_{\rm mode}\) & \(10^{-16}\) & Molecular viscosity (R\({}_{\odot}^{2}\) d\({}^{-1}\)) \\ \(\lambda\) & Tab.2 & Turbulent viscosity coefficient \\ \(n\) & Tab.2 & Polytropic index \\ \(\Delta R/R_{1}\) & 0.06 & Layer thickness \\ \(N_{r}\) & Tab.2, 3 & Number of layers \\ \(N_{\varphi}\) & 200 & Number of partitions in longitude \\ \(N_{\theta}\) & 20 & Number of partitions in latitude \\ \(Tol\) & \(10^{-7}\) & Tolerance for the Runge-Kutta integration \\ \hline \hline \end{tabular}
\end{table}
Table 1: Description of TIDES input parameters.
models in Table 2 include layers down to 0.7 \(R_{1}\), where \(R_{1}\) is the unperturbed equilibrium radius of the star (five layers of thickness 0.06\(R_{1}\) span \(1-5\times 0.06=0.70\,R_{1}\)), while the corresponding model in Table 3 reaches down to 0.4 \(R_{1}\) (\(1-10\times 0.06=0.40\,R_{1}\)). We have also performed experiments with 20 layers, arriving at nearly the same results as with ten layers because the perturbed velocities decline very rapidly at smaller radii. Increasing the grid size beyond ten layers significantly increases the processing time and is deemed unnecessary for our current purposes. For problems where the main concern is modeling the layers near the surface, we find that a reduced number of layers yields results that are comparable to a larger radial grid computation. This is illustrated in Table B.1, where we list energy dissipation rates in each layer obtained from a five-layer and a ten-layer computation.
The TIDES computation provides the angular velocity, viscosity, and energy dissipation rates as a function of azimuth angle \(\varphi\) for each radius, \(r\), and colatitude, \(\theta\), of the computational grid and as a function of time, \(t\). Thus, the energy dissipation rates are represented as \(\dot{E}_{r,\theta,\varphi,t}\). The temporal dependence cannot be neglected because asynchronous binaries undergo orbital-phase dependent variability. Therefore, for each model run, we
\begin{table}
\begin{tabular}{c c c c c c c c c c} \hline \hline Model & \(R_{1}\) & \(n\) & \(\nu_{\rm const}\) & \(\lambda\) & \(V_{\rm rot}\) & \(\nu_{\rm max}\) & \(\dot{E}_{k=1}\) & \(\dot{E}_{k=4}\) & \(\dot{E}_{\rm tot}\) \\ & (R\({}_{\odot}\)) &.... & (R\({}_{\odot}^{2}/d\)) &.... & (km/s) & (R\({}_{\odot}^{2}/d\)) & (ergs/s) & (ergs/s) & (ergs/s) \\ \hline \multicolumn{10}{c}{Block 1: \(\beta_{0}\)=0.95} \\ \hline
2 & 0.99 & 3.0 & 9.1\(\times\)10\({}^{-4}\) &.... & 32.6 &.... & 2.0\(\times\)10\({}^{30}\) & 4.1\(\times\)10\({}^{30}\) & 1.0\(\times\)10\({}^{31}\) \\ \hline \hline
\end{tabular}
\end{table}
Table 2: Input parameters and resulting tidal shear energy dissipation rates for the TIDES numerical experiments (only the row for model 2 of Block 1 is recoverable; the remaining rows are missing from the source).
chose to output the data at 40 orbital phases within each of five orbital cycles. An inspection of the five cycles allows for an assessment of long-term variability patterns and also whether the transitory phase of the numerical integration has passed. The latter has usually occurred within \(\sim\)200 orbital cycles. As the nature of the phase-dependent variability is not the subject of this paper, \(\dot{E}_{r,\theta,\varphi,t}\) was averaged over the 40 orbital phases within an orbital cycle in which the calculation has reached the stationary state. This yields \(\dot{E}_{r,\theta,\varphi}\). The radial profile \(\dot{E}_{r}\) is obtained by integrating over \(\theta\) and \(\varphi\).
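Schematically, the reduction from \(\dot{E}_{r,\theta,\varphi,t}\) to \(\dot{E}_{r}\) is a phase average followed by an integration over the angular coordinates with spherical volume weights. The short sketch below assumes uniform angular spacing and illustrative names; it is not the TIDES post-processing code.

```python
import numpy as np

def radial_profile(edot, r, theta, dtheta, dphi, dr):
    """Phase-average edot (erg s^-1 cm^-3) and integrate over theta, phi.

    edot : shape (Nphase, Nr, Ntheta, Nphi); returns Edot_r in erg/s."""
    edot_mean = edot.mean(axis=0)              # average over the 40 phases
    dV = (r[:, None, None] ** 2 * dr
          * np.sin(theta)[None, :, None] * dtheta * dphi)
    return (edot_mean * dV).sum(axis=(1, 2))   # erg/s in each layer
```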
For a general characterization of the models in Tables 2 and 3, we opted to list the energy dissipation rates in the deepest layer of the calculation and also in the layer that neighbors the surface. The deepest layer is represented by \(\dot{E}_{k=1}\), to indicate that it refers to the first layer, which in Table 2 corresponds to \(r\sim\)0.7 \(R_{1}\) and in Table 3, it corresponds to \(r\sim\)0.4 \(R_{1}\). The layer that is contiguous to the surface layer is represented by \(\dot{E}_{k=4}\) in Table 2 and \(\dot{E}_{k=9}\) in Table 3, both of which correspond to \(r\sim\)0.9 \(R_{1}\). We note that the \(k\) number index denotes the layer, with \(k\)=1 corresponding always to the layer that lies closest to the core.
### Results
An example of the results is illustrated in Fig. 1, where we plot the data obtained in model 63. The top panel illustrates the angular velocity, \(\omega^{\prime\prime}\), in the equatorial latitude (\(\theta\)=90\({}^{\circ}\)), measured in the rotating frame of the primary star2 as a function of the azimuth angle. The sinusoidal shape corresponds to the equilibrium tide. The peak-to-peak amplitude of each curve decreases with decreasing distance from the stellar center, as expected from the tidal force. These characteristics are shared by all the models that were run for this paper, as shown in Appendix A. Another general feature is that the overall shape and the peak-to-peak amplitude do not depend significantly on the viscosity for the range of viscosity values that were explored, a result that was already found in the one-layer calculations that were performed by KM2016.
Footnote 2: The double-prime notation indicates that it is measured with respect to the reference frame \(S^{\prime\prime}\), which rotates at the same constant rate as the primary star core, as opposed to the \(S^{\prime}\) reference frame that rotates with the orbital motion of the companion, which for eccentric orbits is not constant (see Koenigsberger et al., 2021).
The corresponding behavior of the viscosity in model 63 is illustrated in the middle panel of Fig. 1. Its value in the layers closest to the surface is a few orders of magnitude larger than near the core. Thus, the inner layers are significantly less coupled to the outer layers compared to the case in which a constant viscosity is used (models 2, 7, 11, and 23). As a result, the outer layers approach synchronous rotation more rapidly than the inner layers. This results in a radial gradient in the average velocity of each layer, a result that is consistent with the conclusion of Goldreich & Nicholson (1989), asserting that a star synchronizes from the surface inward. The differential rotation structure that is obtained with a variable viscosity is in contrast with the uniform average rotation structure in models 2, 7, 11, and 23, where the viscosity is kept constant, illustrating the important role that is played by this parameter.
We list in column 7 of Table 2 the maximum viscosity, \(\nu_{\rm max}\). In the calculation, it always appears near the surface and around the equatorial latitude. For stars with radii between 0.99 R\({}_{\odot}\) and 2.25 R\({}_{\odot}\), we find \(\nu_{\rm max}\)=4\(\times\)10\({}^{13}\) cm\({}^{2}\) s\({}^{-1}\) - 2\(\times\)10\({}^{15}\) cm\({}^{2}\) s\({}^{-1}\) (\(\lambda\)=1).
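For reference, the conversion between the code units of Table 2 and the cgs values quoted here is \[1\,{\rm R}_{\odot}^{2}\,{\rm d}^{-1}=\frac{(6.957\times 10^{10}\,{\rm cm})^{2}}{86\,400\,{\rm s}}\simeq 5.6\times 10^{16}\,{\rm cm}^{2}\,{\rm s}^{-1},\] so, for example, the constant viscosity \(\nu_{\rm const}\)=9.1\(\times\)10\({}^{-4}\) R\({}_{\odot}^{2}\)/d of model 2 corresponds to \(\simeq\)5\(\times\)10\({}^{13}\) cm\({}^{2}\) s\({}^{-1}\), consistent with the range quoted above.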
The bottom panel of Fig. 1 shows \(\dot{E}_{r,\varphi}\), the energy dissipation rate for each radius as a function of azimuth. Its behavior mimics the \(\varphi\)-dependence of the viscosity, as might be expected given that the density remains relatively constant within each layer and only the velocity gradients change as a function of azimuth angle.
We give the energy dissipation rates in the deepest layer, \(\dot{E}_{k=1}\), and the layer immediately below the surface layer, \(\dot{E}_{k=4}\), in columns 8 and 9, respectively, of Table 2. The last column of this table lists \(\dot{E}_{\rm tot}\), the total energy dissipation rate obtained by integrating \(\dot{E}_{r}\) over all layers. The relatively small difference between \(\dot{E}_{\rm tot}\) and \(\dot{E}_{k=4}\) is due to the fact that the layers near the surface are responsible for the largest share of the dissipated energy. In fact, the greatest contribution to the total energy dissipation rate for models with a polytropic index \(>\)1.5 arises in the layer that is adjacent to the surface (in this case, \(\dot{E}_{k=4}\)), not in the surface layer. A priori, it would appear curious that the energy dissipation rate in the surface layer is not the greatest since it is the one subject to the strongest tidal perturbations. There are two explanations for this apparent contradiction. The first resides in the fact that only the inner boundary of the surface layer interacts with a neighboring layer, whereas all other layers interact with both a layer above and one below. The second factor is based on the interplay between the three terms that enter into the energy dissipation calculation (see Eq. 1). When the polytropic index \(>\)1.5, the
\begin{table}
\begin{tabular}{c c c c c c c c} \hline \hline & \multicolumn{2}{c}{Model 62} & Model 63 & \multicolumn{2}{c}{Model 70} & \multicolumn{2}{c}{Model 68} \\ & \multicolumn{2}{c}{\(R_{1}\)=0.97, \(n\)=1.5} & \(R_{1}\)=0.97, \(n\)=2.2 & \multicolumn{2}{c}{\(R_{1}\)=1.648, \(n\)=3.8} & \multicolumn{2}{c}{\(R_{1}\)=1.294, \(n\)=1.5} \\ \hline \(k\) & r (R\({}_{\odot}\)) & \(\dot{E}_{r}\) (ergs/s) & \(\dot{E}_{r}\) (ergs/s) & r (R\({}_{\odot}\)) & \(\dot{E}_{r}\) (ergs/s) & r (R\({}_{\odot}\)) & \(\dot{E}_{r}\) (ergs/s) \\ \hline
1 & 0.39 & 1.8\(\times\)10\({}^{27}\) & 2.2\(\times\)10\({}^{27}\) & 0.66 & 6.0\(\times\)10\({}^{27}\) & 0.52 & 6.1\(\times\)10\({}^{30}\) \\
2 & 0.45 & 3.4\(\times\)10\({}^{27}\) & 3.8\(\times\)10\({}^{27}\) & 0.76 & 1.8\(\times\)10\({}^{28}\) & 0.60 & 3.5\(\times\)10\({}^{31}\) \\
3 & 0.50 & 1.1\(\times\)10\({}^{28}\) & 1.1\(\times\)10\({}^{28}\) & 0.86 & 7.3\(\times\)10\({}^{28}\) & 0.67 & 1.2\(\times\)10\({}^{32}\) \\
4 & 0.56 & 5.2\(\times\)10\({}^{28}\) & 4.7\(\times\)10\({}^{28}\) & 0.96 & 2.5\(\times\)10\({}^{29}\) & 0.75 & 3.1\(\times\)10\({}^{32}\) \\
5 & 0.62 & 2.5\(\times\)10\({}^{29}\) & 2.3\(\times\)10\({}^{29}\) & 1.05 & 5.2\(\times\)10\({}^{29}\) & 0.83 & 6.0\(\times\)10\({}^{32}\) \\
6 & 0.68 & 7.9\(\times\)10\({}^{29}\) & 7.3\(\times\)10\({}^{29}\) & 1.15 & 7.6\(\times\)10\({}^{29}\) & 0.90 & 9.6\(\times\)10\({}^{32}\) \\
7 & 0.74 & 1.4\(\times\)10\({}^{30}\) & 1.4\(\times\)10\({}^{30}\) & 1.25 & 8.9\(\times\)10\({}^{29}\) & 0.98 & 1.2\(\times\)10\({}^{33}\) \\
8 & 0.80 & 2.0\(\times\)10\({}^{30}\) & 1.9\(\times\)10\({}^{30}\) & 1.35 & 1.6\(\times\)10\({}^{30}\) & 1.06 & 2.1\(\times\)10\({}^{33}\) \\
9 & 0.85 & 9.2\(\times\)10\({}^{30}\) & 4.4\(\times\)10\({}^{30}\) & 1.45 & 5.9\(\times\)10\({}^{30}\) & 1.14 & 1.2\(\times\)10\({}^{34}\) \\
10 & 0.91 & 1.2\(\times\)10\({}^{31}\) & 1.5\(\times\)10\({}^{30}\) & 1.55 & 1.5\(\times\)10\({}^{30}\) & 1.22 & 1.2\(\times\)10\({}^{34}\) \\ \hline \hline \end{tabular} 1
\end{table}
Table 4: Energy dissipation rate radial profiles from the TIDES ten-layer models.
density decrease near the surface is more pronounced than the increase in viscosity and in the velocity gradients. This effect is illustrated by comparing models 65-67, where \(\dot{E}_{\rm tot}\) decreases by over two orders of magnitude due to the decreasing surface density that results from changing the polytropic index from 1.5 to 3.8.
The depth dependence of \(\dot{E}_{r}\) for different density structures is most clearly illustrated with models that are computed with layers that are deeper, such as those listed in Table 3. These are ten-layer runs for which \(\dot{E}_{k=1}\) corresponds to a layer that lies at \(\sim\)0.4 R\({}_{\odot}\). A layer-by-layer comparison down to this depth clearly illustrates the density-dependence of \(\dot{E}_{r}\), as shown in Table 4, which compares models 62 and 63.
The energy dissipation rate is very sensitive to the value of the synchronicity parameter, \(\beta_{0}\). For example, models 66 and 73 have identical input parameters except for \(\beta_{0}\): the latter model is slightly more supersynchronous than the former (\(\beta_{0}\)=1.10 versus \(\beta_{0}\)=1.05). The maximum viscosity value of model 73 is approximately twice as large as that of model 66, and its \(\dot{E}_{\rm tot}\) is one order of magnitude larger. Model 71 illustrates the case in which the inner core rotates significantly more slowly than in the other cases studied (\(\beta_{0}\)=0.20), but because of the large departure from synchronicity, it has one of the largest values of \(\dot{E}\) in all layers. Thus, increasing the departure from synchronicity while holding other parameters constant leads to increasing energy dissipation rates.
Finally, the near-zero values displayed in the viscosity plots merit a comment. The value of \(\nu_{\rm turb}\) is calculated using the instantaneous velocity gradients. There are longitudes in our calculations at which these gradients vanish and thus the minimum \(\nu_{\rm turb}\rightarrow\)0 (see Fig. 1). However, in reality, the transfer of the kinetic energy of the flow into eddies that then act as the viscosity source is not instantaneous, nor are the eddies expected to disappear instantaneously when the velocity gradient vanishes. Thus, there may be a minimum viscosity that is larger than the molecular viscosity even when the velocity gradients momentarily vanish. Furthermore, in convective envelopes, a base-level viscosity associated with the convective eddies is expected to always be present.
Table 6: Ages and events in MESA models.
Figure 1: Angular velocity, viscosity and dissipated energy as a function of azimuth angle in the ten layers of model 63. The azimuth angle is measured in the direction of the companion’s orbital motion and \(\varphi\)=0 corresponds to the sub-binary longitude. _Top:_ Angular velocity \(\omega^{\prime\prime}\) at the equator in the perturbed star rest frame. The surface layers (largest amplitude curves) are more strongly coupled to the tidal field than the inner layers and thus are forced to lag behind the supersynchronously rotating core. _Middle:_ Turbulent viscosity values computed by the model. _Bottom:_ Tidal shear energy dissipation rate.
\begin{table}
\begin{tabular}{l l l l} \hline \hline Name & Added heat & \(\dot{E}\) profile & Note \\ \hline h0 & no & none &... \\ Set2 h1 & yes & Model 63 & constant \\ Set2 h10 & yes & Model 63 \(\times\)10 & constant \\ Set3 h1 & yes & Model 63 & gradual \\ Set3 h10 & yes & Model 63 \(\times\)10 & gradual \\ \hline \hline \end{tabular}
\end{table}
Table 5: Heat injection in MESA models.
## 3 Stellar structure models
The stellar structure models were computed with version 15140 of the MESA open-source 1D stellar evolution code\({}^{3}\) (Paxton et al. 2011, 2013, 2015, 2018, 2019). We ran models for a 1 M\({}_{\odot}\) star with solar abundances (\(Z=0.02\)) and an initial surface rotation velocity of 40 km s\({}^{-1}\) on the zero age main sequence (ZAMS). All the models were run until the stellar radius reached 2.25 R\({}_{\odot}\), which occurs shortly after the central H fraction falls below 10\({}^{-4}\). By this point, the surface rotation velocity has fallen to about 10 km s\({}^{-1}\).
Footnote 3: [https://docs.mesastar.org/en/r15140/](https://docs.mesastar.org/en/r15140/)
We include the effect of energy dissipation on the stellar envelope using the other_energy hook provided in the MESA code. The extra heat we inject into the envelope is taken from the energy dissipation rate as a function of radius, \(\dot{E}_{r}\), of a TIDES binary model. In particular, we chose the energy dissipation profile of model 63, which is listed in col. 4 of Table 4. The parameters of this model are given in Table 3. The TIDES code is not an evolution code and the radius of a given model is fixed, whereas the MESA simulations show that the radius of the star changes during the main sequence. We account for this in our models by normalizing the radius of the TIDES energy dissipation profile and applying the profile to the normalized stellar radii of the MESA models. In addition, MESA has many more interior radial points than the TIDES model, and we find the energy dissipation rate at each of these by linear interpolation between the nearest TIDES model points.
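To make the remapping concrete, the following minimal numpy sketch normalizes the radial grids and interpolates the model 63 profile of Table 4 onto an illustrative MESA grid. The function and variable names are ours; the actual implementation lives in MESA's Fortran `other_energy` hook and must additionally convert the per-layer rate into a specific heating rate, which is omitted here.

```python
import numpy as np

def remap_tides_profile(r_tides, edot_tides, R_tides, r_mesa, R_mesa):
    """Map a fixed-radius TIDES dissipation profile onto a MESA radial grid."""
    x_tides = np.asarray(r_tides) / R_tides   # normalized TIDES layer radii
    x_mesa = np.asarray(r_mesa) / R_mesa      # normalized MESA cell radii
    # Linear interpolation between the nearest TIDES points; below the
    # innermost TIDES layer no extra heat is injected.
    return np.interp(x_mesa, x_tides, np.asarray(edot_tides), left=0.0)

# Model 63 ten-layer profile from Table 4 (r in R_sun, Edot in erg/s):
r63 = [0.39, 0.45, 0.50, 0.56, 0.62, 0.68, 0.74, 0.80, 0.85, 0.91]
e63 = [2.2e27, 3.8e27, 1.1e28, 4.7e28, 2.3e29,
       7.3e29, 1.4e30, 1.9e30, 4.4e30, 1.5e30]
r_mesa = np.linspace(0.0, 1.3, 500)   # an evolved MESA model, R = 1.3 R_sun (illustrative)
edot = remap_tides_profile(r63, e63, R_tides=0.97, r_mesa=r_mesa, R_mesa=1.3)
```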
We run three sets of models: (i) a standard model (h0) which evolves the star from the ZAMS to the stopping point with no added heat; (ii) set 2, in which heat is injected steadily into the stellar envelope with the TIDES energy dissipation rate starting at the ZAMS; and (iii) set 3, in which we emulate the change from synchronous rotation during the MS to asynchronous rotation as the primary star approaches the terminal age main sequence (TAMS; defined in this paper as the time when the central hydrogen mass fraction falls below 10\({}^{-4}\)). In set 3, we multiply all of our dissipation rates by the time-dependent factor of
\[fac=\min\left[\frac{e^{(t/t_{\rm TAMS})^{2}}-1}{(e-1)},1\right], \tag{3}\]
which increases sharply to unity as \(t\to t_{\rm TAMS}\), where it saturates.
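A small numpy sketch of this ramp (Eq. 3), with illustrative ages in Gyr:

```python
import numpy as np

def heating_ramp(t, t_tams):
    """Time-dependent factor of Eq. (3): rises sharply to unity at the TAMS."""
    fac = (np.exp((t / t_tams) ** 2) - 1.0) / (np.e - 1.0)
    return np.minimum(fac, 1.0)

# The factor vanishes at the ZAMS and saturates at one by the TAMS:
print(heating_ramp(np.array([0.0, 0.5, 0.9, 1.0]) * 11.6, 11.6))
# approximately [0.  0.165  0.726  1.]
```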
It is evident from Table 4 that the energy dissipation rate has a strong dependence on radius, internal structure, and rotation rate. The aim of this paper is simply to explore the effect of this type of energy input on the stellar structure. Thus, in order to take into account larger energy input rates than given by TIDES model 63, we also computed models with the model 63 profile multiplied by a factor of 10. These models are labeled set2-h10 and set3-h10 for the set 2 and set 3 cases, respectively. Such higher heating rates are obtained, for example, by increasing the \(\beta_{0}\) value from 1.05 to 1.10 while holding the polytropic index constant, or by increasing the radius from 0.99 to 1.4 R\({}_{\odot}\); this can be seen by comparing models 66 and 73 and models 4 and 12, respectively, in Table 2.
### Results for constant injected heat
The effect of adding heat to the stellar envelope on basic stellar physical properties is illustrated in Fig. 2. In the top panels, we plot the stellar radius and luminosity as a function of time for the standard model h0 and for models set2-h1 and set2-h10, which correspond to time-constant heat injection throughout the MS, beginning at the ZAMS. At the same age, the h1 and h10 models have larger radii and luminosities than the standard model h0.
The bottom panels show the total power generated by the proton-proton chain and through the CNO cycle as a function of time. Throughout the MS, the pp chain dominates energy production, but it is clear that the extra luminosity in the outer layers changes the energy transport in the envelope and has repercussions on the energy production in the stellar core, since the nuclear energy generation rate in the heated models is slightly smaller than in the standard model.
We can also examine the internal structure of the model stars at any point in their evolution. The internal luminosity structure at an age of 2 Gyr and at ages between 9.257 and 11.644 Gyr is shown in Fig. 3. The heated models have a significantly different structure compared to the standard model. The luminosity inside \(\sim\)80% of the maximum radius is smaller in the heated models than in the standard model, while the outer \(\sim\)20% has a higher luminosity. Thus, the surface luminosities of the heated stars are higher than those of the standard model.
In Fig. 4 we illustrate the Kippenhahn diagrams for the h0 and the h10 models.\({}^{4}\) These enable us to study how energy transport in different regions of the stellar interior changes as a function of time. The most notable difference is the reduced size of the convective region (green shaded region) in the h10 models compared to the unperturbed models, both on the MS and afterward. A similar, but much less prominent, reduction is present in the h1 models. This is because adding heat to the stellar envelope flattens the temperature gradient, so that energy can be transported by radiative diffusion almost up to the surface.
Figure 2: Properties of the MESA models of set 2, in which the heating profile is introduced continuously throughout the evolutionary trajectory. _Top:_ Stellar radius (left) and luminosity (right). The blue curve corresponds to the nonheated models. The orange and green curves correspond, respectively, to the heating profile of model 63 and a heating profile that is ten times larger. _Bottom:_ Proton-proton reaction rate power (left) and CNO reaction rate power (right). The blue curve corresponds to the nonheated models and its ordinate is on the right side of the plot and given in units of log(L\({}_{\odot}\)). The orange and green curves correspond to the ratio of rates in the heated and nonheated models, with colors the same as in the top panels.
### Results for gradual injected heat
An overview of the results for gradual time-increasing heat injection is illustrated in Fig. 5. In contrast to the set 2 models shown in Fig. 2, there is no significant difference throughout the MS between the heated and the unperturbed models; this is as expected since the injection of heat is very small until the TAMS is approached. However, at the TAMS, the set 3 models very rapidly approach the properties of the set 2 models at similar ages.
The internal luminosity structure of the set 3 models at ages 9.257, 9.653, and 11.644 Gyr is illustrated in Fig. 6, which shows characteristics similar to those of set 2. Specifically, the heated models have a lower internal luminosity than the standard model and a significantly larger surface luminosity. The similarity between the set 2 and set 3 heated models is best exemplified with a Hertzsprung-Russell diagram (HRD), as shown in Fig. 7, where we see that the set2-h10 and the set3-h10 tracks are identical after the MS turnoff, which is when both sets are being equally heated.
## 4 Discussion
In this paper, we explore the manner in which extra energy that is injected into the external stellar layers affects the stellar structure and its observational properties. The source of this extra energy is here assumed to be that released by the action of shearing layers whose motions are driven by the tidal interaction with a close companion. We adopted a simple prescription for the turbulent viscosity, \(\nu_{\rm turb}=\lambda\,\ell\,\Delta u\), where \(\Delta u\) is the instantaneous velocity difference between two contiguous layers, \(\ell\) is a characteristic distance taken to be the separation of the layers, and \(\lambda\) is a proportionality factor. The instantaneous values of \(\Delta u\) are computed using the TIDES code, which solves the equations of motion in the rotating reference frame of the binary, taking into account the gravitational, centrifugal, Coriolis, gas pressure, and viscous accelerations. The stellar structure and evolution are computed with MESA.
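For illustration, the prescription can be evaluated directly; the numbers below are order-of-magnitude placeholders, not values taken from the TIDES runs.

```python
def nu_turb(lam, ell, delta_u):
    """Turbulent viscosity prescription nu_turb = lambda * ell * delta_u.

    lam     : proportionality factor (0 < lambda <= 1)
    ell     : characteristic length, taken as the layer separation [cm]
    delta_u : instantaneous velocity difference between contiguous layers [cm/s]
    """
    return lam * ell * delta_u

# Illustrative magnitudes only: a layer separation of ~0.06 R_sun and a
# shear of ~10 km/s give nu_turb of order 10^15 cm^2/s for lambda = 1.
R_SUN_CM = 6.957e10
print(nu_turb(1.0, 0.06 * R_SUN_CM, 1.0e6))  # ~4.2e15 cm^2/s
```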
The nominal calculation performed to probe the effects of tidal shear energy dissipation on the stellar structure is for a 1 M\({}_{\odot}\) star with an initial radius of 0.97 R\({}_{\odot}\) in a 1.44 d orbit and a 0.8 M\({}_{\odot}\) companion. The TIDES computational grid consists of \(\sim\)14300 volume elements (one hemisphere, north-south symmetry is assumed) covering the outer \(\sim\)54% in radius of the star. The synchronicity parameter \(\beta_{0}\)=1.05 is chosen such that the core of the star rotates at a rate that is 5% faster than co-rotation. Additional models are computed for stellar radii up to 2.25 R\({}_{\odot}\), polytropic indices 1.5\(\leq n\leq\)3.8, and synchronicity parameters 0.20 \(\leq\beta_{0}\leq\) 1.1.
### Viscosity and tidal shear energy dissipation
For the \(\sim\)1 R\({}_{\odot}\) models, we find maximum turbulent viscosity values near the stellar surface at the equator in the range 5\(\times\)10\({}^{-4}\) to 10\({}^{-3}\) R\({}_{\odot}^{2}\)/d in calculations with \(\lambda\)=1. These values correspond to approximately 3\(\times\)10\({}^{13}\) and 6\(\times\)10\({}^{13}\) cm\({}^{2}\) s\({}^{-1}\), respectively. The corresponding total energy dissipation rates integrated over the entire star, \(\dot{E}_{\rm tot}\), are in the
Figure 4: Kippenhahn diagram for the standard model (top) and the set2-h10 model (bottom). The panels from left to right represent: _Left_: evolution from the ZAMS to the TAMS (defined as the time when the H abundance in the stellar core first drops below 10\({}^{-4}\)). _Center_: evolution from the TAMS through 90% of the remaining time. _Right_: the final 10% of the post-TAMS evolution. The ordinate is the mass coordinate, which runs from the center of the star to the surface. The nuclear energy generation rates are indicated by the different shades of blue. The green hatches indicate zones in which the energy transport is dominated by convection.
Figure 3: Luminosity structure of MESA set 2 models at ages of 2 Gyr (top left), 9.257 Gyr (top right), 9.653 Gyr (bottom left), and 11.644 Gyr (bottom right). The blue curves correspond to the models without added heating; orange to those with heating as given by model 63; green to heating by ten times that given by model 63. The tidally heated models have surface luminosities that are larger than the nonheated models, but with lower luminosities in deeper layers.
range 2-20\(\times\)10\({}^{30}\) erg s\({}^{-1}\). These values scale approximately in proportion to the value of \(\lambda\).
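The unit conversion behind the quoted viscosities can be checked with a few lines of Python (constants rounded):

```python
# Converting the quoted viscosity range from R_sun^2/day to cm^2/s
R_SUN_CM = 6.957e10   # solar radius in cm
DAY_S = 86400.0       # day in seconds

unit = R_SUN_CM**2 / DAY_S          # one R_sun^2/day in cm^2/s, ~5.6e16
for nu in (5e-4, 1e-3):             # the range quoted for the ~1 R_sun models
    print(f"{nu:g} Rsun^2/d = {nu * unit:.1e} cm^2/s")
# 0.0005 Rsun^2/d = 2.8e+13 cm^2/s
# 0.001 Rsun^2/d = 5.6e+13 cm^2/s
```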
As a star evolves and becomes larger, the turbulent viscosity grows and, assuming that the density structure does not significantly change, the energy dissipation rate also increases. The maximum value of \(\nu_{\rm turb}\) obtained with \(\lambda\)=1 in this paper is not very different from the values used in Koenigsberger & Moreno (2016, henceforth KM2016), except for the largest radii listed in block 1 of Table 2, where we see a difference of a factor of \(\sim\)3. The difference arises because the value of \(\nu\) in KM2016 is an input parameter which, broadly speaking, is unconstrained. To avoid using a completely arbitrary \(\nu\) value, it was estimated by assuming it to scale with the characteristic spatial and velocity dimensions, which is not a very precise estimate. In our current formulation, \(\nu\) is computed internally, allowing for a more consistent estimate.
Our current \(\dot{E}_{\rm tot}\) values are significantly smaller compared to those listed in KM2016 (their Table 3). For example, they list \(\dot{E}_{n=1.5}\sim\)10\({}^{34}\) erg s\({}^{-1}\) for the \(R_{1}\)=0.99 R\({}_{\odot}\) model, while we obtain \(\dot{E}\)=2\(\times\)10\({}^{30}\) erg s\({}^{-1}\) for our model 4. The dominant source of this difference is that KM2016 assumes an \(n\)=1.5 polytropic index, while all the models in block 1 of our Table 2 were computed with \(n\)=3. The density at a radius of 0.965 R\({}_{\odot}\) is \(\sim\)3 orders of magnitude higher for \(n\)=1.5 than for \(n\)=3. We performed a one-layer computation analogous to that of KM2016 using their same model, but this time with \(n\)=3, and obtained \(\dot{E}_{n=3}\)=6\(\times\)10\({}^{30}\) erg s\({}^{-1}\), which is only a factor \(\sim\)3 larger than what we obtain in the current calculations for model 4. This remaining difference is due to a combination of factors. The first is that in the KM2016 model, the viscosity is constant over the entire surface layer, while in our current model its value significantly decreases at various locations along the azimuthal coordinate and, especially, in the polar direction. The second is that the KM2016 model includes radial motions which, although they are approximately ten times smaller than the azimuthal motions, also contribute toward the energy dissipation rates.
The lower energy dissipation rates that we find in this paper could impact the interpretation of the V1309 Sco merger phenomenon in terms of a tidally-induced runaway process, as discussed in KM2016. However, the higher energy dissipation rates needed for a rapid orbital evolution timescale can still be obtained in our current model by increasing the departure from synchronicity. While it is beyond the scope of this paper, a reanalysis using our current model, but relaxing the conditions on \(\beta_{0}\) and the polytropic index, is warranted before abandoning the tidal runaway scenario.
Figure 5: Properties of the MESA models of set 3, in which the heating profile is introduced gradually, starting at the ZAMS and reaching maximum at the TAMS. _Top:_ Stellar radius (left) and luminosity (right). The blue curve corresponds to the nonheated models. The orange and green curves correspond, respectively, to the heating profile of model 63 and a heating profile that is ten times larger. _Bottom:_ Proton-proton reaction rate power (left) and CNO reaction rate power (right). The blue curve corresponds to the nonheated models and its ordinate is on the right side of the plot and given in units of log(L\({}_{\odot}\)). The orange and green curves correspond to the ratio of rates in the heated and nonheated models, with colors the same as in the top panels.
Figure 6: Luminosity structure of MESA set 3 models at ages 9.257, 9.653, and 11.644 Gyr. Blue curves correspond to the models without added heating; red to those with heating as given by model 63; green to heating by ten times that given by model 63. The tidally heated models have a surface luminosity that is larger than the nonheated models, but lower luminosity in deeper layers.
Figure 7: Evolutionary tracks on the HRD of the standard model (light blue), the set3-h1 model (orange), and the set3-h10 model (green) showing the result of introducing a gradual heating near the end of the main sequence. The dark blue curve corresponds to the set2-h10 model in which there was constant heating throughout the main sequence.
### Implications for stellar structure
The energy dissipation rates computed by TIDES for each layer were injected into MESA stellar structure and evolution calculations. With even a small (\(\sim\)5%) departure from synchronicity, the radius and luminosity values of the tidally heated stars are larger than those of the equivalent unperturbed model at all evolutionary times. The star also has a smaller surface convective region and lower nuclear processing rates, the latter allowing the tidally perturbed star to live longer. The differences between an asynchronous binary and its unperturbed counterpart depend on the amount of injected energy which, for a fixed set of stellar and orbital parameters, depends on how much the stellar rotation departs from synchronicity and the value of turbulent viscosity.
From an observational perspective, determining whether a star is truly in synchronous rotation is a challenging problem, as the only available information is the projected surface equatorial speed. Because the synchronization time scales with the radius of the layer and its viscosity as \(\tau_{\rm visc}\sim r^{2}/\nu_{\rm turb}\), the star tends to synchronize from the surface inward (Goldreich & Nicholson 1989; Koenigsberger et al. 2021). Thus, stars in circular orbits that are thought to be synchronized may actually retain an internal angular velocity gradient upon which tidally excited oscillations are superposed. Furthermore, all eccentric binary systems are asynchronously rotating during most of their orbital trajectory; hence, they suffer from tidal perturbations regardless of their age. This would be particularly true of recently formed binaries in dense regions of stellar clusters, where close encounters of single stars are believed to frequently occur.
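As a rough numerical illustration of this scaling, with r and \(\nu_{\rm turb}\) chosen to match the surface-layer magnitudes quoted in Sect. 4.1:

```python
# Order-of-magnitude synchronization timescale, tau_visc ~ r^2 / nu_turb,
# using near-surface values (illustrative only).
R_SUN_CM = 6.957e10          # solar radius in cm
YEAR_S = 3.156e7             # year in seconds

r = 1.0 * R_SUN_CM           # layer radius ~ 1 R_sun
nu = 5.0e13                  # cm^2/s, near-surface turbulent viscosity
tau_visc = r**2 / nu
print(tau_visc / YEAR_S)     # ~3 yr; deeper layers, where nu is orders of
                             # magnitude smaller, synchronize far more slowly
```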
In this context, it is interesting to note that radii of low-mass binary stars in short-period orbits have been determined to be as much as 10% larger than their counterparts in long-period orbits (Hoxie 1973; Lacy 1977; Popper 1997; Clausen et al. 1999; Torres et al. 2006), consistent with the radius increase that we find in the tidally heated stars explored in this paper. Many low-mass stars are associated with significant surface activity, as evidenced by their light curves and the emission cores of Ca ii H and K lines (Torres et al. 2006; Lopez-Morales 2007; Morales et al. 2008). This activity is explained in terms of the presence of strong surface magnetic fields and these fields are thought to inhibit efficient convection (Torres et al. 2006; Chabrier et al. 2007; Clausen et al. 2009; Feiden 2016). Tidal flows and magnetic fields are not mutually exclusive, but the manner in which these two physical processes might interact is an open question.
Our scenario for tidal shear energy dissipation may also have a bearing on the mass discrepancy problem in massive stars that has been known for several decades, but that has still eluded explanation. The discrepancy, first noted by Herrero et al. (1992), consists of the fact that the masses derived from spectroscopic analysis are systematically lower than those found from evolutionary models; alternatively, they are more luminous than predicted by the evolutionary models. In their analysis of a set of eclipsing binary stars, Massey et al. (2012) found them to be on average 11% less massive or conversely, 0.2 dex more luminous, as compared to stellar structure models. Because it is now generally accepted that a large majority of massive stars are in binary systems, it is tempting to suggest that a possible solution to the mass-discrepancy problem may reside in the phenomena we discuss in this paper.
### Implications for stellar evolution and population synthesis
Studies of the effects of binary interactions on stellar evolution have, until now, focused mainly on mass loss and mass transfer between the components in late evolutionary stages. In the case of massive stars, binary interactions during the main sequence have been incorporated only indirectly, in the sense that tides are invoked to maintain short-period binary stars in rapid rotation, allowing them to be treated as rapid rotators throughout their main sequence lifetime. The possible modification of the stellar structure that results from the tidal interactions, however, is generally neglected. These effects include tidal shear energy dissipation and turbulent viscosity, which, in turn, have an impact on the internal energy budget as well as the rates of angular momentum and chemical transport.
The presence of tidal perturbations does not depend on the age of a star, but it may be most easily detected in very close binaries or those in which the radius of one of the stars is significant compared to the orbital separation - specifically, stars at the end of the main sequence. We find that evolutionary tracks of our tidally heated stars extend further to the blue during the end stages of the main sequence than does the track for the standard model. This effect is reminiscent of the extended main sequence turnoff (eMSTO) phenomenon that was first detected by Bertelli et al. (2003) and Mackey & Broby Nielsen (2007) in the Large Magellanic Cloud clusters NGC 2173 and NGC 1846; this is now considered a ubiquitous feature of Magellanic Cloud clusters with ages between \(\sim\)20 Myr and \(\sim\)2 Gyr (Milone et al. 2018; Goudfrooij et al. 2014). The eMSTO has been interpreted as the result of a prolonged star formation (Mackey et al. 2008; Glatt et al. 2008; Goudfrooij et al. 2011; Keller et al. 2011) or as a result of stellar rotation (Bastian & de Mink 2009). We suggest that tidal heating may be an additional potential explanation. In globular clusters, the presence of blue straggler stars not showing evidence of mass transfer (Ferraro et al. 2006) could also potentially be a manifestation of the effectiveness of this process.
Another interesting application of our model concerns the stability of a binary star's outer layers as it leaves the main sequence and heads up the giant branch. As the stellar radius increases, so do the tidal amplitudes (Section A). We can speculate that the growing surface velocities could attain the escape velocity before the Roche lobe radius is reached. Because the tidal amplitudes are largest around the equator, this is where the star would become most bloated and where mass loss might be expected to occur first, taking the form of an excretion disk. Observational evidence for asymmetrical mass-loss episodes is found in planetary nebulae, many of which display bipolar morphologies (Soker & Rappaport 2001; De Marco 2009; Jones 2011; Jones & Boffin 2017; Boffin & Jones 2019), as well as the presence of structures such as jets, rings, and halos (Harpaz et al. 1997; Kwok 2002; Phillips et al. 2009). Therefore, taking into account the perturbations caused by tidal forces on the progenitor star during its expansion stages may contribute to improving our understanding of the processes that give rise to the wide variety of morphologies of such interstellar structures. Finally, we note that although our focus is the potential effects on the stellar structure and evolution due to tidal perturbations, any nonstandard process that can heat sub-surface stellar layers would produce the same effects we describe in this paper.
### Caveats
There are several important caveats to our results. The first is that our simplified prescription for the turbulent viscosity depends on a \(\lambda\) parameter, which can be associated with the fraction of kinetic energy in the shearing flows that is transformed into turbulent eddies. Our calculations were performed for \(\lambda\)=0.1 and 1, and we find that the values of \(\dot{E}\) scale approximately linearly with \(\lambda\). Significant differences between the heated and the standard MESA models appear mainly for the larger \(\lambda\) value, which is likely to be unrealistic. However, \(\dot{E}\) values large enough to produce significant differences can also be obtained by increasing the synchronicity parameter, as we showed for the cases with \(\beta_{0}\)=1.05 and 1.1, or decreasing it as illustrated for \(\beta_{0}\)=0.2. This means that even if \(\lambda\) is small, stars having significant departures from synchronous rotation could be observed to display the tidal heating effects that we have described.
An underlying assumption in our treatment is that the criteria for triggering shear instabilities are met, an issue that depends on the hydrodynamical properties and the microphysics of the fluid (Garaud & Kulenthirarajah, 2016, and references therein), processes that our approach cannot compute. In the case of solar-type stars, turbulent viscosity is associated with the convective eddies, so in principle, the criteria for triggering the shear instability are met. However, the nature of the interaction between convective motions and the tidal flows remains to be resolved (Terquem, 2021; Vidal & Barker, 2020; Duguid et al., 2020).
Finally, it is important to keep in mind that \(\dot{E}\) is highly non-isotropic in 3D space, attaining maximum values near the equator and decreasing toward the poles, as well as having a dependence on azimuth. Although the TIDES model captures the 3D tidal perturbation structure, the MESA models are currently only 1D. The \(\dot{E}\) radial profile that was injected into the h1 and h10 MESA models corresponds to the total energy dissipation in each layer and, hence, the heating is strongly concentrated around the equator. This would lead to equatorial bloating unless the horizontal energy transport processes are sufficiently efficient to re-distribute the added energy.
## 5 Conclusions
Stars in a binary system can interact in different ways, depending on both their evolutionary stage and their orbital parameters (Soker, 1997), and these interactions have an impact on the stellar evolution of the components. In this paper, we explore the potential role of tidal shear energy dissipation in altering not only the evolutionary path of a star in its post-main sequence stages, but also its internal structure during the main sequence. Tidal heating offers a possible alternative for explaining discrepancies between observations and the standard stellar structure models. Examples include phenomena such as the eMSTO in clusters, bloated or overluminous binary components, age discrepancies, and aspherical mass ejection. However, establishing the actual role of tidal heating requires incorporating the nonspherically symmetric properties of the tidal perturbations into stellar structure models. It also requires a hydrodynamical approach to determining the turbulent viscosity of the stellar fluid.
###### Acknowledgements.
We acknowledge support from CONACYT project 252499 and DGAPA/PAPIIT projects IN103619 and IN105723. We thank an anonymous referee for the very insightful and helpful comments. GK thanks Catherine Pilahowski and Constantine Delyiannis for enlightening discussions.
|
2307.03990 | FTFDNet: Learning to Detect Talking Face Video Manipulation with
Tri-Modality Interaction | DeepFake based digital facial forgery is threatening public media security,
especially when lip manipulation has been used in talking face generation, and
the difficulty of fake video detection is further improved. By only changing
lip shape to match the given speech, the facial features of identity are hard
to be discriminated in such fake talking face videos. Together with the lack of
attention on audio stream as the prior knowledge, the detection failure of fake
talking face videos also becomes inevitable. It's found that the optical flow
of the fake talking face video is disordered especially in the lip region while
the optical flow of the real video changes regularly, which means the motion
feature from optical flow is useful to capture manipulation cues. In this
study, a fake talking face detection network (FTFDNet) is proposed by
incorporating visual, audio and motion features using an efficient cross-modal
fusion (CMF) module. Furthermore, a novel audio-visual attention mechanism
(AVAM) is proposed to discover more informative features, which can be
seamlessly integrated into any audio-visual CNN architecture by modularization.
With the additional AVAM, the proposed FTFDNet is able to achieve a better
detection performance than other state-of-the-art DeepFake video detection
methods not only on the established fake talking face detection dataset (FTFDD)
but also on the DeepFake video detection datasets (DFDC and DF-TIMIT). | Ganglai Wang, Peng Zhang, Junwen Xiong, Feihan Yang, Wei Huang, Yufei Zha | 2023-07-08T14:45:16Z | http://arxiv.org/abs/2307.03990v1 | # FTFDNet: Learning to Detect Talking Face Video Manipulation with Tri-Modality Interaction
###### Abstract
DeepFake based digital facial forgery is threatening public media security, especially when lip manipulation has been used in talking face generation, and the difficulty of fake video detection is further improved. By only changing lip shape to match the given speech, the facial features of identity are hard to be discriminated in such fake talking face videos. Together with the lack of attention on audio stream as the prior knowledge, the detection failure of fake talking face videos also becomes inevitable. It's found that the optical flow of the fake talking face video is disordered especially in the lip region while the optical flow of the real video changes regularly, which means the motion feature from optical flow is useful to capture manipulation cues. In this study, a fake talking face detection network (FTFDNet) is proposed by incorporating visual, audio and motion features using an efficient cross-modal fusion (CMF) module. Furthermore, a novel audio-visual attention mechanism (AVAM) is proposed to discover more informative features, which can be seamlessly integrated into any audio-visual CNN architecture by modularization. With the additional AVAM, the proposed FTFDNet is able to achieve a better detection performance than other state-of-the-art DeepFake video detection methods not only on the established fake talking face detection dataset (FTFDD) but also on the DeepFake video detection datasets (DFDC and DF-TIMIT).
DeepFake, fake talking face detection, optical flow, audio information, cross modal fusion, audio-visual attention mechanism.
## I Introduction
Human facial features are unique to each person; as symbols of personal identity, they play an important role in social communication. Over the last decades, human face forgery with deep neural networks has been extensively studied [1, 2, 3, 4, 5], and those manipulation methods are uniformly called DeepFake. Regarding the level of manipulation, DeepFake can usually be categorized into four groups [6]: entire face synthesis, identity swap, attribute manipulation and expression swap. Not merely limited to the purpose of data augmentation, these unrealistic face images may spread widely on the Internet and cause a series of ethical problems. Fortunately, many forgery detection algorithms [7, 8, 9, 10, 11, 12, 13, 14, 15] have been proposed to combat DeepFake, especially for digital facial forgery.
In the age of AI, can you still be sure that the voice of a person in a video is real? More recently, the emergence of talking face generation [16, 17, 18, 19, 20, 21, 22, 23, 24] has posed a new challenge to fake detection. Compared with common DeepFake, talking face generation only manipulates the lip shape to match the given speech and does not change the facial features of identity, which gives it stronger concealment. As more and more works achieve accurate lip synchronization, generating indistinguishable fake talking face videos is no longer difficult. Using these methods, the lip shapes of characters in a video can be easily manipulated, which may allow the generated fake talking face videos (e.g. fake news) to spread disinformation and support online fraud. Imagine receiving an email with such a video carrying assignments from leaders or requests for help from family members: would you believe it? What you have seen is hard to verify, and 'seeing is believing' has become a serious challenge.
The majority of existing DeepFake video detection schemes are visual-only, typically comprising data pre-processing, a feature extractor, further processing to effectively utilize the extracted features, and a classifier that outputs the classification probability, as shown in Fig. 1 (top). Due to the degradation of frame-wise information in video encoding, many fake image detection approaches [27, 28, 29] are hard to apply directly to the task of DeepFake video detection, which usually requires exploiting the inter-frame temporal information [30, 31, 32, 33, 34]. Unfortunately, the audio stream has not been sufficiently exploited in most current DeepFake video detection studies, even though it is an essential part of the video. To capture forgery cues, only a few audio-visual detection methods [35, 36, 37] have attempted to employ the inconsistency between audio and visual modalities. The reasons behind the limited study of audio-visual fusion strategies in traditional DeepFake video detection can be explained as: (i) the shortage of high-quality datasets
Fig. 1: The process of video forgery detection. Top: the traditional DeepFake video detection process, which only uses the visual modality. Bottom: our multi-modal detection framework, which learns a joint representation from visual, audio and motion features.
which contain both audio and visual information; (ii) no direct correlation between the face forgery and the video speech in conventional DeepFake, which leads to negligible enhancement via audio stream integration. In fact, the operation of talking face forgery is performed by reshaping the character's mouth appearance to match the given speech, which means that the capability of fake video discrimination can be substantially improved with effective audio guidance.
According to the biological perception mechanism, multisensory neurons in the human superior colliculus are capable of combining multi-modal sensory information about a common source, which improves the ability to localize and discriminate objects and even accelerates the response to them [22, 38, 39, 40, 41]. When visual and audio information is received at the same time, the auditory and visual perception systems send signals to the superior colliculus multisensory neurons, in which the separately obtained information is synthesized as the response to the external stimulation. By combining the audio-visual features in one class of neurons, this perceptual mechanism enables informed decisions based on a fusion-decoding process. It has also been verified that auditory information can enhance post-sensory visual evidence [42] during rapid multisensory decision-making, which further motivates a study of fake information perception using a joint audio-visual representation.
For the more challenging task of talking face video forgery detection, where intra-frame artifacts are scarce even with audio-visual hints, inter-frame motion such as optical flow can be taken into consideration to capture the subtle artifacts of talking face video manipulation. Fig. 3 visualizes the optical flow estimated with Gunnar Farneback's algorithm [43] on real and fake video frames. It can be found that the optical flow of the real videos is generally smooth and coherent, in contrast to the frequent disorders in that of the fake talking face videos, especially in the manipulated mouth region. Considering such an advantage, an optical flow based motion feature is also incorporated into the proposed detection model to detect imperceptible differences between real and fake talking face videos.
Inspired by the decision-making mechanism of the human multisensory perception system, in this work, the **F**ake **T**alking **F**ace **D**etection **N**etwork (FTFDNet) is proposed by incorporating the visual modality, audio information and motion feature, as shown in Fig. 1 (bottom). A self-attention based **C**ross-**M**odal **F**usion (CMF) module is designed to explore the inter-relationships across different modalities, and a novel **A**udio-**V**isual **A**ttention **M**echanism (AVAM) is also proposed to discover which portion of the feature is more informative to the network. Specifically, the proposed AVAM is end-to-end trainable along with CNNs and can be seamlessly embedded into other audio-visual CNN architectures, as it functions in the proposed base network for fake talking face detection. Extensive experiments on our established **F**ake **T**alking **F**ace **D**etection **D**ataset (FTFDD) and the DeepFake video detection datasets (DFDC [44] and DF-TIMIT [45]) have validated a superior performance compared to other state-of-the-art works. The main contributions are summarized as follows:
* To fully utilize the inter-relationships between different modalities for forgery detection, a cross-modal fusion scheme is proposed by learning a joint feature representation from audio, visual and motion information.
* A novel audio-visual attention mechanism is proposed to discover more informative features, which significantly outperforms the popular visual attention mechanism.
* A large-scale and challenging fake talking face video detection dataset is established, which is generated with state-of-the-art talking face generation methods.
* In comparison to DeepFake video detection, a more challenging task of fake talking face video detection is introduced with a multi-modal based modeling solution.
Fig. 2: Examples of face images from FTFDD and DeepFake video detection datasets including FF++ and Celeb-DF. The images on the top are from our established dataset, including the original video frames without any manipulation and the fake talking face video frames forged by Wav2Lip [20], MakeItTalk [22] and PC-AVS [23]. The images on the bottom are from available popular DeepFake video detection datasets including FaceForensics++ (FF++) [25] and Celeb-DF [26].
Extensive experiments demonstrate that the proposed model achieves state-of-the-art performance on different datasets.
The rest of the paper is organized as follows: Sec. 2 discusses talking face generation, fake talking face video detection, attention mechanisms and multi-modality learning, which are tightly related to this study. Sec. 3 introduces the proposed methodology, and Sec. 4 presents the experiments with discussions, as well as the datasets established for training and testing. In Sec. 5, we conduct ablation studies for a more convincing evaluation, and we conclude the paper in Sec. 6.
## II Related Work
**Talking Face Generation.** Generating talking face videos from a given speech is a long-standing concern in multimedia applications. Kumar _et al._[16] proposed ObamaNet, in which they employ the Char2Wav architecture (Sotelo _et al._[18]) to generate speech from the input text, and then train the speech synthesis system using the audio and frames extracted from the videos. This approach can be utilized to generate lip shapes with a specified identity for any text. Similarly, Suwajanakorn _et al._[17] first map the audio feature to sparse shape coefficients with RNNs, then map the sparse shape to mouth textures/shapes, and finally synthesize highly detailed face textures. Jamaludin _et al._[46] proposed Speech2Vid, based on a joint embedding of the face and audio, to obtain synthesized talking face video frames with an encoder-decoder convolutional neural network (CNN). The GAN-based LipGAN [19] model inputs the bottom-half masked target face to act as a pose prior, which guarantees that the generated mouth crops can be seamlessly fitted back into the original video without further post-processing. As an extension of LipGAN, Wav2Lip [20] employs a pre-trained lip-syncing discriminator to correct the lip-syncing and a visual quality discriminator to improve the visual quality. MakeItTalk [22] animates the portrait in a speaker-aware fashion driven by disentangled content and speaker embeddings. Zhou _et al._[23] proposed a Pose-Controllable Audio-Visual System by devising an implicit low-dimension pose code, which generates accurately lip-synced talking faces whose poses are controllable by other videos.
**DeepFake Video Detection.** With rapid development for different purposes, DeepFake techniques can hardly avoid a dark side, such as generating malicious fake videos of celebrities and the masses, which has stimulated enormous research on DeepFake video detection. Afchar _et al._[30] point out that most image based fake detection methods cannot be directly extended to video forgery detection due to the degradation of the frames by video compression. Thus, they proposed a facial video forgery detection network (MesoNet) composed of CNNs with a small number of layers. Another fake detection method [31] incorporates the attention mechanism into EfficientNet [47] and uses a Siamese training strategy. Considering the temporal information between consecutive video frames, Tariq _et al._[32] proposed CLRNet based on a convolutional LSTM and a residual network. Sabir _et al._[48] designed a face manipulation detection model by combining a recurrent convolutional architecture with face alignment. Haliassos _et al._[49] proposed LipForensics, which learns a representation of lip movements and uses semantic inconsistencies in mouth movements to achieve face forgery detection. Zhou _et al._[50] introduced the more complex task of multi-person face forgery detection, constructed the FFIW dataset and proposed a novel detection algorithm using multiple instance learning. Unfortunately, all of these detection strategies fail to take the audio information into account, whereas it is an essential part of the video. The ignorance of the audio information is due to the absence of audio guidance in traditional face manipulation (e.g. entire synthesis, attribute manipulation, etc.).
**Fake Talking Face Video Detection.** According to the level of manipulation, DeepFake methods are categorized into four groups [6]: (1) entire face synthesis: creating entirely non-existent face images using GAN models; (2) identity swap: replacing the face of one person in a video with another; (3) attribute manipulation: editing or retouching the face, i.e. modifying facial attributes such as the hair or skin color, gender, age, glasses wearing, etc.; (4) expression swap: modifying the facial expression of the person. In general, all of these DeepFake methods change the facial features of identity to a greater or lesser extent. Different from DeepFake, talking face generation aims at syncing lips to match the input audio and does not change facial identity features, as shown in Fig. 2. While it benefits many applications such as lip-syncing for movie dubbing and lecture translation, it also enables people to produce fake talking face videos with malicious purposes, e.g. spreading fake news or extorting. In addition, the change of mouth shape usually occurs frequently in videos and leaves no trace on the person's identity, which further increases the difficulty of video forgery detection. From the facial features alone, people have little chance to discriminate whether such a video is real or not, which motivates this study of fake talking face video detection as distinct from existing DeepFake video detection. In comparison with traditional DeepFake video detection, the more advanced talking face forgery is
Fig. 3: Optical flow of real video frames (top) and fake talking face video frames (bottom) estimated with Gunnar Farneback’ algorithm [43]. There are obvious differences between the optical flow of real videos and fake videos, especially in the lip region.
guided by the given speech, which means that audio-visual representation based modeling would be a promising way to design more effective fake talking face video detectors.
**Audio-Visual based DeepFake Video Detection.** Audio and visual modalities are regarded as complementary in videos, which has enlightened increasing studies on audio-visual based DeepFake video detection. By analyzing the similarity between the audio and visual modalities of a video, Mittal _et al._[51] proposed a Siamese network-based architecture to learn the differences between real and fake videos. Chugh _et al._[35] designed the modality dissonance score (MDS) to represent the similarity between the audio and visual streams of a video, and further judge whether the video is real or fake with a threshold. Also jointly modeling the video and audio modalities, Zhou _et al._[37] proposed a detection method for the case in which it is unknown whether the visual or the audio has been manipulated. However, the audio and visual streams are highly synchronized in fake talking face videos. Especially when the videos are generated by Wav2Lip [20], whose network structure uses SyncNet [36] to correct lip-syncing, the accurate synchronization between audio and visual would lead to detection failure. Therefore, the models above, which rely on audio-visual mismatch, are inapplicable to fake talking face video detection.
**Multi-Modality Learning.** The move from single-modality to multi-modality learning is an attractive topic in recent studies. With the capability of overcoming the limitations of single-modal perception, audio-visual fusion has motivated many interesting works, e.g. audio-visual generation [52, 53], separation [54, 55], localization [56, 57], etc. Unfortunately, the effective fusion of different modalities is still challenging. Concatenation and element-wise sum are the most commonly used fusion operations owing to their simplicity, but neglecting the internal relevance between different modalities [58] can cause joint feature representation learning to fail. To bridge the inter-relationship across modalities, a cross-modal fusion strategy is introduced based on the query-key-value mechanism, which benefits from the multi-head attention architecture of the transformer [59].
**Attention Mechanism.** Attention plays an important role in human perception, enabling a person to focus selectively on salient targets instead of aimlessly processing a whole scenario at one moment. This biological skill of visual observation guarantees the high efficiency and accuracy of our visual sensing capability, which has led to wide studies on attention mechanisms [60, 61, 62, 63, 64, 65]. Typically, Woo _et al._[66] proposed a convolutional block attention module (CBAM) to improve the representative ability of CNNs. By sequentially inferring attention maps along the channel and spatial axes, the attention maps are multiplied with the input feature map to obtain processed maps in which the key regions are effectively emphasized. Human attention is formed by a variety of factors; e.g. when you enjoy a concert,
Fig. 4: An overview of the proposed FTFDNet. FTFDNet employs three-stream encoders to learn features of the audio, visual and motion information, then uses the CMF to fuse them. Finally, a classifier is used to obtain the prediction results. Furthermore, the AVAM is embedded into each block of the visual encoder for further improvement of detection performance.
the visual signals make you focus on the stage unconsciously, and the music leads you to stare at the salient instruments or performers; this means that visual information can be supplemented with audio signals to build up the region-of-interest. Motivated by such an interactive mechanism, an audio-visual attention mechanism is designed in this work to achieve superior performance compared to the visual-only attention mechanism.
## III Proposed Methods
### _Framework Overview_
The proposed FTFDNet is based on a multi-modal architecture, as shown in Fig. 4. The process starts by encoding the input face frames, the corresponding audio spectrogram and the motion feature with three-stream encoders. The obtained high-dimensional representations of the three modalities are then efficiently fused into a unified representation by the proposed CMF. The fused output is finally mapped into the probability of real or fake by a classifier with three fully connected layers, in which Dropout [67] is employed to enhance robustness. The AVAM is incorporated into each block of the visual encoder to further improve the detection performance. The training of FTFDNet minimizes the LogLoss between the predictive probability \(\hat{y}\) and the target label \(y\):
\[L_{L}=-\frac{1}{N}\sum_{i=1}^{N}{[y_{i}\times\log(S(\hat{y}_{i}))+[1-y_{i}] \times\log(1-S(\hat{y}_{i}))]}, \tag{1}\]
where \(\hat{y}_{i}\) represents the predicted score of the \(i\)-th face, and \(y_{i}\in\{0,1\}\) is the ground truth; label \(0\) is associated with faces coming from real pristine videos and label \(1\) with fake videos. \(N\) is the total number of face frames used for training and \(S(\cdot)\) is the Sigmoid function.
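A minimal PyTorch sketch of Eq. (1); the fused sigmoid-plus-log-loss form is used for numerical stability, and the tensor shapes are illustrative rather than taken from the released code:

```python
import torch
import torch.nn.functional as F

def ftfd_loss(logits, labels):
    """LogLoss of Eq. (1): sigmoid over the raw scores, then binary cross-entropy.

    logits : raw per-face scores from the classifier, shape (N,)
    labels : ground truth in {0, 1} (0 = real, 1 = fake), shape (N,)
    """
    # binary_cross_entropy_with_logits fuses the sigmoid S(.) with the
    # log-loss, averaging over the N faces as in Eq. (1).
    return F.binary_cross_entropy_with_logits(logits, labels.float())

# Example: scores for a batch of four faces, two fake and two real.
logits = torch.tensor([2.1, -1.3, 0.4, -0.2])
labels = torch.tensor([1, 0, 1, 0])
print(ftfd_loss(logits, labels))
```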
### _Three-Stream Encoders_
To achieve fake talking face video detection by the fusion of audio, visual and motion modalities, three backbone networks are employed for feature extraction. In particular, the structures of the three encoders are kept as similar as possible, which facilitates the further processing of the intermediate features.
**Visual Encoder**. The visual backbone network is based on VGG [68] and encodes the consecutive face frames into a high-dimensional representation. For adequate learning of the temporal information between consecutive video frames, \(T\) frames of size \(T\times H\times W\times C\) are first reshaped to \((T\times C)\times H\times W\) and then fed into the visual encoder. In addition, Batch Normalization [69] is incorporated to further benefit the detection performance.
**Motion Encoder.** The motion feature is extracted from the optical flow by the motion backbone network. The structure of the motion encoder is similar to that of the visual encoder, with some convolution layer parameters changed to adapt to the reshaped size \((T\times 2)\times H\times W\) of the optical flow feature.
**Audio Encoder.** For the audio stream, represented as an MFCC feature, the audio backbone network is also designed based on VGG. Considering the small size of the audio spectrogram, the size reduction of the feature maps is achieved by setting the parameters of the convolution layers instead of using max-pooling.
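The input shaping for the three streams can be sketched as follows; all sizes are illustrative assumptions, not the exact configuration of FTFDNet:

```python
import torch

T, H, W, C = 5, 96, 96, 3    # clip length and face-crop size (illustrative)

# Visual stream: T consecutive RGB face frames stacked along the channel axis.
frames = torch.randn(T, H, W, C)                                  # (T, H, W, C)
visual_in = frames.permute(0, 3, 1, 2).reshape(1, T * C, H, W)    # (1, T*C, H, W)

# Motion stream: per-frame optical flow with two channels (dx, dy).
flow = torch.randn(T, H, W, 2)
motion_in = flow.permute(0, 3, 1, 2).reshape(1, T * 2, H, W)      # (1, T*2, H, W)

# Audio stream: the MFCC spectrogram of the same time window; the exact
# coefficient/step counts are not specified in the text.
audio_in = torch.randn(1, 1, 13, 20)                              # (B, 1, coeffs, steps)
```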
### _Cross-Modal Fusion_
DeepFake video detection can benefit from the synchronization between the audio and visual streams, which has motivated several studies [35, 37, 51]. Traditional multi-modal fusion strategies (e.g. concatenation and element-wise sum) fail to take the interrelation among different modalities into consideration, especially the correlation between audio and visual. Therefore, the CMF is proposed to generate the final joint representation of the audio, visual and motion features.
In Eq. 2, FiLM [70] is employed to perform an affine transformation on the visual feature \(F_{v}\) conditioned on the motion feature \(F_{m}\). Then, a 2D convolution layer with Batch Normalization and ReLU activation is applied to the output of FiLM to obtain the modulated visual feature \(F_{vm}\). With the operations above, the modulated visual feature for joint representation learning is generated, as shown in Fig. 5(a).
\[F_{vm}=FiLM(F_{m},F_{v})=\gamma(F_{m})\cdot F_{v}+\beta(F_{m}), \tag{2}\]
where \(\gamma(\cdot)\) and \(\beta(\cdot)\) are both single fully connected layers that output the scaling vector and the bias vector, respectively.
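A minimal PyTorch sketch of the FiLM modulation in Eq. (2), assuming the motion feature is pooled to a vector before the two fully connected layers; the module and dimension names are ours:

```python
import torch
import torch.nn as nn

class FiLM(nn.Module):
    """Affine modulation of Eq. (2): F_vm = gamma(F_m) * F_v + beta(F_m)."""

    def __init__(self, motion_dim, visual_channels):
        super().__init__()
        # gamma(.) and beta(.) are single fully connected layers producing
        # per-channel scaling and bias vectors from the motion feature.
        self.gamma = nn.Linear(motion_dim, visual_channels)
        self.beta = nn.Linear(motion_dim, visual_channels)

    def forward(self, f_m, f_v):
        # f_m: pooled motion feature (B, motion_dim)
        # f_v: visual feature map (B, C, H, W)
        g = self.gamma(f_m)[:, :, None, None]   # broadcast over H, W
        b = self.beta(f_m)[:, :, None, None]
        return g * f_v + b

film = FiLM(motion_dim=512, visual_channels=512)
f_vm = film(torch.randn(2, 512), torch.randn(2, 512, 6, 6))
```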
Inspired by the architecture of self-attention in the transformer [59], the **C**ross-**M**odal **A**ttention (CMA) is the key process for discovering the inter-relationships across modalities in the proposed fusion network, as shown in Fig. 5(b). Given the audio feature \(F_{a}\) and the modulated visual feature \(F_{vm}\), 2D
Fig. 5: The structure diagram of each part in the fusion module. (a): the interaction module, including the affine transformation; (b): the cross-modal attention layer.
convolution operations are used to generate the query (\(Q\)), key (\(K\)) and value (\(V\)) as Eq. 3. Then the fused feature \(F_{fused}\) is calculated by Eq. 4.
\[K=Conv_{k}(F_{a}),Q=Conv_{q}(F_{a}),V=Conv_{v}(F_{vm}). \tag{3}\]
\[F_{fused}=CMA(F_{vm},F_{a})=SoftMax(\frac{KQ^{T}}{\sqrt{d}})V+F_{vm}, \tag{4}\]
where \(d\) denotes the dimension of \(Q\), \(K\) and \(V\).
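The attention of Eqs. 3-4 can be sketched as below, treating each spatial location as a token with \(d=C\) channels; the \(1\times 1\) convolutions and this tokenization are our assumptions, since the paper only states that 2D convolutions produce \(Q\), \(K\) and \(V\):

```python
import torch
import torch.nn as nn

class CrossModalAttention(nn.Module):
    """Eqs. 3-4: K, Q from the audio feature, V from the modulated visual
    feature; SoftMax(K Q^T / sqrt(d)) V is added back to F_vm as a residual."""
    def __init__(self, channels: int):
        super().__init__()
        self.conv_k = nn.Conv2d(channels, channels, 1)
        self.conv_q = nn.Conv2d(channels, channels, 1)
        self.conv_v = nn.Conv2d(channels, channels, 1)

    def forward(self, f_vm: torch.Tensor, f_a: torch.Tensor) -> torch.Tensor:
        b, c, h, w = f_vm.shape
        k = self.conv_k(f_a).flatten(2).transpose(1, 2)   # B x HW x C
        q = self.conv_q(f_a).flatten(2).transpose(1, 2)
        v = self.conv_v(f_vm).flatten(2).transpose(1, 2)
        attn = torch.softmax(k @ q.transpose(1, 2) / c ** 0.5, dim=-1)  # B x HW x HW
        fused = (attn @ v).transpose(1, 2).reshape(b, c, h, w)
        return fused + f_vm                               # residual connection to F_vm

cma = CrossModalAttention(channels=64)
out = cma(torch.randn(2, 64, 7, 7), torch.randn(2, 64, 7, 7))
```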
To fully exploit the features from different depths of the network, a multi-scale feature fusion strategy is developed (sketched below): (i) CMF is used to fuse the feature maps from each of the last two blocks of the feature extraction network; (ii) the fused feature of the penultimate block is downsampled for dimension alignment; (iii) the two fused feature representations are concatenated along the channel axis, and the concatenated feature is input into the classifier to generate the probabilities of real and fake.
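A sketch of steps (ii)-(iii) follows; the downsampling operator, classifier head and feature shapes are illustrative assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def multi_scale_fuse(fused_pen: torch.Tensor, fused_last: torch.Tensor,
                     classifier: nn.Module) -> torch.Tensor:
    """Align the penultimate-block fused feature to the last-block resolution,
    concatenate along the channel axis, and classify real vs. fake."""
    fused_pen = F.adaptive_avg_pool2d(fused_pen, fused_last.shape[-2:])
    return classifier(torch.cat([fused_pen, fused_last], dim=1))

# toy usage with placeholder channel counts and resolutions
clf = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(256 + 512, 1))
logits = multi_scale_fuse(torch.randn(2, 256, 14, 14), torch.randn(2, 512, 7, 7), clf)
```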
### _Audio-Visual Attention Mechanism_
The visual attention mechanism has been extensively studied [71, 72, 73, 66] and can be incorporated into CNN architectures for feature refinement. Following the forming mechanism of human attention introduced in Sec. 2, the region-of-interest is established based on the interactions between the audio and visual signals. Therefore, an effective audio-visual attention mechanism is designed to enable the detection network to focus more on the significant artifacts in the feature maps.
Given an intermediate visual feature map \(F^{i}_{v}\in\mathbb{R}^{C_{1}\times H\times W}\) and a corresponding audio feature map \(F^{i}_{a}\in\mathbb{R}^{C_{2}\times H\times W}\) as input, AVAM infers a 2D attention map \(M\in\mathbb{R}^{H\times W}\) as illustrated in Fig. 6. In the beginning, an average-pooling operation is applied along the channel axis to both \(F^{i}_{v}\) and \(F^{i}_{a}\), and then the results are concatenated to generate a feature descriptor \(F^{i}_{avg}\in\mathbb{R}^{2\times H\times W}\). In the same way, \(F^{i}_{max}\in\mathbb{R}^{2\times H\times W}\) is obtained by utilizing max-pooling operation. The above operations are shown as Eq. 5.
\[\begin{split}& F^{i}_{avg}=Concat(AvgPool(F^{i}_{v}),AvgPool(F^{i}_{a})), \\ & F^{i}_{max}=Concat(MaxPool(F^{i}_{v}),MaxPool(F^{i}_{a})),\end{split} \tag{5}\]
where \(AvgPool\) and \(MaxPool\) denote the average-pooling and max-pooling operations, respectively. \(Concat\) is the concatenation operation.
Then, we apply a 2D convolution layer followed by a sigmoid activation function to \(F^{i}_{avg}\) and \(F^{i}_{max}\) separately, and the results are concatenated to generate an intermediate feature map \(F^{\prime}\in\mathbb{R}^{2\times H\times W}\). Finally, another 2D convolution layer followed by a sigmoid activation function is utilized to generate an attention map \(M\in\mathbb{R}^{H\times W}\), which encodes the regions to be emphasized or suppressed. The above operations are shown as Eq. 6.
\[\begin{split}& F^{\prime}=Concat(\sigma(Conv(F^{i}_{avg})), \sigma(Conv(F^{i}_{max}))),\\ & M=\sigma(Conv(F^{\prime})),\end{split} \tag{6}\]
where \(Concat\) is the concatenation operation, \(Conv\) represents the 2D convolution operation with the filter size of \(7\times 7\), and \(\sigma\) denotes the sigmoid function.
Finally, the attention map \(M\in\mathbb{R}^{H\times W}\) is multiplied with the input visual feature map \(F^{i}_{v}\in\mathbb{R}^{C_{1}\times H\times W}\) for adaptive feature refinement, as described in Eq. 7. In the resulting visual feature map \(F^{i}_{attn}\in\mathbb{R}^{C_{1}\times H\times W}\), the informative regions are emphasized while the inessential parts are suppressed.
\[F^{i}_{attn}=F^{i}_{v}\,\otimes\,M, \tag{7}\]
where \(\otimes\) denotes element-wise multiplication.
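The whole module of Eqs. 5-7 amounts to the following sketch; the \(7\times 7\) kernel follows the text, while mapping each branch to a single channel is implied by the stated shape \(F^{\prime}\in\mathbb{R}^{2\times H\times W}\):

```python
import torch
import torch.nn as nn

class AVAM(nn.Module):
    """Eqs. 5-7: channel-wise avg/max pooling of visual and audio features,
    two conv+sigmoid branches, a final conv+sigmoid attention map M, and
    element-wise refinement of the visual feature."""
    def __init__(self):
        super().__init__()
        self.conv_avg = nn.Conv2d(2, 1, 7, padding=3)
        self.conv_max = nn.Conv2d(2, 1, 7, padding=3)
        self.conv_out = nn.Conv2d(2, 1, 7, padding=3)

    def forward(self, f_v: torch.Tensor, f_a: torch.Tensor) -> torch.Tensor:
        f_avg = torch.cat([f_v.mean(1, keepdim=True), f_a.mean(1, keepdim=True)], 1)
        f_max = torch.cat([f_v.max(1, keepdim=True).values,
                           f_a.max(1, keepdim=True).values], 1)
        f_prime = torch.cat([torch.sigmoid(self.conv_avg(f_avg)),
                             torch.sigmoid(self.conv_max(f_max))], 1)
        m = torch.sigmoid(self.conv_out(f_prime))   # B x 1 x H x W attention map
        return f_v * m                              # Eq. 7 (broadcast over channels)

avam = AVAM()
refined = avam(torch.randn(2, 64, 28, 28), torch.randn(2, 32, 28, 28))
```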
In talking face generation, the tampering usually happens in limited regions of the whole face, which indicates that the detection network should pay more attention to these informative regions, e.g. the lip region. Considering the guidance of audio on the lip shape, the AVAM, instead of a conventional visual attention module, is integrated into our detection network.
The features learned from the CNN blocks of the audio encoder and visual encoder are dimensionally mismatched, which cannot fulfil the same-size requirement on the input audio and visual feature maps of our AVAM. Also considering the flexibility of integrating AVAM into other CNN architectures, in our work, the spectrogram of the input audio stream is first resized to the same size as the input visual feature. In correspondence to the visual feature map \(F_{v}^{i}\), the resized audio spectrogram is input into a Siamese-like visual encoder to generate the intermediate audio feature map \(F_{a}^{i}\). Then, by replacing the original visual feature map \(F_{v}^{i}\) with the new attentional feature map \(F_{attn}^{i}\) computed by AVAM, a modified fake talking face detection network is formed. The proposed AVAM is integrated into each block of the visual encoder (5 blocks) in FTFDNet to achieve overall network optimization.

Fig. 6: Diagram of the proposed AVAM. Compared with the conventional visual attention mechanism CBAM [66], the audio information is integrated into the attention module as a supplement, in a similar way to the processing of the visual feature.
## IV Experiments
### _Fake Talking Face Detection Dataset_
Although DeepFake video detection has attracted a lot of attention recently, to the best of our knowledge, fake talking face video detection has not been widely studied yet. This means that publicly available datasets of fake talking face videos are still far from sufficient. Fortunately, a series of talking face generation approaches [16, 17, 18, 19, 20, 21, 22, 23] make dataset generation possible. Some of the early methods are limited by identity and language, which makes them unsuitable for generating negative samples. Considering algorithm performance and code availability, in our experiment, high-performance and mainstream models (e.g. Wav2Lip [20], MakeItTalk [22], PC-AVS [23], etc.) are employed to generate fake talking face videos on VoxCeleb2. VoxCeleb2 is an audio-visual dataset consisting of short clips of human speech from YouTube. The construction of FTFDD is introduced in detail below.
There are more than 6000 celebrities in VoxCeleb2 covering over 1 million utterances. Since each person in the dataset has many videos, only 70000 videos are selected as a candidate set, each randomly sampled from a person's utterances. 30000 videos are chosen from the candidate set as positive samples, and the rest are used to generate fake talking videos. For each video used to generate a fake video, an audio stream is taken from another randomly-sampled video in the candidate set. It is noteworthy that some talking face generation methods only require a single portrait image as an identity reference (MakeItTalk and PC-AVS), in which case the first frame of the video is selected by default as the portrait image. Specifically, PC-AVS requires an extra video as a head pose reference, and we used the original video as the pose reference to obtain more harmonious results. After removing videos where generation failed or no face was detected during synthesis, more than 30000 fake talking face videos are left. Together with the 30000 genuine videos discussed above, there are 64679 videos (each segment is at least 1.6 seconds long) included in FTFDD. The duration of FTFDD is more than 120 hours in total. 60% of the data is used for model training, 20% for model validation, and 20% for testing the trained models. The exact number of videos in each class of FTFDD can be found in Table II. Table II also shows the confidence score (\(\mathrm{Sync}_{\mathrm{conf}}\)) proposed in SyncNet [36] to account for the accuracy of lip-syncing. Notably, the confidence score of the fake videos even exceeds that of the real videos, which indicates that the fake talking face videos in FTFDD are nearly comparable to realistic videos. Fig. 2 shows some sample fake faces from our FTFDD (top) and from DeepFake video detection datasets (bottom), including FaceForensics++ (FF++) [25] and Celeb-DF [26]. The fake faces in FF++ and Celeb-DF show noticeable artifacts, while our forged talking faces are indistinguishable from the real ones.
Table I lists the popular DeepFake video detection datasets. Only DFDC [44] and DFTIMIT [45] contain both audio and video and can thus be used for audio-visual studies. Unlike the others, which contain only videos with manipulated faces, DFDC contains a mix of videos with manipulated faces, audio, or both. The established FTFDD is also listed in Table I; in comparison to the other datasets, FTFDD has a large number of videos and contains both audio and visual modalities, which should encourage further research in the area of audio-visual DeepFake video detection.
### _Experiment Configurations_
To take full advantage of the temporal information between consecutive video frames, in our experiments, we randomly choose \(T=4\) consecutive frames from each video as the input of our network; a further discussion on \(T\) is provided in the ablation studies. Instead of full video frames, only the cropped face patches are input into the network, because the mouth shape forgery occurs on the face during talking face generation. Face detection is performed using S3FD [76], and the obtained face crops are resized to \(112\times 112\times 3\). The models are trained using the Adam [77] optimizer with default parameters (\(\beta_{1}=0.9\), \(\beta_{2}=0.999\)) and an initial learning rate of \(0.001\). All experiments are conducted on 2 RTX-2080Ti GPUs with a batch size of 32. For evaluation, 25 continuous frames (a 1-second segment) from each video in the test set are used to evaluate the trained models with the metrics of detection accuracy (ACC), area under the curve (AUC) and LogLoss.
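The clip sampling and optimizer configuration above reduce to a few lines; the clip length and the placeholder model are our own stand-ins:

```python
import random
import torch

T = 4                                        # consecutive input frames (Sec. IV-F1)
clip = torch.randn(40, 3, 112, 112)          # stand-in for S3FD face crops of one video
start = random.randint(0, clip.shape[0] - T) # random T-frame window
x = clip[start:start + T].reshape(1, T * 3, 112, 112)  # packed as in Sec. III

model = torch.nn.Conv2d(T * 3, 1, 3)         # placeholder network, not FTFDNet itself
optim = torch.optim.Adam(model.parameters(), lr=1e-3, betas=(0.9, 0.999))
```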
### _Comparison Works_
To demonstrate the superiority of the proposed multi-modal detection, comparative experiments have been conducted against other state-of-the-art works: (1) MesoInception [30], which employs CNNs with a small number of layers based on the Inception architecture [78]. (2) Xception [25], a single-frame based detector adapted from the Xception network. (3) LipForensics [49], a multi-frame based detector that learns the inconsistencies of mouth movements. (4) CNN-GRU [48], which is implemented by ensembling DenseNet121 and a GRU.
### _Evaluation on Fake Talking Face Detection Dataset_
#### Iv-D1 Quantitative Analysis
The proposed FTFDNet and the other comparison models are trained on the training set of FTFDD, and the evaluation is performed on the test set. All the detection results are shown in Table IV. It can be seen that the proposed model achieves performance significantly superior to all the other approaches. MesoInception, simply stacked with convolution layers, has the worst performance, and Xception, which is widely used in DeepFake detection, performs much better than MesoInception. LipForensics, which only learns the motion features of the lips, obtains a relatively satisfactory result in this task because the forgery mainly occurs in the lip region of fake talking face videos. It is noticeable that the proposed FTFDNet (based on a VGG variant) outperforms CNN-GRU (based on a more complex and efficient backbone) and achieves the highest detection accuracy of 98.27% and the lowest LogLoss of 0.0546.
#### Iv-D2 Qualitative Analysis
Fig. 7 shows the visualization of the detection results of the best-performing detection model, our FTFDNet. In Fig. 7, the genuine faces are marked by a green outline (top) and the generated fake faces are marked by a red outline (bottom). By observing the enlarged mouth areas, it can be found that the fake talking faces are unidentifiable by the naked eye, which sufficiently demonstrates the application value of the proposed high-precision detection model.
### _Evaluation on DeepFake Video Detection Datasets_
To validate the effectiveness of the proposed FTFDNet for general face forgery detection, extensive experiments have also been conducted on the popular DeepFake video detection datasets DFDC [44] and DFTIMIT [45], which contain both the visual and audio modalities. Specifically, DF-TIMIT provides two different forgery qualities: Low Quality (LQ) and High Quality (HQ). Since both forgeries only contain videos with manipulated faces, the real video source VidTIMIT [75] is used to provide positive samples. For DFDC, 18000 videos are randomly sampled in consideration of the computational overhead. Both DF-TIMIT and DFDC are split into training (60%), validation (20%), and testing (20%) sets, the same as the established FTFDD.
All the detection results are presented in Table III. For the approaches whose implementations are not available, we directly quote the results reported in their papers, and "_" denotes that the result is unavailable. Owing to the smaller number of videos and the obvious artifacts in the fake videos, all detection methods achieve high detection performance on DF-TIMIT (LQ). Both FTFDNet and CNN-GRU also achieve a high accuracy of 99% on DF-TIMIT (HQ) and significantly outperform the other methods as well. On DFDC, with its complex video forgeries, LipForensics shows very poor detection performance because its learning of forgery features is limited to the lip regions. It is noteworthy that the proposed FTFDNet outperforms the other visual-only and audio-visual based detectors with an accuracy of 88.45% and an AUC of 93.35%, which demonstrates superior performance compared to the other state-of-the-art works in not only fake talking face detection but also conventional DeepFake video detection.
### _Ablation Studies_
#### Iv-F1 Ablation Studies on Input Video Frame Number
In [32], Tariq _et al._ indicated that the temporal information between consecutive video frames (with 5 consecutive frames as input) is effective for discovering imperceptible artifacts. Usually, for available talking face generation models, consecutive frames from a pre-defined buffer \(T\) (\(T=5\) in Wav2Lip [20], LipGAN [19] and PC-AVS [23]) are input to obtain temporal context information. This also means that a strong correlation between lips can be formed during talking face generation, which further demonstrates the necessity of multi-frame exploration. Similarly, using a frame buffer of 5, SyncNet [36] is designed to evaluate the synchronisation between mouth motion and speech in a video. These works motivate us to design the proposed multi-frame detection network with about 5 consecutive frames as model input.
The ablation study is conducted on the number of consecutive frames randomly sampled from each video of FTFDD as the input of the network. Fig. 8 reports the curves of ACC and AUC with \(T\in\{1,2,\ldots,6\}\) on the audio-visual network (the audio and visual encoders of FTFDNet without AVAM). It can be found that the inter-frame information indeed enhances the accuracy of fake talking face detection (the accuracy for \(T>1\) is higher than for \(T=1\)), as expected. At the same time, a bigger buffer size does not necessarily mean better performance (the accuracy for \(T=4\) is higher than for \(T=5\)). A possible reason is that the accumulated temporal context information between frames also increases the complexity of recognition, which needs to be balanced. Through experimental verification, we set \(T\) to 4 to achieve the best performance.
#### Iv-F2 Ablation Studies on AVAM
In order to confirm the performance of our proposed AVAM, we conduct ablation experiments: (1) w/o Attention, which represents the audio-visual network. (2) w CBAM, which incorporates CBAM [66] into the audio-visual network in a similar way as FTFDNet. (3) w AVAM, which incorporates our proposed AVAM into the audio-visual network. As shown in Table VI, both CBAM and AVAM can improve the performance of the detection model, and our AVAM achieves an increase of 0.83% on detection accuracy, which is significantly superior to the performance of CBAM with an increase of 0.34%. The experimental results have demonstrated that audio information is useful to build up the region of interest in audio-visual tasks.
Fig. 8: The ACC and AUC of \(T\) with different values on the FTFDD dataset. It’s observed that when \(T=4\), the audio-visual network performs the best.
Fig. 7: Examples of fake talking face video detection. Top line: the original talking faces from the video. Bottom line: fake talking faces from FTFDD, which are difficult to be recognized with the naked eyes. Our model can precisely identify whether any given video is real or fake.
Fig. 9 shows the visualization of the fused feature maps using Grad-CAM [79], where the feature maps are extracted from the last convolution block of w/o Attention (top), w CBAM (middle) and w AVAM (bottom). It is found that, after integrating the attention mechanism, the network produces different degrees of attention over different regions of the whole face image. Compared to the conventional visual attention mechanism CBAM, the proposed AVAM enables the fake detection network to focus its attention on the lip regions, which might be tampered with by talking face generation methods. This verifies the usefulness of the audio information for key region extraction. Once these informative regions have been emphasized, the fake talking face detection network is able to achieve higher detection accuracy.
Fig. 9: Visualization of feature maps using Grad-CAM [79]. Some examples of final fused feature maps from w/o Attention (top), w CBAM (middle) and w AVAM (bottom) are visualized in their original face crops.

Similar to CBAM, the proposed AVAM can be incorporated into other CNN architectures for audio-visual learning. To verify the effectiveness of AVAM on other CNN based networks, we add AVAM into the visual-only MesoInception [30]. CBAM and AVAM are respectively incorporated into each block of the MesoInception network in the same way as in FTFDNet. To meet AVAM's requirement for an audio stream, the same structure as MesoInception is employed to design an audio branch that generates audio features in correspondence to the obtained visual features. The detection results on the DFDC [44] and DFTIMIT (HQ) [45] datasets are shown in Table V. It is found that MesoInception with AVAM performs the best, which shows that our AVAM can be flexibly integrated with other CNN models. Moreover, the performance improvement is also greater than that of the visual-only attention mechanism CBAM.
#### Iv-F3 Ablation Studies on Fusion Strategies
Owing to their simplicity and applicability, concatenation and element-wise sum are the most commonly used fusion strategies. However, these simple fusion methods ignore the correlation between different modalities. In contrast, the proposed CMF, inspired by the multi-head attention architecture in the transformer, can effectively achieve a joint representation of the different modalities. To verify the superiority of CMF over conventional fusion strategies, ablation experiments are conducted on the FTFDD dataset by changing the fusion module in FTFDNet. The experimental results are shown in Table VII; it can easily be found that the CMF-based fusion performs better than the common fusion methods of concatenation and element-wise sum, which indicates that the inter-relationships across different modalities are effectively bridged by the proposed fusion.
To discuss the details of our fusion strategy setup, we conduct the experiment CMF-kqv, in which the key (\(K\)) and value (\(V\)) are generated from the modulated visual feature \(F_{vm}\) and the query (\(Q\)) is calculated from the audio feature \(F_{a}\). CMF-kqv underperforms CMF, which shows that the self-attention architecture in CMF is superior. Furthermore, Table VII shows the detection results of fusing features from different stages of the backbone network (CMF-last: only using the feature from the last stage; CMF-last3: using the features from the last three stages). The results show that the multi-scale fusion strategy using the last two stages performs the best.
#### Iv-F4 Ablation Studies on Modalities
To verify the contribution of each modality to the detection performance, we conduct ablation experiments on the audio, visual and motion modalities. Table VIII lists the detection results of using each single modality and combinations of multiple modalities. As shown in Table VIII, using only the audio feature achieves the lowest performance, because the audio of a fake video is not manipulated, while the motion-feature-only network obtains an accuracy of more than 80%, which confirms that there are differences in the optical flow between real and fake videos. Table VIII also shows a detection accuracy increase of 1.34% when audio information is used to help the detection model capture the manipulation cues. To verify that the motion feature from the optical flow helps capture the subtle artifact cues, we integrate the audio, visual and motion features using our proposed CMF, and a detection accuracy increase of 1.08% is obtained compared with the audio-visual network.
## V Limitation and Discussion
Our established dataset, unfortunately, is constrained by the employed talking face generation methods, e.g. by the lip synchronization accuracy and the quality of the images produced by current algorithms. The use of low-resolution real talking face videos also contributes to the production of low-quality forged videos. With the continued advancement of talking face generation techniques, keeping a fake talking face dataset all-embracing poses a formidable challenge.
To solve the challenging task of fake talking face video detection, in this paper, we propose a novel **F**ake **T**alking **F**ace **D**etection **N**etwork (FTFDNet) that incorporates audio, visual and motion features using an efficient **C**ross-**M**odal **F**usion (CMF) strategy. Beyond that, we propose an **A**udio-**V**isual **A**ttention **M**echanism (AVAM), which enables the network to focus on the most relevant portions of the feature maps. To further improve the detection performance, the proposed AVAM is embedded into FTFDNet and obtains a significant performance boost over the popular visual attention mechanism. Training and evaluation of the proposed FTFDNet are performed on the established **F**ake **T**alking **F**ace **D**etection **D**ataset (FTFDD) and on popular DeepFake video detection datasets; the proposed FTFDNet shows excellent performance in detecting not only fake talking face videos but also DeepFake videos.
## Acknowledgments
This work was supported in part by the National Natural Science Foundation of China under Grants 61971352 and 61862043, in part by the Natural Science Foundation of Jiangxi Province under Grant 20204BCJ22011.
|
2304.05669 | Factorized Inverse Path Tracing for Efficient and Accurate
Material-Lighting Estimation | Inverse path tracing has recently been applied to joint material and lighting
estimation, given geometry and multi-view HDR observations of an indoor scene.
However, it has two major limitations: path tracing is expensive to compute,
and ambiguities exist between reflection and emission. Our Factorized Inverse
Path Tracing (FIPT) addresses these challenges by using a factored light
transport formulation and finds emitters driven by rendering errors. Our
algorithm enables accurate material and lighting optimization faster than
previous work, and is more effective at resolving ambiguities. The exhaustive
experiments on synthetic scenes show that our method (1) outperforms
state-of-the-art indoor inverse rendering and relighting methods particularly
in the presence of complex illumination effects; (2) speeds up inverse path
tracing optimization to less than an hour. We further demonstrate robustness to
noisy inputs through material and lighting estimates that allow plausible
relighting in a real scene. The source code is available at:
https://github.com/lwwu2/fipt | Liwen Wu, Rui Zhu, Mustafa B. Yaldiz, Yinhao Zhu, Hong Cai, Janarbek Matai, Fatih Porikli, Tzu-Mao Li, Manmohan Chandraker, Ravi Ramamoorthi | 2023-04-12T07:46:05Z | http://arxiv.org/abs/2304.05669v2 | # Factorized Inverse Path Tracing
###### Abstract
Inverse path tracing has recently been applied to joint material and lighting estimation, given geometry and multi-view HDR observations of an indoor scene. However, it has two major limitations: path tracing is expensive to compute, and ambiguities exist between reflection and emission. We propose a novel Factorized Inverse Path Tracing (FIPT) method which utilizes a factored light transport formulation and finds emitters driven by rendering errors. Our algorithm enables accurate material and lighting optimization faster than previous work, and is more effective at resolving ambiguities. The exhaustive experiments on synthetic scenes show that our method (1) outperforms state-of-the-art indoor inverse rendering and relighting methods particularly in the presence of complex illumination effects; (2) speeds up inverse path tracing optimization to less than an hour. We further demonstrate robustness to noisy inputs through material and lighting estimates that allow plausible relighting in a real scene. The source code is available at: [https://github.com/lwwu2/fipt](https://github.com/lwwu2/fipt)
## 1 Introduction
We address the task of estimating the materials and lighting of an indoor scene based on image observations (Fig. 1). Recent work has shown that optimizing per-scene material and emission profiles through a photometric loss and a differentiable renderer, with geometry reconstructed by existing 3D reconstruction algorithms [40, 30, 53], can lead to promising results [1, 33, 51]. However, key challenges remain unsolved in these methods: (1) they require expensive Monte Carlo estimation for both the loss and derivative evaluations; (2) inherent ambiguity exists between material and lighting, and this ill-posed inverse problem hinders the optimization. We present an alternative inverse rendering algorithm that outperforms the state-of-the-art in terms of both efficiency and accuracy.
Optimizing scene parameters with Monte Carlo differentiable rendering can suffer from high variance and lead to slow convergence. Inspired by the classical rendering literature [49], our key idea to address this challenge is to factor the material term out of the rendering integral and bake the incoming radiance, which significantly speeds up inverse rendering. Unlike prior work that applies a similar factorization [37, 25] but does not consider view-dependent reflections, our method extends to general specular materials and both local and global illumination.
To address the ill-posed nature of the joint optimization of material and lighting, our observation is that, by taking the emission term out of the rendering equation for the first bounce, only emissive surfaces will exhibit a high rendering loss. This observation allows us to design an effective way to detect emitters. We incorporate our emitter detection method into a full inverse rendering pipeline and estimate the emission independently after emitter detection.

Figure 1: **Ours vs standard IPT.** IPT [1] takes a piecewise constant parameterization of material to reduce Monte Carlo variance and ambiguity for inverse rendering, losing fine spatial details as a result. Directly extending it to a complex material representation (MILO [51]) shows very slow convergence. In contrast, we propose Factorized Inverse Path Tracing (FIPT) to remove this variance and reduce ambiguities, yielding efficient and high quality BRDF-emission estimation (4th row), appealing relighting (1st row), and object insertion (the bunny on the table). The presented scene is synthetic, with the inset showing the input (lower-left sub-figure).
Overall, our method achieves fast convergence on the material-lighting estimation task thanks to our factorized light transport formulation and emitter extraction strategy (Fig. 1). To enable an accurate BRDF-emission comparison, we perform exhaustive experiments on synthetic scenes (Sec. 5.1, 5.2) while also validating on noisy data from a real captured scene (Sec. 5.3). The results show that our method obtains high-quality reconstructions of complicated indoor scenes where the state-of-the-art can easily fail (Tab. 2), while training 4-10 times faster (Tab. 3).
## 2 Related Work
**Inverse rendering.** Inverse rendering aims to estimate the intrinsic properties of an observed scene, via decoupling material, geometry and lighting which jointly contribute to image appearance. Given the inherent ambiguity between the aforementioned high-dimensional factors, classical methods seek to regularize the solution with a surface rendering objective. Approaches include a low-dimensional surface reflectance representation [52], sparsity priors for intrinsic images [4], and spherical-harmonics-based lighting representation [29]. These methods rely on simplified representations of material or lighting, and their regression-based nature calls for heuristic-based priors which may not be appropriate for a wide variety of scenes.
Earlier work can already photorealistically render synthetic objects in a photograph by estimating lighting and geometry [12, 18, 19]. These methods do not retrieve the materials of the scene, and thus cannot show the reflection of the object on a specular surface in the scene.
**Learning-based methods.** Learning-based approaches leverage priors learned from datasets. These methods typically take a single image [41, 23, 57, 48, 24] or a pair of stereo images [44], and apply deep learning models to predict spatially-varying materials and lighting. Although learned priors help to regularize individual components, these methods do not explicitly model the physics of global light transport and have to rely on approximated inference [27].
Philip _et al_. [37] take multiple images and aggregate multi-view irradiance and albedo information in a pre-trained network to synthesize the relit image. The network takes as input physically rendered shadings computed from semi-automatically estimated light sources, and outputs the image after relighting. We show in the results that, on our synthetic scenes, their method's reliance on a network to render the final image can lead to undesired artifacts, while our use of a physically-based renderer delivers more realistic images.
**Local or distant lighting.** Many recent methods aim to model a specific form of light transport. Some methods focus on a single object or distant illumination (environment map) [14, 55, 32, 7, 8, 56]. Srinivasan _et al_. [43] model two-bounce volumetric lighting with known light sources, and Yao _et al_. [50] represent incident radiance as a 5D network. However, optimization of spatially-varying lighting without physically-based constraints is extremely ill-posed, especially without abundant observation of light sources. Moreover, object-centric methods do not trivially generalize to indoor settings, where complex lighting effects including occlusion, inter-reflections, and directional highlights call for modeling of long-range interactions of lighting and scene properties.
**Global light transport.** Most related to our work, to model general global light transport, recent methods [1, 33, 51] build on a per-scene optimization pipeline using a differentiable path tracer [22, 3, 34, 54]. These methods jointly optimize material and lighting along extensively sampled light paths, and are thus subject to incorrect and slow convergence and high variance due to expensive path queries, gradient propagation, and Monte Carlo sampling, as well as the inherent ambiguities between materials and lighting. We propose an inverse rendering pipeline that models the global light transport, but converges significantly faster and more accurately than existing methods. Our variance reduction technique using light baking is inspired by classical rendering methods [49, 21, 42], and we tightly integrate the technique in an inverse rendering pipeline. A concurrent work, TexIR [26], adopts similar ideas to ours by using a pre-baked irradiance as an HDR texture map to recover scene materials. However, they do not model view-dependent light transport and do not estimate emission.
## 3 Background
Given posed HDR image captures of an indoor scene, our method builds upon an input mesh or existing 3D reconstruction algorithms (_e.g_. MonoSDF [53]) to further estimate the material and lighting of the scene. To ensure the problem is well-constrained, we make assumptions on scene acquisition similar to previous works [1, 37, 51]: the dominant light sources and most of the scene geometry are observed in the input images.
\begin{table}
\begin{tabular}{|l|l|}
\hline
\((\cdot)_{+}\) & dot product clamped to positive value \\
\(\mathbf{\omega}_{i}\) & incident (light) direction \\
\(\mathbf{\omega}_{o}\) & outgoing (viewing) direction \\
\(\mathbf{h}\) & half vector: \((\mathbf{\omega}_{i}+\mathbf{\omega}_{o})/\|\mathbf{\omega}_{i}+\mathbf{\omega}_{o}\|_{2}\) \\
\(\mathbf{n}\) & surface normal \\
\(\mathbf{k}_{d}(\mathbf{x})\) & diffuse reflectance: \(\mathbf{a}(\mathbf{x})(1-m(\mathbf{x}))\) \\
\(\mathbf{k}_{s}(\mathbf{x})\) & specular reflectance: \(\mathbf{a}(\mathbf{x})m(\mathbf{x})+0.04(1-m(\mathbf{x}))\) \\
\(\mathbf{a}(\mathbf{x})\) & surface base color \\
\(m(\mathbf{x})\) & surface metallic \\
\(\sigma(\mathbf{x})\) & surface roughness \\
\(D(\cdot)\) & GGX normal distribution [47] \\
\(F(\cdot)\) & Schlick’s approximation of the Fresnel coefficient [39] \\
\(G(\cdot)\) & geometry (shadow-masking) term [47] \\
\hline
\end{tabular}
\end{table}
Table 1: **Notations**
The material is described as a spatially varying BRDF [17] (including the cosine term) with notations specified in Tab. 1:
\[\begin{split}& f(\mathbf{x},\mathbf{\omega}_{i},\mathbf{\omega}_{o})= \frac{\mathbf{k}_{d}(\mathbf{x})}{\pi}\left(\mathbf{n}\cdot\mathbf{\omega}_{i} \right)_{+}\\ &+\frac{F(\mathbf{\omega}_{i},\mathbf{h},\mathbf{k}_{s}(\mathbf{x}))D (\mathbf{h},\mathbf{n},\sigma(\mathbf{x}))G(\mathbf{\omega}_{i},\mathbf{\omega}_{o}, \mathbf{n},\sigma(\mathbf{x}))}{4(\mathbf{n}\cdot\mathbf{\omega}_{o})},\end{split} \tag{1}\]
where \(\mathbf{k}_{d}=\mathbf{a}(1-m)\) and \(\mathbf{k}_{s}=0.04(1-m)+\mathbf{a}m\) are the diffuse and specular reflectance with base color \(\mathbf{a}\) and metallic \(m\) controlling the two coefficients. The emitted light is assumed to be view-independent across the surface: \(\mathbf{L}_{e}(\mathbf{x},\mathbf{\omega}_{o})=\mathbf{L}_{e}(\mathbf{x})\), which generalizes well to the emission profile of the indoor scene (extending to more complex emitters is possible; see Sec. 5.4).
With the parameterization above, our goal is to find \(\mathbf{a},\sigma,m,\mathbf{L}_{e}\) that minimize the difference of renderings with respect to the ground truth over the training images:
\[\min_{\mathbf{a},\sigma,m,\mathbf{L}_{e}}\sum_{\mathbf{x},\mathbf{ \omega}_{o}}\left\|\mathbf{L}(\mathbf{x},\mathbf{\omega}_{o})-\mathbf{L}_{gt}( \mathbf{x},\mathbf{\omega}_{o})\right\|_{2}^{2} \tag{2}\] \[\mathbf{L}(\mathbf{x},\mathbf{\omega}_{o})=\mathbf{L}_{e}(\mathbf{x},\mathbf{\omega}_{o})+\mathbf{L}_{r}(\mathbf{x},\mathbf{\omega}_{o})\] (3) \[\mathbf{L}_{r}(\mathbf{x},\mathbf{\omega}_{o})=\int_{\Omega^{+}} \mathbf{L}_{i}(\mathbf{x},\mathbf{\omega}_{i})f(\mathbf{x},\mathbf{\omega}_{i},\mathbf{ \omega}_{o})d\mathbf{\omega}_{i}. \tag{4}\]
\(\mathbf{L}_{gt}\) is a ground truth RGB pixel obtained from camera ray \((\mathbf{x},\mathbf{\omega}_{o})\). \(\mathbf{L}\) denotes the synthesized rendering following the rendering equation [16], where \(\mathbf{L}_{e}\) is the surface emission. \(\mathbf{L}_{r}\) is the reflection equation given by integrating incident light \(\mathbf{L}_{i}\) times the BRDF response.
## 4 Factorized Inverse Path Tracing
To optimize the re-rendering error (Eq. 2), previous works [1, 33] apply differentiable path tracing to solve Eq. 3 and update the BRDF and emission jointly with gradient descent. This approach can be unstable and inefficient: (1) gradient descent optimization is computationally intensive, which limits the number of path tracing samples and therefore increases the estimation variance; (2) fundamental ambiguities exist between BRDF and emission, making the emission optimization difficult to regularize and slow to converge. To reduce the variance in optimization, we propose a factorized light transport representation (Sec. 4.1) which utilizes pre-baked diffuse and specular shading maps (Eq. 6) to separate the BRDF coefficients from the rendering integral.
Based on this theory, our full pipeline is demonstrated in Fig. 2, which optimizes dense BRDF and emission from posed images and scene geometry. The pipeline consists of 3 stages: (1) first, the factorized diffuse and specular shadings are initialized (baked) as described in Sec. 4.2; (2) given baked shadings, BRDF and emission mask are then optimized (Sec. 4.5), followed by emitter extraction (Sec. 4.4); (3) given current BRDF-emission estimation, the shadings are refined (Sec. 4.3), and the algorithm alternates between (2) and (3) until convergence.
### Factorized light transport
A common way to speed up path tracing in the rendering literature [49, 21, 42] is to separate BRDF coefficients from the rendering integrals, and then to pre-bake and reuse the integral parts. Employing a similar idea, we rewrite the reflection equation (Eq. 4) as:
\[\begin{split}\mathbf{L}_{r}(\mathbf{x},\mathbf{\omega}_{o})&=\mathbf{k}_{d}\mathbf{L}_{d}(\mathbf{x})+\mathbf{k}_{s}\mathbf{L}_{s}^{0}(\mathbf{x},\mathbf{\omega}_{o},\sigma)+\mathbf{L}_{s}^{1}(\mathbf{x},\mathbf{\omega}_{o},\sigma)\\ \mathbf{L}_{d}(\mathbf{x})&=\int_{\Omega^{+}}\mathbf{L}_{i}(\mathbf{x},\mathbf{\omega}_{i})\frac{(\mathbf{n}\cdot\mathbf{\omega}_{i})_{+}}{\pi}d\mathbf{\omega}_{i}\\ \mathbf{L}_{s}^{0}(\mathbf{x},\mathbf{\omega}_{o},\sigma)&=\int_{\Omega^{+}}\mathbf{L}_{i}(\mathbf{x},\mathbf{\omega}_{i})\frac{F_{0}DG}{4(\mathbf{n}\cdot\mathbf{\omega}_{o})}d\mathbf{\omega}_{i}\\ \mathbf{L}_{s}^{1}(\mathbf{x},\mathbf{\omega}_{o},\sigma)&=\int_{\Omega^{+}}\mathbf{L}_{i}(\mathbf{x},\mathbf{\omega}_{i})\frac{F_{1}DG}{4(\mathbf{n}\cdot\mathbf{\omega}_{o})}d\mathbf{\omega}_{i},\end{split} \tag{5}\]
where \(\mathbf{L}_{d}\) is the diffuse shading; \(\mathbf{L}_{s}^{0},\mathbf{L}_{s}^{1}\) are the two specular shadings associated with two Fresnel components [39]:
\[\begin{split}& F(\mathbf{h},\mathbf{\omega}_{i},\mathbf{k}_{s}( \mathbf{x}))=\mathbf{k}_{s}(\mathbf{x})F_{0}+F_{1}\\ & F_{0}=\left(1-(1-\mathbf{h}\cdot\mathbf{\omega}_{i})^{5}\right),F_{ 1}=(1-\mathbf{h}\cdot\mathbf{\omega}_{i})^{5}.\end{split} \tag{6}\]
The specular shadings are further approximated by linear interpolation of 6 pre-defined roughness levels:
\[\mathbf{L}_{s}^{*}(\cdot,\sigma)\approx\text{lerp}\left(\{\mathbf{L}_{s}^{*}(\cdot,\sigma_{k})\,|\,\sigma_{k}\in\{\sigma_{1},\ldots,\sigma_{6}\}\},\ \sigma\right). \tag{7}\]
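A minimal PyTorch sketch of this factored evaluation is given below; the concrete values in `ROUGHNESS_LEVELS` are an assumption, since the paper only states that 6 pre-defined levels are used:

```python
import torch

ROUGHNESS_LEVELS = torch.linspace(0.02, 1.0, 6)   # 6 baked levels (values assumed)

def lerp_specular(L_s: torch.Tensor, sigma: torch.Tensor) -> torch.Tensor:
    """Eq. 7: linearly interpolate baked specular shadings L_s (N x 6 x 3)
    at the per-pixel roughness sigma (N,)."""
    s = sigma.clamp(0.02, 1.0)
    idx = torch.searchsorted(ROUGHNESS_LEVELS, s).clamp(1, len(ROUGHNESS_LEVELS) - 1)
    lo, hi = ROUGHNESS_LEVELS[idx - 1], ROUGHNESS_LEVELS[idx]
    w = ((s - lo) / (hi - lo)).unsqueeze(-1)
    rows = torch.arange(len(s))
    return (1 - w) * L_s[rows, idx - 1] + w * L_s[rows, idx]

def factored_reflection(k_d, k_s, sigma, L_d, L_s0, L_s1):
    """Eq. 5: BRDF coefficients multiply pre-baked shading integrals, so no
    Monte Carlo sampling is needed inside the optimization loop."""
    return k_d * L_d + k_s * lerp_specular(L_s0, sigma) + lerp_specular(L_s1, sigma)

out = factored_reflection(torch.rand(5, 3), torch.rand(5, 3), torch.rand(5),
                          torch.rand(5, 3), torch.rand(5, 6, 3), torch.rand(5, 6, 3))
```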
### Shading initialization

For the factorized rendering of Eq. 5 to be useful in optimization, the shadings first need to be properly initialized; otherwise, the optimization will not converge. For simplicity, we abstract the integrands in Eq. 6 to the form \(\mathbf{L}_{*}=\mathbf{L}_{i}g\), where \(g\) denotes the factorized BRDF term. In path tracing notation, the shading integral can be initialized by querying a surface light field approximation:
\[\begin{split}\mathbf{L}_{*}(\mathbf{x})&=\mathbf{L} (\mathbf{x}_{1}\rightarrow\mathbf{x})g(\mathbf{x}_{1}\rightarrow\mathbf{x})\\ &\approx\mathbf{L}^{\prime}(\mathbf{x}_{1})g(\mathbf{x}_{1} \rightarrow\mathbf{x}),\end{split} \tag{9}\]
where \(\mathbf{L}\) is the exact surface light field at the sampled location \(\mathbf{x}_{1}\) towards \(\mathbf{x}\), and \(\mathbf{L}^{\prime}\) is its approximation, obtained by average pooling all the training pixels onto a voxel grid spanned over the scene geometry (Fig. 3 left). Since objects in an indoor scene are often near-diffuse, and rendering essentially low-pass filters the incident light field [38], blurring detail, we find that a \(256^{3}\) voxel grid with nearest-neighbor radiance queries gives good shading approximations (Fig. 3 right).
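A sketch of this baking step follows; a dense grid at a toy resolution is used for clarity (the paper uses a \(256^{3}\) grid, and a real implementation would restrict the grid to the scene surface), and all names are ours:

```python
import torch

def build_radiance_cache(points, radiance, bbox_min, bbox_max, res=32):
    """Average-pool observed outgoing radiance of training pixels onto a
    res^3 voxel grid: the L' approximation of the surface light field."""
    idx = ((points - bbox_min) / (bbox_max - bbox_min) * res).long().clamp(0, res - 1)
    flat = idx[:, 0] * res * res + idx[:, 1] * res + idx[:, 2]
    grid = torch.zeros(res ** 3, 3)
    count = torch.zeros(res ** 3, 1)
    grid.index_add_(0, flat, radiance)
    count.index_add_(0, flat, torch.ones(len(flat), 1))
    return grid / count.clamp(min=1)

def query_radiance(cache, points, bbox_min, bbox_max, res=32):
    """Nearest-neighbor lookup of L'(x1) used in Eq. 9."""
    idx = ((points - bbox_min) / (bbox_max - bbox_min) * res).long().clamp(0, res - 1)
    return cache[idx[:, 0] * res * res + idx[:, 1] * res + idx[:, 2]]

pts = torch.rand(1000, 3)
cache = build_radiance_cache(pts, torch.rand(1000, 3), torch.zeros(3), torch.ones(3))
L_prime = query_radiance(cache, pts[:5], torch.zeros(3), torch.ones(3))
```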
### Path-traced shading refinement
Eq. 9 gives incorrect shading if the surface light field is sampled at locations that are mainly specular (\(\mathbf{L}\) is view dependent), which subsequently leads to incorrect BRDF estimation. Given BRDF-emitter estimations optimized from Eq. 2 under current shading estimations, we re-estimate the light transport on specular surfaces by growing the path in Eq. 9 until the ray either hits an emitter or intersects with a near diffuse surface (identified by \(\sigma>0.6\); Fig. 4):
\[\begin{split}\mathbf{L}_{*}(\mathbf{x})=\mathbf{R}(\mathbf{x}_{n })\prod_{i=1}^{n-1}f(\mathbf{x}_{i+1}\rightarrow\mathbf{x}_{i})g(\mathbf{x}_{ 1}\rightarrow\mathbf{x}),\\ \text{s.t. }\sigma(\mathbf{x}_{i})\leq 0.6,\forall i<n\end{split} \tag{10}\]
\[\begin{split}\mathbf{R}(\mathbf{x}_{n})=\begin{cases}\mathbf{L}^{ \prime}(\mathbf{x}_{n})&\sigma(\mathbf{x}_{n})>0.6\\ \mathbf{L}_{e}(\mathbf{x}_{n})&\mathbf{L}_{e}(\mathbf{x}_{n})>0\end{cases}, \end{split} \tag{11}\]
where \(n\) is the length of a given path before it terminates. The above equation essentially estimates the shadings by multi-bounce path tracing, with \(\mathbf{L}^{\prime}\) acting as a diffuse radiance cache, which speeds up the evaluation and also reduces error: the initial estimates of \(f\) may retain large errors, but \(\mathbf{L}^{\prime}\) is very close to a diffuse surface light field (as it is also view-independent). Most paths hit a diffuse surface within one to two bounces, so that errors from the BRDF are not magnified (Fig. 5).
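The control flow of Eqs. 10-11 can be summarized as the following sketch; `trace` is a hypothetical callable standing in for ray intersection, BRDF evaluation/sampling and cache lookup, which are outside the scope of this snippet:

```python
def refined_shading_sample(trace, max_bounces=8):
    """One sample of Eqs. 10-11: grow the path across specular hits and stop
    at an emitter or at a near-diffuse surface, where the cache L' is read.
    trace() returns (emission, roughness, cached_radiance, f_over_pdf) for
    the next path vertex (hypothetical interface, for illustration only)."""
    throughput = 1.0
    for _ in range(max_bounces):
        emission, roughness, cached, f_over_pdf = trace()
        if emission > 0:            # the ray hit an emitter
            return throughput * emission
        if roughness > 0.6:         # near-diffuse surface: use the radiance cache
            return throughput * cached
        throughput *= f_over_pdf    # keep growing the path on the specular hit
    return 0.0

# toy usage: a specular vertex (roughness 0.2) followed by a diffuse one
hits = iter([(0.0, 0.2, 0.0, 0.9), (0.0, 0.8, 1.5, 0.0)])
print(refined_shading_sample(lambda: next(hits)))   # 0.9 * 1.5 = 1.35
```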
Substituting the shadings in the factorized rendering by their refinements makes Eq. 5 more closely match the ground truth light transport, such that the BRDF can be re-estimated with fewer artifacts (Fig. 4: 'Origin' vs 'Refined'). The re-estimated BRDF is in turn applied to further improve the shadings, and this BRDF and shading refinement is performed alternately until convergence.
### Error-driven emitter estimation
If we replace the rendering equation (Eq. 3) by Eq. 4, which excludes the emission, the objective in Eq. 2 still converges for non-emissive surfaces (as their \(\mathbf{L}_{e}=0\)), but regions with emission will present large errors, which is a good indicator of emitters (Fig. 6). With this intuition, we introduce an emission mask (encouraged to be small) \(\alpha\in[0,1]\) into the rendering loss:
\[\begin{split}\min_{\mathbf{a},\sigma,m,\alpha}\sum_{\mathbf{x}, \omega_{\alpha}}\left\|(1-\alpha)\mathbf{L}_{r}+\alpha\mathbf{L}_{gt}-\mathbf{ L}_{gt}\right\|_{2}^{2}\\ \text{s.t. }\alpha\to 0.\end{split} \tag{12}\]
When a surface is non-emissive, \(\alpha\) will stay small owing to the regularization, and the loss is minimized by adjusting the rendering towards \(\mathbf{L}_{gt}\); but \(\mathbf{L}_{r}\) cannot model the emission, so \(\alpha\) for an emissive surface has to become large to accommodate the error. By changing the optimization objective to Eq. 12, we first jointly estimate the BRDF and emission mask, and then threshold the mask to find the emitters (\(\alpha>0.01\)). Afterward, each emitter's emission \(\mathbf{L}_{e}\) is estimated independently from the BRDF:

\[\mathbf{L}_{e}=\begin{cases}\operatorname*{arg\,min}_{\mathbf{L}_{e}}\sum\limits_{\mathbf{x},\boldsymbol{\omega}_{o}}\left\|\mathbf{L}_{e}+\mathbf{L}_{r}-\mathbf{L}_{gt}\right\|_{1}&\alpha>0.01\\ 0&\text{otherwise}\end{cases}. \tag{13}\]

Our formulation is found to be more stable than joint optimization (demonstrated in the ablations in Sec. 5.4), because the emission mask value is in the same range as the BRDF coefficients, such that the gradient updates are balanced between the BRDF and the emission mask. In contrast, surface emission can be much larger than the BRDF coefficients, making it more difficult to directly fit or regularize.

Figure 4: **Shading refinement:** Cabinet’s diffuse reflectance estimation is initially darker than ground truth, owing to the excessive incident light received from the range hood that reflects non-diffuse light (2nd column). The artifacts are reduced by growing the path for the specular surface according to the optimized BRDF (1st column), which gives more accurate shadings that can be used to further refine the BRDF (3rd column).

Figure 5: **Diffuse radiance cache** from \(\mathbf{L}^{\prime}\) helps reduce variance and error for shading estimation (2nd image). Without it, sampling the tiny emitters below the cabinet will be difficult (1st image), which leads to incorrect shading and albedo (4th image).

Figure 3: **Diffuse and specular shadings are initialized** by tracing a voxel representation of the surface light field \(\mathbf{L}^{\prime}\) (left), which gives approximations (top row on right) close to the ground truth (bottom row on the right; obtained by path tracing).

**Emitter extraction.** We assume emission is constant over each mesh triangle. After \(\alpha\) is optimized, we uniformly sample 100 locations on each triangle and find their corresponding \(\alpha\) values. A triangle is then classified as an emitter if the mean of its \(\alpha\)s is above 0.01. Eq. 13 is in general ill-posed (_e.g_. \(\mathbf{k}_{d}\) can be increased by decreasing \(\mathbf{L}_{e}\)), so we make the assumption that an emitter reflects zero light (\(f=0\)). In this situation, \(\mathbf{L}_{e}\) for a triangle has a closed-form solution as the median of the RGBs of all training pixels it intersects, which does not require any gradient descent optimization, so it can be estimated efficiently and accurately.
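Since the per-triangle median needs no gradient descent, the extraction step reduces to a short routine; the container types here are illustrative:

```python
import torch

def extract_emitters(alpha_per_tri, pixel_rgbs_per_tri, thr=0.01):
    """A triangle is an emitter if the mean emission-mask value over its 100
    sampled locations exceeds thr; its emission L_e is the closed-form
    per-channel median of the RGBs of all training pixels hitting it."""
    emitters = {}
    for tri, alphas in alpha_per_tri.items():
        if alphas.mean() > thr and pixel_rgbs_per_tri.get(tri):
            rgbs = torch.stack(pixel_rgbs_per_tri[tri])   # N x 3 observations
            emitters[tri] = rgbs.median(dim=0).values
    return emitters

masks = {7: torch.full((100,), 0.9), 8: torch.zeros(100)}
obs = {7: [torch.tensor([5.0, 4.8, 4.5]), torch.tensor([5.2, 4.9, 4.6])]}
print(extract_emitters(masks, obs))   # only triangle 7 is detected
```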
### Optimization
Given either initial or refined shadings, the BRDF and emission mask are optimized using the objective in Eq. 12. We encode BRDF and the emission mask with two MLPs:
\[\begin{split}(\mathbf{a},m,\sigma)=\text{Sigmoid}\left(\text{MLP }_{\text{brdf}}(\mathbf{x})\right)\\ \alpha=1-\exp\left(-\text{ReLU}(\text{MLP}_{\text{emit}}( \mathbf{x}))\right),\end{split} \tag{14}\]
where \(\text{MLP}_{\text{brdf}}\) uses hash encoding [31] and \(\text{MLP}_{\text{emit}}\) is a positionally encoded MLP [30]. The objective in Eq. 12 is converted into a gradient descent loss function as a tone-mapped L2 loss \(l\) plus an L1 regularization term \(l_{e}\):
\[l=\sum_{\mathbf{x},\mathbf{o}}\left\|\Gamma((1-\alpha)\mathbf{L} _{r}+\alpha\mathbf{L}_{gt})-\Gamma(\mathbf{L}_{gt})\right\|_{2}^{2} \tag{15}\] \[l_{e}=\lambda_{e}\sum_{\mathbf{x}}\|\text{MLP}_{\text{emit}}( \mathbf{x})\|_{1},\lambda_{e}=1, \tag{16}\]
where \(\Gamma\) is the tone-mapping function proposed in [32] to help suppress noise from high dynamic range values. We prefer neural networks rather than a textured mesh (as in [1, 33]) as scenes with complex geometries can create degenerate UVs, which reduces the BRDF quality.
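A compact sketch of the parameterization and loss follows; the plain MLP replaces the hash-encoded network of [31], and the tone mapping \(\Gamma\) is a simple Reinhard-style placeholder, since we only note that the paper takes it from [32]:

```python
import torch
import torch.nn as nn

class BRDFField(nn.Module):
    """Eq. 14: a coordinate network mapping x -> (a, m, sigma) via a sigmoid."""
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(3, hidden), nn.ReLU(),
                                 nn.Linear(hidden, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 5))

    def forward(self, x):
        out = torch.sigmoid(self.net(x))
        return out[..., :3], out[..., 3], out[..., 4]   # base color, metallic, roughness

def emission_mask(raw):
    """Eq. 14: alpha = 1 - exp(-ReLU(MLP_emit(x)))."""
    return 1 - torch.exp(-torch.relu(raw))

def tonemapped_l2(pred, gt):
    """Eq. 15 with a placeholder Gamma(x) = (x / (1 + x))^(1/2.2)."""
    gamma = lambda v: (v / (1 + v)) ** (1 / 2.2)
    return ((gamma(pred) - gamma(gt)) ** 2).mean()

field = BRDFField()
a, m, s = field(torch.rand(10, 3))
loss = tonemapped_l2(torch.rand(10, 3) * 5, torch.rand(10, 3) * 5)
```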
**Roughness-metallic regularization.** Surface roughness and metallic can take arbitrary values if there are no highlights (\(\mathbf{L}_{s}^{0},\mathbf{L}_{s}^{1}\approx 0\)), which leads to ambiguity. We prevent this by encouraging surfaces to be diffuse:
\[l_{d}=\lambda_{d}\sum_{\mathbf{x}}\left(\|1-\sigma(\mathbf{x})\|_{1}+\|m( \mathbf{x})\|_{1}\right),\lambda_{d}=\text{5e-4}, \tag{17}\]
such that a diffuse surface will not be misinterpreted as a specular surface with weak reflection. To obtain valid roughness-metallic values for training pixels that do not observe highlights, we further assume they stay constant inside each material part, and utilize image-level part segmentation to group training pixels. The roughness-metallic values from pixels with highlights are propagated to their corresponding groups by another regularization loss:
\[l_{p}=\lambda_{p}\sum_{\mathbf{x}}\left\|\begin{bmatrix}\sigma(\mathbf{x})\\ m(\mathbf{x})\end{bmatrix}-\begin{bmatrix}\sigma^{\prime}(\mathbf{x})\\ m^{\prime}(\mathbf{x})\end{bmatrix}\right\|_{1},\lambda_{p}=\text{5e-3} \tag{18}\] \[\begin{bmatrix}\sigma^{\prime}(\mathbf{x})\\ m^{\prime}(\mathbf{x})\end{bmatrix}=\sum_{\text{Seg}(\mathbf{x}^{\prime})=\text{Seg}(\mathbf{x})}\frac{w(\mathbf{x}^{\prime})}{\sum_{\mathbf{x}^{\prime}}w(\mathbf{x}^{\prime})}\begin{bmatrix}\sigma(\mathbf{x}^{\prime})\\ m(\mathbf{x}^{\prime})\end{bmatrix} \tag{19}\] \[w(\mathbf{x}^{\prime})=\text{sg}\left(\|\mathbf{k}_{s}\mathbf{L}_{s}^{0}+\mathbf{L}_{s}^{1}\|_{1}\right),\]
where \(\text{Seg}(\mathbf{x})\) gives the segmentation ID for \(\mathbf{x}\), \(\text{sg}(\cdot)\) denotes stopping the gradient, and \(w(\mathbf{x})\) is a propagation kernel that weights each pixel by the amount of highlights. While part segmentation is hard to obtain in practice, semantic segmentation is readily available from pre-trained models, _e.g_. Mask2Former [11], in which case multiple material parts may stay inside the same semantic label. To account for this loss of detail, we consider two pixels to belong to the same material part only if: (1) they share the same semantic ID; (2) they have similar albedo values; and (3) they are close to each other, which suggests an alternative propagation kernel \(w(\mathbf{x},\mathbf{x}^{\prime})\):

\[\begin{split} w(\mathbf{x}^{\prime},\mathbf{x})=\text{sg}\left(e^{-\frac{\|\mathbf{a}(\mathbf{x})-\mathbf{a}(\mathbf{x}^{\prime})\|_{2}^{2}}{2\sigma_{a}^{2}}}e^{-\frac{\|\mathbf{x}-\mathbf{x}^{\prime}\|_{2}^{2}}{2\sigma_{x}^{2}}}\right)\\ \sigma_{a}=\text{1.6e-2},\ \sigma_{x}=\text{1e-2}.\end{split} \tag{20}\]

By replacing \(w(\mathbf{x})\) with \(w(\mathbf{x},\mathbf{x}^{\prime})\) and changing the regularization weight to \(\lambda_{p}=\text{1e-3}\), we can still obtain reasonable roughness-metallic estimation even with semantic segmentation (Fig. 7).

Figure 6: **Rendering images without emission terms** produces distinctive error near emissive surfaces (3rd image). By jointly optimizing an emission mask (4th image) to cancel this error, the emitter can be found by checking the mask’s response, which is robust even for tiny emitters (2nd image for ground truth).

Figure 7: **Roughness optimization can be ambiguous** without any regularization (1st image). By encouraging a surface to be diffuse, specular surfaces still get an incorrect roughness value if no highlights are observed (2nd image). The roughness can be more reasonably estimated with part segmentation for guidance (3rd image). Semantic segmentation (4th image) shows similar results, except that the roughness for small objects gets blurred.
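The semantic-segmentation kernel of Eqs. 19-20 can be sketched as below (the part-segmentation variant additionally weights by the amount of observed highlights); the \(O(N^{2})\) per-segment loop is for clarity only:

```python
import torch

def propagation_targets(albedo, pos, seg, rough_metal,
                        sigma_a=1.6e-2, sigma_x=1e-2):
    """For every pixel, average roughness-metallic over pixels of the same
    semantic segment, weighted by albedo similarity and spatial proximity
    (Eq. 20); detach() plays the role of the stop-gradient sg()."""
    targets = torch.zeros_like(rough_metal)
    for s in seg.unique():
        mask = seg == s
        a, p, rm = albedo[mask], pos[mask], rough_metal[mask]
        w = torch.exp(-torch.cdist(a, a) ** 2 / (2 * sigma_a ** 2)) \
          * torch.exp(-torch.cdist(p, p) ** 2 / (2 * sigma_x ** 2))
        targets[mask] = (w @ rm) / w.sum(dim=1, keepdim=True)
    return targets.detach()

t = propagation_targets(torch.rand(50, 3), torch.rand(50, 2),
                        torch.randint(0, 3, (50,)), torch.rand(50, 2))
```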
## 5 Experiments
We evaluate our method on four synthetic scenes and one real indoor scene. The synthetic scenes are obtained from [5] with large glass objects removed (as we do not model transmission), and the real scene is captured by us. Each synthetic scene contains around 200 posed HDR images generated by Mitsuba 3 [15], per-camera-view BRDF-emission maps generated by Blender [6], and ground truth geometry. The real scene is a conference room captured by a Sony A7M3 camera, with around 200 HDR images reconstructed by 5-stop exposure bracketing. The camera poses are estimated with COLMAP [40] and the geometry is reconstructed using MonoSDF [53]. For the synthetic scenes, we evaluate our method with both part segmentation (FIPT) and semantic segmentation masks (FIPT-sem).
### Synthetic: BRDF-emission estimation
While synthetic scenes allow us to directly compare with the ground truth BRDF-emission without noise from geometry or image capture, BRDF parameterizations vary across the baselines. For a fair comparison, we empirically found that the diffuse reflectance \(\mathbf{k}_{d}\) of diffuse surfaces, the roughness \(\sigma\), and the material reflectance defined by \(\mathbf{a}^{\prime}=\int_{\Omega^{+}}fd\mathbf{\omega}_{i}\) are very close across different BRDF models such as [9, 17]. We therefore measure the PSNR of these quantities in image space for the BRDF comparison. \(\mathbf{k}_{d}\) is compared only on diffuse surfaces, and \(\mathbf{a}^{\prime}\) is estimated using Monte Carlo integration with 128 samples per pixel. For emission, we report the IoU of the emission mask and the log-L2 error of the emission map.
**Baselines.** We compare with the original inverse path tracing (IPT) [1] and its extension MILO [51], which also parameterizes the spatially varying BRDF with neural networks. IPT assumes the BRDF parameters to be constant inside each mesh triangle, and MILO takes manual input of the number of emitters. Both IPT and MILO are evaluated by their original authors due to non-public code, and the MILO training is stopped after 10 hours. Meanwhile, we also compare against NeILF [50], which models illumination in an unconstrained way, and the learning-based approach of [24] (Li22) for single-view inverse rendering.
**Results.** As shown in Tab. 2, our method gives the best BRDF and emission estimation with the fastest training speed (Tab. 3), even when only semantic-level segmentation is provided (FIPT-sem). The learning-based approach (Li22) fails to generate reasonable reconstructions as it does not utilize multi-view cues, while unconstrained optimization (NeILF) suffers from the ambiguity between material and lighting. While IPT converges, its accuracy is limited by the piecewise constant constraint used to reduce variance. MILO also fails to reconstruct high frequency details because of the Monte Carlo noise from path tracing, and it requires manual specification of the number of emitters to constrain the emission optimization. In contrast, our method requires no human input during optimization, which allows more stable and faster convergence, with results that match the ground truth well (Fig. 8).
\begin{table}
\begin{tabular}{l l c c c c c} \hline \hline & Method & \(\mathbf{k}_{d}\) & \(\mathbf{a}^{\prime}\) & \(\sigma\) & \(\mathbf{L}_{e}\) \\ & & & PSNR\(\uparrow\) & & IoU\(\uparrow\) & logL2\(\downarrow\) \\ \hline \multirow{4}{*}{Bathroom} & Li22 [24] & 19.92 & 15.78 & 13.77 & 0.45 & 1.35 \\ & NeILF [50] & 10.12 & 9.01 & 14.82 & - & - \\ & IPT [1] & 22.43 & 18.59 & 14.69 & 0.33 & 1.09e-1 \\ & MILO [51] & 11.83 & 9.80 & 5.56 & 0.05 & 5.60e-1 \\ & FIPT & **30.13** & **25.28** & **28.79** & **0.63** & **3.18e-2** \\ & FIPT-sem & 27.81 & 24.00 & 21.84 & **0.63** & **3.18e-2** \\ \hline \multirow{4}{*}{Bedroom} & Li22 [24] & 21.87 & 17.18 & 12.12 & 0.34 & 2.78 \\ & NeILF [50] & 14.88 & 12.42 & 11.30 & - & - \\ \cline{1-1} & IPT [1] & 29.39 & 22.46 & 13.33 & 0.92 & 4.01e-3 \\ \cline{1-1} & MILO [51] & 23.65 & 15.16 & 15.42 & 0.08 & 1.59e-2 \\ \cline{1-1} & FIPT & **31.10** & **29.41** & 23.19 & **0.96** & 4.95e-4 \\ \cline{1-1} & FIPT-sem & 31.00 & 28.45 & **25.23** & **0.96** & **4.93e-4** \\ \hline \multirow{4}{*}{Livingroom} & Li22 [24] & 17.25 & 15.32 & 12.72 & 0.17 & 3.61 \\ & NeILF [50] & 12.34 & 10.97 & 13.45 & - & - \\ \cline{1-1} & IPT [1] & 21.24 & 19.01 & 11.77 & 0.90 & 6.08e-3 \\ \cline{1-1} & MILO [51] & 22.88 & 18.39 & 13.98 & 0.06 & 1.39e-2 \\ \cline{1-1} & FIPT & 28.86 & **28.70** & **32.48** & **0.95** & **8.06e-4** \\ \cline{1-1} & FIPT-sem & **29.09** & 28.62 & 25.15 & **0.95** & 8.09e-4 \\ \hline \multirow{4}{*}{Kitchen} & Li22 [24] & 18.14 & 14.54 & 10.82 & 0.43 & 1.41 \\ \cline{1-1} & NeILF [50] & 12.63 & 9.96 & 10.64 & - & - \\ \cline{1-1} & IPT [1] & 25.68 & 21.61 & 11.84 & 0.83 & 1.08e-2 \\ \cline{1-1} & MILO [51] & 18.25 & 13.86 & 12.56 & 0.10 & 8.28e-2 \\ \cline{1-1} & FIPT & 33.07 & **27.53** & **29.24** & **0.91** & **1.54e-3** \\ \cline{1-1} \cline{2-1} & FIPT-sem & **33.25** & 27.38 & 21.70 & **0.91** & **1.54e-3** \\ \hline \hline \end{tabular}
\end{table}
Table 2: **BRDF-emission comparison on synthetic scenes** shows that our method gives the overall best reconstruction. The results are similar even if only semantic segmentation is provided (FIPT-sem). NeILF does not estimate emitters. The best method is marked in bold.
| Our per-stage profiling | Stage 1 | Stage 2 | Stage 3 |
|---|---|---|---|
| Memory | 3.2GB | 2.8GB | 3.4GB |
| Time | 6min | 2min | 16min |

| Method | training time↓ |
|---|---|
| NeILF [50] | 1h38min |
| IPT [1] | ≈3hr |
| MILO [51] | ≈10hr |
| FIPT | **44min** |

Table 3: **Averaged training speed comparison** suggests our method is very efficient (second table). The per-stage profiling is shown in the first table, with Stages 2 and 3 being repeated twice. The comparison is made on a 3090Ti GPU.
In contrast, our method requires no human input during optimization, which allows more stable and faster convergence, with results that match the ground truth well (Fig. 8).
### Synthetic: view synthesis and relighting
To demonstrate the applications of the inverse rendering outputs, we compare the rendered scenes under novel views and novel lighting using the estimated BRDF and emission. For quantitative comparison, we tone-map the rendered images with \(\gamma=1/2.2\) and then calculate their PSNR with respect to the ground truth.
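A minimal sketch of this metric (our convention: images clipped to [0, 1], peak value 1 after tone mapping):

```python
import numpy as np

def psnr_tonemapped(render, gt, gamma=1.0 / 2.2):
    """Tone-map HDR renderings with gamma = 1/2.2, then compute PSNR."""
    tm = lambda img: np.clip(img, 0.0, 1.0) ** gamma
    mse = np.mean((tm(render) - tm(gt)) ** 2)
    return 10.0 * np.log10(1.0 / mse)
```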
Baselines. Besides IPT, MILO, and Li22, we also consider FVP [37], which performs view synthesis and relighting in a learning-based way. FVP assumes emission comes from saturated regions of the images, which may wrongly classify surfaces with strong reflections as emitters. We therefore provide FVP with the ground-truth emission as an oracle. The renderings for IPT, MILO, and our method are obtained by path tracing with 1024 samples per pixel, further denoised by the OptiX denoiser [35].
Results. As shown in Tab. 4, the BRDF and emission estimated by our method (FIPT and FIPT-sem) give the most accurate view synthesis and relighting results. While the results from FVP are visually appealing, the method is not guaranteed to be physically plausible and fails to match the ground truth. As shown in Fig. 9, our method handles specular and even mirror reflections well, which is difficult for standard inverse path tracing owing to its high variance.
| Task | Method | Bathroom | Bedroom | Livingroom | Kitchen |
|---|---|---|---|---|---|
| View synthesis | FVP [37] | 23.38 | 20.49 | 24.63 | 20.77 |
| | IPT [1] | 14.76 | 21.85 | 23.87 | 19.94 |
| | MILO [51] | 20.62 | 20.25 | 24.47 | 18.09 |
| | FIPT | 25.42 | 29.84 | **30.86** | **25.38** |
| | FIPT-sem | **25.76** | **29.89** | 30.84 | 25.27 |
| Relight | Li22 [24] | 22.86 | 23.20 | 19.83 | 21.76 |
| | FVP [37] | 23.72 | 24.11 | 19.51 | 23.31 |
| | IPT [1] | 20.61 | 28.16 | 27.26 | 27.28 |
| | MILO [51] | 14.97 | 23.39 | 22.10 | 19.62 |
| | FIPT | **31.28** | 36.64 | **31.56** | **29.13** |
| | FIPT-sem | 31.03 | **36.69** | 30.82 | 28.79 |

Table 4: **Quantitative results (PSNR) of view synthesis and relighting on synthetic scenes** show that our estimated BRDF and emission give very consistent renderings under novel views and lighting. View synthesis is not available for Li22.
Figure 8: **Qualitative comparison of BRDF and emission on synthetic scenes** shows our method successfully reconstructs material reflectance (1st row), roughness (2nd row), and emission (3rd row) with high frequency details and less ambiguity. Emission estimation is shown as error heatmaps (warmer colors indicate higher emission error; GT emitter boundary is marked in white lines).
Figure 9: **Qualitative results of view synthesis and relighting on synthetic scenes** demonstrate that accurate light transport can be simulated with our estimated BRDF and emission, even for very specular surfaces. The reflection of the chair in the microwave oven can be seen in the kitchen scene (top), and mirrors are correctly rendered in the bathroom (bottom).
### Real-world scene
We now show results for a real-world scene that we captured. Because it is not possible to obtain ground-truth BRDF and emission from RGB captures alone, we only showcase qualitative reconstruction results and view synthesis, together with the view synthesis PSNR, in Fig. 10. While the rendering comparison is not a reliable metric for BRDF-emission quality (the rendering difference can be minimized even in the presence of material-lighting ambiguity), our method visually produces reasonable material reflectance and roughness, and correctly identifies the emitters. Only semantic segmentation (FIPT-sem) is used for the real-world scene, as ground-truth part segmentation is not available. More comparisons are shown in Sec. B.1.
### Ablation study
Training strategy. Tab. 5 shows the effect of different training strategies on the kitchen scene. If we jointly optimize the emission and BRDF with the regularization term in [1], the BRDF optimization still converges, but the emission estimation does not converge within the same amount of time (2 epochs). Since the majority of the scene receives incident light from nearby diffuse surfaces, the reconstruction is still reasonable without shading refinement (stage 3), but further refining the BRDF estimation helps to correct the light transport for specular surfaces. If we simply path-trace the BRDF-emission without using the radiance cache from stage 1, the refined shadings accumulate too much estimation error, causing the subsequent BRDF estimation to deviate from the ground truth.
While the environment lighting may not be fully observed, which consequently causes artifacts near windows (see Sec. 6), the majority of the surfaces can still be reconstructed well owing to the diffuse radiance cache.
## 6 Limitations and Future Work
Our method shares certain limitations with standard inverse path tracing. The framework does not optimize geometry, so the BRDF and emission estimates can be inaccurate if the input geometry (especially for emitters) is extremely bad (Fig. 13, left). Combining differentiable geometry optimization [2, 46] may help improve robustness. Meanwhile, the BRDF estimation fails if the dominant light source (e.g., the sun) is not directly observed, which can happen frequently with environment emitters whose observations are blocked by the windows (Fig. 13, right). Incorporating learning-based methods may help. Lastly, the optimization relies on photometric observations, which means it cannot remove ambient occlusion effects from the BRDF maps (as the radiance there is near zero), and our model does not handle transparent objects.
Acknowledgements.This work was supported in part by NSF grants 1730158, 1751365, 2105806, 2110409, 2127544, ONR grant N000142012529, gifts from Qualcomm, Adobe and Google, the Ronald L. Graham Chair and the UC San Diego Center for Visual Computing.
Additionally, we would like to thank Bohan Yu, Dejan Azinovic, Julien Philip, and Zhengqin Li for generous assistance in evaluation of their methods, David Forsyth, Shenlong Wang, and Merlin Nimier-David for insightful discussions, as well as Jiaer Zhang for assistance in implementation.
|
2307.04027 | Slow-roll inflation and growth of perturbations in Kaniadakis
modification of Friedmann cosmology | Kaniadakis entropy is a one-parameter deformation of the classical
Boltzmann-Gibbs-Shannon entropy, arising from a self-consistent relativistic
statistical theory. Assuming a Kaniadakis-type generalization of the entropy
associated with the apparent horizon of Friedmann-Robertson-Walker (FRW)
Universe and using the gravity-thermodynamics conjecture, a new cosmological
scenario is obtained based on the modified Friedmann equations. By employing
such modified equations, we analyze the slow-roll inflation, driven by a scalar
field with power-law potential, at the early stages of the Universe. We explore
the phenomenological consistency of this model by computation of the scalar
spectral index and tensor-to-scalar ratio. Comparison with the latest Planck
data allows us to constrain Kaniadakis parameter to
$\kappa\lesssim\mathcal{O}(10^{-12}\div10^{-11})$, which is discussed in
relation to other observational bounds in the past literature. We also disclose
the effects of Kaniadakis correction term on the growth of perturbations at the
early stages of the Universe by employing the spherically symmetric collapse
formalism in the linear regime of density perturbations. We find out that the
profile of density contrast is non-trivially affected in this scenario.
Interestingly enough, we observe that increasing Kaniadakis parameter $\kappa$
corresponds to a faster growth of perturbations in a Universe governed by the
corrected Friedmann equations. Finally, we comment on the consistency of the
primordial power spectrum for scalar perturbations with the best data-fit
provided by Planck. | Gaetano Lambiase, Giuseppe Gaetano Luciano, Ahmad Sheykhi | 2023-07-08T18:29:51Z | http://arxiv.org/abs/2307.04027v3 | # Slow-roll inflation and growth of perturbations in Kaniadakis Cosmology
###### Abstract
Kaniadakis entropy is a one-parameter deformation of the classical Boltzmann-Gibbs-Shannon entropy, arising from a self-consistent relativistic statistical theory. Assuming a Kaniadakis-type generalization of the entropy associated with the apparent horizon of Friedmann-Robertson-Walker (FRW) Universe and using the gravity-thermodynamics conjecture, a new cosmological scenario is obtained based on the modified Friedmann equations. By employing such modified equations, we analyze the slow-roll inflation, driven by a scalar field with power-law potential, at the early stages of the Universe. We explore the phenomenological consistency of this model by computation of the scalar spectral index and tensor-to-scalar ratio. Comparison with the latest Planck data allows us to constrain Kaniadakis parameter to \(\kappa\lesssim\mathcal{O}(10^{-13}\div 10^{-12})\), which is discussed in relation to other observational bounds in the past literature. We also disclose the effects of Kaniadakis correction term on the growth of perturbations at the early stages of the Universe by employing the spherically symmetric collapse formalism in the linear regime of density perturbations. We find out that the profile of density contrast is non-trivially affected in this scenario. Interestingly enough, we observe that increasing Kaniadakis parameter \(\kappa\) corresponds to a faster growth of perturbations in a Universe governed by the corrected Friedmann equations.
## I Introduction
It is a general belief that our Universe has experienced two phases of accelerated expansion. The first, inflation, was proposed to address some internal problems (flatness, horizon, structure formation) of standard modern Cosmology [1; 2; 3]. According to the inflationary model, our Universe expanded exponentially soon after the Big Bang, producing a huge amount of energy. The second phase is thought to have started roughly five billion years ago and has by now been confirmed by measurements of the luminosity distances of type Ia Supernovae [4; 5]. While being largely understood in most of their facets, concerns remain about the origin of these phenomena. Among the candidate mechanisms, two explanations are the most credited so far: on one hand, it is possible to maintain the classical Einstein description of gravity and introduce extra energy degrees of freedom that drive the Universe's acceleration, such as an inflaton field [6] and the dark sectors of the cosmos [7; 8; 9]. Alternatively, one can resort to a class of modified theories of gravity and leave the energy content of the Universe unaffected (see [10] for a review). From the latter perspective, potential developments have been achieved in the gravity-thermodynamics picture, which conjectures that the gravitational field equations in the cosmological context can be extracted from the first law of thermodynamics on the Universe's apparent horizon, and vice-versa [11; 12; 13; 14; 15; 16] (see [17; 18; 19; 20; 21; 22; 23] for further applications).
In its traditional formulation, the gravity-thermodynamics conjecture identifies the thermodynamic entropy of the Universe with the Boltzmann-Gibbs-Shannon (BGS) entropy. In recent years, much effort has been devoted to studying generalized scenarios based on extended entropies, such as Tsallis [24; 25; 26], Barrow [27] and Kaniadakis [28; 29; 30; 31; 32] entropies, which all possess the BGS framework as a limit. In particular, Kaniadakis entropy is a one-parameter modification of the BGS entropy that arises from a coherent and self-consistent relativistic statistics, while still retaining the basic features of the classical BGS theory. Motivated by the relativistic essence of this new formalism, implications have been examined in several areas [33; 34; 35] and the problem of how to equip the Standard Model of Cosmology (SMC) with a fully relativistic description of the entropic degrees of freedom in the early Universe has been investigated from multiple perspectives [36].
The modified Friedmann equations in Kaniadakis cosmology were first explored in [37], where the extra terms were incorporated into additional dark energy components that reflect the effects of the corrected entropy. The ensuing theory has proved to exhibit a richer phenomenology compared to the SMC, especially regarding the ability to reproduce the usual thermal history of the Universe [37], the evolution of the Hubble rate [38] and the early baryogenesis and primordial Lithium abundance [39]. Recently, an alternative derivation has been proposed in [40], incorporating Kaniadakis effects as
geometrical (i.e. gravitational) corrections to the left-hand side of the Friedmann equations, while keeping the energy content of the Universe as in the standard model. In this approach, the apparent horizon radius of the Universe varies with time due to the cosmic expansion [40]. In this context, it has been argued that the generalized second law of thermodynamics still holds for a Universe enclosed by the apparent horizon and endowed with Kaniadakis entropy. From theoretical shores, the analysis of Kaniadakis cosmology has landed on more experimental grounds in recent years. Observational attempts to constrain Kaniadakis corrections appear in [38; 39; 41], leading to different upper bounds on the Kaniadakis parameter.
Starting from the above premises, in this work we explore more in-depth the cosmological implications of Kaniadakis entropy. Specifically, we study the slow-roll inflationary model of the Universe driven by a scalar field with a power-law potential. By computing the scalar spectral index and tensor-to-scalar ratio, we show that Kaniadakis cosmology is phenomenologically consistent with the latest Planck data, provided that the entropic parameter is constrained to \(\kappa\lesssim\mathcal{O}(10^{-13}\div 10^{-12})\). Comparison with previous bounds on \(\kappa\) supports the idea that the existing observational literature on Kaniadakis cosmology can be understood in a unified picture if one allows \(\kappa\) to have a running behavior, namely to vary with the energy scale. We finally examine the influence of Kaniadakis entropy corrections on the growth of cosmological perturbations in the early stages of the Universe. We employ the Top-Hat Spherical Collapse (SC) model [42], which describes the evolution of uniform and spherically symmetric perturbations in an expanding background by adopting the same Friedmann equations as for the underlying theory of gravity [43; 44; 45]. We find that the profile of the density contrast differs from that of standard cosmology; in particular, the growth rate of the total perturbations becomes faster. Throughout this work we use natural units, \(\hbar=c=k_{B}=1\).
This paper is organized as follows. The next section is devoted to a review of the derivation of the modified Friedmann equations inspired by Kaniadakis entropy. In section III, we explore the slow-roll solution for power-law inflation in entropy-corrected Kaniadakis cosmology. In section IV we study the growth of perturbations at the early stages of the Universe using the spherical collapse formalism. We summarize our results in section V.
## II Kaniadakis entropy and modified Friedmann equations
Motivated by the evidence that the spectrum of relativistic cosmic rays exhibits a power-law tail instead of the classical exponential one, the problem of deriving the generalization of the Maxwell-Boltzmann distribution in a special relativistic framework was first addressed in [29]. As a result, it has been shown that the symmetries of the Lorentz transformations impose the following modification of the BGS entropy
\[S_{\kappa}=-\sum_{i}n_{i}\ln_{\kappa}n_{i}\,, \tag{1}\]
where
\[\ln_{\kappa}x\equiv\frac{x^{\kappa}-x^{-\kappa}}{2\kappa}\,. \tag{2}\]
Using the maximum entropy principle, the corresponding Boltzmann factor for the \(i\)-th level of a given system becomes
\[n_{i}=\alpha\exp_{\kappa}\left[-\beta\left(E_{i}-\mu\right)\right], \tag{3}\]
where
\[\alpha=\left[(1-\kappa)/(1+\kappa)\right]^{1/2\kappa}\,,\ \ \ \ 1/\beta=\sqrt{1-\kappa^{2}}\,T\,, \tag{4}\]
and the \(\kappa\)-deformed exponential is given by
\[\exp_{\kappa}(x)\,\equiv\,\left(\sqrt{1+\kappa^{2}\,x^{2}}\,+\,\kappa\,x \right)^{1/\kappa}\,. \tag{5}\]
From Eq. (1), it is evident that departure from the standard BGS entropy is quantified by the (dimensionless) exponent \(-1<\kappa<1\). The classical framework is recovered in the limit of \(\kappa\to 0\).
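A quick numerical check of these definitions (our own sketch): \(\exp_{\kappa}\) inverts \(\ln_{\kappa}\) for any admissible \(\kappa\), and both reduce to the ordinary exponential and logarithm as \(\kappa\to 0\).

```python
import numpy as np

def ln_kappa(x, kappa):
    return (x**kappa - x**(-kappa)) / (2.0 * kappa)

def exp_kappa(y, kappa):
    return (np.sqrt(1.0 + kappa**2 * y**2) + kappa * y) ** (1.0 / kappa)

x = 2.5
for k in (0.5, 1e-2, 1e-5):
    y = ln_kappa(x, k)
    print(k, y, exp_kappa(y, k))   # y -> ln(2.5) as k -> 0; last entry recovers 2.5
print(np.log(x))                   # kappa -> 0 limit of ln_kappa
```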
Kaniadakis entropy is formally adopted for black holes to explore relativistic corrections to characteristic thermodynamic quantities, like the Hawking temperature, mass and heat capacity. These studies are also useful for holographic applications. In this context, it is convenient to express Eq. (1) in the probabilistic language [46; 47]
\[S_{\kappa}=-\sum_{i=1}^{W}\frac{P_{i}^{1+\kappa}-P_{i}^{1-\kappa}}{2\kappa}\,, \tag{6}\]
where \(P_{i}\) is the probability for the system to be in the \(i\)-th microstate and \(W\) the total number of permitted configurations. For equiprobable states (i.e. \(P_{i}=1/W\)) and recalling that the BGS entropy obeys \(S\propto\log W\), we have \(P_{i}=e^{-S}\). For the case of black holes (\(S=S_{BH}=A/(4G)\)), this takes the form \(P_{i}=e^{-A/(4G)}\). Here, we have denoted the standard Bekenstein-Hawking entropy by \(S_{BH}\), which scales as the horizon surface area \(A\) of the black hole (entropy-area law). By plugging into Eq. (6), we obtain
\[S_{\kappa}=\frac{1}{\kappa}\sinh\left(\kappa\,S_{BH}\right). \tag{7}\]
Two comments are in order here: first, we notice that \(S_{\kappa}\) as written in Eq. (7) is manifestly symmetric under \(\kappa\to-\kappa\). Thus, one can simply restrict to positive values of the entropic exponent. Furthermore, considering that deviations from the classical entropy are expected to be
small (i.e. \(\kappa\ll 1\)), it seems reasonable to expand \(S_{\kappa}\) to the leading order as
\[S_{\kappa}=S_{BH}+\frac{\kappa^{2}}{6}S_{BH}^{3}+\mathcal{O}(\kappa^{4})\,, \tag{8}\]
where the zero-th order returns the entropy-area law, as expected. The above approximation is useful to extract analytical solutions from Kaniadakis entropy-based equations, especially in the cosmological framework [37]. We shall rely on Eq. (8) for our next considerations, verifying a posteriori the validity of the condition \(\kappa\ll 1\).
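The expansion (8) is straightforward to verify symbolically, for instance with a short sympy sketch:

```python
import sympy as sp

kappa, S = sp.symbols('kappa S', positive=True)
S_k = sp.sinh(kappa * S) / kappa           # Kaniadakis entropy, Eq. (7)
print(sp.series(S_k, kappa, 0, 4))         # S + kappa**2*S**3/6 + O(kappa**4)
```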
### Modified Friedmann equations
Modified cosmology through Kaniadakis entropy (7) has been explored in [37]. This study has revealed that new cosmological scenarios emerge based on corrected Friedmann equations, which contain extra terms that represent an effective dark energy sector depending on the model parameter \(\kappa\). It was argued that the effective dark energy equation of state parameter deviates from standard \(\Lambda\)CDM cosmology at small redshifts, and remains in the phantom regime during the history of the Universe [37]. While achieving the same formal result, a geometric re-interpretation of these corrections has been provided in [40], motivated by the observation that entropy is a geometrical quantity and any modification to it should change the geometry part of the field equations [40]. In what follows, we stick to the latter derivation and consider a homogeneous and isotropic FRW flat geometry with metric1
Footnote 1: For the general case of a curved geometry, see [37; 40].
\[ds^{2}=h_{\mu\nu}dx^{\mu}dx^{\nu}+\tilde{r}^{2}\left(d\theta^{2}+\sin^{2} \theta\,d\phi^{2}\right), \tag{9}\]
where \(\tilde{r}=a(t)r\), \(x^{0}=t\), \(x^{1}=r\), \(h_{\mu\nu}=\text{diag}\left(-1,a^{2}\right)\) and \(a(t)\) is the time-dependent scale factor.
Conceiving the Universe as a spherical thermodynamic system, the radius of the apparent horizon is \(\tilde{r}_{A}=1/H=a/\dot{a}\), where \(H\) is the Hubble rate and the overdot denotes ordinary derivative with respect to the cosmic time \(t\). In turn, the associated temperature follows from the definition of the surface gravity [14]
\[K=-\frac{1}{\tilde{r}_{A}}\left(1-\frac{\dot{\tilde{r}}_{A}}{2H\tilde{r}_{A}} \right), \tag{10}\]
which gives
\[T_{h}=\frac{K}{2\pi}=-\frac{1}{2\pi\tilde{r}_{A}}\left(1-\frac{\dot{\tilde{r}} _{A}}{2H\tilde{r}_{A}}\right). \tag{11}\]
We now assume the Universe to be filled with matter and energy in the form of a perfect fluid. Denoting the equilibrium energy density and pressure by \(\rho\) and \(p\), respectively, the energy-momentum tensor is
\[T_{\mu\nu}=\left(\rho+p\right)u_{\mu}u_{\nu}+p\,g_{\mu\nu}\,, \tag{12}\]
where \(u_{\mu}\) is the four-velocity of the fluid. The conservation equation for the total matter and energy content reads \(\nabla_{\mu}T^{\mu\nu}=0\), which gives
\[\dot{\rho}=-3H\left(\rho+p\right) \tag{13}\]
for the background (9).
In order to derive the cosmological equations, let us employ the gravity-thermodynamic conjecture. Toward this end, we apply the first law of thermodynamics
\[dE=T_{h}dS_{h}+WdV\,, \tag{14}\]
on the apparent horizon of entropy \(S_{h}\), where \(E=\rho V\) is the total energy in the spherical volume \(V\) enclosed by the horizon. Due to the cosmic expansion, the work density done by a change in the horizon radius is
\[W=-\frac{1}{2}T^{\mu\nu}h_{\mu\nu}=\frac{1}{2}\left(\rho-p\right)\,. \tag{15}\]
We omit standard textbook calculations. Assuming the entropy of the apparent horizon is in the form of Kaniadakis entropy (8) and replacing the horizon radius of the black hole with the radius of the apparent horizon, after some algebra we get from Eq. (14) [40]
\[H^{2}-\kappa^{2}\frac{\pi^{2}}{2\left(GH\right)^{2}}\simeq\frac{8\pi G}{3} \rho\,, \tag{16}\]
where we have neglected the irrelevant contribution from the cosmological constant. This is the first Friedmann equation in Kaniadakis entropy-based cosmology [40].
Similarly, one can derive the second Friedmann equation by taking the time derivative of Eq. (16) and using the continuity equation (13). We get
\[\dot{H}\left[1+\kappa^{2}\frac{\pi^{2}}{2\left(GH^{2}\right)^{2}}\right] \simeq-4\pi G\left(\rho+p\right)\,. \tag{17}\]
It is worth noting that Eqs. (16) and (17) coincide with the leading order of the exact equations found in [37]. Clearly, the Friedmann equations in standard cosmology are recovered for \(\kappa\to 0\).
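Since Eq. (16) is quadratic in \(H^{2}\), the corrected Hubble rate is available in closed form; a small numerical sketch (the unit choice \(G=1\) and the sample values are our illustrative assumptions):

```python
import numpy as np

G = 1.0

def hubble(rho, kappa):
    """Positive root of Eq. (16): H^4 - (8 pi G rho / 3) H^2 - kappa^2 pi^2 / (2 G^2) = 0."""
    b = 8.0 * np.pi * G * rho / 3.0
    c = kappa**2 * np.pi**2 / (2.0 * G**2)
    return np.sqrt(0.5 * (b + np.sqrt(b**2 + 4.0 * c)))

rho = 1.0
print(hubble(rho, 0.0)**2, 8.0 * np.pi * G * rho / 3.0)  # kappa -> 0: standard FRW
print(hubble(rho, 1e-3) - hubble(rho, 0.0))              # small positive correction
```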
Before moving on to the study of the implications of Eqs. (16) and (17), we remark that modified cosmic scenarios have also been investigated in Tsallis [48; 49; 50; 51] and Barrow [52; 53; 54; 55] entropies, inspired by non-extensive and quantum gravitational considerations, respectively. Here, we stress that Kaniadakis model has a relativistic foundation that makes it conceptually different from the other modified cosmologies.
## III Slow-roll Inflation in Kaniadakis Cosmology
We now study inflation in Kaniadakis cosmology. Following the analysis of [56; 57], we consider the high-energy era of the Universe from the slow-roll condition perspective [58] and assume the evolution to be driven by a scalar field \(\phi\) with potential \(V(\phi)\). In this setting, the characteristic parameters measuring inflation are the tensor-to-scalar ratio and the scalar spectral index of the primordial curvature perturbations, which for a minimal coupling are defined by [59; 60]
\[r = 16\epsilon\,, \tag{18}\] \[n_{s} = 1-6\epsilon+2\eta\,, \tag{19}\]
respectively. Here, we have denoted the slow-roll parameters by
\[\epsilon = -\frac{\dot{H}}{H^{2}}\,, \tag{20}\] \[\eta = -\frac{\ddot{H}}{2H\dot{H}}\,. \tag{21}\]
Under the canonical scalar field assumption, we can write the model lagrangian as
\[\mathcal{L}=X-V(\phi)\,, \tag{22}\]
where
\[X=-\frac{1}{2}h^{\mu\nu}\partial_{\mu}\phi\partial_{\nu}\phi\,, \tag{23}\]
and \(V(\phi)\) are the kinetic and (spatially homogenous) potential terms, respectively. The energy density and pressure of the associated matter content inside the early Universe are now
\[\rho_{\phi} = \frac{\dot{\phi}^{2}}{2}+V(\phi)\,, \tag{24}\] \[p_{\phi} = \frac{\dot{\phi}^{2}}{2}-V(\phi)\,. \tag{25}\]
They obey the continuity equation (13), which can be recast in the Klein-Gordon-like form
\[\ddot{\phi}+3H\dot{\phi}+\partial_{\phi}V(\phi)=0\,, \tag{26}\]
where we have used the shorthand notation \(\partial_{\phi}V\equiv\frac{dV}{d\phi}\).
Assuming the potential energy dominates all other forms of energy, the slow-roll approximation is expressed as [59; 61]
\[\ddot{\phi}\ll H\dot{\phi}\,,\qquad\frac{\dot{\phi}^{2}}{2}\ll V(\phi)\,. \tag{27}\]
If we solve the first modified Friedmann equation (16) with respect to \(H\) and use the second of the above conditions, we obtain to the leading order in \(\kappa\)
\[H\simeq\sqrt{\frac{8\pi G}{3}V}+\sqrt{\frac{27\pi}{2G^{7}V^{3}}}\frac{\kappa^{2}}{64}\,, \tag{28}\]
where we have omitted the field dependence of \(V\) to streamline the notation.
On the other hand, the second Friedmann equation (17) along with Eqs. (24) and (25) give
\[\dot{H}\simeq\left(-4\pi G+\frac{9\pi\kappa^{2}}{32G^{3}V^{2}}\right)\dot{\phi }^{2}\,. \tag{29}\]
Hence, the slow-roll parameters (20) and (21) take the form
\[\epsilon \simeq \left(\frac{3}{2V}-\frac{27\kappa^{2}}{128}\frac{1}{G^{4}V^{3}} \right)\dot{\phi}^{2}\,, \tag{30}\] \[\eta \simeq \left[-\sqrt{\frac{3}{8\pi GV}}\,\ddot{\phi}\right.\] (31) \[+\left.\sqrt{\frac{3}{2\pi G^{9}\,V^{7}}}\frac{9\kappa^{2}}{512} \left(\ddot{\phi}\,V-2\dot{\phi}^{2}\,\partial_{\phi}V\right)\right]\frac{1}{ \dot{\phi}}\,\,.\]
As observed in [56], these two parameters are to be computed at the horizon crossing, where the fluctuations of the inflaton freeze.
The amount of inflation that occurs is measured by the number \(N\) of e-folds [62]
\[N=\int_{t_{i}}^{t_{f}}H(t)dt\,, \tag{32}\]
where we have denoted the initial (final) time of the inflation by \(t_{i}\) (\(t_{f}\)). Since the primordial fluctuations of the inflaton are relevant during the horizon crossing and the power spectrum of the inflation dynamics is evaluated at this point, the beginning time of the inflation is set to the horizon crossing time, i.e. \(t_{i}\equiv t_{c}\). In terms of the field, this amounts to saying \(\phi_{i}\equiv\phi_{c}\), which allows us to write Eq. (32) in the equivalent form
\[N=\int_{\phi_{c}}^{\phi_{f}}\frac{H}{\dot{\phi}}\,d\phi\,, \tag{33}\]
where we have defined \(\phi_{c}\equiv\phi(t_{c})\) and \(\phi_{f}\equiv\phi(t_{f})\).
The dynamics of the inflaton field is governed by the potential \(V(\phi)\). In our model, we assume a power-law potential in the form
\[V=V_{0}\phi^{n}\,, \tag{34}\]
where \(V_{0}>0\) is a constant with dimension \([E]^{4-n}\) associated with the energy scale of inflation \(E_{inf}\simeq 10^{15}\,\mathrm{GeV}\), and \(n>0\) is the power-term. From the latest data, we know that inflationary models with \(n\sim\mathcal{O}(10^{-1}\div 1)\) are observationally favored, while \(n\geq 2\) tends to be excluded in the minimal coupling setting. In the following, we assume \(n\simeq 1\).
Analytical solutions of the inflationary observable indices can be extracted by expressing \(\dot{\phi}\) and \(\ddot{\phi}\) as functions of the scalar field \(\phi\). Toward this end, we use the first of
the two slow-roll conditions in Eq. (27) to simplify the dynamics (26) as
\[\dot{\phi}\simeq-\frac{1}{3H}\partial_{\phi}V\,. \tag{35}\]
By substitution of Eqs. (28) and (34), we are led to
\[\dot{\phi}\simeq-\frac{n}{2}\sqrt{\frac{V_{0}}{6\pi G}}\phi^{(n-2)/2}+\frac{3n \kappa^{2}}{512}\sqrt{\frac{3}{2\pi\left(G^{9}V_{0}^{3}\right)}}\phi^{-(2+3n) /2} \tag{36}\]
Let us impose that inflation (and counting of e-folds \(N\)) ends when \(\epsilon(\phi_{f})\sim 1\). We first recast Eq. (30) as
\[\epsilon(\phi_{f})\simeq\frac{1}{16\pi G\phi_{f}^{2}}-\frac{27\kappa^{2}}{2048\pi G^{5}(V_{0}\phi_{f}^{2})^{2}}\,, \tag{37}\]
where we have used the power-law potential (34) with \(n\simeq 1\). Solving with respect to \(\phi_{f}\), we obtain
\[\phi_{f}\simeq\frac{1}{4\sqrt{\pi G}}-\frac{27\kappa^{2}}{64V_{0}^{2}}\sqrt{ \frac{\pi}{G^{7}}}\,, \tag{38}\]
where we have considered the only solution that recovers the correct limit for \(\kappa\to 0\).
The value \(\phi_{c}\) of the inflaton at the horizon crossing can then be estimated from Eq. (33), which gives
\[N\simeq-\frac{1}{4}+4\pi G\phi_{c}^{2}+\frac{9\pi\kappa^{2}\left[3+\log\left( 16\pi G\phi_{c}^{2}\right)\right]}{32G^{3}V_{0}^{2}}\,. \tag{39}\]
To solve this equation, we make the ansatz that the two terms in the square brackets are of the same order. We check a posteriori the validity of this assumption (see below Eq. (46)). Under the above condition, we obtain
\[\phi_{c}\simeq\frac{1}{4}\sqrt{\frac{1+4N}{\pi G}}-\frac{1.2\kappa^{2}}{V_{0} ^{2}}\sqrt{\frac{\pi}{\left(1+4N\right)G^{7}}}\,. \tag{40}\]
By use of this relation, we can cast the tensor-to-scalar ratio (18) and the scalar spectral index (19) in terms of the e-folding number \(N\). To this aim, we observe that
\[\epsilon(\phi_{c}) \simeq \frac{1}{1+4N}+\frac{6.2\pi\kappa^{2}}{G^{3}\left[V_{0}\left(1+4 N\right)\right]^{2}}\,, \tag{41}\] \[\eta(\phi_{c}) \simeq \frac{1}{-1-4N}-\frac{4\pi\kappa^{2}}{G^{3}\left[V_{0}\left(1+4 N\right)\right]^{2}}\,. \tag{42}\]
which can be inserted into Eqs. (18) and (19) to give
\[r \simeq \frac{16}{1+4N}+\frac{10^{2}\pi\kappa^{2}}{G^{3}\left[V_{0}\left( 1+4N\right)\right]^{2}}\,, \tag{43}\] \[n_{s} \simeq \frac{4N-7}{1+4N}-\frac{45.5\pi\kappa^{2}}{G^{3}\left[V_{0}\left( 1+4N\right)\right]^{2}}\,. \tag{44}\]
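To get a quantitative feel for Eqs. (43)-(44), one can evaluate them numerically; packaging the correction into the single dimensionless combination \(u=\pi\kappa^{2}/[G^{3}V_{0}^{2}(1+4N)^{2}]\) is our own repackaging, while the coefficients are those of the equations above.

```python
N = 60.0  # number of e-folds

def observables(u):
    """r and n_s from Eqs. (43)-(44) with u = pi kappa^2 / (G^3 V0^2 (1+4N)^2)."""
    r = 16.0 / (1.0 + 4.0 * N) + 1e2 * u
    ns = (4.0 * N - 7.0) / (1.0 + 4.0 * N) - 45.5 * u
    return r, ns

for u in (0.0, 1e-5, 1e-4):
    print(u, observables(u))
# r grows and n_s decreases monotonically with kappa^2, so the Planck bounds
# (45)-(46) translate into the upper bound on kappa shown in Figs. 1 and 2.
```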
We now notice that the above description of inflation in Kaniadakis cosmology is phenomenologically consistent, provided that Eqs. (43) and (44) match with the latest Planck observations, which set the experimental bounds [63]
\[r<0.064\qquad\text{(95\% CL)}\,, \tag{45}\]
(from Planck TT,TE,EE+long+lowEB) and
\[n_{s}=0.9649\pm 0.0042\quad\text{(68\% CL)}\,, \tag{46}\]
(from Planck TT, TE, EE+lowE+lensing).
The behavior of \(r\) and \(n_{s}\) in Eqs. (43) and (44) is plotted in Figs. 1 and 2, respectively, versus the (rescaled) Kaniadakis parameter. Following the standard literature [64; 65; 66; 67], we have fixed the number of e-folds to \(N\gtrsim 60\).
Figure 2: Plot of \(n_{s}\) versus the (rescaled) Kaniadakis parameter. The light grey-shaded regions are excluded by the observational constraint (46), which is represented (within the margin of the experimental error) by the horizontal red-dashed lines. The vertical black-solid line delimitates the dark-grey shaded region excluded by the upper bound on \(\kappa\) fixed by Eq. (45). We have set \(N\gtrsim 60\).
Figure 1: Plot of \(r\) versus the (rescaled) Kaniadakis parameter. The grey-shaded region is excluded by the observational constraint (45), which is represented by the horizontal red-dashed line. The ensuing upper bound \(\kappa\lesssim 6.05\times 10^{-13}\) on Kaniadakis parameter is indicated by the vertical black-solid line. We have set \(N\gtrsim 60\).
From Fig. 1 we see that the \(\kappa\)-corrected tensor-to-scalar ratio is observationally consistent as long as \(\kappa\lesssim 6\times 10^{-13}\). Moreover, with this constraint we also fit the prediction of the scalar spectral index within the margin of the experimental error, as shown in Fig. 2.
It remains to be seen whether the ansatz below Eq. (39) is valid. By using Eq. (40) with \(N\gtrsim 60\), we then get \(\log\left(16\pi G\phi_{c}^{2}\right)/3\sim\mathcal{O}(1)\), which confirms our assumption.
Therefore, we conclude that Kaniadakis cosmology allows for the slow-roll inflationary era of a Universe driven by a scalar field with power-law potential (\(n\sim\mathcal{O}(1)\)), setting the constraint
\[\kappa\lesssim\mathcal{O}(10^{-13}\div 10^{-12}) \tag{47}\]
on the entropic parameter. More discussion on this result compared to the past literature can be found in the concluding section.
## IV Growth of cosmological perturbations
In this section we explore the influence of the modified Kaniadakis entropy on the growth of cosmological perturbations at the early stages of the Universe. Following [45], we consider, for simplicity, a Universe filled with pressureless matter. This assumption does not affect the generality of our considerations. For later convenience, we recast the modified Friedmann equation (17) as
\[\frac{\ddot{a}}{a}\simeq-\frac{4\pi G}{3}\,\rho+\frac{15\pi\kappa^{2}}{32G^{3} \rho}\,, \tag{48}\]
where we have used
\[\frac{\ddot{a}}{a}=\dot{H}+H^{2}\,, \tag{49}\]
along with the first Friedmann equation (16). The conservation equation (13) for the dust matter takes the form
\[\dot{\rho}+3H\rho=0\,. \tag{50}\]
To study the evolution of the cosmological perturbations, we use the Top-Hat Spherical Collapse (SC) model [42]. In this formalism, one considers a uniform and spherically symmetric perturbation in an expanding background and analyzes the growth of perturbations in a spherical region of radius \(a_{p}\) and density \(\rho^{c}\) by resorting to the same Friedmann equations as for the underlying theory of gravity [43; 44; 45]. At time \(t\), the density of the fluid in this region can be written as [45]
\[\rho^{c}=\rho(t)+\delta\rho\,, \tag{51}\]
where \(\delta\rho\) denotes the density fluctuation. The conservation equation reads
\[\dot{\rho}^{c}+3h\rho^{c}=0\,, \tag{52}\]
where \(h=\dot{a}_{p}/a_{p}\) is the local Hubble rate of the perturbed region. Additionally, since Eq. (48) is valid in the whole spacetime, it can be specifically written for the perturbed region to give
\[\frac{\ddot{a}_{p}}{a_{p}}\simeq-\frac{4\pi G}{3}\rho^{c}+\frac{15\pi\kappa^{2 }}{32G^{3}\rho^{c}}\,. \tag{53}\]
Let us now define the density contrast \(\delta\) of the fluid in the Universe by
\[\delta=\frac{\rho^{c}}{\rho}-1=\frac{\delta\rho}{\rho}\ll 1\,, \tag{54}\]
where we have reasonably assumed the fluctuation to be much smaller than the density itself (linear regime). Deriving the above relation respect to \(t\) and using Eqs. (50) and (52), we obtain
\[\dot{\delta}=3\left(1+\delta\right)\left(H-h\right), \tag{55}\]
which by further derivation gives
\[\ddot{\delta}=3\left(1+\delta\right)\left(\dot{H}-\dot{h}\right)+\frac{\dot{ \delta}^{2}}{1+\delta}\,. \tag{56}\]
This is the differential equation for the evolution of the matter perturbations.
We can observe that the simultaneous usage of Eqs. (48), (53) and (54) leads to
\[\dot{H}-\dot{h}\simeq h^{2}-H^{2}+\left(\frac{4\pi G}{3}\,\rho+\frac{15\pi \kappa^{2}}{32G^{3}\rho}\right)\delta\,, \tag{57}\]
where we have expanded to the leading order in \(\delta\) due to the linear regime we are working in. Hence, we are allowed to recast the differential equation (56) as
\[\ddot{\delta}+2H\dot{\delta}-\left(4\pi G\rho+\frac{45\pi\kappa^{2}}{32G^{3} \rho}\right)\delta=0\,. \tag{58}\]
In order to analyze the impact of Kaniadakis entropy on the evolution of the density contrast, it is convenient to express Eq. (58) in terms of the redshift parameter
\[z=\frac{1-a}{a}\,, \tag{59}\]
where we have set the present value of the scale factor to unity. First, we recall that the matter energy density is given by
\[\rho=\rho_{0}\left(1+z\right)^{3}\,. \tag{60}\]
Then, we replace the time derivatives with derivatives with respect to \(a\) (denoted by a prime). Toward this end, we note that
\[\dot{\delta} = aH\delta^{\prime}\,, \tag{61}\] \[\ddot{\delta} = a^{2}H^{2}\delta^{\prime\prime}+a\left(H^{2}-4\pi G\rho+\frac{2 \pi^{3}\kappa^{2}\rho}{GH^{4}}\right)\delta^{\prime}\,. \tag{62}\]
Inserting into Eq. (58), we are finally led to
\[\delta^{\prime\prime}+\frac{3}{2a}\left(1+\frac{9\kappa^{2}}{64G^{4}\rho^{2}} \right)\delta^{\prime}-\frac{3}{2a^{2}}\left(1+\frac{9\kappa^{2}}{32G^{4}\rho^{ 2}}\right)\delta=0\,. \tag{63}\]
It can be easily checked that, for \(\kappa\to 0\), this equation reduces to
\[\delta^{\prime\prime}+\frac{3}{2a}\delta^{\prime}-\frac{3}{2a^{2}}\delta=0\,, \tag{64}\]
which correctly reproduces the result of the SMC in the absence of the cosmological constant [42].
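Eq. (63) is readily integrated numerically; below is a sketch with scipy in units \(G=\rho_{0}=1\), with deliberately overestimated \(\kappa\) (as in Fig. 3) and the growing-mode initial condition \(\delta\propto a\) of the standard case. All of these numerical choices are ours, for illustration only.

```python
import numpy as np
from scipy.integrate import solve_ivp

G, rho0 = 1.0, 1.0

def rhs(a, y, kappa):
    """Eq. (63) as a first-order system, with dust density rho = rho0 / a^3."""
    delta, d_delta = y
    corr = 9.0 * kappa**2 / (64.0 * G**4 * (rho0 / a**3)**2)
    return [d_delta,
            -1.5 / a * (1.0 + corr) * d_delta
            + 1.5 / a**2 * (1.0 + 2.0 * corr) * delta]

a = np.linspace(1e-3, 1.0, 500)
for kappa in (0.0, 0.5, 1.0):
    sol = solve_ivp(rhs, (a[0], a[-1]), [a[0], 1.0],   # growing mode: delta ~ a
                    t_eval=a, args=(kappa,), rtol=1e-8)
    print(kappa, sol.y[0, -1])   # larger kappa -> larger final density contrast
```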
The behavior of the matter density contrast \(\delta\) versus redshift is plotted in Fig. 3 for different values of \(\kappa\), in comparison with the prediction of the SMC (\(\kappa=0\)). We can see that Kaniadakis corrections influence the growth of cosmological perturbations (in particular at lower redshift) in such a way that the higher \(\kappa\), the faster the growth rate. This result can be understood by observing that, according to the Kaniadakis prescription (7), the modified entropy (and, thus, the number of holographic degrees of freedom) of the Universe is increased compared to the standard Boltzmann-Gibbs scenario, supporting a more rapid growth of fluctuations in the energy density.
It is interesting to note that a similar behavior has been found in [45] in the framework of quantum gravity deformations of the entropy-area law, motivated by the intricate fractal structure of the apparent horizon in Barrow cosmology. On the other hand, corrections induced by non-extensive Tsallis statistics may result in a faster or slower growth of perturbations, depending on the value of the model parameter [45].
## V Conclusion and discussion
Kaniadakis entropy provides a coherent and self-consistent attempt to generalize Boltzmann-Gibbs-Shannon statistics to the relativistic framework. Furthermore, its holographic usage in cosmology underlies the derivation of the modified Friedmann equations, which predict an interesting phenomenology. In this work we have studied the implications of Kaniadakis cosmology for the slow-roll inflationary era. We have assumed the energy content of the Universe to be represented by a scalar field with a perfect fluid form and a power-law potential. The phenomenological consistency of our model has been explored by computing the scalar spectral index and tensor-to-scalar ratio at the horizon crossing. From the latest Planck data, we have obtained \(\kappa\lesssim\mathcal{O}(10^{-13}\div 10^{-12})\), to be compared with other estimates from Baryon Acoustic Oscillation (BAO) [38], Type Ia supernova2[38] and baryogenesis [39] measurements, among others (see Tab. 1).
Footnote 2: It is worth noting that the values of \(\kappa\) given in [38; 41] are in terms of the re-scaled parameter \(\beta=\kappa\frac{M_{0}^{2}}{H_{0}^{2}}\), where \(H_{0}\) is the present Hubble rate.
Although all these constraints on \(\kappa\) lie within the domain fixed by the upper bound (47), they span a relatively wide range. Despite not being contemplated in the original \(\kappa\)-statistics, this gap can be understood by allowing the Kaniadakis parameter to have a running behavior, namely to vary with the energy (or time) scale. Such a behavior is actually what one expects from the holographic application of the relativistic Kaniadakis prescription. In fact, entropy quantifies the physical degrees of freedom of a system - the Universe in our cosmological setup. In the standard model of cosmology, it is typically assumed that the dynamics of the early Universe was initially set by relativistic energy content (radiation-dominated era), which was later exceeded by the energy density of the non-relativistic matter (matter-dominated era) as the Universe cooled down. It is then natural to apply the same paradigm to the entropic degrees of freedom.
| \(|\kappa|\) | Physical framework | Ref. |
|---|---|---|
| \(10^{-125}\) | Baryon Acoustic Oscillations (BAO) | [38] |
| \(10^{-125}\) | CC+SNIa+BAO | [38] |
| \(10^{-124}\) | Cosmological constant (CC) | [38] |
| \(10^{-124}\) | Type Ia supernova (SNIa) | [38] |
| \(10^{-123}\) | Hubble data | [41] |
| \(10^{-123}\) | Strong lensing systems | [41] |
| \(10^{-123}\) | HII galaxies | [41] |
| \(10^{-83}\) | Baryogenesis | [39] |

Table 1: Cosmological constraints on the Kaniadakis parameter.
Figure 3: Plot of the density contrast \(\delta\) versus \(z\). The values of \(\kappa\) have been overestimated to graphically appreciate Kaniadakis corrections. We have assumed the same initial conditions as in [45] and worked in units of \(G\).
In the ensuing scenario, the relativistic nature of entropy would be quantified by a decreasing function of time, \(\kappa\equiv\kappa(t)\) (or, equivalently, by an increasing function of the energy scale, \(\kappa\equiv\kappa(E)\)), in such a way that it is maximal (\(\kappa\sim\mathcal{O}(1)\)) in the earliest stages of the Universe's existence, while recovering the standard Boltzmann-Gibbs-Shannon behavior (\(\kappa\simeq 0\)) at present time. This conjecture would explain in a unified picture the predictions \(\kappa\lesssim\mathcal{O}(10^{-13}\div 10^{-12})\) inferred from inflation (\(E_{inf}\simeq(10^{15}\div 10^{16})\,\text{GeV}\)), \(\kappa\simeq\mathcal{O}(10^{-83})\) from Baryogenesis [39] (\(100\,\text{GeV}\lesssim E_{bar}\lesssim 10^{12}\,\text{GeV}\)[68]) and \(\kappa\simeq\mathcal{O}(10^{-125})\) from Baryon Acoustic Oscillations measurements [38] (pre-recombination stage, \(E_{rec}\simeq(10^{-1}\div 1)\,\text{eV}\)). In passing, we mention that a similar analysis has been recently conducted within the framework of non-extensive Tsallis thermodynamics with varying exponent [69; 70; 71]. In that case, the running behavior is motivated by quantum field theoretical considerations associated with renormalization group flow, which is in principle unavoidable when one attempts to incorporate Tsallis entropy into a general framework that would be consistent with quantum gravity. Clearly, to substantiate this view in our case, one should consider an ab initio investigation of Kaniadakis cosmology equipped with a running parameter. This goes beyond the scope of the present analysis and will be explored elsewhere.
We have then explored the effects of the Kaniadakis entropic corrections on the growth of perturbations in the early stages of the Universe. Employing the spherically symmetric collapse formalism and working in the linear regime, we have extracted the differential equation for the evolution of the matter perturbations. For the matter density contrast, we have observed that the profile of the growth of perturbations differs from the standard cosmological model, with increasing values of \(\kappa\) corresponding to a faster growth of perturbations. Interestingly enough, a similar outcome has been recently exhibited in [72; 73; 74] in the context of both Tsallis and Barrow cosmologies.
Further aspects are yet to be investigated. In [75] a new description of inflation has been proposed by merging the Starobinsky model, the Appleby-Battye-Starobinsky parameterization of dark energy and a correction arising from modified \(F(R)\) gravity. It would be interesting to compare our Eqs. (43) and (44) with the predictions of this model, also in light of the Kaniadakis holographic description of dark energy proposed in [76]. Moreover, it would be worthwhile to examine the Kaniadakis influence on the growth of perturbations and structure formation in relation to results from other deformed cosmologies based on extended gravity [77; 45; 78]. Finally, to better test our model of cosmic inflation, we are interested in studying Kaniadakis corrections to primordial local non-Gaussianity. In [79] (and references therein) it has been pointed out that Maldacena's consistency relation \(f_{NL}=5(1-n_{s})/12\) between the amount of local squeezed non-Gaussianity \(f_{NL}\) and the spectral index \(n_{s}\)[80] cannot be directly observed. In fact, the quantity that is actually observable is
\[f_{NL}^{obs}=0+\mathcal{O}\left(\frac{k_{L}}{k_{S}}\right)^{2}\,, \tag{65}\]
where \(k_{L(S)}\) denote the long (short) wavelength modes of the inflaton in Fourier space and the \(\mathcal{O}\left(\frac{k_{L}}{k_{S}}\right)^{2}\) terms arise from non-primordial phenomena, such as gravitational lensing and redshift perturbations. The reason why Eq. (65) holds is that the primordial value of \(f_{NL}\) predicted by Maldacena's consistency relation in the co-moving gauge is canceled out by a correction \(-5(1-n_{s})/12+\mathcal{O}\left(\frac{k_{L}}{k_{S}}\right)^{2}\), caused by a change of coordinates to render observables gauge invariant. Clearly, the cancellation of the two terms occurs, leading to Eq. (65), provided that the prediction of primordial non-Gaussianity corresponds to Maldacena's consistency relation. The latter is known to be true for _attractor_ models of single-field inflation, i.e. models in which every background quantity during inflation is set by a single parameter (e.g. the Hubble rate), regardless of the initial conditions. Therefore, any appreciable measurement of local non-Gaussianity would rule out attractor single-field models of slow-roll inflation. On the other hand, more exotic descriptions of inflation, such as multi-field or non-attractor models, do not satisfy Maldacena's condition [81; 82] and would potentially be compatible with observations of primordial local non-Gaussianity. In this scenario, it is desirable to understand whether our Kaniadakis model of inflation falls within the latter class and, in that case, how the constraint (47) reconciles with the current bound on local non-Gaussianity from Planck [63]. Work along these directions is under active consideration and is left for future investigations.
###### Acknowledgements.
GGL acknowledges the Spanish "Ministerio de Universidades" for the awarded Maria Zambrano fellowship and funding received from the European Union - NextGenerationEU. He is also grateful for participation to LISA cosmology Working group. GGL and GL acknowledge networking support from the COST Action CA18108 "Quantum Gravity Phenomenology in the Multimessenger Approach".
|
2305.05540 | Direct Poisson neural networks: Learning non-symplectic mechanical
systems | In this paper, we present neural networks learning mechanical systems that
are both symplectic (for instance particle mechanics) and non-symplectic (for
instance rotating rigid body). Mechanical systems have Hamiltonian evolution,
which consists of two building blocks: a Poisson bracket and an energy
functional. We feed a set of snapshots of a Hamiltonian system to our neural
network models which then find both the two building blocks. In particular, the
models distinguish between symplectic systems (with non-degenerate Poisson
brackets) and non-symplectic systems (degenerate brackets). In contrast with
earlier works, our approach does not assume any further a priori information
about the dynamics except its Hamiltonianity, and it returns Poisson brackets
that satisfy Jacobi identity. Finally, the models indicate whether a system of
equations is Hamiltonian or not. | Martin Šípka, Michal Pavelka, Oğul Esen, Miroslav Grmela | 2023-05-07T15:24:41Z | http://arxiv.org/abs/2305.05540v1 | # Direct Poisson neural networks: Learning non-symplectic mechanical systems
###### Abstract
In this paper, we present neural networks learning mechanical systems that are both symplectic (for instance particle mechanics) and non-symplectic (for instance rotating rigid body). Mechanical systems have Hamiltonian evolution, which consists of two building blocks: a Poisson bracket and an energy functional. We feed a set of snapshots of a Hamiltonian system to our neural network models which then find both the two building blocks. In particular, the models distinguish between symplectic systems (with non-degenerate Poisson brackets) and non-symplectic systems (degenerate brackets). In contrast with earlier works, our approach does not assume any further a priori information about the dynamics except its Hamiltonianity, and it returns Poisson brackets that satisfy Jacobi identity. Finally, the models indicate whether a system of equations is Hamiltonian or not.
###### Contents
* 1 Introduction
* 2 Hamiltonian Dynamics on Poisson Geometry
* 2.1 General Formulation
* 2.2 \(3D\) Hamiltonian Dynamics
* 2.3 \(4D\) Hamiltonian Dynamics
* 2.4 Semi-direct Extension to a \(6D\) system
* 3 Learning Hamiltonian systems
* 3.1 Rigid body
* 3.2 Particle in 2D
* 3.3 Shivamoggi equations
* 3.4 Particle in 3D
* 3.5 Heavy top
* 4 Learning non-Hamiltonian systems
* 5 Conclusion
## 1 Introduction
The estimation of unknown parameters in physics and engineering is a standard step in many well-established methods and workflows. One usually starts with a model - a set of assumptions and equations that are considered given - and then, based on the available data, estimates the exact form of the evolution equations for the system of interest. As an example, we can consider a situation where we need to estimate the mass of a star far away based on its interaction with light [1], or when the moments of inertia of an asteroid are inferred from its rotations [2]. The assumptions can be of varying complexity, and the method for parameter estimation should be chosen accordingly.
Techniques for machine learning of dynamical systems have sparked significant interest in recent years. With the rise of neural-network-related advances, several methods have been developed for capturing the behavior of dynamical systems, each with its advantages and drawbacks. A symbolic approach (for instance [3]) allows us to learn the precise symbolic form of the equations from a predefined set of allowed operations. This is often the most efficient approach and frequently leads to an exact match between the learned and target systems, but the class of captured equations is by definition limited by the algebraic operations we consider as candidates.
Alternatively, one can learn directly the equations of motion
\[\dot{\mathbf{x}}=f(\mathbf{x},\theta) \tag{1}\]
by learning \(f\) parameterized by weights \(\theta\). The function can be represented by any function approximator, in many cases by a neural network. Although this approach is very general, it does not incorporate any known physics into the procedure. There is no concept of the energy of the system, no quantities are implicitly conserved, and the method thus might produce unphysical predictions. A remedy is the concept of physics-informed machine learning, which constrains the neural network models so that they obey some required laws of physics [4]. In particular, models of mechanical systems, which can be described by Hamiltonian mechanics, preserve several physical quantities like energy or angular momentum, as well as geometric quantities (for instance the symplectic two-form) that ensure the self-consistency of the systems. A neural-network model learning a Hamiltonian system from its trajectories that is compatible with the underlying geometry, without any a priori knowledge about the system, has been missing, to the best of our knowledge,
and it is the main purpose of the current manuscript to introduce it. Moreover, we present several models that vary in how strictly they reproduce the underlying geometry, and the degree to which these models learn a system can be used to estimate whether the system is Hamiltonian or not.
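For concreteness, a minimal PyTorch sketch of the black-box approach in Eq. (1); the network architecture, the explicit-Euler discretization of snapshot pairs \((\mathbf{x}_{n},\mathbf{x}_{n+1})\), and the time step are our illustrative choices:

```python
import torch

class BlackBoxRHS(torch.nn.Module):
    """Generic right-hand side f(x, theta) of Eq. (1); no physics built in."""
    def __init__(self, dim, width=64):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(dim, width), torch.nn.Tanh(),
            torch.nn.Linear(width, width), torch.nn.Tanh(),
            torch.nn.Linear(width, dim))

    def forward(self, x):
        return self.net(x)

def euler_loss(model, x0, x1, dt):
    """One-step explicit-Euler reconstruction error on snapshot pairs."""
    return ((x0 + dt * model(x0) - x1) ** 2).mean()
```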
**Geometrization of a dynamical system.** A dynamical system is described by a differential equation; in particular, a mechanical system obeys Hamiltonian evolution equations. These equations are of geometric origin, invariant with respect to changes of coordinates and preserved during the motion of the system. The geometry of Hamiltonian systems goes back to Sophus Lie and Henri Poincaré [5, 6]. Modern approaches extend to infinite-dimensional systems and provide foundations for many parts of nowadays physics [7, 8, 9, 10]. Formulating a mechanical system geometrically typically means finding a bracket algebra (such as symplectic, Poisson, Jacobi, Leibniz, etc.) and a generating function (such as Hamiltonian, energy, or entropy). The bracket is generally required to satisfy some algebraic conditions (Jacobi identity, Leibniz identity, etc.). However, there is no general algorithmic way to obtain the Hamiltonian formulation (even if it exists) of a given system by means of analytical calculations, which makes such an analysis well suited for machine learning.
Apart from Hamiltonian mechanics, one can also include dissipation [11, 12, 13] or extend the learning of Hamiltonian systems to control problems [14]. Such an approach, with a suitable choice of integrator, then ensures the conservation of physically important quantities, such as energy, momentum or angular momentum.
A reversible system is a candidate for being a Hamiltonian system. For a reversible system, the starting point could be to search for a symplectic manifold and a Hamiltonian function. Learning the symplectic character (if it exists) of a physical system (including particles in potential fields and pendulums of various complexities) can be done using neural networks, see, for example, [15, 16]. Symplectic geometry exists only in even-dimensional models and, due to the non-degeneracy requirement, it is very rigid. A generalization of symplectic geometry is Poisson geometry, where the non-degeneracy requirement is relaxed [17, 18]. In Poisson geometry, there exists a Poisson bracket (defining an algebra on the space of functions) satisfying the Leibniz rule and the Jacobi identity. This generalization permits (Hamiltonian) analysis in odd dimensions. The degenerate character of Poisson geometry brings some advantages for the investigation of (total, or even super) integrability of the dynamics.
The point of this paper is to better learn Hamiltonian mechanics. We build neural networks that encode the building blocks of Hamiltonian dynamics admitting Poisson geometry. According to the Darboux-Weinstein theorem [19], for a Poisson manifold there are canonical coordinates which make the Poisson bivector (determining the bracket) constant. This formulation also has geometric implications, since it determines the symplectic foliation of the Poisson manifold (see more details in the main body of this paper). Recently, Poisson neural networks (abbreviated as PNN) were proposed to learn Hamiltonian systems [20] by transforming the system into the Darboux-Weinstein coordinates. But, for many physical systems, the Poisson bracket is far from canonical, and the dimension of the symplectic part of the Darboux-Weinstein Poisson bivector may be a priori unknown. For \(n\)-dimensional physical models, the Jacobi identity, which ensures consistency of the underlying Poisson bivector, is a system of PDEs. To
determine the Poisson structure, one needs to solve this system analytically, which is usually difficult, or enforce its validity while learning the Poisson bivector.
**The goal of this work.** The novel result of this paper is what we call a Direct Poisson Neural Network (abbreviated as DPNN) which learns the Poisson structure without assuming any particular form of the Poisson structure such as Darboux-Weinstein coordinates. Instead, DPNN learns directly in the coordinate system in which the data are provided. There are several advantages of DPNN: **(i)** We do not need to know a priori the degeneracy level of the Poisson structure (or in other terms the dimensions of the symplectic foliations of the Poisson manifold) **(ii)** it is easier to learn the Hamiltonian (energy), and **(iii)** Jacobi identity is satisfied on the data, not only in a representation accessible only through another neural network (an Invertible Neural Network in [20]). DPNN learns Poisson systems by identifying directly the Poisson bivector and the Hamiltonian as functions of the state variables.
We actually provide three flavors of DPNNs. The least-informed flavor directly learns the Hamiltonian function and the Poisson bivector, assuming its skew-symmetry but not the Jacobi identity. Another flavor adds squares of Jacobi identity to the loss function and thus softly imposes its validity. The most geometry-informed flavor automatically satisfies Jacobi identity by building the Poisson bivector as a general solution to Jacobi identity in three dimensions. While the most geometry-informed version is typically most successful in learning Hamiltonian systems, it is restricted to three-dimensional systems, where the general solution of Jacobi identity is available. The second flavor is typically a bit less precise, and the least-informed flavor is usually the least precise, albeit still being able to learn Hamiltonian systems to a good degree of precision.
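As an illustration, a minimal sketch of the least-informed DPNN flavor is given below; the architecture (widths, activations) is our illustrative choice, while the skew-symmetry of the learned bivector is exact by construction. The soft Jacobi penalty of the second flavor can be assembled from the residual sketched at the end of Sec. 2.1.

```python
import torch

class DPNN(torch.nn.Module):
    """Sketch of a Direct Poisson Neural Network: learns a skew-symmetric
    bivector L(x) and an energy H(x) directly in the data coordinates."""
    def __init__(self, dim, width=64):
        super().__init__()
        self.dim = dim
        n_tri = dim * (dim - 1) // 2
        self.L_net = torch.nn.Sequential(
            torch.nn.Linear(dim, width), torch.nn.Tanh(),
            torch.nn.Linear(width, n_tri))
        self.H_net = torch.nn.Sequential(
            torch.nn.Linear(dim, width), torch.nn.Tanh(),
            torch.nn.Linear(width, 1))

    def bivector(self, x):
        # Fill the strictly upper triangle and antisymmetrize, so that
        # skew-symmetry holds exactly by construction.
        L = x.new_zeros(x.shape[0], self.dim, self.dim)
        iu = torch.triu_indices(self.dim, self.dim, offset=1)
        L[:, iu[0], iu[1]] = self.L_net(x)
        return L - L.transpose(1, 2)

    def forward(self, x):
        x = x.detach().requires_grad_(True)
        H = self.H_net(x).sum()
        dH = torch.autograd.grad(H, x, create_graph=True)[0]
        return torch.einsum('bij,bj->bi', self.bivector(x), dH)

# Training then matches the predicted x_dot = L(x) dH(x) against the data,
# e.g. loss = ((model(x) - x_dot)**2).mean().
```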
Interestingly, when we try to learn a non-Hamiltonian model by these three flavors of DPNNs, the order of precision is reversed and the least-informed flavor becomes most precise. The order of precision of the DPNNs flavors thus indicates whether a system of equations is Hamiltonian or not.
Section 2 recalls Poisson dynamics, in particular symplectic dynamics, rigid body mechanics, Shivamoggi equations, and the evolution of the heavy top. Section 3 introduces DPNNs and illustrates their use on learning Hamiltonian systems. Finally, Section 4 shows DPNNs applied on a non-Hamiltonian system (dissipative rigid body).
## 2 Hamiltonian Dynamics on Poisson Geometry
### General Formulation
A Poisson bracket on a manifold \(\mathcal{M}\) (physically corresponding to the state space, for instance position and momentum of the body) is a skew-symmetric bilinear algebra on the space \(\mathcal{F}(\mathcal{M})\) of smooth functions on \(\mathcal{M}\) given by
\[\{\bullet,\bullet\}:\mathcal{F}(\mathcal{M})\times\mathcal{F}(\mathcal{M}) \rightarrow\mathcal{F}(\mathcal{M}). \tag{1}\]
Poisson brackets satisfy the Leibniz rule
\[\{F,HG\}=\{F,H\}G+H\{F,G\}, \tag{2}\]
and the Jacobi identity,
\[\{F,\{H,G\}\}+\{H,\{G,F\}\}+\{G,\{F,H\}\}=0, \tag{3}\]
for arbitrary functions \(F\), \(H\) and \(G\), [10, 17, 18]. A manifold equipped with a Poisson bracket is called a Poisson manifold and is denoted by a two-tuple \((\mathcal{M},\{\bullet,\bullet\})\). A function \(C\) is called a Casimir function if it commutes with all other functions that is \(\{F,C\}=0\) for all \(F\). For instance, the magnitude of the angular momentum of a rigid body is a Casimir function.
**Hamiltonian Dynamics.** Hamiltonian dynamics can be seen as evolution on a Poisson manifold. For a Hamiltonian function (physically energy) \(H\) on \(\mathcal{M}\), the Hamiltonian vector field and Hamilton's equation are
\[X_{H}(F):=\{F,H\},\qquad\dot{\mathbf{x}}=X_{H}(\mathbf{x})=\{\mathbf{x},H\}, \tag{4}\]
respectively, where \(\mathbf{x}\in\mathcal{M}\) is a parametrization of manifold \(\mathcal{M}\). The algebraic properties of the Poisson bracket have some physical consequences. Skew-symmetry implies energy conservation,
\[\dot{H}=\{H,H\}=0, \tag{5}\]
while the Leibniz rule ensures that the dynamics does not depend on biasing the energy by a constant. Referring to a Poisson bracket, one may determine the Poisson bivector field according to
\[L(dF,dH):=\{F,H\}. \tag{6}\]
This identification makes it possible to define a Poisson manifold by a tuple \((\mathcal{M},L)\) consisting of a manifold and a Poisson bivector. In this notation, the Jacobi identity can be rewritten as \(\mathcal{L}_{\mathbf{X}_{H}}L=0\), that is the Lie derivative of the Poisson bivector with respect to the Hamiltonian vector field is zero [21]. In other words, the Jacobi identity expresses the self-consistency of the Hamiltonian dynamics in the sense that both the building blocks (Hamiltonian function and the bivector field) are constant along the evolution.
Assuming a local coordinate system \(\mathbf{x}=(x^{i})\) on \(\mathcal{M}\), the Poisson bivector determines the Poisson matrix \(L=[L^{kl}]\), which enables us to write [22]
\[L=L^{kl}\frac{\partial}{\partial x^{k}}\wedge\frac{\partial}{\partial x^{l}}. \tag{7}\]
In this realization, the Poisson bracket and Hamilton's equations are written as
\[\{F,H\}=L^{kl}\frac{\partial F}{\partial x^{k}}\frac{\partial H}{\partial x^{ l}},\qquad\dot{x}^{k}=L^{kl}\frac{\partial H}{\partial x^{l}}, \tag{8}\]
respectively. Here, we have assumed summation over the repeated indices. Further, the Jacobi identity (3) turns out to be the following system of PDEs
\[L^{kl}\frac{\partial L^{ij}}{\partial x^{k}}+L^{ki}\frac{\partial L^{jl}}{ \partial x^{k}}+L^{kj}\frac{\partial L^{li}}{\partial x^{k}}=0. \tag{9}\]
The left-hand side of this equation is called the Jacobiator, and in the case of Hamiltonian systems, it is equal to zero. Jacobi identity (9) is a system of first-order PDEs. In \(3D\), Jacobi identity (9) is a single PDE whose general solution is known [23]. In \(4D\), Jacobi identity (9) consists of four PDEs, and there are some partial results, but for arbitrary \(n\), to our knowledge, there is no general solution yet. We shall focus on the \(3D\), \(4D\), and \(6D\) cases in the upcoming subsections.
**Symplectic Manifolds.** If there is no non-constant Casimir function for a Poisson manifold, then it is also a symplectic manifold. Although we can see symplectic manifolds as examples of Poisson manifolds, it is possible to define a symplectic manifold in a direct way without referring to a Poisson manifold. A manifold \(\mathcal{M}\) is called symplectic if it is equipped with a closed non-degenerate two-form (called a symplectic two-form) \(\Omega\). A two-form is called non-degenerate when
\[\Omega(X,Y)=0,\qquad\forall X\in\mathfrak{X}(\mathcal{M}) \tag{10}\]
implies \(Y=0\). A two-form is closed when being in the kernel of deRham exterior derivative, \(d\Omega=0\). A Hamiltonian vector field \(X_{H}\) on a symplectic manifold \((\mathcal{M},\Omega)\) is defined as
\[\iota_{X_{H}}\Omega=dH, \tag{11}\]
where \(\iota\) is the contraction operator (more precisely the interior derivative) [21]. Referring to a symplectic manifold one can define a Poisson bracket
\[\{F,H\}:=\Omega(X_{F},X_{H}), \tag{12}\]
where the Hamiltonian vector fields are defined through Equation (11). The closedness of the symplectic two-form \(\Omega\) guarantees the Jacobi identity (3). The non-degeneracy condition of \(\Omega\) puts an extra condition to the bracket (12) that the Casimir functions are only the constant functions, in contrast with Poisson manifolds, which may have also non-constant Casimirs. The Darboux-Weinstein coordinates show more explicitly the relationship between Poisson and symplectic manifolds in a local picture.
**Darboux-Weinstein Coordinates.** We start with \(n=(2m+k)\)-dimensional Poisson manifold \(\mathcal{M}\) equipped with Poisson bivector \(L\). Near every point of the Poisson manifold, the Darboux-Weinstein coordinates \((x^{i})=(q^{a},p_{b},u^{\alpha})\) (here \(a\) runs from \(1\) to \(m\), and \(\alpha\) runs from \(1\) to \(k\)) give a local form of the Poisson bivector
\[L=\frac{\partial}{\partial q^{a}}\wedge\frac{\partial}{\partial p_{a}}+\frac {1}{2}\lambda^{\alpha\beta}\frac{\partial}{\partial u^{\alpha}}\wedge\frac{ \partial}{\partial u^{\beta}} \tag{13}\]
with the coefficient functions \(\lambda^{\alpha\beta}\) equal zero at the origin. If \(k=0\) in the local formula (13), then there remains only the first term on the right-hand side and the Poisson manifold turns out to be a \(2m\)-dimensional symplectic manifold.
Newtonian mechanics, for instance, fits this kind of realization. On the other hand, if \(m=0\) in (13), then there remains only the second term which is a full-degenerate Poisson bivector. A large class of Poisson manifolds is of this form, namely Lie-Poisson structure on the dual of a Lie algebra including rigid body dynamics, Vlasov dynamics, etc. In general, Poisson bivectors have both the symplectic part as well as the fully degenerate part, for instance, the heavy top dynamics in Section 2.4.
When the Poisson bivector is non-degenerate, it generates a symplectic Poisson bracket, and it commutes with the canonical Poisson bivector
\[\mathbf{L}_{can}=\begin{pmatrix}\mathbf{0}&\mathbf{I}\\ -\mathbf{I}&\mathbf{0}\end{pmatrix} \tag{14}\]
in the sense that
\[\mathbf{L}\cdot\mathbf{L}_{can}-\mathbf{L}_{can}\cdot\mathbf{L}=\mathbf{0}. \tag{15}\]
This compatibility condition is employed later in Section 3 to measure the error of DPNNs when learning symplectic Poisson bivectors.
### \(3d\) Hamiltonian Dynamics
In this subsection, we focus on three-dimensional Poisson manifolds, following [24, 25, 26]. One of the important observations in \(3D\) is the isomorphism between the space of vectors and the space of skew-symmetric matrices given by
\[\mathbf{L}=\begin{pmatrix}0&-J_{z}&J_{y}\\ J_{z}&0&-J_{x}\\ -J_{y}&J_{x}&0\end{pmatrix}\leftrightarrow\mathbf{J}=(J_{x},J_{y},J_{z}). \tag{16}\]
This isomorphism lets us write Jacobi identity (9) as a single scalar equation
\[\mathbf{J}\cdot(\nabla\times\mathbf{J})=0, \tag{17}\]
see, for example, [25, 27, 28, 29]. The general solution of Jacobi identity (17) is
\[\mathbf{J}=\frac{1}{\phi}\nabla C \tag{18}\]
for arbitrary functions \(\phi\) and \(C\), where \(C\) is a Casimir function. Hamilton's equation then takes the particular form
\[\mathbf{\dot{x}}=\mathbf{J}\times\nabla H=\frac{1}{\phi}\nabla C\times\nabla H. \tag{19}\]
Note that by changing the roles of the Hamiltonian function \(H\) and the Casimir \(C\) one can arrive at another Hamiltonian structure for the same system. In this case, the Poisson vector is defined as \(\mathbf{J}=-(1/\phi)\nabla H\) and the Hamiltonian function is \(C\). This is an example of a bi-Hamiltonian system, which manifests integrability [23, 30, 31, 32]. A bi-Hamiltonian system admits two different but compatible Hamiltonian formulations. In \(3D\), two Poisson vectors, say \(\mathbf{J}_{1}\) and \(\mathbf{J}_{2}\), are compatible if
\[\mathbf{J}_{1}\cdot(\nabla\times\mathbf{J}_{2})=\mathbf{J}_{2}\cdot(\nabla\times \mathbf{J}_{1}). \tag{20}\]
This compatibility condition will be used later in Section 3 to measure the error of learning Poisson bivectors in 3D by DPNNs.
**Rigid Body Dynamics.** Let us consider an example of a 3D Hamiltonian system, a freely rotating rigid body. The state variable \(\mathbf{M}\in\mathcal{M}\) is the angular momentum in the frame of reference co-rotating with the rigid body. The Poisson structure is
\[\{F,H\}^{(RB)}(\mathbf{M})=-\mathbf{M}\cdot\frac{\partial F}{\partial\mathbf{ M}}\times\frac{\partial H}{\partial\mathbf{M}}, \tag{21}\]
see [33]. Poisson bracket (21) is degenerate because it preserves any function of the magnitude of \(\mathbf{M}\). The Hamiltonian function is the energy
\[H=\frac{1}{2}\left(\frac{M_{x}^{2}}{I_{x}}+\frac{M_{y}^{2}}{I_{y}}+\frac{M_{z} ^{2}}{I_{z}}\right), \tag{22}\]
where \(I_{x}\), \(I_{y}\) and \(I_{z}\) are moments of inertia of the body. In this case, Hamilton's equation
\[\dot{\mathbf{M}}=\mathbf{M}\times\frac{\partial H}{\partial\mathbf{M}} \tag{23}\]
gives Euler's rigid body equation [34].
### \(4d\) Hamiltonian Dynamics
In four dimensions, we consider the following local coordinates \((u,x,y,z)=(u,\mathbf{x})\). A skew-symmetric matrix \(L\) can be identified with a couple of vectors \(\mathbf{U}=(U^{1},U^{2},U^{3})\) and \(\mathbf{V}=(V^{1},V^{2},V^{3})\) as
\[L=\begin{pmatrix}0&-U^{1}&-U^{2}&-U^{3}\\ U^{1}&0&-V^{3}&V^{2}\\ U^{2}&V^{3}&0&-V^{1}\\ U^{3}&-V^{2}&V^{1}&0\end{pmatrix}. \tag{24}\]
After this identification, Jacobi identity (9) turns out to be a system of PDEs consisting of four equations [35]
\[\partial_{u}(\mathbf{U}\cdot\mathbf{V}) =\mathbf{V}\cdot\left(\partial_{u}\mathbf{U}-\nabla\times \mathbf{V}\right), \tag{25a}\] \[\nabla(\mathbf{U}\cdot\mathbf{V}) =\mathbf{V}(\nabla\cdot\mathbf{U})-\mathbf{U}\times\left(\partial _{u}\mathbf{U}-\nabla\times\mathbf{V}\right). \tag{25b}\]
Note that \(L\) is degenerate (its determinant being zero) if and only if \(\mathbf{U}\cdot\mathbf{V}=0\). So, for degenerate Poisson matrices, the Jacobi identity is satisfied if
\[\nabla\cdot\mathbf{U}=0,\qquad\partial_{u}\mathbf{U}-\nabla\times\mathbf{V}= \mathbf{0}. \tag{26}\]
**Superintegrability.** Assume that a given dynamics admits two time-independent first integrals, say \(H_{1}\) and \(H_{2}\). Then, when vectors \(\mathbf{U}\) and \(\mathbf{V}\) have the form
\[\mathbf{U}=\nabla H_{1}\times\nabla H_{2},\qquad\mathbf{V}=\partial_{u}H_{1} \nabla H_{2}-\partial_{u}H_{2}\nabla H_{1}, \tag{27}\]
they constitute a Poisson bivector, and in particular Jacobi identity (26) is satisfied. Functions \(H_{1}\) and \(H_{2}\) are Casimir functions. For a Hamiltonian function \(H_{3}\), the Hamiltonian dynamics is
\[\dot{u} =-\left(\nabla H_{1}\times\nabla H_{2}\right)\cdot\nabla H_{3}, \tag{28a}\] \[\mathbf{\dot{x}} =(\nabla H_{1}\times\nabla H_{2})\partial_{u}H_{3}+(\nabla H_{2} \times\nabla H_{3})\partial_{u}H_{1}+(\nabla H_{3}\times\nabla H_{1})\partial _{u}H_{2}. \tag{28b}\]
By permuting the roles of the functions \(H_{1}\), \(H_{2}\), and \(H_{3}\) (two of them are Casimirs and one of them is Hamiltonian), one arrives at two additional Poisson realizations of the dynamics. This is the case of a superintegrable (tri-Hamiltonian) formulation, [36].
Two Poisson bivectors of a superintegrable 4D Hamiltonian system must satisfy the compatibility condition
\[\mathbf{U}_{1}\cdot\mathbf{V}_{2}+\mathbf{V}_{1}\cdot\mathbf{U}_{2}=0 \tag{29}\]
where \(\mathbf{U}_{1,2}\) and \(\mathbf{V}_{1,2}\) are the vectors identified from formula (24). We shall use this compatibility condition to measure the error of DPNNs when learning Poisson bivectors in 4D.
**Shivamoggi Equations.** An example of a 4D Poisson (and non-symplectic) system is the Shivamoggi equations, which arise in the context of magnetohydrodynamics,
\[\dot{u}=-uy,\qquad\dot{x}=zy,\qquad\dot{y}=zx-u^{2},\qquad\dot{z}=xy, \tag{30}\]
see [37, 38]. The first integrals of this system of equations are
\[H_{1}=x^{2}-z^{2},\qquad H_{2}=z^{2}+u^{2}-y^{2},\qquad H_{3}=u(z+x). \tag{31}\]
Vectors \(\mathbf{U}_{i}\) and \(\mathbf{V}_{i}\) of Poisson matrices \(N^{(i)}\) (\(i=1,2,3\)) for the Hamiltonian functions \(H_{1}\), \(H_{2}\), and \(H_{3}\) are
\[\mathbf{U}_{1} =2u\left(-y,z,y\right),\qquad\mathbf{V}_{1}=2\left(u^{2},y(x+z), u^{2}-z(x+z)\right),\] \[\mathbf{U}_{2} =2\left(x+z\right)\left(0,u,0\right),\qquad\mathbf{V}_{2}=2\left( x+z\right)\left(x,0,-z\right),\] \[\mathbf{U}_{3} =-4\left(yz,zx,xy\right),\qquad\mathbf{V}_{3}=4u\left(-x,0,z \right), \tag{32}\]
respectively. Note that all these three Poisson matrices are degenerate, since \(\mathbf{U}_{i}\cdot\mathbf{V}_{i}=0\) holds for all \(i=1,2,3\). The equations of motion can be written as
\[X=\vartheta N^{(1)}\bar{\nabla}H_{1}=\vartheta N^{(2)}\bar{\nabla}H_{2}= \vartheta N^{(3)}\bar{\nabla}H_{3},\quad\vartheta=-\frac{1}{4(x+z)}\]
up to multiplication with a conformal factor \(\vartheta\) in all three cases. Note that the 4D gradient is denoted by \(\bar{\nabla}H=(\partial_{u}H,\partial_{x}H,\partial_{y}H,\partial_{z}H)\).
### Semi-direct Extension to a \(6d\) system
Six-dimensional Hamiltonian systems can again be symplectic or non-symplectic (degenerate). The former case is represented by a particle in three dimensions while the latter is for instance the heavy top dynamics. Since the evolution of a particle in 3D is canonical and thus analogical to the 2D dynamics, we shall recall only the heavy top dynamics.
A supported rotating rigid body in a uniform gravitational field is called heavy top [39]. The mechanical state of the body is described by the position of the center of mass \(\mathbf{r}\) and angular momentum \(\mathbf{M}\). In this case, the Poisson bracket is
\[\{F,G\}^{(\mathrm{HT})}(\mathbf{r},\mathbf{M})=-\mathbf{M}\cdot\left(\frac{ \partial F}{\partial\mathbf{M}}\times\frac{\partial G}{\partial\mathbf{M}} \right)-\mathbf{r}\cdot\left(\frac{\partial F}{\partial\mathbf{M}}\times \frac{\partial G}{\partial\mathbf{r}}-\frac{\partial G}{\partial\mathbf{M}} \times\frac{\partial F}{\partial\mathbf{r}}\right), \tag{33}\]
Even though the model is even dimensional, it is not symplectic. Two non-constant Casimir functions are \(\mathbf{r}^{2}\) and \(\mathbf{M}\cdot\mathbf{r}\). In this case, we assume the Hamiltonian function as
\[H=\frac{1}{2}\left(\frac{M_{x}^{2}}{I_{x}}+\frac{M_{y}^{2}}{I_{y}}+\frac{M_{ z}^{2}}{I_{z}}\right)+Mgl\mathbf{r}\cdot\mathbf{\chi}, \tag{34}\]
where \(-g\mathbf{\chi}\) is the vector of gravitational acceleration. Hamilton's equation is then
\[\dot{\mathbf{M}}= \mathbf{M}\times\frac{\partial H}{\partial\mathbf{M}}+\mathbf{r} \times\frac{\partial H}{\partial\mathbf{r}} \tag{35a}\] \[\dot{\mathbf{r}}= -\mathbf{r}\times\frac{\partial H}{\partial\mathbf{M}}. \tag{35b}\]
In the following sections, we apply DPNNs to the here recalled models and we show that DPNNs are capable to extract the Poisson bivector and Hamiltonian from simulated trajectories of the models.
## 3 Learning Hamiltonian systems
When we have a collection of snapshots of a trajectory of a Hamiltonian system, how do we identify the underlying mechanics? In other words, how do we learn the Poisson bivector and energy from the snapshots? Machine learning provides a robust method for such a task. It has been previously shown that machine learning can reconstruct GENERIC models [40, 11, 12], but the Poisson bivector is typically known and symplectic. Poisson Neural Networks [20] provide a method for learning also non-symplectic mechanics, which however relies on the identification of the dimension of the symplectic subdynamics in the Darboux-Weinstein coordinates and on a transformation to those coordinates. Here, we show a robust method that does not need to know the dimension of the symplectic subsystem and that satisfies Jacobi identity also in the coordinates in which the data are prescribed. Therefore, we refer to the method as Direct Poisson Neural Networks (DPNNs).
DPNNs learn Hamiltonian mechanics directly by training a model for the \(\mathbf{L}(\mathbf{x})\) matrix and a model for the Hamiltonian \(H(\mathbf{x})\) simultaneously. The neural network that encodes \(\mathbf{L}(\mathbf{x})\) only learns the upper triangular part of \(\mathbf{L}\), and skew-symmetry is then automatically satisfied. The network has one hidden fully connected layer equipped with the softplus activation. The network that learns \(H(\mathbf{x})\) has the same structure. The actual learning was implemented within the standard framework PyTorch [41], using the Adam optimizer [42]. The loss function contains squares of deviations between the training data and the predicted trajectories, both obtained by the implicit midpoint rule (IMR): numerically solving the exact equations (for the training data) or Hamilton's equation with the trained models for \(\mathbf{L}(\mathbf{x})\) and \(H(\mathbf{x})\) (for the predicted data). Although such a model leads to a good match between the validation trajectories and predicted trajectories, it does not need to satisfy Jacobi identity. Therefore, we also use an alternative model where squares of the Jacobiator (9) are added to the loss function, which enforces Jacobi identity in a soft way, see Figure 1. Finally, in 3D we know the form of the Poisson bivector since we have the general solution of Jacobi identity (19). In such a case, the neural network encoding \(\mathbf{L}\) can be simplified to a network learning \(C(\mathbf{x})\), and Jacobi identity is automatically satisfied, see Figure 2.
In summary, we use three methods:
* **(WJ)** Training \(\mathbf{L}(\mathbf{x})\) and \(H(\mathbf{x})\)_without_ the Jacobi identity.
* **(SJ)** Training \(\mathbf{L}(\mathbf{x})\) and \(H(\mathbf{x})\) with _soft_ Jacobi identity, where the \(L_{2}\)-norm of the Jacobiator (9) is a part of the loss function (a sketch of this penalty follows the list).
* **(IJ)** Training \(C(\mathbf{x})\) and \(H(\mathbf{x})\) with _implicitly_ valid Jacobi identity, based on the general solution of Jacobi identity in 3D (19).
The training itself then proceeds in the following steps:
Figure 1: Scheme SJ (Soft Jacobi) of the methods that learn both the energy and Poisson bivector.
Figure 2: Scheme IJ (Implicit Jacobi) of the learning method implicitly enforcing Jacobi identity.
1. Simulation of the training and validation data. For a randomly generated set of initial conditions, we simulate a set of trajectories by IMR. These trajectories are then split into steps and the collection of steps is split into a training set and a validation set.
2. Parameters of the neural networks WJ, SJ, and IJ are trained by back-propagation on the training data, minimizing the loss function (see the training-loop sketch after this list). Then, the loss function is evaluated on the validation data to report the errors.
3. A new set of initial conditions is randomly chosen and new trajectories are generated using IMR, which gives the ground truth (GT).
4. Trajectories with the GT initial conditions are simulated using the trained models for \(\mathbf{L}\) and \(H\) and compared with GT.
In the following Sections, we illustrate this procedure for learning rigid body mechanics, a particle in 2D, Shivamoggi equations, a particle in 3D, and heavy top dynamics.
### Rigid body
Figure 2(a) shows a sample trajectory of rigid body dynamics (23) from the GT set, as well as trajectories with the same initial conditions generated by the DPNNs. The training was carried out on 200 trajectories while GT consisted of 400 trajectories. Errors of the three learning methods (WJ, SJ, and IJ) are shown in Table 6. All three methods were capable of learning the dynamics well. Figure 2(b) shows the norm of the Jacobiator evaluated on the validation set. The Jacobiator is zero for IJ and small for SJ, while it does not go to zero for WJ. Therefore, IJ and SJ satisfy Jacobi identity while WJ does not.
Figure 4 shows the compatibility error of learning the Poisson bivector (20). All three methods learn the Poisson bivector well, but IJ is the most precise, followed by SJ and WJ. Finally, Figure 5 shows errors in learning the trajectories \(\mathbf{M}(t)\). All three methods learn the trajectories well, but in this case, the SJ method works slightly better.
Figure 3: Rigid body: comparison of learned models (WJ, SJ, and IJ) with GT.
Table 6 shows the medians of the errors for all the models and applied learning methods. N/A indicates quantities that are not to be conserved. Error \(\Delta\mathbf{M}\) is calculated as the median of the square deviation of \(\mathbf{M}^{2}\) over all time steps. Errors \(\Delta\mathbf{r}\), \(\Delta\mathbf{M}\cdot\mathbf{r}\), \(\Delta\mathbf{M}^{2}\), and \(\Delta\mathbf{r}^{2}\) are calculated analogously. Error \(\Delta\mathbf{L}\) in the RB case is calculated as \(\log_{10}\) of the \(L^{2}\) norm of the compatibility condition (20), calculated for the learned \(\mathbf{J}\) divided by its trace and multiplied by a factor of 1000 (using the exact \(\mathbf{J}\) when generating the GT). In the P2D and P3D cases, where the Poisson bivector is symplectic, the error is calculated as \(\log_{10}\) of squares of the symplecticity condition (15). In the case of Shivamoggi equations, the \(\Delta\mathbf{L}\) error is the \(\log_{10}\) of the squared superintegrable compatibility condition (29). \(\Delta\det\mathbf{L}\) errors are medians of squares of the learned \(\det\mathbf{L}\), and in the Shivamoggi and the heavy top cases, the values are logarithmic, since the determinants are supposed to be zero in those cases.
Figure 4: Rigid body: Compatibility errors for RB evaluated as \(\log_{10}\) of squares of Equation (20). The distribution of errors is approximately log-normal. The Compatibility error of the IJ method is the lowest, followed by SJ and WJ.
Figure 5: Rigid body: Distribution of \(\log_{10}\) of squares of errors in \(\mathbf{M}\).
Figure 6: Summary of the learning errors for a rigid body (RB), particle in two dimensions (P2D), Shivamoggi equations (Sh), particle in three dimensions (P3D), and heavy top (HT).
### Particle in 2D
A particle moving in a 2D potential field represents a four-dimensional symplectic system. The simulated trajectories were learned by WJ and SJ methods. No implicit IJ method was used because no general solution of Jacobi identity in 4D is available that would work for both the degenerate and symplectic Poisson bivectors. Results of the learning are in Table 6, and both WJ and SJ learn the dynamics comparably well. Figure 7 shows a sample trajectory, and Figure 8 shows the distribution of learned \(\det(\mathbf{L})\). The median determinant (after a normalization such that the determinant is equal to \(1.0\) in GT), was close to this value for both SJ and WJ, indicating a symplectic system.
### Shivamoggi equations
Shivamoggi equations (30) represent a 4D Hamiltonian system that is not symplectic, and thus has a degenerate Poisson bivector. The equations were solved within the range of parameters \(u\in[-0.5,0.5]\), \(x\in[-0.5,0.5]\), \(y\in[-0.1,0.1]\), \(z\in[-0.5,0.5]\). It was necessary to constrain the range of \(\mathbf{r}=(u,x,y,z)\) because, for instance, when \(u=0\), the solutions explode [37]. Figure 9 shows the \(u\)-component over a sample trajectory. Figure 10 shows the distribution of \(\log_{10}(\det(\mathbf{L}))\), indicating that the system is indeed degenerate.
In comparison with determinants of \(\mathbf{L}\) learned in the symplectic case of a two-dimensional particle (P2D), see Table 6, the learned determinants are quite low in the Shivamoggi case (after the same normalization as in the P2D case). Therefore, DPNNs are able to distinguish between symplectic and non-symplectic Hamiltonian systems.
Figure 7: P2D: A sample trajectory. Both SJ and WJ learn the dynamics of a particle in two dimensions well.
Figure 8: P2D: Learned \(\det(\mathbf{L})\).
Figure 10: Shivamoggi: Learned \(\log_{10}(\det(\mathbf{L}))\).
Figure 9: Shivamoggi: A sample trajectory, component \(u\).
### Particle in 3D
Figure 11 shows momentum during a sample trajectory of a particle in 3D space taken from the GT set, as well as trajectories with the same initial conditions obtained by DPNNs (WJ and SJ flavors). The training and validation were done on two sets of trajectories (with 200 and 400 trajectories, respectively).
Table 6 contains numerical values of the learning errors. Both WJ and SJ learn the Poisson bivector as well as the trajectories (and thus also the energy) well. The median determinant is close to unity, which indicates a symplectic system.
### Heavy top
Figures 12a and 12b show a sample trajectory of a heavy top from the GT set and trajectories with the same initial conditions obtained by DPNNs. The training and validation were done on two sets of trajectories (with 300 and 400 trajectories, respectively). Numerical values of the learning errors can be found in Table 6. In particular, the \(\mathbf{L}\) matrix is close to being singular, indicating a non-symplectic system, but SJ learns slightly better than WJ. Similarly to the four-dimensional case, DPNNs distinguish between the symplectic (P3D) and non-symplectic (HT) cases.
Figure 11: P3D: Comparison of momentum \(\mathbf{M}(t)\) on an exact trajectory (GT) and trajectories obtained by integrating the learned models (without Jacobi and with soft Jacobi) in the case of a 3D harmonic oscillator.
## 4 Learning non-Hamiltonian systems
Let us now try to apply the WJ, SJ, and IJ methods, which were developed for learning purely Hamiltonian systems, to a non-Hamiltonian system, specifically a dissipative rigid body. A way to formulate the dissipative evolution of a rigid body is the energetic Ehrenfest regularization [43], where the Hamiltonian evolution of a rigid body is supplemented with dissipative terms that keep the magnitude of angular momentum constant while dissipating the energy. The evolution equations are
\[\dot{\mathbf{M}}=\mathbf{M}\times E_{\mathbf{M}}-\frac{\tau}{2}\mathbf{\Xi} \cdot E_{\mathbf{M}} \tag{1}\]
where \(\tau\) is a positive dissipation parameter and where \(\mathbf{\Xi}=\mathbf{L}^{T}d^{2}E\,\mathbf{L}\) is a symmetric positive definite matrix (assuming that the energy is positive definite) constructed from the Poisson bivector of the rigid body \(L^{ij}=-\epsilon^{ijk}M_{k}\) and the energy \(E(\mathbf{M})\). These equations satisfy \(\frac{d}{dt}\mathbf{M}^{2}=0\) while \(\dot{E}\leq 0\), and their solutions converge to pure rotations around a principal axis of the rigid body (the axis with the highest moment of inertia), which is the physically relevant solution.
Results of learning trajectories generated by solving Equations (1) are shown in Figure 13. All the methods (WJ, SJ, and IJ) are capable of learning the trajectories to some extent, but WJ is the most successful, followed by SJ and IJ. As SJ, and especially IJ, use deeper properties of Hamiltonian systems (soft and exact validity of Jacobi identity), they are less robust in the case of non-Hamiltonian systems.
Figure 13 can actually be seen as an indication of the non-Hamiltonianity of Equations (1). Systems where IJ learns best, followed by SJ and WJ, are much more likely to be Hamiltonian, in contrast with non-Hamiltonian systems where WJ learns best, followed by SJ and IJ. In other words, DPNNs can distinguish between Hamiltonian and non-Hamiltonian systems by the order in which the flavors of DPNNs perform.
Figure 12: Heavy top: Comparison on an exact trajectory (GT) and trajectories obtained by integrating the learned models (without Jacobi and with soft Jacobi) in case of the heavy top.
Figure 13: Distribution of errors in the angular momentum \(\mathbf{M}\) when learning dissipative rigid-body dynamics (1) by methods assuming purely Hamiltonian systems (WJ, SJ, and IJ). The WJ method is the most robust, capable of learning also the dissipative system relatively well (although worse than in the purely Hamiltonian case). The SJ method, which moreover softly imposes Jacobi identity, is less capable of learning the dissipative system. The IJ method, which has the best performance in purely Hamiltonian systems, see Table 6, has the worst learning capability in the dissipative case.
## 5 Conclusion
This paper proposes a machine learning method for learning Hamiltonian systems from data. Direct Poisson Neural Networks (DPNN) learn directly the Poisson bivector and Hamiltonian of the mechanical systems with no further assumptions about the structure of the systems. In particular, DPNN can distinguish between symplectic and non-symplectic systems by measuring the determinant of the learned Poisson bivector.
DPNNs come in three flavors: (i) without Jacobi identity (WJ), (ii) with softly imposed Jacobi identity (SJ), and (iii) with implicitly valid Jacobi identity (IJ). Although all the methods are capable of learning the dynamics, only SJ and IJ also satisfy the Jacobi identity. The typical behavior is that IJ learns Hamiltonian models most precisely, see Table 6, followed by SJ and WJ.
When the three flavors of DPNNs are applied to learn a non-Hamiltonian system, it is expected that the order of precision gets reversed, making WJ the most precise, followed by SJ and IJ. This reversed order of precision can be used as an indicator that distinguishes between Hamiltonian and non-Hamiltonian systems.
In the future, we would like to extend DPNNs to systems with dissipation prescribed by gradient dynamics.
## Acknowledgment
We are grateful to E. Cueto, F. Chinesta, and B. Moya for inspiring discussions about the purpose of Jacobi identity in the learning of physical systems. MP and MS were supported by the Czech Grant Agency, grant number 23-05736S.
2302.11592 | The ALMA view of MP Mus (PDS 66): a protoplanetary disk with no visible gaps down to 4 au scales | We present ALMA multiwavelength observations of the protoplanetary disk around the nearby (d$\sim$100 pc) young solar analog MP Mus (PDS 66). These observations at 0.89 mm, 1.3 mm, and 2.2 mm have angular resolutions of $\sim$ 1", 0.05", and 0.25", respectively, and probe the dust and gas in the system with unprecedented detail and sensitivity. The disk appears smooth down to the 4 au resolution of the 1.3 mm observations, in contrast with most disks observed at comparable spatial scales. The dust disk has a radius of 60$\pm$5 au, a dust mass of $0.14_{-0.06}^{+0.11} M_{\rm Jup}$, and a mm spectral index $<2$ in the inner 30 au, suggesting optically thick emission from grains with high albedo in this region. Several molecular gas lines are also detected extending up to 130$\pm$15 au, similar to small grains traced by scattered light observations. Comparing the fluxes of different CO isotopologues with previous models yields a gas mass of $0.1-1 M_{\rm Jup}$, implying a gas to dust ratio of 1-10. We also measure a dynamical stellar mass of $M_{\rm dyn}$=1.30$\pm$0.08 $M_\odot$ and derive an age of 7-10 Myr for the system. The survival of large grains in an evolved disk without gaps/rings is surprising, and it is possible that existing substructures remain undetected due to optically thick emission at 1.3 mm. Alternatively, small structures may still remain unresolved with the current observations. Based on simple scaling relations for gap-opening planets and gap widths, this lack of substructures places upper limits to the masses of planets in the disk as low as 2 $M_\oplus$-0.06 $M_{\rm Jup}$ at $r > 40$ au. The lack of mm emission at radii $r > 60$ au also suggests that the gap in scattered light between 30-80 au is likely not a gap in the disk density, but a shadow cast by a puffed-up inner disk. | Á. Ribas, E. Macías, P. Weber, S. Pérez, N. Cuello, R. Dong, A. Aguayo, C. Cáceres, J. Carpenter, W. R. F. Dent, I. de Gregorio-Monsalvo, G. Duchêne, C. C. Espaillat, P. Riviere-Marichalar, M. Villenave | 2023-02-22T19:00:10Z | http://arxiv.org/abs/2302.11592v1

# The ALMA view of MP Mus (PDS 66): a protoplanetary disk with no visible gaps down to 4 au scales
###### Abstract
Context:
Aims:We aim to characterize the protoplanetary disk around the nearby (d\(\sim\)100 pc), young solar analog MP Mus (PDS 66) and to reveal any signs of planets or ongoing planet formation in the system.
Methods:We present new ALMA observations of MP Mus at 0.89 mm, 1.3 mm, and 2.2 mm with angular resolutions of \(\sim\) 1'', 0.05'', and 0.25'', respectively. These data probe the dust and gas in the disk with unprecedented detail and sensitivity.
Results:The disk appears smooth down to the 4 au resolution of the 1.3 mm observations, in contrast with most disks observed at comparable spatial scales. The dust disk has a radius of 60\(\pm\)5 au, a dust mass of 0.14\({}^{+0.11}_{-0.06}\)\(M_{\rm Jup}\), and a mm spectral index \(<2\) in the inner 30 au, suggesting optically thick emission from grains with high albedo in this region. Several molecular gas lines are also detected extending up to 130\(\pm\)15 au, similar to small grains traced by scattered light observations. Comparing the fluxes of different CO isotopologues with previous models yields a gas mass of 0.1 - 1 \(M_{\rm Jup}\), implying a gas to dust ratio of 1-10. We also measure a dynamical stellar mass of \(M_{\rm dyn}\)=1.30\(\pm\)0.08 \(M_{\odot}\) and derive an age of 7-10 Myr.
Conclusions:The survival of large grains in an evolved disk without gaps/rings is surprising, and it is possible that existing substructures remain undetected due to optically thick emission at 1.3 mm. Alternatively, small structures may still remain unresolved with the current observations. Based on simple scaling relations for gap-opening planets and gap widths, this lack of substructures places upper limits to the masses of planets in the disk as low as 2 \(M_{\oplus}\)-0.06 \(M_{\rm Jup}\) at \(r>40\) au. The lack of mm emission at radii \(r>60\) au also suggests that the gap in scattered light between 30-80 au is likely not a gap in the disk density, but a shadow cast by a puffed-up inner disk.
## 1 Introduction
Our theories of planet formation are largely informed by observations of protoplanetary disks in young, nearby star-forming regions. Both surveys and studies of individual systems have built a general understanding of properties such as the disk typical lifetimes, accretion rates, masses and sizes (e.g., see Manara et al. 2022; Miotello et al. 2022; Pascucci et al. 2022, and other Protostars and Planets VII chapters for a recent review of the field), all of which are crucial to characterize the timescales and environment in which planets form. In recent years, SPHERE and ALMA observations have also revealed gaps, rings and other substructures to be very common in protoplanetary disks (e.g. Long et al. 2018; Andrews et al. 2018a; Avenhaus et al. 2018), providing new clues about planet-disk interactions and the underlying population of newborn planets.
Although a large portion of our knowledge of protoplanetary disk properties comes from statistical analysis of large samples, there are a few individual sources that have had a particularly high impact in our understanding of planet formation. Perhaps the most iconic example of such a system is TW Hya, which hosts what is arguably the most and best studied protoplanetary disk to date. A combination of different factors make TW Hya a unique cornerstone in the study of planet formation. At a distance of only 60 pc (Gaia Collaboration et al., 2022), it is significantly closer than the nearest (140-400 pc) star-forming regions such as Taurus, Ophiuchus, Lupus, Chamaeleon, Upper Scorpius or the Orion Molecular Cloud. Its proximity and almost face-on orientation allow for very detailed studies of the disk structure: high angular resolution observations of the gas and dust components have revealed, among other features, a concentric system of rings and gaps (including an inner gap as small as 1 au, Andrews et al., 2016), a clump of dust at \(\sim\)50 au which may be associated with circumplanetary material (Tsukagoshi et al., 2019), a spiral structure in its gas component (Teague et al., 2019), and shadows in scattered light moving azimuthally across the disk surface, probably cast by the inner disk (Debes et al., 2017). It is also one of the few protoplanetary disks for which a detection of hydrogen deuteride (HD) is available, allowing for a CO-independent estimate of its mass (Bergin et al., 2013) and dust-to-gas mass ratio (Macias et al., 2021), as well as the only disk in which line polarization has been measured (Teague et al., 2021). Its 0.6 \(M_{\odot}\) stellar mass also makes it a great target to better understand the early stages of the Solar System. TW Hya greatly exemplifies the potential of nearby protoplanetary disks for planet formation studies.
Within 100 pc, the only other gas-rich disk around a single star is MP Muscae (MP Mus, PDS 66), a K1V star (Mamajek et al., 2002) located at 97.9\(\pm\)0.1 pc (Gaia Collaboration et al., 2022). It was originally identified as a classical T Tauri star by Gregorio-Hetem et al. (1992) and first believed to belong to the \(\sim\)17 Myr old Lower Centaurus-Crux association (Mamajek et al., 2002), but later studies of its kinematic properties and parallax showed it to be a member of the younger, 3-5 Myr old \(\epsilon\) Chamaeleon (\(\epsilon\) Cha) association (Torres et al., 2008; Murphy et al., 2013; Dickson-Vandervelde et al., 2021). The source is still accreting weakly (Pascucci et al., 2007; Ingleby et al., 2013) at its estimated age of 7-10 Myr, and hosts a gas-rich disk extending up to 130 au (Kastner et al., 2010). The SED and mid-IR spectra of the system have also been studied (Schutz et al., 2005; Bouwman et al., 2008; Cortes et al., 2009), revealing signs of grain growth in the disk. More recently, Wolff et al. (2016) and Avenhaus et al. (2018) presented scattered light observations from the Gemini Planet Imager (GPI) and SPHERE/VLT, which revealed a drop in the disk brightness between 60-80 au. If this drop corresponds to a gap in the disk surface density, then it could be produced by the gravitational influence of one or multiple planets, representing an excellent source to study recently formed planets in a nearby system. With a stellar mass of 1.3 \(M_{\odot}\), MP Mus may be the nearest analog to the young Solar System.
Despite the obvious interest of MP Mus and many of its aspects being already well characterized, it still remains comparatively unexplored at millimeter wavelengths. Here we present new observations of the system with the Atacama Large Millimeter/submillimeter Array (ALMA), including 0.89 mm (Band 7), 1.3 mm (Band 6), and 2.2 mm (Band 4) continuum emission as well as several molecular gas lines. These observations, with an angular resolution down to 4 au at 1.3 mm, provide a wealth of new information and an unprecedented view of the system. We describe the ALMA observations as well as reprocessing of ancillary SPHERE data in Sect. 2. We then present the results and analysis in Sect. 3, and discuss their implications in Sect. 4. Finally, the main findings are summarized in Sect. 5.
## 2 Observations and data processing
### ALMA observations
MP Mus was observed during ALMA Cycle 5 by three different programs at 0.89 mm (Band 7), 1.3 mm (Band 6), and 2.2 mm (Band 4). Project 2017.1.01687.S (P.I.: Alvaro Ribas) included observations in Bands 4 and 7, while both projects 2017.1.01167.S (P.I.: Sebastian Perez, part of the Disks ARound TTauri Stars with ALMA (DARTTS-A) programme) and 2017.1.01419.S (P.I.: Claudio Caceres) used Band 6. Observations with two antenna configurations exist in all cases except for the Band 7 data, for which only observations with a compact configuration are available. Also, two different executions of the Band 4 compact configuration were made. A summary of the different datasets used and the corresponding correlator configurations is available in Tables 1 and 2. We used the standard pipeline calibration provided by ALMA staff using CASA (McMullin et al., 2007) version 5.1.1-5, including water vapor radiometer and system temperature correction, as well as bandpass, amplitude, and phase calibrations. Some additional flagging was applied to the Band 6 and Band 7 data.
Continuum emission from the disk was clearly detected with a high signal to noise ratio (S/N) at the three wavelengths. Therefore, after pipeline calibration, we performed phase only self-calibration on each individual dataset using the mtmfs deconvolver, Briggs weighting, a robust value of 0.5, and nterms=2. Channels with emission lines were excluded during this process. We then re-scaled all the data in each band to a common flux value (as reference, we chose the flux of the observation closest in time to observations of the corresponding amplitude calibrator by the ALMA observatory), set their phase centers to that of a Gaussian fit to the data (i.e., centered on the peak of the disk emission), and then set them to a common coordinate to correct for pointing deviations and proper motion. In the case of the Band 6 observations, we also performed one final round of phase only self-calibration to the combined data to ensure that they were properly aligned. The self-calibration process improved the peak S/N by factors of 2-10, depending on the dataset.
### SPHERE scattered light observations
MP Mus was observed in dual-beam polarimetric imaging mode (DPI, de Boer et al., 2020; van Holstein et al., 2020) with the InfraRed Dual-band Imager and Spectrograph (IRDIS) at SPHERE within the DARTTS-program. The data were taken in \(J\)-band and \(H\)-band and presented in Avenhaus et al. (2018, see this reference for details on the observational setup). We re-reduced the DARTTS scattered light data with the reduction pipeline IRDAP (IRDIS Data reduction for Accurate Polarimetry, version 1.3.3; van Holstein et al., 2020), which uses a data-independent polarization model specific for the optical instrument to correct for instrumental polarization and crosstalk. The double-sum/double-difference technique provides the total intensity \(I\) and the linear polarization components \(Q\) and \(U\) (rotated by 45\({}^{\circ}\) with respect to each other) as data products. The total polarized
intensity can be calculated from those components:
\[PI=\sqrt{Q^{2}+U^{2}}\,. \tag{1}\]
However, in the case of a single central light source and single-scattering events, it is convenient to transform the polarized components to polar coordinates (Schmid et al. 2006; de Boer et al. 2020):
\[\begin{cases}Q_{\phi}=-Q\cos{(2\phi)}-U\sin{(2\phi)}\\ U_{\phi}=+Q\sin{(2\phi)}-U\cos{(2\phi)}\end{cases} \tag{2}\]
Here, positive \(Q_{\phi}\) is the polarization component perpendicular to the direction of the star. Positive \(Q_{\phi}\) is expected to capture all stellar light that was polarized in single-scattering events with the additional benefit of a lower noise level than \(PI\) due to the lack of squared operations. On the other hand, significant signal in \(U_{\phi}\) or negative \(Q_{\phi}\) can indicate regions where light is scattered more than once (Canovas et al. 2015) or where other, off-centered light sources contribute significantly to the scattering (Weber et al. 2023).
The reprocessing of the SPHERE observations does not differ significantly from the findings of Avenhaus et al. (2018), but it provides additional information about the angle and degree of linear polarization of the stellar halo. These results are discussed in Sec. 3.3.
## 3 Results
The new ALMA observations can be used to derive several parameters of the MP Mus system, including dust, gas, and stellar masses, its age, the spectral index of the millimeter continuum emission, and the overall morphology of the disk. The SPHERE observations provide additional information about the polarization level and the distribution of small grains in the system. The corresponding analysis is described throughout this section, and we provide a summary of the derived properties in Table 3.
### Dust continuum
#### 3.1.1 Continuum images and fluxes
We synthesized continuum images at 0.89 mm, 1.3 mm, and 2.2 mm from the self-calibrated data described in Sect. 2 using the tclean algorithm with the mtmfs deconvolver and nterms=2. For each dataset at 1.3 and 2.2 mm, the extended and compact configurations were combined to produce the continuum images (in the particular case of the 1.3 mm data from project 2017.1.01419.S, the compact configuration was excluded since it was noisier and did not improve the image sensitivity). A robust value of 0.5 was used to synthesize the images to measure fluxes, resulting in beam sizes of 1.0''\(\times\)0.82'', 0.12''\(\times\)0.10'', and 0.39''\(\times\)0.31'' at 0.89 mm, 1.3 mm, and 2.2 mm, respectively. The disk around MP Mus is clearly detected at all wavelengths (peak S/N values of several hundreds/thousands) and is well resolved in both the 1.3 mm and 2.2 mm observations.
Table 1: Summary of ALMA observations

| ALMA Project Code | Band | Conf. & Baselines (m) | Date | N\({}_{\rm ant}\) | Time On-Source (min) | PWV (mm) | Flux Calibrator |
|---|---|---|---|---|---|---|---|
| 2017.1.01687.S (P.I.: Alvaro Ribas) | 4 | C43-3, 15–500 | 2018 Apr 09 | 45 | 8.1 | 3.0 | J1107-4449 |
| | 4 | C43-3, 15–500 | 2018 Apr 29 | 44 | 8.1 | 2.4 | J1427-4206 |
| | 4 | C43-6, 15–2500 | 2017 Dec 30 | 45 | 19.1 | 2.2 | J1617-5848 |
| | 7 | C43-1, 15–300 | 2018 Jul 10 | 45 | 8.8 | 0.2 | J1427-4206 |
| 2017.1.01167.S (P.I.: Sebastian Pérez) | 6 | C43-5, 15–2400 | 2018 Jan 15 | 46 | 5.6 | 1.6 | J1427-4206 |
| | 6 | C43-8, 90–3800 | 2017 Nov 16 | 44 | 11.4 | 1.1 | J1427-4206 |
| 2017.1.01419.S (P.I.: Claudio Caceres) | 6 | C43-2, 15–300 | 2018 Jul 06 | 44 | 8.7 | 0.7 | J1427-4206 |
| | 6 | C43-5, 15–2500 | 2017 Dec 26 | 43 | 17.1 | 0.3 | J1427-4206 |
Table 2: Correlator configuration of the different ALMA projects used in this study

| ALMA Project Code | Band | Central Freq. (GHz) | Bandwidth (MHz) | Channels | Spectral lines |
|---|---|---|---|---|---|
| 2017.1.01687.S | 4 | 130.884 | 1875 | 3840 | \(\cdots\) |
| | 4 | 132.801 | 1875 | 3840 | \(\cdots\) |
| | 4 | 144.874 | 1875 | 3840 | DCO\({}^{+}\) (2-1), DCN (2-1), HC\({}_{3}\)N (16-15) |
| | 4 | 142.994 | 1875 | 3840 | \(\cdots\) |
| | 7 | 330.575 | 469 | 3840 | \({}^{13}\)CO (3-2) |
| | 7 | 331.709 | 1875 | 1920 | \(\cdots\) |
| | 7 | 343.292 | 1875 | 3840 | CS (7-6), HC\({}^{15}\)N (4-3) |
| | 7 | 345.783 | 469 | 3840 | \({}^{12}\)CO (3-2) |
| 2017.1.01167.S | 6 | 230.525 | 1875 | 960 | \({}^{12}\)CO (2-1) |
| | 6 | 232.483 | 1875 | 128 | \(\cdots\) |
| | 6 | 244.983 | 1875 | 128 | \(\cdots\) |
| 2017.1.01419.S | 6 | 217.542 | 1875 | 128 | \(\cdots\) |
| | 6 | 219.486 | 1875 | 3840 | \({}^{13}\)CO (2-1), C\({}^{18}\)O (2-1) |
| | 6 | 230.611 | 234 | 1920 | \({}^{12}\)CO (2-1) |
| | 6 | 231.196 | 234 | 1920 | \(\cdots\) |
| | 6 | 232.791 | 1875 | 128 | \(\cdots\) |
We used aperture photometry to estimate continuum fluxes from these images, obtaining 370\(\pm\)40 mJy at 0.89 mm, 148\(\pm\)7 mJy at 1.3 mm, and 49\(\pm\)2 mJy at 2.2 mm (see Table 4). These uncertainties are largely dominated by absolute calibration and not the noise in the images. The emission at 1.3 mm and 2.2 mm extends up to 60 au (0.6''), as determined from the 3-5 \(\sigma\) contours. To ease the comparison with other studies, we also list the radii enclosing 68 % and 90 % of the total flux in Table 5. Based on the most compact antenna configuration in each band (Table 1), the maximum recoverable scales are \(\sim\)10.8'', 2.9'', and 8.3'' at 2.2 mm, 1.3 mm, and 0.89 mm, respectively\({}^{2}\), which are significantly larger than the observed disk size. Therefore, the observations are likely recovering all the emission from the disk. We also produced images with a lower robust value of -0.5 to try to reveal small substructures, reaching angular resolutions of 0.89''\(\times\)0.66'', 0.06''\(\times\)0.04'', and 0.25''\(\times\)0.19'' at 0.89 mm, 1.3 mm, and 2.2 mm (corresponding to \(\sim\)75, 4, and 20 au at 98 pc). Interestingly, the disk appears smooth even at such resolutions, with no clear rings, gaps, or asymmetries. The continuum image at 1.3 mm is shown in Fig. 1, and the 0.89 mm and 2.2 mm observations can be found in Appendix A.
Footnote 2: See ALMA Technical Handbook
#### 3.1.2 Disk dust mass
Assuming that the (sub)mm emission from the disk is optically thin and isothermal, the measured flux is linearly related to the dust mass (e.g. Beckwith et al., 1990):
\[M_{\rm dust}=\frac{F_{\nu}\,d^{2}}{\kappa_{\nu}\,B_{\nu}(T_{\rm dust})}, \tag{3}\]
where \(M_{\rm dust}\) is the disk dust mass, \(F_{\nu}\) is the flux at the observed frequency \(\nu\), \(d\) is the distance to the source, \(\kappa_{\nu}\) is the dust opacity at the frequency \(\nu\), and \(B_{\nu}(T_{\rm dust})\) is the blackbody emission at the corresponding frequency and dust temperature \(T_{\rm dust}\). Since we have observations at three different frequencies, we can compute three dust mass values. We adopted standard values for the opacity and dust temperature of \(\kappa_{230\,{\rm GHz}}\)=2.3 cm\({}^{2}\)/g and \(T_{\rm dust}=20\) K (e.g., Andrews & Williams, 2005), and a distance of \(d\)=98 pc (Gaia Collaboration et al., 2022). For the observations at 0.89 mm and 2.2 mm, we computed the corresponding \(\kappa_{\nu}\) value using a power-law dependence of the opacity with frequency, i.e. \(\kappa_{\nu}=\kappa_{230\,{\rm GHz}}\times(\nu/230\,{\rm GHz})^{\beta}\), where \(\beta\) is between 0.0-0.6 for most protoplanetary disks (e.g., Tazzari et al., 2021, also in agreement with the \(\beta\) range of 0.1-0.4 derived for MP Mus in Sec. 4.2). We bootstrapped the dust masses and their uncertainties by adopting uncertainties of 5 K for \(T_{\rm dust}\) and 20 % for \(\kappa_{230\,{\rm GHz}}\), and a uniform distribution of \(\beta\) values between 0 and 0.6. The derived disk dust masses are \(0.16^{+0.1}_{-0.05}\)\(M_{\rm Jup}\), \(0.13^{+0.07}_{-0.04}\)\(M_{\rm Jup}\), and \(0.13^{+0.07}_{-0.04}\)\(M_{\rm Jup}\) at 0.89 mm, 1.3 mm, and 2.2 mm (the reported values correspond to the median and the 16 % and 84 % percentiles). These values are all compatible with each other, and we adopt a final dust mass value of \(M_{\rm dust}=0.14^{+0.11}_{-0.06}\)\(M_{\rm Jup}\) as the average of the three measurements.
Figure 1: ALMA images of MP Mus at 1.3 mm. These are displayed using both linear (left) and logarithmic (right) scales to emphasize details at different brightness levels. The 0.06′′x0.04′′beam is shown at the bottom left corners as white ellipses.
Table 3: Summary of results derived for MP Mus.

| Parameter | Value |
|---|---|
| \(M_{\rm*,dyn}\) | \(1.30\pm 0.08\,M_{\odot}\) |
| \(L_{*}\) | \(1.2\pm 0.1\,L_{\odot}\) |
| Age\({}^{\dagger}\) | 7 - 10 Myr |
| \(V_{\rm LSR}\) | \(3.98\pm 0.04\) km/s |
| \(M_{\rm dust}\) | \(0.14^{+0.11}_{-0.06}\,M_{\rm Jup}\) |
| \(M_{\rm gas}\) | 0.1 - 1 \(M_{\rm Jup}\) |
| \(R_{\rm dust}\)\({}^{\dagger\dagger}\) | \(60\pm 5\) au |
| \(R_{\rm gas}\) (\({}^{12}\)CO)\({}^{\dagger\dagger}\) | \(130\pm 15\) au |
| \(i_{\rm dust}\) | \(32\pm 1^{\circ}\) |
| PA\({}_{\rm dust}\) | \(10\pm 1^{\circ}\) |
| DoLP | \(0.46\pm 0.08\) % |
| AoLP | \(98\pm 8^{\circ}\) |
#### 3.1.3 1.3 mm continuum radial profile
To further investigate the presence (or lack) of substructures in the disk, we focused on the 1.3 mm continuum data since they have the highest angular resolution (\(\sim\)4 au in the case of the robust\(=\)-0.5 image). We first de-projected this image adopting a disk inclination of 32\({}^{\circ}\) and a position angle (PA) of 10\({}^{\circ}\) based on a Gaussian fit to the data, in full agreement with previous estimates from scattered light observations (e.g., Schneider et al. 2014; Wolff et al. 2016; Avenhaus et al. 2018). The averaged radial profile was then calculated as the median intensity within concentric annuli centered on the source. To reveal even smaller details in the disk, we also used the frank software (Jennings et al. 2020) to reconstruct the radial profile directly from the visibilities. frank calculates super-resolution radial profiles of protoplanetary disks assuming azimuthal symmetry, a condition which is met in the case of MP Mus. We tried different combinations of frank's \(\alpha\) and \(w_{\rm smooth}\) hyperparameters and found no major differences in the resulting radial profile, so we adopted \(\alpha\)\(=\)1.3 and \(w_{\rm smooth}=10^{-3}\) for the analysis. A comparison of the observed visibilities and the frank fit is shown in Appendix B. The inclination and PA derived from frank are in complete agreement with the values adopted earlier. Both the profile extracted directly from the image and the one from frank reveal radially decreasing emission extending up to \(\sim\)60 au, with changes in the slope at \(\sim\)10 and 30 au, a plateau between 30-40 au, and a bump in the outermost region which may suggest the presence of a low-contrast gap and a small, barely resolved ring. However, no clear signatures of substructures are found down to a 4 au scale. We also produced a residual map by extending the radial profile from frank azimuthally, projecting the resulting image with the corresponding disk inclination and orientation, and convolving it with the observed beam before subtracting it from the observations. The residuals (Fig. 2) are all below the 5-\(\sigma\) level and do not reveal any azimuthal substructures. The resulting radial profiles are shown in Fig. 3.
#### 3.1.4 (Sub)mm spectral indices
The spectral index (\(\alpha_{\rm mm}\)) of optically thin (sub)mm emission from protoplanetary disks depends on the size of dust grains in them, and has been used in the past to investigate grain growth in disks in several star-forming regions (e.g. Ricci et al. 2010b,a; Ribas et al. 2017; Tazzari et al. 2021). Using the derived continuum fluxes, we computed three spectral indices in different wavelength ranges: \(\alpha_{0.89-1.3\,{\rm mm}}=2.4\pm 0.3\), \(\alpha_{1.3-2.2\,{\rm mm}}=2.12\pm 0.11\), and \(\alpha_{0.89-2.2\,{\rm mm}}=2.25\pm 0.13\) (including the absolute calibration uncertainties of ALMA).
Table 4: ALMA Continuum Fluxes

| Wavelength (mm) | Frequency (GHz) | Flux (mJy) | RMS (\(\mu\)Jy/beam) | Peak S/N | Beam |
|---|---|---|---|---|---|
| 0.89 | 338.187 | 370\(\pm\)40 | 130 | 2230 | 1.00\({}^{\prime\prime}\)\(\times\)0.82\({}^{\prime\prime}\), PA=46\({}^{\circ}\) |
| 1.29 | 232.269 | 148\(\pm\)7 | 19 | 720 | 0.12\({}^{\prime\prime}\)\(\times\)0.10\({}^{\prime\prime}\), PA=\(-\)5\({}^{\circ}\) |
| 2.17 | 137.883 | 49\(\pm\)3 | 14 | 1660 | 0.39\({}^{\prime\prime}\)\(\times\)0.31\({}^{\prime\prime}\), PA=\(-\)42\({}^{\circ}\) |
Table 5: Disk radius encompassing 68 % and 90 % of the total flux.

| Component | \(R_{68\,\%}\) (au) | \(R_{90\,\%}\) (au) |
|---|---|---|
| Continuum (1.3 mm) | 30\(\pm\)5 | 45\(\pm\)5 |
| Continuum (2.2 mm) | 30\(\pm\)20 | 45\(\pm\)20 |
| Gas (\({}^{12}\)CO (2-1)) | 80\(\pm\)15 | 110\(\pm\)15 |
Figure 2: ALMA 1.3 mm observations and model of MP Mus. The observed continuum emission (left) and the resulting image reconstructed from the frank radial profile (middle) are shown. The residuals (right) are displayed in units of the image RMS and are below the 5-\(\sigma\) level.
These values were computed from the integrated fluxes and they reflect the average spectral index in the disk only, but the spectral index is expected to vary spatially as a result of factors such as radial changes in the optical depth and grain sizes. To investigate such spatial variations, we combined the resolved 1.3 mm and 2.2 mm observations to produce a resolved map of the spectral index. During this process, the most extended configuration of the available Band 6 data was excluded to avoid problems with very different coverage of the uv-plane at different bands. The 1.3 and 2.2 mm data were jointly imaged with the tclean algorithm, the mtmfs deconvolver, and nterms=2, and we then used the resulting alpha image as the spectral index map. After various tests we adopted a robust parameter of 0.0 in this case, which yielded a beam of 0.2\({}^{\prime\prime}\)\(\times\)0.17\({}^{\prime\prime}\). The derived spectral index map and its radial profile are shown in Fig. 4. As expected, the value of \(\alpha_{1.3-2.2\,\mathrm{mm}}\) is not constant throughout the disk and increases as a function of radius, ranging from \(\sim\)1.7 in the inner regions to \(\sim\)3 in the outer parts of the disk. These results are discussed in further detail in Sec. 4.2.
### Gas lines
#### 3.2.1 Line cubes, fluxes, and morphology
The observations in this study cover multiple molecular gas emission lines, which were imaged using tclean after applying the self-calibration solutions and re-centering derived in Sec. 2 and subtracting the corresponding continuum. We detected \({}^{12}\)CO (3-2), \({}^{13}\)CO (3-2), and CS (7-6) in Band 7, \({}^{12}\)CO (2-1), \({}^{13}\)CO (2-1), and C\({}^{18}\)O (2-1) in Band 6, and DCO\({}^{+}\) (2-1), DCN (2-1), and HC\({}_{3}\)N (16-15) in Band 4. HC\({}^{15}\)N (4-3) was also tentatively detected in the Band 7 data. We used CASA to produce the zero-th and first moments for each line, and applied Keplerian masking during the process to minimize noise from signal-free areas (e.g., Salinas et al., 2017). The moments and spectra for the \({}^{12}\)CO (2-1), \({}^{13}\)CO (2-1), and C\({}^{18}\)O (2-1) are shown in Fig. 5, and similar figures for the remaining lines are provided in Appendix C. The line fluxes measured from the zero-th moments are listed in Table 6. Note that each project used different antenna and correlator configurations, so the spectral and spatial resolutions are different for each line (also listed in Table 6). In the case of the Band 6 observations, we only used the observations from project 2017.1.01419.S to generate the cubes (data from 2017.1.01167.S have a significantly higher angular resolution which results in a much lower sensitivity per channel, as well as a coarser spectral resolution). We also used different weighting depending on the line as a compromise between angular resolution and sensitivity: all lines in Band 4 as well as HC\({}^{15}\)N (4-3) in Band 7 were imaged with natural weighting to maximize the S/N, and the remaining lines were imaged using a robust value of 0.5.
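The Keplerian masking applied when collapsing the cubes can be sketched as follows. This is a simplified stand-in for the CASA-based procedure actually used: `r_au` and `cos_phi` are per-pixel disk-frame coordinates (e.g., from a deprojection like the one above), the 29.78 km/s constant is the circular speed at 1 au around 1 \(M_{\odot}\), and the tolerance value is a placeholder.

```python
import numpy as np

def keplerian_mask(vel_axis, r_au, cos_phi, mstar=1.3, vsys=3.98,
                   inc_deg=32.0, tol=0.5):
    """Boolean (nchan, ny, nx) mask keeping channels within `tol` km/s
    of the local projected Keplerian velocity (thin-disk approximation)."""
    vkep = 29.78 * np.sqrt(mstar / np.maximum(r_au, 1.0))     # km/s
    vlos = vsys + vkep * cos_phi * np.sin(np.deg2rad(inc_deg))
    return np.abs(vel_axis[:, None, None] - vlos[None]) < tol

def masked_moments(cube, vel_axis, mask):
    """Zero-th and intensity-weighted first moments of a masked cube."""
    dv = abs(vel_axis[1] - vel_axis[0])
    data = np.where(mask, cube, 0.0)
    m0 = data.sum(axis=0) * dv
    weight = np.where(m0 != 0.0, m0, np.nan)
    m1 = (data * vel_axis[:, None, None]).sum(axis=0) * dv / weight
    return m0, m1
```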
The observations show gas emission at velocities from \(\sim\)-5 to 13 km/s, and we measured a systemic velocity of 3.98 \(\pm\) 0.04 km/s (local standard of rest) using a Keplerian disk model (see Sec. 3.2.2), similar to previous estimates (e.g., Kastner et al., 2010). As shown in Figs. 5, C.1, and C.2, the emission of most lines has the shape of a full disk, i.e. no clear gaps or rings are found. Exceptions are DCO\({}^{+}\) (2-1), which shows a clear ring-like morphology with a gap radius of \(\sim\)20 au (a morphology commonly observed for this line, e.g., Huang et al., 2017), and
Figure 4: Derived millimeter spectral index for MP Mus between 1.3 and 2.2 mm. Top: Spectral index map. Only pixels with S/N\(>\)5 are considered. The solid and dashed contours correspond to \(\alpha_{1.3-2.2\,\mathrm{mm}}\) of 2 and 2.5, respectively. The image beam is shown on the bottom left corner. Bottom: Corresponding de-projected \(\alpha_{1.3-2.2\,\mathrm{mm}}\) radial profile and uncertainties (solid red line and area). The beam FWHM is shown as a horizontal red line. The dashed line marks the \(\alpha=2\) transition.
Figure 3: Radial profiles of the 1.3 mm continuum and \({}^{12}\)CO (2-1) emission of MP Mus. The continuum profile derived from the synthesized ALMA image is shown as a red line, and the orange line corresponds to the resulting profile from frank (Jennings et al., 2020). The profile for the \({}^{12}\)CO (2-1) line is also shown as a blue line. The 1-\(\sigma\) uncertainties for the continuum and \({}^{12}\)CO (2-1) are also plotted as red and blue shaded areas. In all cases, we adopted a disk inclination and PA of 32\({}^{\circ}\) and 10\({}^{\circ}\). For comparison, the FWHM of the continuum and \({}^{12}\)CO (2-1) beams are also shown as solid horizontal red and blue lines, respectively.
DCN (2-1) and HC\({}^{15}\)N (4-3), where the low S/N prevents any reliable estimate of their morphology. The gaseous disk extends up to 130 au (1.3\({}^{\prime\prime}\)) in \({}^{12}\)CO (2-1) based on the 3-5 \(\sigma\) contours, in agreement with previous studies using unresolved APEX observations of the \({}^{12}\)CO (3-2) line (when corrected for the updated Gaia distance; Kastner et al. 2010). We also provide the radii encircling 68 % and 90 % of the total \({}^{12}\)CO (2-1) emission in Table 5. The gas radius is \(\sim\)twice that of the dust disk (60 au, Sec. 3.1), yielding a ratio of the gas and dust radii similar to those found for disks in Lupus (Ansdell et al. 2018; Sanchis et al. 2021). A comparison of the de-projected radial profiles of the continuum and \({}^{12}\)CO (2-1) emission is shown in Fig. 3. We note that none of the ALMA projects used in this work aimed at studying the chemistry of MP Mus, and yet these observations detected various molecular emission lines of multiple species, evidencing the large potential of this system for future astrochemical studies of protoplanetary disks.
#### 3.2.2 Dynamical stellar mass and disk kinematics
Emission from gas lines experiences Doppler shifts due to the disk rotation, and can thus be used to estimate the stellar mass independently of theoretical isochrones and stellar evolution models. For this purpose, we used the eddy software (Teague 2019) to model the first moment map of the \({}^{12}\)CO (2-1) emission using a Keplerian rotation profile. This line was chosen as it offers the best compromise between S/N and the available spatial resolution. Only the extended configuration of 2017.1.01419.S was used for this purpose since the compact one is significantly noisier, but tests including this second observation yielded noisier but completely compatible results (as mentioned in Sec. 3.2.1, the observations from 2017.1.01167.S have a coarser spectral resolution and significantly less sensitivity per channel, so they were not included in this analysis). Given the high S/N of the observations, we imaged the line with a robust=0.0 weighting (0.17\({}^{\prime\prime}\)\(\times\)0.15\({}^{\prime\prime}\) beam) to improve the angular resolution. The first moment map and its corresponding uncertainties were then calculated using the bettermoments software (Teague & Foreman-Mackey 2018). In Sec. 3.1.3 we derived a disk inclination and PA of 32\({}^{\circ}\) and 10\({}^{\circ}\) based on the dust continuum observations, which have a significantly higher angular resolution, so we kept those values fixed during the fitting\({}^{3}\). We performed various tests during this process, including the use of first moment maps computed with the CASA immoments task instead, down-sampling the map to the beam size to ensure that only spatially-independent pixels are fit, masking the inner 0.3\({}^{\prime\prime}\) of the disk, and modifying the disk inclination by \(\pm\)1\({}^{\circ}\) to account for this uncertainty in the final results. In all cases, we obtain very similar stellar mass values for MP Mus, and adopt a final value of \(M_{*}\)=1.30 \(\pm\) 0.08 \(M_{\odot}\). This value is slightly higher than the 1.2 \(M_{\odot}\) value in the literature based on pre-MS evolutionary tracks (e.g. Mamajek et al. 2002), but in complete agreement when updating the luminosity with the new Gaia distance. This process also yields the aforementioned systemic velocity of 3.98\(\pm\)0.04 km/s (local standard of rest) for MP Mus. The \({}^{12}\)CO (2-1) map used, model, and residuals are shown in Fig. 6.
Footnote 3: eddy defines the PA with respect to the red-shifted semi-major axis, so the adopted value was 190\({}^{\circ}\).
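Conceptually, the eddy fit amounts to comparing the observed first moment with a projected Keplerian velocity field. A minimal brute-force version of that comparison is sketched below (thin disk, fixed geometry; eddy itself samples the posterior with MCMC and handles the geometry more carefully, and the grid ranges here are arbitrary):

```python
import numpy as np

def model_moment1(mstar, vsys, r_au, cos_phi, inc_deg=32.0):
    """Line-of-sight velocity field (km/s) of a geometrically thin disk
    in Keplerian rotation around a star of mass `mstar` (Msun)."""
    vkep = 29.78 * np.sqrt(mstar / np.maximum(r_au, 1e-3))
    return vsys + vkep * cos_phi * np.sin(np.deg2rad(inc_deg))

def fit_mstar(m1_obs, dm1, r_au, cos_phi):
    """Chi-square grid search over (M_*, v_sys)."""
    best = None
    for mstar in np.arange(1.0, 1.6, 0.01):
        for vsys in np.arange(3.5, 4.5, 0.01):
            resid = (m1_obs - model_moment1(mstar, vsys, r_au, cos_phi)) / dm1
            chi2 = np.nansum(resid ** 2)
            if best is None or chi2 < best[0]:
                best = (chi2, mstar, vsys)
    return best  # (chi2, mstar, vsys)
```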
The \({}^{12}\)CO (2-1) data and moment maps show no obvious flaring, and the disk back surface is also not visible. This suggests a very flat morphology of the gaseous disk, in agreement with the results from the scattered light observations (Avenhaus et al. 2018). To further explore this, we also tested the elevated surface emission prescription from a tapered disk in eddy, which parametrizes the emission height as a function of radius following \(z(r)=z_{0}(r/1^{\prime\prime})^{\psi}\exp\left[-(r/r_{\rm taper})^{q_{\rm taper}}\right]\) (where \(z_{0}\) is the disk aspect ratio at 1\({}^{\prime\prime}\), \(\psi\) is the disk flaring, and \(r_{\rm taper}\) and \(q_{\rm taper}\) account for the disk tapering). This resulted in a stellar mass similar to that of the geometrically thin disk case, largely unconstrained values for \(\psi\), \(r_{\rm taper}\), and \(q_{\rm taper}\), and a \(z_{0}\) value of 0.1\(\pm\)0.1, showing that the disk is indeed significantly flat.
There are two interesting features worth noticing from the fitting of Keplerian rotation profiles. Firstly, the \({}^{12}\)CO (2-1) first moment shows a tentative twist in the inner disk (i.e., change in the PA in the inner regions), and the residuals are structured and quite high in this area (up to \(\sim\)1 km/s). These residuals could indicate the presence of a warped, twisted, or misaligned inner disk (e.g. Marino et al. 2015; Casassus et al. 2015; Min et al. 2017; Benisty et al. 2018; Mayama et al. 2018; Bohn et al. 2022). Such structures can cast azimuthal shadows on the disk that can be detected in scattered light data, and Wolff et al. (2016) tentatively detected such a shadow in GPI observations of MP Mus. Later observations with SPHERE did not recover this feature, and Avenhaus et al. (2018) suggested that an imperfect correction of instrumental or interstellar polarization may create a similar effect. Alternatively, variability in the inner disk could also
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline
Line & Band & Rest Frequency & Line Flux & Beam size & Spectral resolution \\
 & & (GHz) & (Jy km/s) & (\({}^{\prime\prime}\)) & (km/s) \\ \hline
\({}^{12}\)CO (3-2) & 7 & 345.796 & 10 \(\pm\) 1 & 1.06\(\times\)0.86 & 0.11 \\
CS (7-6) & 7 & 342.883 & 0.43 \(\pm\) 0.05 & 1.05\(\times\)0.87 & 0.43 \\
HC\({}^{15}\)N (4-3) & 7 & 344.200 & 0.04 \(\pm\) 0.02 & 1.14\(\times\)0.97 & 0.85 \\
\({}^{13}\)CO (3-2) & 7 & 330.588 & 1.8 \(\pm\) 0.2 & 1.10\(\times\)0.91 & 0.11 \\
\({}^{12}\)CO (2-1) & 6 & 230.538 & 4.6 \(\pm\) 0.2 & 0.23\(\times\)0.20 & 0.16 \\
\({}^{13}\)CO (2-1) & 6 & 220.399 & 0.79 \(\pm\) 0.05 & 0.24\(\times\)0.21 & 0.66 \\
C\({}^{18}\)O (2-1) & 6 & 219.560 & 0.21 \(\pm\) 0.02 & 0.24\(\times\)0.21 & 0.67 \\
HC\({}_{3}\)N (16-15) & 4 & 145.561 & 0.34 \(\pm\) 0.02 & 0.50\(\times\)0.40 & 1.01 \\
DCN (2-1) & 4 & 144.828 & 0.08 \(\pm\) 0.01 & 0.50\(\times\)0.40 & 1.01 \\
DCO\({}^{+}\) (2-1) & 4 & 144.078 & 0.11 \(\pm\) 0.01 & 0.50\(\times\)0.40 & 1.02 \\ \hline
\end{tabular}
\end{table}
Table 6: Gas Line Detections and Properties
change the appearance of a shadow considering the 2-year separation between both observations. The angle of linear polarization of the unresolved stellar signal derived from the SPHERE data (see Sec. 3.3) also suggests that the inner and outer disks are aligned. Secondly, we also find some localized residuals in the first moment at radii 50-100 au between PA 110-170\({}^{\circ}\) (Fig. 6). A detailed analysis of this structure is beyond the scope of this work, but its localized nature is suggestive of the velocity perturbations attributed to unseen planets in other systems (e.g., Pinte et al., 2018; Perez et al., 2018; Teague et al., 2018). The emission in some other channels also displays tentative deviations from the expected Keplerian profile that are typically interpreted as such kinks (e.g., see the \({}^{12}\)CO (2-1) NE emission in Fig. 7), but the sensitivity and resolution of our observations do not allow us to draw any conclusion. Further observations of \({}^{12}\)CO (2-1) and optically thinner tracers at higher spatial and spectral resolution are needed to better characterize the inner regions and possible deviations from Keplerian rotation in MP Mus.
#### 3.2.3 Disk gas mass and gas-to-dust ratio
The bulk of the mass in protoplanetary disks is in gaseous form. However, in contrast with the dust mass, which can be derived (or at least approximated) from mm fluxes, such estimates are much more complex for the gas. A number of methods can be used for this purpose, but reliable gas mass measurements usually require detailed modeling using chemical networks, radiative transfer, a good knowledge of the disk structure, and resolved observations of multiple emission lines (together with a large number of assumptions regarding chemical abundances). Such a study is
Figure 5: ALMA Band 6 observations of CO isotopologues from the protoplanetary disk around MP Mus. The top, middle, and bottom rows correspond to \({}^{12}\)CO (2-1), \({}^{13}\)CO (2-1), and C\({}^{18}\)O (2-1), respectively. The zero-th (left column) and first (middle column) moments are shown, together with the 5\(\sigma\) contour of the 1.3 mm continuum as a black line. The corresponding spectra are displayed in the right column.
outside the scope of this work, but we can obtain some order-of-magnitude estimates by comparing the observed \({}^{13}\)CO (2-1) and C\({}^{18}\)O (2-1) line luminosities (9.5\(\times\)10\({}^{4}\) Jy km/s pc\({}^{2}\) and 2.5\(\times\)10\({}^{4}\) Jy km/s pc\({}^{2}\), respectively) with model grids.
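The quoted line luminosities can be reproduced from the fluxes in Table 6 and the 98 pc Gaia distance. The \(4\pi d^{2}\) convention below is our assumption, chosen because it reproduces the quoted numbers:

```python
import math

def line_luminosity(flux_jykms, d_pc=98.0):
    """Line luminosity in Jy km/s pc^2, assuming L = 4*pi*d^2*F."""
    return 4.0 * math.pi * d_pc ** 2 * flux_jykms

print(f"{line_luminosity(0.79):.2g}")  # 13CO (2-1): ~9.5e4 Jy km/s pc^2
print(f"{line_luminosity(0.21):.2g}")  # C18O (2-1): ~2.5e4 Jy km/s pc^2
```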
Williams & Best (2014) produced a suite of disk models with various properties and derived the resulting line fluxes for different CO lines. Their modeling did not include the selective photodissociation for the \({}^{13}\)CO and C\({}^{18}\)O isotopologues and, instead, they included this effect by calculating half of their models with the usual CO abundances and the other half with a [C\({}^{18}\)O]/[\({}^{13}\)CO] ratio three times lower. Comparing the observed luminosities of \({}^{13}\)CO (2-1) and C\({}^{18}\)O (2-1) in MP Mus with their grid of models (focusing on the M\({}_{*}\)=1 M\({}_{\odot}\) and inclination=10\({}^{\circ}\) models since they are the closest to this system) results in gas masses between 3\(\times\)10\({}^{-4}\) M\({}_{\odot}\) and 1\(\times\)10\({}^{-3}\) M\({}_{\odot}\) for the cases without and with C\({}^{18}\)O depletion, respectively. Similarly, Miotello et al. (2016) investigated the dependence between disk masses and the luminosities of various CO lines for different disk properties using a grid of models including chemical modeling. Comparing the derived line luminosities with their grid of models (see Fig. 7 in Miotello et al. 2016) places the gas mass of MP Mus between 10\({}^{-4}\) and 10\({}^{-3}\) M\({}_{\odot}\), depending on whether isotope-selective processes are considered or not.
Although these models are not specifically tailored to MP Mus, these general comparisons suggest that its total gas mass is M\({}_{\rm disk}\)=10\({}^{-4}\)-10\({}^{-3}\) M\({}_{\odot}\) (0.1-1 M\({}_{\rm Jup}\)). Taken at face value and combined with the dust mass estimate in Sec. 3.1.2, this implies a global gas-to-dust ratio of 1-10, lower than the standard value of 100 in the ISM. We note that these values are derived using the global dust and gas masses but, since the dust disk is considerably smaller than the gas one, this implies an even lower gas-to-dust ratio in the area where both gas and large dust grains are present. MP Mus then joins the increasing number of sources with a gas-to-dust ratio below 100, which recent ALMA surveys have shown to be common in protoplanetary disks: as an example, Miotello et al. (2017) found that 23 out of 34 disks surveyed in the Lupus star-forming region showed gas-to-dust ratios below 10. Traditionally, these values are interpreted as a signpost of disk evolution, where this ratio decreases over time as gas dissipates in the disk while dust grains remain. Such a scenario is reasonable for an evolved and flat disk such as MP Mus. However, another possible explanation is that the CO abundance in disks is lower than expected and results in fainter line emission, as suggested by mass measurements based on HD for TW Hya, GM Aur and DM Tau (Bergin et al. 2013; McClure et al. 2016) and in Lupus using N\({}_{2}\)H\({}^{+}\) (Anderson et al. 2022). Unfortunately, testing this hypothesis requires reliable gas mass estimates that are independent of the CO abundance, which are extremely challenging and not yet available for MP Mus. The actual reason for the low CO-based gas masses in disks is still an open question for planet formation theories.
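The quoted gas-to-dust range follows directly from the mass estimates above:

```python
MJUP_PER_MSUN = 1047.6
m_dust_mjup = 0.14  # dust mass from Sec. 3.1.2, in Jupiter masses
for m_gas_msun in (1e-4, 1e-3):
    ratio = m_gas_msun * MJUP_PER_MSUN / m_dust_mjup
    print(f"M_gas = {m_gas_msun:.0e} Msun -> gas-to-dust ~ {ratio:.1f}")
# ~0.7 and ~7.5, i.e. the global gas-to-dust ratio of 1-10 quoted above.
```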
### Scattered light results
Avenhaus et al. (2018) presented the DPI data taken in \(J\)- and \(H\)-band within the DARTTS-S program. These data were reduced by minimizing the non-azimuthal component of the radiation field, as an accurate polarization model for the IRDIS instrument was not available at the time. In contrast, we employed the instrument-specific polarization model used within the IRDAP pipeline, which allowed us to subtract instrumental polarization and polarization crosstalk from the image without any assumption about the data. Besides the scattered light of the disk, the image then still contains a background level of polarization and some unresolved polarization carried within the stellar halo. IRDAP measures the background polarization in regions where the signal intensity is at noise level and subtracts this background from the entire image. To estimate the stellar polarization, we measured \(Q\) and \(U\) from the stellar halo in image areas seemingly free of disk-scattered light (in an annular mask 1.72\({}^{\prime\prime}\)-2.08\({}^{\prime\prime}\) from the star). Then, we can calculate the degree of polarization (_DoLP_) within the stellar halo and its angle of linear polarization (_AoLP_):
\[DoLP = PI/I\,, \tag{4}\] \[AoLP = \frac{1}{2}\mathrm{arctan}\left(U/Q\right). \tag{5}\]
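In practice, both quantities can be evaluated directly from sums of the Stokes images inside the annulus; a minimal sketch (the boolean `annulus` mask is assumed to be built beforehand, and `arctan2` is used so the angle lands in the correct quadrant):

```python
import numpy as np

def unresolved_polarization(I, Q, U, annulus):
    """DoLP and AoLP (degrees east of north) of the stellar halo,
    measured inside a boolean annulus mask (Eqs. 4-5)."""
    i = I[annulus].sum()
    q = Q[annulus].sum()
    u = U[annulus].sum()
    dolp = np.hypot(q, u) / i
    aolp = 0.5 * np.degrees(np.arctan2(u, q))
    return dolp, aolp
```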
We found that the light carried in the stellar halo is polarized with a _DoLP_ of 0.46 \(\pm\) 0.08% and with an _AoLP_ of 98 \(\pm\) 8 deg East of North. Because direct stellar light is expected to be entirely unpolarized, a measurement of polarization from the isolated stellar halo indicates that there is an unresolved polarized signal within the star-centered point spread function (PSF) of the observation. This is typically attributed to parts of the observed light scattering off interstellar dust within the line-of-sight, or due to an unresolved disk component close to the star (e.g. Keppler et al. 2018; van Holstein et al. 2020). The measured
Figure 6: Keplerian fit to the \({}^{12}\)CO (2-1) line emission of MP Mus using eddy. The observed first moment (left), model (middle), and corresponding residuals (right) are shown. The 0.17\({}^{\prime\prime}\)\(\times\)0.15\({}^{\prime\prime}\) beam is shown on the bottom left corner. The area inside the black ellipse shown in the residuals was masked during the fit. The tentative localized residuals mentioned in the text are also marked with a red line. Only the extended configuration from 2017.1.01419.S was used for this analysis.
_AoLP_ of the unresolved polarization is perpendicular to the PA of the outer disk, consistent with a co-planar inner disk component. This is further consistent with the continuous disk around MP Mus, extending close to the central star as observed with ALMA for gas and dust. We subtracted the respective measured unresolved polarization from the \(Q\)- and \(U\)-components before calculating \(Q_{\phi}\).
Figure 7 shows the resulting \(Q_{\phi}\)-image of MP Mus in \(H\)-band in logarithmic stretch and the comparison with the continuum and \({}^{12}\)CO (2-1) from ALMA. We used the flux frames of the IRDIS observation to convert the observed polarized flux from counts to Jy/arcsec\({}^{2}\), assuming an \(H\)-band magnitude of \(7.64\pm 0.02\) (2MASS, Cutri et al., 2003). The final product is very similar to the image presented in Avenhaus et al. (2018). Figure 7 also compares the radial profile of \(Q_{\phi}\times r^{2}\) in \(H\)-band with those of the ALMA 1.3 mm continuum and the \({}^{12}\)CO (2-1) zero-th moment. The SPHERE image shows a bright inner region (the inner \(\sim\)20 au are attenuated due to the SPHERE coronagraph) and a decrease in intensity from 20 to 30 au. The emission then increases slowly up to 60 au and then rises faster in an outer ring centered at \(\sim\)80 au, dropping below the noise level at 130 au, similar to the extent of the \({}^{12}\)CO (2-1). The slight asymmetry of this outer ring suggests that the near side of the disk is to the east and the far side to the west. This is consistent with preferential forward-scattering (e.g. Stolker et al., 2016), as the near side of the disk appears brighter than the far side (by a factor of \(\sim\)1.5 on average).
Avenhaus et al. (2018) mentioned an azimuthally-localized intensity decrease in the western half of the disk (PA\(\sim\)270\({}^{\circ}\)). This feature is also present in our image. Its location on the far side of the disk, where the signal-to-noise is lowest, makes it a very tentative detection. We further find that the brightest part of the disk is to the northeast of the star.
## 4 Discussion
### Comparison with stellar evolutionary models
The stellar mass \(M_{\rm*dyn}=1.30\pm 0.08\,M_{\odot}\) derived from the disk rotation (Sec. 3.2.2) is independent of theoretical isochrones and evolutionary models, and offers an interesting comparison with such models.
MP Mus is classified as a K1V star (Mamajek et al., 2002), corresponding to \(T_{\rm eff}\) values between 4900 and 5100 K in the spectral type (SpT)-\(T_{\rm eff}\) relations of Kenyon & Hartmann (1995) and Pecaut & Mamajek (2013). This temperature is also in agreement with the 5110 K value derived in the Gaia DR3 (Gaia Collaboration et al., 2022). Its interstellar extinction \(A_{V}\) has been measured in the range of 0.2-0.7 mag (Mamajek et al., 2002; Cortes et al., 2009). More recently, Asensio-Torres et al. (2021) derived values of 4600 K and 0.8 mag for this system from fitting the optical/near-IR SED, although this \(T_{\rm eff}\) value appears a bit too low for a K1 star based on the aforementioned SpT-\(T_{\rm eff}\) tables. We perform a similar process and fit the Tycho and 2MASS photometry using the BT-Settl photospheres (Allard et al., 2011, 2012) with \(T_{\rm eff}\) values between 4500 and 5500 K in 100 K steps, \(A_{V}\) values from 0 to 1 mag in 0.1 mag steps, and stellar luminosity values between 0.8-1.6 \(L_{\odot}\) in steps of 0.1 \(L_{\odot}\). Adopting a distance of d=98 pc (Gaia Collaboration et al., 2022), we derive values of \(T_{\rm eff}\)=4900 K, \(A_{V}\)=0.6 mag, and \(L_{*}\)=1.3 \(L_{\odot}\). Such a \(T_{\rm eff}\) value is higher than the effective temperature in Asensio-Torres et al. (2021) and more comparable to those adopted in earlier studies. Alternatively, we also normalize 4900 and 5100 K photospheres to the observed 2MASS photometry using \(A_{V}\) values of 0.2 and 0.7 mag, which yields stellar luminosities between 1.1-1.3 \(L_{\odot}\) (compatible with previous values when corrected for the updated distance from Gaia). We thus adopt \(T_{\rm eff}=5000\pm 100\) K and \(L_{*}=1.2\pm 0.1\,L_{\odot}\) for MP Mus. Figure 8 shows its location in the corresponding HR diagram using the MESA Isochrones and Stellar Tracks (MIST; Dotter 2016; Choi et al. 2016). This results in a stellar mass and age of \(\sim\)1.3 \(M_{\odot}\) and 7-10 Myr for the system. MP Mus is a confirmed member of the old \(\epsilon\) Cha association based on its kinematic properties (e.g., Murphy et al., 2013; Dickson-Vandervelde et al., 2021). Murphy et al. (2013) assigned a 3-5 Myr age to that region (younger than our estimate), but the re-analysis of \(\epsilon\) Cha by Dickson-Vandervelde et al. (2021) including Gaia data yielded an age of 5\({}^{+3}_{-2}\) Myr that is compatible with our results. A lower \(T_{\rm eff}\) such as the one proposed in Asensio-Torres et al. (2021) would result in a younger (but still compatible) age for MP Mus, although the dynamical stellar mass of \(\sim\)1.3 \(M_{\odot}\) derived from the ALMA observations appears more difficult to reconcile with this lower \(T_{\rm eff}\) value.
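The grid fit described above is a simple brute-force chi-square minimization. In the sketch below, `model_flux` is a hypothetical helper (not implemented here) that would return BT-Settl synthetic photometry reddened by \(A_{V}\) and scaled to the given luminosity at 98 pc:

```python
import itertools
import numpy as np

def fit_photosphere(obs_flux, obs_err, bands, model_flux):
    """Brute-force chi-square fit over the (Teff, A_V, L*) grid used in
    the text; returns the best-fitting (Teff, A_V, L*) tuple."""
    grid = itertools.product(
        np.arange(4500.0, 5501.0, 100.0),  # Teff (K)
        np.arange(0.0, 1.01, 0.1),         # A_V (mag)
        np.arange(0.8, 1.61, 0.1),         # L* (Lsun)
    )
    def chi2(params):
        model = np.array([model_flux(*params, band) for band in bands])
        return float(np.sum(((obs_flux - model) / obs_err) ** 2))
    return min(grid, key=chi2)
```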
The mass derived from the MIST tracks also appears fully compatible with the dynamical mass estimate. Given the good agreement between various recent pre-main sequence evolutionary models (Simon & Toraskar, 2017), this result is not restricted to these tracks. Studies comparing dynamical stellar masses with predictions from evolutionary models without magnetic fields typically find the latter to underestimate masses by a significant amount, especially for low-mass stars (by 30 %-80 % for stellar masses \(<\)1.4 \(M_{\odot}\), e.g., Simon et al., 2019; Pegues et al., 2021). In contrast, evolutionary tracks including stellar magnetic fields (e.g., Feiden, 2016) yield more compatible results with dynamical mass estimates. This discrepancy is usually attributed to starspots (e.g., Pegues et al., 2021; Flores et al., 2022), which become more relevant for late-type stars. Given the relatively high mass of MP Mus compared to the samples in the aforementioned works, the agreement between the dynamical mass estimate and the evolutionary tracks without magnetic fields used in our study is likely an indication that the effect of starspots in this system is moderate/negligible. We also notice that these works focused on younger sources (mostly in Taurus and Ophiuchus, with estimated ages of 1-2 Myr). Similar comparisons for larger samples of stars covering a range of masses and ages could help to further inform theoretical models of early stellar evolution.
### Grain growth and optically thick emission in MP Mus
Assuming optically thin emission and a sufficiently warm dust temperature for the emission to be in the Rayleigh-Jeans regime, \(\alpha_{\rm mm}\) relates to the power law index of the dust opacity, \(\kappa_{\nu}\propto\nu^{\beta}\), via \(\alpha_{\rm mm}=2+\beta\). \(\beta\) is sensitive to the maximum grain size (\(a_{\rm max}\)) of the dust size distribution (e.g. Natta & Testi, 2004; Draine, 2006; Testi et al., 2014): dust distributions with maximum grain sizes around or larger than 1 mm have opacity power-law indices \(\beta\lesssim 1\), while smaller grains have higher \(\beta\) values (ISM-like grains show \(\beta\sim\) 1.7). The spectral indices measured in Sec. 3.1.4 then translate to average \(\beta\) values of \(\sim\)0.1-0.4 and suggest the presence of mm-sized grains in MP Mus. Our results are very similar to those of Cortes et al. (2009), who found the mm spectral index of MP Mus between 3 mm and 12 cm to be \(\alpha=2.4\pm 0.1\), and first indicated the presence of large grains in the disk. Similar \(\alpha_{\rm mm}\) values are found in disks in younger star-forming regions such as Taurus, Ophiuchus, Lupus, and Chamaeleon I (Ricci et al., 2010a,b, 2012; Ribas et al., 2017; Tazzari et al., 2021), but also in other evolved disks such as TW Hya, HD 98800B or TWA 3 (Macias et al., 2021; Ribas et al., 2018; Czekala et al., 2021). This implies
that, despite grain drift and evolution, protoplanetary disks can retain a population of large grains during most of their lifetimes.
The spectral index map (Fig. 4) provides additional spatial information about the disk, and two of its features are worth discussing. First, \(\alpha_{1.3-2.2\,\mathrm{mm}}\) increases with radius, similar to many other sources observed at multiple (sub)mm wavelengths with enough angular resolution (e.g., Perez et al. 2015; Carrasco-Gonzalez et al. 2016; Tazzari et al. 2016; Dent et al. 2019; Macias et al. 2019, 2021). This is attributed to a combination of higher optical depths in the inner regions and the inward drift of large grains due to gas drag (Weidenschilling 1977). Thus, this can be interpreted as additional evidence of grain growth and radial drift in the disk. It is also worth noticing that the disk radii at 1.3 mm and 2.2 mm are similar within the uncertainties.
Figure 7: Comparison of ALMA and SPHERE observations of MP Mus. _Top left:_ \(Q_{\phi}\)-image in \(H\)-band (\(\lambda_{\mathrm{obs}}=1.6\,\mu\)m) taken with SPHERE/IRDIS. The colormap is shown in logarithmic stretch. The central blue circle masks the area covered by the coronagraph of the observation. _Top right:_ Comparison of the 1.3 mm continuum emission (orange contour), the SPHERE \(Q_{\phi}\) \(H\)-band data (purple), and one of the \({}^{12}\)CO (2-1) channels (blue). The 1.3 mm contour corresponds to the 5-RMS level, while only data above the corresponding 3-RMS levels are shown for the \({}^{12}\)CO (2-1) and scattered light observations. _Bottom:_ Comparison of the radial profiles of the SPHERE \(Q_{\phi}\times r^{2}\) in \(H\)-band (purple line), the 1.3 mm continuum (orange line), and the \({}^{12}\)CO (2-1) emission (blue line). The SPHERE coronagraph is also shown.
Although this interpretation is limited by the moderate angular resolution of the 2.2 mm observations (\(\sim\)20 au), if both radii are indeed similar this could suggest the presence of some mechanism preventing strong dust radial drift in the disk, since it would otherwise appear more compact at 2.2 mm than at 1.3 mm.
Perhaps more interesting is the fact that MP Mus displays values \(\alpha_{\rm 1.3-2.2\,mm}<2\) in its inner region (\(r<\)30 au). Such values are below the spectral index of black body radiation, and they have been traditionally interpreted as indicators of additional emission mechanisms such as free-free radiation from ionized photoevaporative winds or stellar chromospheric activity (e.g., MacGregor et al., 2015; Macias et al., 2016). However, these processes usually become significant at wavelengths longer than those considered here. Cortes et al. (2009) also observed MP Mus at cm wavelengths and obtained only upper limits at 3 cm and 6 cm, suggesting a negligible contribution from a possible stellar wind. Alternatively, \(\alpha_{\rm mm}<2\) values can also arise from optically thick emission from dust grains with high albedo (Miyake & Nakagawa, 1993; Liu, 2019; Zhu et al., 2019). Recently, ALMA has shown that spectral indices below 2 in the inner regions of disks are not unusual (e.g. Huang et al., 2018; Dent et al., 2019), and an integrated \(\alpha_{\rm mm}\) value below 2 was also found in the circumbinary disk of the TWA 3 triple system (Czekala et al., 2021). The spectral index map of MP Mus is one more example of these cases, adding observational support to the idea that at least part of the emission from protoplanetary disks is optically thick even at \(\sim\)2 mm, and that dust scattering needs to be considered at these wavelengths. In turn, this is yet another indication that dust masses derived from mm surveys may be underestimated, and explains why SED modeling of disks accounting for the disk structure and dust scattering results in systematically higher dust masses (Ballering & Eisner, 2019; Ribas et al., 2020; Rilinger et al., 2023). The fact that these low \(\alpha_{\rm mm}\) values are also found in evolved disks such as TW Hya or MP Mus shows that this effect could be significant throughout most of the disk lifetime.
### Lack of mm substructures and comparison with TW Hya
The high angular resolution view of MP Mus presents a stark contrast with respect to most protoplanetary disks imaged at similar spatial resolutions to date: while the majority of disks display some substructures in high angular resolution mm observations (i.e. gaps, rings, spiral arms, asymmetries), MP Mus shows a smooth disk down to a 4 au resolution, with the possible exception of a barely resolved outer ring at \(\sim\)50 au. There are also no signs of an inner cavity, in agreement with the _AoLP_ derived from the SPHERE observations (Sec. 3.3) as well as with previous modeling of its SED, which suggested that the disk extends down to the dust sublimation radius (Cortes et al., 2009). The featureless appearance is even more puzzling when we consider its age (7-10 Myr), which is considerably older than typical disk lifetimes. Without substructures acting as dust traps (e.g. Pinilla et al., 2012; Zhu et al., 2014), gas drag would quickly deplete the disk of large grains on timescales much shorter than disk lifetimes (Weidenschilling, 1977), so one of the following scenarios must be true for MP Mus: either its dust population comprises only small (\(<100\,\mu\)m) dust grains, which are weakly affected by radial drift, or there are undetected/unresolved substructures in the disk that are stopping the inward migration of large grains.
In the first scenario, a population of smaller grains would have both a lower opacity at mm wavelengths and a steeper opacity power law index (e.g. Natta & Testi, 2004; Testi et al., 2014; Tazzari et al., 2016). A significantly higher dust mass would be needed to both account for the observed fluxes and to maintain a spectral index of 2-2.5 in most of the disk, which in this case would be mostly due to optically thick emission. Although this explanation cannot be ruled out with the current data, it seems unlikely that the actual dust mass would be much higher than the measured value, as it would imply that MP Mus was originally very massive (possibly above the disk instability limit). Moreover, the disk size at mm wavelengths and those of the gas and scattered light observations would be difficult to reconcile with a population of small grains only. Additional high-resolution observations at other mm wavelengths covering a broader wavelength range are needed to perform a detailed modeling of the dust population in the disk (e.g., Macias et al., 2021) and to further investigate this possibility.
The second explanation for the survival of large grains in an evolved disk with apparently no substructures is that this conclusion is limited by the optical depth and/or angular resolution of the observations, i.e., there are structures in the disk but they remain undetected or unresolved with the current observations. Structures narrower than 5 au have been found in some nearby disks such as TW Hya, HD 169142 and V4046 Sgr (Andrews et al., 2016; Perez et al., 2019; Martinez-Brunner et al., 2022), although the last two also displayed much broader gaps in the dust distribution (and V4046 Sgr is a circumbinary disk). TW Hya probably provides the best comparison given its similarities with MP Mus: high angular resolution observations revealed a system of concentric rings and gaps with 1-5 au widths and varying amplitudes (Andrews et al., 2016). These authors argued that such structures may be common in disks, playing a fundamental role in their evolution and in the planet formation process. Figure 9 compares the radial profiles of MP Mus and TW Hya at 1.3 mm from Macias et al. (2021) after degrading the angular resolution of the latter to match the \(\sim\)5 au of the MP Mus data. This comparison demonstrates that both profiles are significantly different, and it is still possible to identify several features in TW Hya at this resolution such as a flat profile in the inner region (caused by the central cavity) and two gaps at 25 and 40 au. On the other hand, a number of the known substructures in TW Hya are no longer visible in the profile, as could be the case for MP Mus. We note that gas rings with radial widths much smaller than their vertical extent are unstable (Ono et al., 2016; Dullemond et al.
Figure 8: Location of MP Mus (black dot) in the HR diagram. MIST mass tracks (1.2, 1.3, and 1.4 \(M_{\odot}\)) and isochrones (5, 7, 10, and 13 Myr) are also shown (Dotter, 2016; Choi et al., 2016).
2018), which limits how packed substructures can be. Given the flat appearance of MP Mus, realistic values for \(h/r\) are likely in the range of 0.03-0.07, which could allow for the presence of rings in the outer regions that are narrow/close enough to remain unresolved. This consideration is also relevant for the comparison with TW Hya: the stellar mass of MP Mus is \(\sim\)twice that of TW Hya (1.3 \(M_{\odot}\) for MP Mus, and \(\sim\)0.6 \(M_{\odot}\) in the case of TW Hya; Sokal et al. 2018), so its scale height is expected to be smaller (by a factor of \(\sim\sqrt{2}\) when ignoring the higher stellar luminosity of MP Mus). As a result, MP Mus may host narrower rings than TW Hya. In addition, Xu & Bai (2022a,b) showed that within one broad gas pressure bump, two radial dust rings separated by a distance comparable to or less than \(h\) may form, assisted by the dust back-reaction on the gas. This further promotes the possibility of having packed dust rings that are currently unresolved. Finally, high optical depths also limit our ability to identify gaps/rings in the system. As suggested by the map of spectral index, the inner 30 au of the disk appear to be optically thick, displaying \(\alpha<2\) (Fig. 4). It is thus possible that the observed 1.3 mm continuum emission in these regions is coming from an elevated surface, preventing the detection of substructures in the midplane.
Overall, despite the high angular resolution and structureless appearance of MP Mus, the current data cannot discard small (\(<\)4 au) rings especially in the inner regions, or even larger ones if the disk is sufficiently optically thick. If undetected substructures are the explanation for the long-lived disk around MP Mus, then this system lends further support to the idea that small rings may be a common feature in disks. As proposed by Tripathi et al. (2017, 2018); Andrews et al. (2018b), if some of these rings are (partially) optically thick, they could help to explain the observed relation between the radii and mm luminosity of disks. Evolutionary disk models by Toci et al. (2021) also showed that the gas-to-dust radii ratios (\(R_{\rm CO}\)/\(R_{\rm dust}\)) become \(>\)5 in very short timescales (\(<\)1 Myr) in the absence of mechanisms preventing radial drift, in clear conflict with the \(R_{\rm CO}\)/\(R_{\rm dust}\)\(\sim\) 2 value found in MP Mus at 7-10 Myr. Given its proximity, ALMA observations of MP Mus at even better angular resolutions and longer wavelengths would aid in explaining the long lifetime of the system and exploring the frequency and properties of small structures in disks.
### Upper limits to the presence of planets
Planets with enough mass are expected to open gaps in the gas and dust distribution (e.g., Crida et al. 2006; Zhu et al. 2011; Dong & Fung 2017; Dong et al. 2018), although the properties of these gaps are highly dependent on the conditions in the disk. Deriving planetary masses from gap widths and contrasts requires detailed hydrodynamic models including many (uncertain) free parameters and is a rather degenerate process, but comparisons with results from different models provide some constraints on the maximum mass of planets in the MP Mus system, based on the absence of visible substructures larger than 4 au in the dust distribution. Here, we use the scaling relation between the planet Hill sphere (\(R_{\rm H}\)) and the width of the gap (\(\Delta\), defined as the separation between the minimum brightness in a gap and the maximum brightness of the corresponding external ring) following Lodato et al. (2019). For disks with viscosity values \(\alpha\lesssim 0.01\), these two quantities are related following:
\[\Delta=kR_{\rm H}=k\left(\frac{M_{\rm p}}{3M_{*}}\right)^{1/3}r, \tag{6}\]
where \(M_{\rm p}\) is the mass of the planet, \(M_{*}\) is the mass of the star, \(r\) is the orbital radius of the planet, and \(k\) is a proportionality constant that depends on disk properties and ranges between \(\sim\)4-8 for gaps observed in the mm (see Lodato et al. 2019, and references therein, where they adopt a value of \(k\)=5.5). Assuming that we would have resolved widths \(\Delta\geq\) 4 au and the derived 1.30 \(M_{\odot}\) mass for MP Mus, Fig. 10 shows the corresponding upper limits for \(M_{\rm p}\) at different radii for \(k\)=4, 5.5, and 8. Based on this simple relation and further assuming that each gap is carved by a single planet, the lack of resolved gaps in these observations would imply that there are no planets more massive than 0.5-4 \(M_{\rm Jup}\) at orbital radii \(r>10\) au (depending on the adopted value of \(k\)), and these constraints decrease to 0.05\(-\)0.5 \(M_{\rm Jup}\) and 2 \(M_{\oplus}\)\(-\)0.06 \(M_{\rm Jup}\) for \(r>20\) au and \(r>40\) au, respectively. We emphasize that this analysis assumes that any sufficiently large gap carved by planets would be visible in the 1.3 mm observations, but the high optical depth within the inner 30 au suggested by the \(\alpha<2\) spectral index could hide such gaps (see Sec. 4.3). Therefore, the derived limits are likely underestimated, especially in the inner regions of the disk. Also, these limits only apply for \(r\leq 60\) au since that is the radial extent of the mm dust continuum emission.
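Inverting Eq. 6 for the planet mass reproduces the limits quoted above, as the following sketch shows (illustrative only; it inherits all the caveats about optical depth discussed in the text):

```python
MJUP_PER_MSUN = 1047.6

def max_planet_mass_mjup(delta_au, r_au, k=5.5, mstar_msun=1.30):
    """Largest planet mass (Mjup) whose gap width stays below `delta_au`
    at orbital radius `r_au`, from Eq. 6: Mp = 3 M* (Delta / (k r))^3."""
    return 3.0 * mstar_msun * (delta_au / (k * r_au)) ** 3 * MJUP_PER_MSUN

for r in (10, 20, 40):
    limits = [max_planet_mass_mjup(4.0, r, k) for k in (8.0, 5.5, 4.0)]
    print(r, [f"{m:.3f}" for m in limits])
# r=10 au: ~0.5-4 Mjup; r=20 au: ~0.06-0.5 Mjup; r=40 au: ~0.008-0.06 Mjup.
# The same relation with delta=10 au at r=35 au gives ~0.2-1.5 Mjup for
# the tentative 30-40 au plateau discussed below.
```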
The 1.3 mm radial profile obtained with _frank_ shows a shallow plateau between 30-40 au, which could be interpreted as a low-contrast gap. Although it is not clear that this structure is an actual gap in the dust density distribution of the disk, here we also use Eq. 6 to calculate the mass of a hypothetical planet that could carve such a gap at that location. Assuming that the planet is at the center of the plateau (\(\sim\)35 au) and given the location of the outer bump (\(\sim\)45 au), the gap width would be \(\Delta\)=10 au, and the corresponding planet mass is 0.2 \(M_{\rm Jup}\), 0.6 \(M_{\rm Jup}\), and 1.5 \(M_{\rm Jup}\) for a value of \(k\) of 8, 5.5, and 4, respectively.
On the other hand, the drop in scattered light signal between 30-80 au seen in the radial profile of the scattered light (see Sec. 4.5) may indicate the presence of one or more massive planets outside the continuum emission, and studies based on scattered light data have placed different upper limits on their masses. Wolff et al. (2016) observed MP Mus with GPI and could reject the presence of a 3 \(M_{\rm Jup}\) planet outside of 40 au with a 90 % confidence for a 7 Myr disk. Similarly, SPHERE observations of MP Mus allowed Asensio-Torres et al. (2021) to place upper limits on the masses of possible companions, now reaching a 5-
Figure 9: Comparison of the radial profiles of MP Mus (red) and TW Hya (black) at 1.3 mm. The TW Hya data were taken from Macías et al. (2021), and convolved with a 0.08\({}^{\prime\prime}\) Gaussian to match the resolution and distance of the MP Mus observations. The inner cavity and two of the known gaps of TW Hya (at 25 and 40 au) are still visible, in contrast with the smooth profile of MP Mus.
\(\sigma\) limit of \(\sim\)2.5-3 \(M_{\rm Jup}\) at \(r\geq\)50 au. Asensio-Torres et al. (2021) also proposed that a planet located at 55 au may be the origin of this decrease in scattered light intensity. However, the lack of a wide gap at this location in the ALMA continuum observations suggests that the lower intensity of scattered light between 30-80 au is not due to an actual gap in the disk surface density (see Sec. 4.5). Recently, the GAPlanetS survey (Follette et al., 2022) targeted fourteen disks in H\({}_{\alpha}\) searching for accreting protoplanets, but did not identify any potential candidate in the disk around MP Mus. If the lack of any clear rings/gaps in the 1.3 mm continuum is real and not due to the high optical depth of the disk in the inner regions, then this work and previous studies analyzing scattered light observations appear to rule out the presence of planets more massive than 3 \(M_{\rm Jup}\) at radii \(r>10-15\) au. If that is the case, the exoplanet population of a hypothetical planetary system around MP Mus may not be very different from that of the Solar System, making it an ideal laboratory for further planet formation studies given its proximity and age.
Finally, the mm observations in this work also place some constraints on the presence of circumplanetary disks outside of the dust disk (\(r>\)60 au). The RMS of the 1.3 mm continuum image is 19 \(\mu\)Jy/beam, and thus we can reject the presence of point-like sources with fluxes \(\geq 90\,\mu\)Jy in the dust-free region at a 5 \(\sigma\) confidence. There are two \(\sim 4\,\sigma\) peaks at angular separations 1.4\({}^{\prime\prime}\) NE and 1.6\({}^{\prime\prime}\) SE from the source center (\(\sim\)140 and 160 au projected orbital radii) which lie outside the scattered light outer ring, but these are likely associated with noise. Converting these limits to disk masses involves many unknown quantities (see Isella et al., 2014, 2019), but we can compare the sensitivity limit of these data to the flux of the circumplanetary disk recently detected around PDS 70c (Isella et al., 2019; Benisty et al., 2021): considering its 86 \(\mu\)Jy flux at 855 \(\mu\)m, a spectral index of 2.3, and the closer distance of MP Mus, a similar circumplanetary disk would have a flux of 42 \(\mu\)Jy at 1.3 mm and would therefore remain undetected with our observations.
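The flux scaling behind that last estimate combines the spectral slope and the distance ratio; a quick check (the \(\sim\)113 pc distance to PDS 70 is our assumption, not stated in the text):

```python
# F_nu ~ nu^alpha for the frequency shift and F ~ 1/d^2 for the distance;
# nu ~ 1/lambda, so the frequency ratio is the inverse wavelength ratio.
f_855um_ujy, alpha = 86.0, 2.3
lam_ratio = 0.855 / 1.3
f_1p3mm = f_855um_ujy * lam_ratio ** alpha * (113.0 / 98.0) ** 2
print(f"{f_1p3mm:.0f} uJy")  # ~44 uJy with these inputs, consistent with
                             # the ~42 uJy quoted and below the ~90 uJy limit
```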
### The panchromatic view of MP Mus
The ALMA observations of MP Mus reveal a smooth disk in the mm continuum with a radius of 60 au, and a gaseous disk that extends further out to 130 au. In contrast, the scattered light images probing small dust grains show a different morphology, with a bright inner region and a significant drop in flux between 30-80 au (Cortes et al., 2009; Schneider et al., 2014; Wolff et al., 2016; Avenhaus et al., 2018). In particular, SPHERE observations of MP Mus show that: (1) the strongest signal in scattered light arises from the inner 25 au, (2) there is a discontinuity in the brightness at 60 au, (3) there is an outer ring at 80 au, and (4) the disk extends up to 125 au. Many of these features coincide with features seen in the ALMA images (see Fig. 7 for a comparison of the mm continuum, \({}^{12}\)CO, and scattered light observations and radial profiles). Regarding the inner regions, the 1.3 mm continuum emission also arises mostly from the inner 25-30 au, the slopes of the radial profiles of both continuum and \({}^{12}\)CO emission change at this radius (Fig. 3), and the emission has a spectral index \(\alpha_{\rm mm}<2\) within this radius (Fig. 4). The discontinuity observed at 60 au in scattered light coincides with the outer radius of the mm continuum emission, and the extent of the gaseous component of the disk is in great agreement with the one inferred from the scattered light data. Overall, this matches predictions from the radial drift of dust grains: while small grains remain well coupled to the gas and are thus co-located, larger grains migrate toward pressure maxima and accumulate in the inner regions of the disk or localized pressure bumps (e.g., Weidenschilling, 1977; Pinilla et al., 2012).
The gap seen in scattered light between 30-80 au is quite prominent (it is the largest of all the sources in Avenhaus et al., 2018), and two scenarios have been proposed to explain it: either it is a real decrease in the disk density carved by one or multiple planets, or it is a shadow cast by the disk itself (Wolff et al., 2016). We can now revisit these explanations in the light of the new ALMA observations.
Planets are one of the leading explanations for the plethora of rings and gaps found in protoplanetary disks, and the properties of many of these structures appear to support this explanation (e.g., Huang et al., 2018; Zhang et al., 2018; Perez et al., 2019). In these cases, the gravitational influence from the planet decreases the local gas density and induces a pressure bump outside of its orbit, which acts as a dust trap for large (mm/cm-sized) grains. For MP Mus, however, the lack of any mm emission in the outer ring visible in scattered light implies that the ring is mostly devoid of large dust grains, suggesting that the gap in scattered light is not a gap in the surface density opened by one or several planets. Although the available observations do not reject planet masses below sub-Jovian values, the lack of companions discussed in Sec. 4.3 also supports the interpretation that the drop in scattered light at 30-80 au is due to a different process.
A shadow cast by the disk inner rim (e.g. Dullemond & Dominik, 2004) is also a plausible explanation for the apparent gap in the scattered light observations, especially considering that MP Mus shows a rather flat structure. The possibility of a shadowed disk in the system was proposed by Cortes et al. (2009) based on its low near/mid-IR excess (4-20 \(\mu\)m) with respect to the median disk SED of Taurus, even before high-angular resolution observations in scattered light were available. Dong (2015) found that a puffed-up inner rim with a sharp edge in the vertical direction could produce a pattern similar to that seen in
Figure 10: Upper limits to the presence of planets in the disk around MP Mus as a function of radius. These upper limits are derived using equation 6 (Lodato et al., 2019) and the fact that no gaps with width \(\Delta\geq 5\) au are detected in the disk. Three different values are used for the proportionality constant (\(k\)) linking the planet Hill radius and the corresponding gap width.
MP Mus (i.e., a drop in scattered light emission at intermediate radii), although rims with more physically-motivated structures did not produce such results. It is interesting to note that a steep increase in the SPHERE radial profile occurs at 60 au, matching the outer radius of the mm emission. Based on this and the overall SPHERE radial profile, we propose that MP Mus probably has a puffed-up inner disk that shadows radii beyond 30 au. However, the lack of large grains in the disk midplane at radii \(>\)60 au could result in a less efficient cooling of the disk (and hence a warmer midplane), increasing the disk scale height and allowing the disk surface to grow out of the shadow at larger radii. Direct observational evidence for such an effect (i.e., higher temperatures in the outer regions devoid of large grains) exists for the edge-on disk Oph 163131, based on tomographic reconstruction of its temperature structure (Flores et al. 2022b; Villenave et al. 2022).
We explore this idea by calculating a simple disk model for MP Mus using the MCFOST code (Pinte et al. 2006, 2009). We adopt the stellar and disk parameters derived in this study and include two different dust populations: one for small grains (0.01 \(\mu\)m - 10 \(\mu\)m) extending from 0.1 to 130 au, and a more compact disk of larger grains (10 \(\mu\)m - 1 mm) from 0.1 to 60 au. At 60 au, the interface between the large and small grain disks results in an increase of the midplane temperature from 10 to 15 K. For a vertically isothermal disk with a scale height \(H=c_{s}/\Omega\), this change in temperature would imply a local increase of \(\sim\)20 % in the scale height, which could expose the upper disk layers to stellar radiation again. We emphasize that none of these numbers are to be considered as accurate estimates for MP Mus given the oversimplified model used, but they show that the change in the disk opacity at the end of the mm continuum radius could lead to a local increase of the disk temperature at that location. We do not find signs of a colder disk at \(r=60-80\) au in the \({}^{12}\)CO (2-1) radial profile, which does not show any significant feature at these radii down to 3 K (the brightness temperature uncertainty at 70 au). However, the \({}^{12}\)CO (2-1) emission arises from above the midplane and may not reflect the midplane temperature. Unfortunately, the same analysis cannot be performed for optically thinner isotopologues such as \({}^{13}\)CO (2-1) or C\({}^{18}\)O (2-1), since their emission only extends to \(\lesssim\)60 au. Detailed physico-chemical modeling of the gas and dust components of MP Mus is needed to calculate a consistent temperature structure for the disk and to determine the origin of the gap seen in the scattered light observations.
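The \(\sim\)20 % figure follows from the \(\sqrt{T}\) scaling of the sound speed:

```python
import math

# Vertically isothermal disk: H = c_s / Omega with c_s ~ sqrt(T), so at a
# fixed radius H scales as sqrt(T). The 10 -> 15 K jump across the 60 au
# interface of the toy MCFOST model then gives:
print(math.sqrt(15.0 / 10.0) - 1.0)  # ~0.22, i.e. the ~20% increase quoted
```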
## 5 Summary
We present new ALMA observations of the nearby, evolved protoplanetary disk around MP Mus, including 0.89 mm, 1.3 mm, and 2.2 mm continuum emission, as well as multiple gas emission lines. These data are the first spatially resolved observations of the disk at these wavelengths and provide a wealth of new information of this system. Our key results and findings are:
* The continuum emission shows a disk with no detected inner cavity and a radius of 60\(\pm\)5 au. Despite the high angular resolution of these data, the dust disk appears smooth down to 4 au scales, making MP Mus an interesting exception when compared with the plethora of substructures found in most disks observed at comparable resolutions, and a great counterpart to TW Hya in particular.
* Based on the mm fluxes and using standard assumptions for the dust opacity and disk temperature, we derive a dust disk mass of \(M_{\rm dust}=0.14^{+0.11}_{-0.06}\,M_{\rm Jup}\).
* The continuum spectral index between 1.3 and 2.2 mm has a value \(<2\) for radii \(r<30\) au, indicative of optically thick emission from dust grains with a high albedo in this region.
* These observations yield detections of \({}^{12}\)CO (3-2), \({}^{13}\)CO (3-2), CS (7-6), and HC\({}^{15}\)N (4-3) in Band 7, \({}^{12}\)CO (2-1), \({}^{13}\)CO (2-1), and C\({}^{18}\)O (2-1) in Band 6, as well as HC\({}_{3}\)N (16-15), DCN (2-1), and DCO\({}^{+}\) (2-1) in Band 4.
* The \({}^{12}\)CO (2-1) observations reveal a gaseous disk extending up to 130\(\pm\)15 au, a factor of \(\sim\)2 larger than the dust disk. Similar to the dust, no clear gaps are found.
* By fitting a Keplerian profile to the first moment of the \({}^{12}\)CO (2-1), we derive a dynamical mass for MP Mus of \(1.30\pm 0.08M_{\odot}\). This value is consistent with predictions from theoretical stellar evolutionary models, which date the system at an age of 7-10 Myr.
* By comparing the \({}^{13}\)CO (2-1) and C\({}^{18}\)O (2-1) fluxes with grids of disk models, we estimate the gas mass in the system to be \(10^{-4}-10^{-3}\,M_{\odot}\), resulting in a global gas-to-dust ratio of 1-10.
* A comparison of these data with previous scattered light observations shows that small grains and the gas are co-located while larger grains concentrate in the inner regions, in line with expectations from dust radial drift.
* From the scattered light observations, we derive an angle of linear polarization that indicates disk material inside the stellar PSF, co-planar with the outer disk. This is in agreement with the expectations from a disk extending inward to regions close to the star (within \(\sim\)0.05\({}^{\prime\prime}\)).
* The survival of large grains in a gas-rich disk is surprising for such an evolved system, especially considering the lack of substructures in the continuum emission. This suggests that structures preventing the radial drift of large grains may be present in the disk but not visible due to the high optical depth of the emission at 1.3 mm, or they may be smaller than the resolution limit of these observations (4 au).
* If the tentative gap suggested by the plateau in the 1.3 mm radial profile at 30-40 au is caused by a planet, it would have a mass \(\sim 0.2-1.5\,M_{\rm Jup}\).
* The scattered light observations also revealed a drop in intensity between 30 and 80 au, and an outer ring from 80 to 130 au. The lack of mm emission from this outer ring suggests that the drop in scattered light emission is probably not an actual gap in the disk surface density due to planets, since such a gap would trap large grains in the ring. Instead, the data appear more consistent with this feature being a shadow, cast between 30-80 au by a puffed-up inner rim. The rapid increase of scattered light signal at radii \(>\) 60 au may be explained by the lack of large dust grains at these locations, which could result in a warmer disk. This would increase the disk scale height and expose the disk surface to stellar radiation at larger radii, explaining the outer ring visible in scattered light.
Given its nearby location, age, and properties, MP Mus is an optimal target to study many aspects of protoplanetary disks, a great laboratory to probe the chemistry of planet formation, an interesting counterpart to the TW Hya system and, possibly, the closest analog to the young Solar System. Because of all these
factors, MP Mus may be one of the most promising individual sources to advance our understanding of planet formation.
###### Acknowledgements.
We thank the anonymous referee for their constructive comments, which helped to improve the quality of the manuscript. We also thank Richard Teague and Jeff Jennings for useful comments on using eddy and frank. This paper makes use of the following ALMA data: ADS/JAO.ALMA#2017.1.01687.S, ADS/JAO.ALMA#2017.1.01167.S, and ADS/JAO.ALMA#2017.1.01419.S. ALMA is a partnership of ESO (representing its member states), NSF (USA) and NINS (Japan), together with NRC (Canada), MOST and ASIAA (Taiwan), and KASI (Republic of Korea), in cooperation with the Republic of Chile. The Joint ALMA Observatory is operated by ESO, AUI/NRAO and NAOJ. A.R. has been supported by the UK Science and Technology Facilities Council (STFC) via the consolidated grant ST/S000623/1 and by the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No. 823823 (RISE DUSTBUSTERS project). P.W. acknowledges support from FONDECYT grant 1191934. This work was funded by ANID - Millennium Science Initiative Program - Center Code N0221, 080. This project has received funding from the European Research Council (ERC) under the European Union Horizon 2020 research and innovation program (grant agreement No. 101042275, project Stellar-MADE). A.A. acknowledges support through a Fellowship for National PhD students from ANID, grant number 21212094, and funding by ANID, Millennium Science Initiative, via the Nucleo Milenio de Formacion Planetaria (NPF). P.R.M. thanks the Spanish MINECO for funding support from PID2019-10-20523G-B0-0. C.C. acknowledges support by ANID BASAL project FB210003 and ANID - Millennium Science Initiative Program - NCNB_1171. M.V. research was supported by an appointment to the NASA Postdoctoral Program at the NASA Jet Propulsion Laboratory, administered by Oak Ridge Associated Universities under contract with NASA.
|
2307.03748 | Incentive-Theoretic Bayesian Inference for Collaborative Science | Contemporary scientific research is a distributed, collaborative endeavor,
carried out by teams of researchers, regulatory institutions, funding agencies,
commercial partners, and scientific bodies, all interacting with each other and
facing different incentives. To maintain scientific rigor, statistical methods
should acknowledge this state of affairs. To this end, we study hypothesis
testing when there is an agent (e.g., a researcher or a pharmaceutical company)
with a private prior about an unknown parameter and a principal (e.g., a
policymaker or regulator) who wishes to make decisions based on the parameter
value. The agent chooses whether to run a statistical trial based on their
private prior and then the result of the trial is used by the principal to
reach a decision. We show how the principal can conduct statistical inference
that leverages the information that is revealed by an agent's strategic
behavior -- their choice to run a trial or not. In particular, we show how the
principal can design a policy to elucidate partial information about the
agent's private prior beliefs and use this to control the posterior probability
of the null. One implication is a simple guideline for the choice of
significance threshold in clinical trials: the type-I error level should be set
to be strictly less than the cost of the trial divided by the firm's profit if
the trial is successful. | Stephen Bates, Michael I. Jordan, Michael Sklar, Jake A. Soloff | 2023-07-07T17:59:01Z | http://arxiv.org/abs/2307.03748v2 | # Incentive-Theoretic Bayesian Inference for Collaborative Science
###### Abstract
Contemporary scientific research is a distributed, collaborative endeavor, carried out by teams of researchers, regulatory institutions, funding agencies, commercial partners, and scientific bodies, all interacting with each other and facing different incentives. To maintain scientific rigor, statistical methods should acknowledge this state of affairs. To this end, we study hypothesis testing when there is an agent (e.g., a researcher or a pharmaceutical company) with a private prior about an unknown parameter and a principal (e.g., a policymaker or regulator) who wishes to make decisions based on the parameter value. The agent chooses whether to run a statistical trial based on their private prior and then the result of the trial is used by the principal to reach a decision. We show how the principal can conduct statistical inference that leverages the information that is revealed by an agent's strategic behavior--their choice to run a trial or not. In particular, we show how the principal can design a policy to elucidate partial information about the agent's private prior beliefs and use this to control the posterior probability of the null. One implication is a simple guideline for the choice of significance threshold in clinical trials: the type-I error level should be set to be strictly less than the cost of the trial divided by the firm's profit if the trial is successful.
## 1 Introduction
Scientific research is increasingly distributed throughout government, academia, and business. Teams of researchers interface with regulatory institutions, funding agencies, commercial partners, scientific bodies, and each other. For example, drug development is conducted by large teams in pharmaceutical companies, with clinical trials carried out in collaboration with academic scientists, whose results are in turn analyzed by public regulators. In such cases, the conclusions from a scientific data analysis materially impact the many stakeholders involved. That is, in addition to its role in quantifying evidence and supporting decision making, statistical analysis serves as a gatekeeping or reward system, with high stakes for the participating parties. Contrast this situation with the classical viewpoint in which statistical protocols are used only as an analytic aid for an impartial researcher. This classical viewpoint underlies the established guidelines for statistical practice. In light of the way statistics is used in present-day research, however, it is essential that we expand the scope of statistical analysis and provide principles that allow economic and statistical considerations to be wedded in an overall endeavor viewed as a sociotechnical system.
In any consideration of foundational issues in statistical inference, it is essential to take into account that there are two major, differing perspectives on statistical inference--the frequentist and the Bayesian. The frequentist paradigm provides guarantees on the correctness of an inference procedure over repeated runs of the procedure. This paradigm is particularly natural in settings in which a software artifact is built and subsequently used by many individuals on many kinds of data and for many scientific problems. The Bayesian paradigm focuses on the specific problem at hand, making full use of probability theory to combine past knowledge with current observations, via conditioning and the computation of posterior probabilities. It provides opportunities for exploiting expert knowledge, requiring effort to elicit such knowledge but potentially rewarding the effort via inferences that can be tailored and sensitive. Moreover, the Bayesian paradigm accommodates the merging of analyses from multiple investigators who may have different prior distributions and different data (Morris, 1977), and it also provides for meta-analyses performed by a central aggregator (Sutton and Abrams, 2001). These advantages are compelling for the kinds of collaborative science that is our focus, but the need to specify prior distributions remains a stumbling block in many domains. This
is particularly true in complex problem domains in which there are many interacting variables, where it can be difficult to formulate the necessary high-dimensional prior probability distributions, even for domain experts. This can introduce an undesirable subjectivity and even arbitrariness into scientific decision-making. Accordingly, the frequentist paradigm, with its focus on confidence intervals and \(p\)-values, remains the standard in scientific and medical research. But the limitations of the paradigm are also widely recognized; in particular, it struggles to combine confidence intervals or \(p\)-values from multiple sources, and frequentist error control need not translate into good decision-making.
In this work, we show how an incentive-theoretic perspective sheds light on Bayes-frequentist duality and leads to new guidelines for statistical analysis. In particular, we show how a regulator can conduct a Bayesian statistical analysis without assuming a prior distribution. Instead, we view the researcher (who has more information about the subject of study than the regulator) as acting according to an implicit prior distribution and show how the regulator can deduce information about this prior distribution from the researcher's behavior. Loosely speaking, when a researcher makes a large investment in a research undertaking, this credibly signals to the regulator that they have a high degree of belief that the research will be successful, and the regulator can use this information as part of their analysis.
### Overview of our setup
We consider a setting with two parties: the _principal_ (e.g., a regulator such as the FDA) and the _agent_ (e.g., a pharmaceutical company). The agent makes an investment to conduct research but must garner approval from the principal. The principal wishes to ensure that only correct conclusions are reached, and they have the ability to approve research results or not. For example, a pharmaceutical company must run a clinical trial to demonstrate that drugs are safe and effective to a regulator who controls whether a new drug is approved.
We view the goal of a scientific study as inferring some aspect of the underlying state of the world from observations. Before conducting the study, we assume that the agent is better informed and has a _prior belief_ about the quantity to be inferred. We refer to this quantity as a parameter \(\theta\), and we assume that it is a random variable taking values in a sample space \(\Theta\) according to a prior distribution \(Q\). This distribution is private and not known to the principal. Moreover, the principal ought not simply ask the agent about \(Q\), since the agent may engage strategically and report incorrect information for their own benefit. Instead, as a requirement for approving the agent's research, the principal requires that the agent support the conclusions by gathering data (e.g., by running a clinical trial). The agent must decide based on their private information whether or not to invest effort to run such a trial to gain approval.
In this work, we show how the principal can set up a hypothesis-testing protocol that elicits information about the agent's private prior distribution. This enables the principal to perform Bayesian inference supported by revealed information rather than by assuming a prior distribution.
## 2 Bayesian Inference Supported by Revealed Preferences
### Setting
The parameter space \(\Theta\) is partitioned into the null set \(\Theta_{0}\) and nonnull set \(\Theta_{1}\). The principal wishes to approve nonnulls but not nulls--this will be formalized shortly. The agent has a prior \(Q\) over \(\Theta\) that is private and not known to the principal. Noisy evidence about \(\theta\) may be available to the principal, however; indeed, the agent may choose to gather data that is visible to both the principal and agent. We encode the evidence as a random variable \(X\in\mathcal{X}\) drawn from the distribution \(P_{\theta}\).
The interaction between the principal and agent goes as follows:
**Principal-Agent Statistical Trial**
1. The agent chooses to run a trial at cost \(C\), or opts out.
2. If the trial is run, evidence is collected according to \(X\sim P_{\theta}\).
3. The principal makes a decision \(\{\texttt{approve},\texttt{deny}\}\) based on the evidence \(X\). Formally, the principal has a decision policy \(f:\mathcal{X}\rightarrow\{\texttt{approve},\texttt{deny}\}\).
4. If the principal makes the decision \(\texttt{approve}\), the agent receives reward \(R\).
The agent cost \(C\) and reward \(R\) are known in advance to the principal and the agent.
Without essential loss of generality, we will consider the evidence \(X\) to be a \(p\)-value for the null hypothesis \(\theta\in\Theta_{0}\); that is, \(P_{\theta}(X\leq t)\leq t\) for all \(\theta\in\Theta_{0}\). Moreover, we assume that the principal's decision rule is based on thresholding the \(p\)-value \(X\) at a level \(\tau\):
\[f(x)=\begin{cases}\texttt{approve}&x\leq\tau\\ \texttt{deny}&x>\tau.\end{cases}\]
We next turn to the agent. Let \(\beta_{\theta}(\tau)=P_{\theta}(X\leq\tau)=P_{\theta}(\texttt{approve})\) denote the power function. The agent's expected profit if they run a trial based on their prior \(Q\) is then \(v_{\tau}(Q)=\mathbb{E}_{\theta\sim Q}[R\cdot\beta_{\theta}(\tau)]-C\). The agent additionally has some increasing utility function of their profit \(u:\mathbb{R}\rightarrow\mathbb{R}\) with \(u(0)=0\), so their expected utility is \(\mathbb{E}[u(R\cdot 1\{X\leq\tau\}-C)]\). We assume the utility function \(u\) is concave, meaning the agent is risk averse. It follows from Jensen's inequality that the agent has negative expected utility whenever their expected profit is negative. We assume that the agent chooses to opt out when their expected utility is negative.
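To make the participation rule concrete, here is a minimal sketch (in Python; the two-point prior and the numerical values are illustrative and anticipate the example of Section 2.4) of the expected-profit computation \(v_{\tau}(Q)=\mathbb{E}_{\theta\sim Q}[R\cdot\beta_{\theta}(\tau)]-C\) and the resulting opt-in decision:

```python
from scipy.stats import norm

def power(theta, tau):
    """beta_theta(tau) = P_theta(X <= tau) for the one-sided Gaussian test
    X = 1 - Phi(Z) with Z ~ N(theta, 1): X <= tau iff Z >= Phi^{-1}(1 - tau)."""
    return 1 - norm.cdf(norm.ppf(1 - tau) - theta)

def expected_profit(prior, R, C, tau):
    """v_tau(Q) = E_{theta ~ Q}[R * beta_theta(tau)] - C for a discrete prior
    given as a dict {theta: probability}."""
    return R * sum(q * power(theta, tau) for theta, q in prior.items()) - C

# Illustrative numbers: the agent opts in iff v_tau(Q) >= 0.
Q = {0.0: 0.2, 1.0: 0.8}        # two-point private prior
R, C, tau = 100.0, 1.0, 0.01    # reward, trial cost, p-value threshold
v = expected_profit(Q, R, C, tau)
print(f"v_tau(Q) = {v:.2f} -> {'run trial' if v >= 0 else 'opt out'}")
```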
### Revealed preferences
We next show that if the agent acts in their best interest, then the principal can draw conclusions about the posterior probability of the null hypothesis. Our basic assumption is that the agent runs a trial only if \(v_{\tau}(Q)\geq 0\), opting out otherwise. We then have the following:
**Theorem 1**.: _Suppose the agent runs a trial only if \(v_{\tau}(Q)\geq 0\). Then, when a trial is run the posterior odds of nonnull given approval are bounded from below:_
\[\frac{P\left(\theta\in\Theta_{1}\mid\texttt{approve}\right)}{P\left(\theta\in \Theta_{0}\mid\texttt{approve}\right)}\geq\frac{C/R-\tau}{\tau}, \tag{1}\]
_where the probabilities are according to the agent's prior \(Q\) and the randomness in \(X\)._
The proof is elementary and presented in Appendix A. This result means that the principal can control the posterior odds of a false discovery by choosing \(\tau\). To see the significance of this, let us rearrange (1) to obtain
\[P\left(\theta\in\Theta_{0}\mid\texttt{approve}\right)\leq\tau R/C. \tag{2}\]
This is a Bayesian form of error control, with the quantity on the left being the posterior probability of the null under a decision to \(\texttt{approve}\). This is closely related to the false discovery rate (FDR), the expected fraction of false positives, although the latter is a frequentist statistic, defined with respect to the probability measure induced by repeated sampling without reference to any prior. We comment further on the relationship to the FDR in Appendix B. Henceforth we use the term _Bayes FDR_ to refer to the posterior probability in (2).
Since the right-hand side of (2) is increasing with \(\tau\), the principal can guarantee a pre-specified Bayes FDR level \(\alpha\) by setting \(\tau=\alpha C/R\). The surprising takeaway is that the principal can use the agent's private prior to achieve a desired Bayes FDR level. The principal does not need to have their own prior beliefs about \(\theta\) and can instead use the information revealed by the agent (i.e., whether the agent opts in) to learn information about \(Q\) and draw conclusions accordingly.
### Agents with incorrect priors
We next give another version of this result where the agent's prior is not assumed to be correct. In fact, the agents need not have classical priors at all, with the classical notion of prior belief replaced by an economic metaphor. Our result is that no matter what the true values of the unknown parameters are, the fraction of false positives will be small unless the agents are losing money.
**Theorem 2**.: _Consider \(\tau<C/R\). Suppose a set of agents with parameters \(\theta^{(1)},\ldots,\theta^{(n)}\) opt into the trial above. Then_
\[\frac{1}{n}\underbrace{\sum_{i=1}^{n}\mathbb{I}_{\{\theta^{(i)}\in\Theta_{1} \}}\cdot P_{\theta^{(i)}}\left(\mathsf{approve}\right)}_{\text{expected number of true positives}}<\left(\frac{C/R-\tau}{\tau}\right)\cdot\frac{1}{n}\underbrace{\sum_{i=1}^{n} \mathbb{I}_{\{\theta^{(i)}\in\Theta_{0}\}}\cdot P_{\theta^{(i)}}\left(\mathsf{ approve}\right)}_{\text{expected number of false positives}}\]
_only if_
\[\frac{1}{n}\underbrace{\sum_{i=1}^{n}\left(R\cdot P_{\theta^{(i)}}\left( \mathsf{approve}\right)-C\right)}_{\text{expected total profit of agents}}<0.\]
Note that the statement above does not rely on the agents having explicit priors. If an arbitrary collection of agents engages with the principal, then either the fraction of false positives is small or the agents are losing money in aggregate.
The reader should view Theorem 1 and Theorem 2 as two expressions of the same underlying idea. They are linked by the betting interpretation of Bayesian probability; indeed, Theorem 1 means that if the agents are not behaving according to correct beliefs \(Q\), they will lose money. Theorem 2 is one precise version of this statement.
### A simple illustration
We now turn to an explicit example to demonstrate the upper bound. We consider a normal test statistic \(Z\sim\mathcal{N}(\theta,1)\), which is converted into a p-value \(X=1-\Phi(Z)\), where \(\Phi\) is the CDF of the standard normal distribution. We take the parameter space to be \(\Theta=\{0,1\}\) with null set \(\Theta_{0}=\{0\}\). We suppose there are two types of agents, promising agents and unpromising agents. Promising agents have a prior distribution such that \(\theta=1\) has probability \(0.8\), whereas unpromising agents have a prior such that \(\theta=0\) has probability \(1\). Here, the agents' priors are correct. The cost of a trial is \(1\) and the reward of a successful trial is \(100\), and we suppose that each agent chooses to run a trial exactly when their expected value is nonnegative. We consider the case where \(1\%\) of the agents are promising agents, which is motivated by clinical trials where nearly all drug candidates are abandoned before conducting a trial, see below for more detail.
We report the fraction of false discoveries across different choices of the p-value threshold (\(\tau\)) in Figure 1. We also report the upper bound from (2), which is a valid bound by Theorem 1. Notice that the fraction of false discoveries is discontinuous at points where agents change their behavior: promising agents run trials whenever \(\tau\geq 0.0005\), and unpromising agents run trials whenever \(\tau\geq 0.01\). The upper bound is relatively close to the actual value at these two points, but is somewhat loose in between.
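The quantities behind a comparison of this kind can be computed in closed form from the power function. The sketch below (with a threshold grid of my own choosing, since the exact grid of Figure 1 is not specified) evaluates the population fraction of false discoveries among approvals and the bound \(\tau R/C\) from (2); the bound becomes vacuous once it exceeds one:

```python
from scipy.stats import norm

R, C = 100.0, 1.0
priors = {"promising": {0.0: 0.2, 1.0: 0.8}, "unpromising": {0.0: 1.0}}
weights = {"promising": 0.01, "unpromising": 0.99}

def power(theta, tau):
    # beta_theta(tau) for X = 1 - Phi(Z), Z ~ N(theta, 1)
    return 1 - norm.cdf(norm.ppf(1 - tau) - theta)

def profit(prior, tau):
    return R * sum(q * power(t, tau) for t, q in prior.items()) - C

for tau in [1e-3, 2e-3, 5e-3, 1e-2, 2e-2]:
    p_false = p_approve = 0.0   # P(null and approve), P(approve), over opted-in agents
    for kind, prior in priors.items():
        if profit(prior, tau) < 0:
            continue            # this agent type opts out at threshold tau
        p_approve += weights[kind] * sum(q * power(t, tau) for t, q in prior.items())
        p_false += weights[kind] * prior.get(0.0, 0.0) * tau  # beta_0(tau) = tau
    fdr = p_false / p_approve if p_approve else float("nan")
    print(f"tau={tau:.3f}  P(null | approve)={fdr:.3f}  bound tau*R/C={tau * R / C:.2f}")
```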
## 3 Implications for Clinical Trials
We next turn to the choice of significance level when regulating clinical trials, focusing on the US Food and Drug Administration (FDA) as the regulator. A rough sketch of the typical clinical trial pipeline is as follows. Before a new drug can be marketed, the FDA requires the pharmaceutical company to sponsor a confirmatory clinical trial to establish the drug's safety and efficacy. The costs of the trial are borne by the company, and if the drug is approved the company can then make a profit by selling the drug. Roughly speaking, the drug is approved if the \(p\)-value from a hypothesis test based on the data from the clinical trial is below a significance threshold--see Section 3.2 for more detail regarding the FDA's current policy.
### Guidelines from our mathematical model
What should the significance level be? We suppose that a trial costs \(C\) and that the company would be rewarded with a revenue of \(R\) if the drug were to be approved. We argue that the type-I error level of the test, \(\tau\), should be set to roughly \(C/4R\). Our analysis suggests this will result in a fraction of false positives (henceforth referred to as _FDR_) of less than 25% and that this threshold cannot be loosened greatly without resulting in a large FDR.
This conclusion follows from three facts. First, the work of Tetenov (2016) implies that the type-I error level needs to be less than \(C/R\), since otherwise companies would be incentivized to run trials for drugs that are not promising or even known a priori to be ineffective. That work notes that there are a large number of drug candidates available that are abandoned before clinical trials are conducted--indeed, there are an estimated 30,000 abandoned candidates for every drug submitted to Phase III clinical trials. If the FDA's policy is too loose, many of these unpromising candidates may be submitted to trials. In particular, that analysis concludes that the resulting FDR would be near 1 if the type-I error level is chosen to be greater than \(C/R\).
Second, the societal cost of false negatives is large compared to false positives, so we should seek a relatively loose FDR level such as 25%. See the cost-benefit analysis of Isakov et al. (2019) for quantitative estimates of the costs of false positives to false negatives. Our conclusion about the right choice of type-I error level is not sensitive to the exact ratio of the societal cost of false positives to false negatives beyond the fact that we should tolerate a moderate fraction of false positives.
Third, our theoretical results suggest that the threshold need not be much smaller than \(C/R\). For instance, Theorem 1 and Theorem 2 show that a threshold of \(C/4R\) suffices to control the FDR at level 25%. In more detail, we note that pharmaceutical companies incur many research and operating costs other than that of running clinical trials, but their revenues are primarily from selling FDA-approved drugs. Since these companies remain commercially viable, they must be acting in a way such that they are not systematically losing money when deciding whether to sponsor clinical trials. In view of Theorem 2, we conclude that the FDR level of approved drugs should be at most \(R/C\cdot\tau\). Thus, setting \(\tau=C/4R\) would result in an FDR level of at most 25%.
Of course, the cost \(C\) and reward \(R\) are different for different drugs. We next turn to a detailed analysis of the costs, profits, and existing type-I error levels of clinical trials in the US.
Figure 1: Comparison of the fraction of false discoveries computed numerically to the upper bound from Theorem 1 in the example of Section 2.4.
### The FDA
Turning to current FDA policy, FDA regulators retain significant flexibility in assessing evidence for approval, but the level of evidence required is typically either two confirmatory trials with positive results (often interpreted as level \(p<.05\) on a two-sided test, with the result in the correct direction), or one confirmatory trial (typically multi-center, well-controlled, with strong results) in conjunction with reasoning to support a determination of "substantial evidence." According to Morant et al. (2019), from 2012-2016 all non-orphan, non-oncology drugs which were approved with a single pivotal trial achieved \(p\)-values of less than 0.005. The study of Janiaud et al. (2021) "[suggests] that the FDA does not have a consistent implicit or operational standard for defining 'substantial evidence' in contexts when flexible criteria are used." The FDA also has an accelerated approval pathway, where a drug may be released under less rigorous standards of evidence, based on proxies for the primary outcome of interest, and sometimes also with lower efficacy standards. For more information on the criteria employed to select among running one or two trials, see FDA (2019); Haslam and Prasad (2019); Janiaud et al. (2021).
To facilitate formal analysis, we analyze two simplified statistical protocols that track the current FDA policy. First, we consider a protocol that requires significance in two independent trials--we call this the _standard_ protocol. The probability that a null drug is approved with this protocol is \(0.025^{2}=0.000625\). Second, we consider the case where only one trial is required, but at the 0.01 level for a two-sided test. We call this the _modernized_ protocol, in view of the FDA Modernization Act of 1997. We assume the probability that a placebo drug is approved with the modernized protocol is 0.005.
Estimates for the cost of Phase III trials vary widely. Moore et al. (2018) estimate a median of $20 million in direct costs for a Phase III trial, but note that costs can vary by a factor of ten in either direction. Moore et al. (2020) estimate median total costs for pivotal trials (which may include two trials) at $50 million. DiMasi et al. (2016) estimate average total Phase III costs as $255 million out-of-pocket and $314 million accounting for opportunity cost of capital, although this analysis is based on a private dataset and may focus on large companies with large trials and higher cost figures (Love, 2019). Wouters et al. (2020) estimate the mean out-of-pocket costs at $291 million among trials whose costs were broken out in published SEC filings, although this condition may systematically exclude cheaper trials. Schlander et al. (2021) provides a review. To be conservative, we use a value of \(C=\$50\) million for total Phase III costs.
Turning to typical values of the reward, \(R\), various estimates place the average total capitalized cost of developing a single new successful drug at $161 million to $4.54 billion (Schlander et al., 2021). In equilibrium, expected rewards of each approved drug should be at least as large as costs. In addition, there is a long tail of drug profitability, and there exist blockbuster drugs with sales of over $100 billion (Elmhirst and Urquhart, 2019).
We report on the FDR bound implied by our theoretical model for the two protocols above in Table 1. We find that for typical drugs that would gain $1 billion to $10 billion profit if approved, the standard protocol that requires two trials results in a low FDR level. Thus, our analysis suggests the FDA should loosen the significance level in this case. For extremely profitable drugs earning $100B or more, the protocols are not strict enough, and companies may be incentivized to run clinical trials for unpromising candidates. In this regime, our results do not provide reassurance that the FDR is controlled at a reasonable level.
\begin{table}
\begin{tabular}{|c|r|r|r|r|}
\hline
Protocol & type-I error level (\(\tau\)) & Revenue if approved (\(R\)) & Expected profit if null & Bayes FDR bound \\
\hline \hline
standard & 0.0625\% & $1B & -$49M & 1.25\% \\
standard & 0.0625\% & $10B & -$44M & 12.5\% \\
standard & 0.0625\% & $100B & $13M & n/a \\
\hline
modernized & 0.5\% & $1B & -$45M & 10\% \\
modernized & 0.5\% & $10B & $0M & n/a \\
\hline
\end{tabular}
\end{table}
Table 1: The behavior of two statistical protocols for varying drug market values, assuming a Phase III cost of \(C=\$50\) million. The FDR bound reported in the rightmost column is \(R/C\cdot\tau\), which is derived from Theorem 1 and Theorem 2. An entry of n/a in the rightmost column indicates that the FDR cannot be bounded below 100%.
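The entries of Table 1 follow from two one-line formulas: a null drug's expected profit is \(\tau R - C\) and the Bayes FDR bound is \(\tau R/C\). A minimal sketch that reproduces the table (rounding conventions are mine, and the $100B row for the modernized protocol is printed even though the table omits it):

```python
C = 50e6                                                 # assumed Phase III cost: $50M
protocols = {"standard": 0.025**2, "modernized": 0.005}  # P(approve | null drug)

for name, tau in protocols.items():
    for R in [1e9, 1e10, 1e11]:
        profit_if_null = tau * R - C                     # expected profit of a null drug
        bound = tau * R / C                              # Bayes FDR bound (Theorems 1-2)
        bound_str = f"{100 * bound:.3g}%" if bound < 1 else "n/a"
        print(f"{name:10s} tau={tau:.4%}  R=${R / 1e9:g}B  "
              f"E[profit | null]=${profit_if_null / 1e6:+,.1f}M  FDR bound={bound_str}")
```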
An important limitation of our analysis is that it omits additional regulatory checks against approving ineffective drugs and punishments for agents that intentionally run clinical trials for drugs they believe to be ineffective. These considerations include additional evidence standards that the FDA may impose; the reluctance of insurers to compensate generously for a drug with marginal evidence of efficacy; lawsuits and liability costs; and the risk of a company developing a negative reputation among consumers, insurers, and regulators. In general, these checks would mean that a type-I error level looser than our model suggests may be sufficient to avoid false discoveries in practice.
## 4 Related Work
We study the choice of type-I error level in a case where the regulator sets the statistical protocol and the agent decides whether to pay to conduct research, a setting introduced in the insightful work of Tetenov (2016). That work concludes that the optimal type-I error level must be set at least as strict as the trial cost divided by the agent's reward; otherwise, there may be a large number of false positives. Our work addresses the reverse direction, showing that the type-I error level should not be much stricter than the cost divided by the reward. Going beyond the single hypothesis testing setup, Viviano et al. (2021) also consider a regulator setting the protocol for multiple hypothesis tests, and analyze when and how multiple testing adjustments should be carried out. In a different direction, Bates et al. (2022) show how the regulator can grant partial approval across multiple stages while controlling the number of false positives.
More broadly, the incentives of researchers and their interaction with statistical protocols are a topic commanding increasing attention in econometrics. For example, Chassang et al. (2012) and Di Tillio et al. (2017) study randomized trials where an agent has a private action affecting the trial outcome. By contrast, in our work, the statistical trials are not influenced by hidden effort. Spiess (2018) studies a principal-agent problem where the agent's action space is the choice of an estimator, showing that the principal restricting the agent to unbiased estimators is worst-case optimal for the principal. Yoder (2022) studies delegation of research to a researcher of unknown efficiency. Similarly, McClellan (2022) derives how the principal should change the approval threshold as evidence is collected sequentially in order to incentivize a researcher to continue experimentation.
One particularly important case of researcher incentives is the \(p\)-hacking problem, wherein a researcher who is rewarded for positive findings may act in a way that causes false positives. Classically, Sterling (1959) and Tullock (1959) point out that when only positive findings are published, the total false positive rate of reported results in the scientific literature may be large. Leamer (1974) shows how this same phenomenon can occur when a researcher chooses the specification of a model. More recently, there is a vast statistical literature on how a single objective analysis can account for multiple comparisons and selective inference (e.g., Berk et al., 2013; Taylor and Tibshirani, 2015). Incorporating incentives more explicitly, McCloskey and Michaillat (2020) study how a principal should optimally set the \(p\)-value threshold when the researcher sequentially collects data and reports a subset of their findings. Similarly, Frankel and Kasy (2022) study how to select papers for publication based on confidence and effect size.
There is a growing understanding of persuasion and signaling in the communication of research results from an econometric perspective. An important early study in this direction is Carpenter and Ting (2007), who consider a game where a researcher can signal their confidence by seeking approval at earlier stages of development. Andrews and Shapiro (2021) investigate how research output may be used by multiple downstream decision-makers. Henry and Ottaviani (2019) study a researcher-approver persuasion model where the researcher continuously gathers data. Williams (2021) analyzes pre-registration as a form of costly signaling by the analyst, and Banerjee et al. (2020) show how randomized controlled trials emerge from a model with an analyst trying to persuade an adversarial audience. The present work can be situated within this line of thought as a setting in which the agent sends a costly signal by conducting research and the principal can use this signal as part of their statistical analysis.
Lastly, Theorem 2 takes inspiration from the rapidly evolving field of game-theoretic statistics--see Ramdas et al. (2022) for a recent overview. See the Appendix therein for a discussion of the philosophical
underpinnings of game-theoretic statistics, especially in relation to frequentist and Bayesian paradigms. Game-theoretic statistics adopts the language of betting to quantify uncertainty. Evidence against a null hypothesis is measured by the outcome of a bet--one which is fair in the sense that it would not be profitable under the null hypothesis. Shafer (2021) advocates for betting terminology as an effective framework for communicating results to a broad audience. In our work, of course, the agent has an actual financial stake in the outcome of the experiment. The betting analogy delivers another key benefit to game-theoretic statistics: bets can be made sequentially, so evidence can be accumulated gradually within an experiment or aggregated across studies. In our current work, however, the agent computes a \(p\)-value, effectively placing an all-or-nothing bet against the null hypothesis. Bates et al. (2022) show how the principal can employ more general betting scores (also known as \(e\)-values) to align the incentives of the agent.
## 5 Discussion
We have shown how the structure of economic incentives can support statistical inference. Our work provides a new connection between frequentist error control (e.g., controlling the false discovery rate) and Bayesian statistics. A frequent criticism of Bayesian statistical methods is their reliance on a prior distribution. Our work uses a prior distribution, but in a way that is objectively verifiable--in Theorem 2 we see that if the agent priors are invalid, then we would observe that the agents are losing money. In fact, a closer inspection of the proof reveals that if FDR is not controlled, the agents would lose profit at a rate linear in the number of trials, i.e., a consistently large amount of money. In settings such as clinical trials, we know that this is not the case, which lends credence to the use of the agents' implicit prior distributions as a basis for inference. More generally, accounting for and leveraging the broader economic context of statistical analysis is increasingly important as the process of research becomes more complex with many interacting, strategic stakeholders.
## Acknowledgements
We thank Jon McAuliffe and Aaditya Ramdas for helpful discussions.
|
2310.11783 | Overview of ATLAS forward proton detectors for LHC Run 3 and plans for
the HL-LHC | The status of the ATLAS Roman Pot detectors (AFP and ALFA) for LHC Run 3
after all refurbishments and improvements done during Long Shutdown 2 is
discussed. | Maciej Trzebinski | 2023-10-18T08:21:29Z | http://arxiv.org/abs/2310.11783v1 | # Overview of ATLAS forward proton detectors for LHC Run 3 and plans for the HL-LHC
###### Abstract
The status of the ATLAS Roman Pot detectors (AFP and ALFA) for LHC Run 3 after all refurbishments and improvements done during Long Shutdown 2 is discussed.
keywords: ATLAS, AFP, Roman Pot, 3D Silicon Tracker, Time-of-Flight +
Footnote †: journal: Nuclear Physics B
## 1 ATLAS Forward Proton Detectors
Diffractive processes are an important part of the physics programme at hadron colliders. This is also true for ATLAS [1], where a large community works on both phenomenological and experimental aspects of diffraction. ATLAS Forward Proton detectors (AFP) [2] aim to measure events in which one or both protons remain intact after the interaction [3]. Since these protons are scattered at very small angles, dedicated detectors must be located far away from the interaction point and close to the proton beam. This results in proton trajectories being influenced by the LHC components: magnets and collimators [4]. Since the settings of such components change over time, the detectors must be able to move with respect to the accelerator beam pipe. This is realised using the Roman Pot (RP) technology.
AFP consists of four RP stations [2], two on each side of ATLAS. Stations located 204 m from ATLAS collision point are called NEAR whereas those at 217 m are named FAR. All stations contain 3D edgeless Silicon Trackers (SiT). FAR stations are also equipped with a Time-of-Flight system (ToF).
## 2 Silicon Tracker
The purpose of the SiT is to provide a precise reconstruction of the proton trajectory necessary to unfold its kinematics [5]. Each station houses four detector planes, each consisting of a 230 \(\mu\)m thick silicon sensor. Each sensor is a matrix of \(336\times 80\) pixels, each of size \(50\times 250\)\(\mu\)m\({}^{2}\). The sensors are coupled to the FE-I4b chip [6], which is radiation-hard (tested for \(>250\) Mrad) and produced with the "edgeless" technology, with a dead edge on the beam side of only about 100 \(\mu\)m. The detectors are tilted by 14 degrees with respect to the beam direction. The expected reconstruction resolution is 6 \(\mu\)m in \(x\) and 30 \(\mu\)m in \(y\)[7]. The SiT can provide the trigger signal with a dead-time of about 400 ns.
During Run 2 data-taking SiT detectors showed very good efficiency. As discussed in Ref. [8], NEAR stations had an overall efficiency over 98% whereas FAR stations performed slightly worse, 95% - 98%. A possible explanation is the radiation degradation of the silicon tracker, as the FAR stations are inserted slightly closer (by about 1 mm) to the beam and are more exposed to the beam halo. In addition, FAR station efficiency is affected by the showers created by interactions with detector material in the NEAR stations.
## 3 Time-of-Flight Detectors
The purpose of the ToF is to reduce the combinatorial background coming from pile-up, denoted as \(\mu\) - multiple, independent proton-proton collisions during a single bunch crossing. The idea is to measure the difference in the time of flight of the scattered protons on the two sides, obtain a "proton vertex", and compare it to the vertex position reconstructed by ATLAS. The AFP ToF uses 16 L-shaped quartz bars (LQBars) to produce Cherenkov light and guide it into a Micro-Channel Plate Photo Multiplier (MCP-PMT). During LHC Run 2 the measured time resolution was \(35\pm 6\) ps and \(37\pm 6\) ps per train, depending on the side [9]. This translates to a spatial resolution of \(5.2\pm 0.9\) mm for the vertex \(z\)-position. The AFP ToF is equipped with radiation-hard readout and provides a trigger signal with a dead-time smaller than 25 ns (\(<1\) bunch-crossing). Unfortunately, as described in Ref. [9], during the Run 2 data-taking the ToF system suffered from very low efficiency (a few percent).
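The underlying vertex reconstruction is a one-liner: with proton arrival times \(t_A\) and \(t_C\) on the two sides, the vertex position along the beam is \(z = c\,(t_A - t_C)/2\), so a 35 ps timing difference corresponds to roughly 5.2 mm in \(z\), the scale of the quoted resolution. A minimal sketch (illustrative values only):

```python
C_MM_PER_PS = 0.299792458   # speed of light in mm/ps

def vertex_z(t_a_ps, t_c_ps):
    """Longitudinal 'proton vertex': z = c * (t_A - t_C) / 2, with t_A and t_C
    the proton arrival times (in ps) measured on the two sides of ATLAS."""
    return C_MM_PER_PS * (t_a_ps - t_c_ps) / 2.0

# A 35 ps timing difference maps to ~5.2 mm in z.
print(f"{vertex_z(35.0, 0.0):.2f} mm")
```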
## 4 Readiness for Run 3 Data-taking
During Run 2 data-taking AFP collected 32 fb\({}^{-1}\) during high-\(\mu\) runs and took part in a few low-\(\mu\) runs. Afterwards, AFP underwent a significant refurbishment. In order to address the major issues of the ToF system - the very low efficiency and the repeated failures of MCP-PMTs in vacuum - a new design of the detector flange (an out-of-vacuum solution) was made. In addition, a new design of the MCP-PMT back-end electronics was developed. In Run 3 the ToF system is also equipped with a set of new, glueless LQBars. The AFP tracker system was equipped with newly produced SiT modules. In addition, new heat exchangers were installed to improve the cooling capabilities. Finally, developments of a new trigger module and a picosecond Time-to-Digital Converter (TDC) readout are ongoing.
## 5 High Luminosity LHC Consideration
In the high pile-up environment of the HL-LHC, the key focus is on photon-induced processes and Beyond Standard Model (BSM) searches. The presence of RPs during Run 4 may improve the measurement/search capabilities of the "central" detectors. The HL-LHC physics case with the use of RPs is discussed in [10] and references within [11]. Below, only the main idea is described.
Processes which are particularly interesting to study are \(pp\to pXp\), where \(p\) denotes a proton staying intact and \(X\) denotes a "central" system, _e.g._ photon-induced WW production (\(\gamma\gamma\to WW\)), axion-like particles (\(\gamma\gamma\to a\to\gamma\gamma\)), and supersymmetry and dark matter particles (\(\gamma\gamma\to\tilde{l}\,\tilde{l}\)). The information from the forward protons, combined with that of the central detector where particles are produced out of the energy lost by the protons, allows full kinematic reconstruction of the event, using the difference and the sum of the energy lost by the two protons.
The presence of pile-up makes things more complicated, because the association between the central and forward systems is not obvious, and the two protons tagged by the forward detectors are likely to originate from different collisions. For this reason, and especially in the extreme pile-up scenarios foreseen at the HL-LHC, it is essential that the position measurements from the pixel detectors be complemented by high-resolution ToF detectors.
As there are significant constraints on using RPs due to the HL-LHC layout, a discussion of project feasibility is ongoing within the ATLAS community1. According to the newest available HL-LHC machine layout only a few locations are possible for the RP installation: 195.5 m (RP1A), 198.0 m (RP1B), 217.0 m (RP2A), 219.5 m (RP2B), 234.0 m (RP3A), 237.0 m (RP3B) and 245.0 m (RP3C). Depending on the number of stations and their location, the mass acceptance was estimated - see Fig. 1. The legend should be interpreted as:
Footnote 1: At the moment of writing these proceedings, the decision was taken to not have RPs around ATLAS during Run 4, with an opened possibility for Run 5 and beyond.
* "RPX" means combination two stations RPXA and RPXB on both sides of IP when proton is tagged in all of them;
* "RPX+RPY" means tagged proton in [(RPXA and RPXB on side A) or (RPYA and RPYB on side A)] and [(RPXA and RPXB on side C) or (RPYA and RPYB on side C)].
## 6 Summary
During Long Shutdown 2, AFP underwent hardware upgrades before being deployed in the LHC tunnel. This should allow efficient data-taking with a focus on diffractive and minimum-bias studies as well as BSM searches during Run 3. The presence of Roman Pots at the High-Luminosity LHC is being discussed within ATLAS, as a variety of physics studies would benefit from forward proton tagging. This would require the development of a rather complex integration design within the new HL-LHC layout.
## Acknowledgements
The work of MT was partially supported by Polish National Science Centre (project no. UMO-2019/34/E/ST2/00393).
Copyright 2022 CERN for the benefit of the ATLAS Collaboration. Reproduction of this article or parts of it is allowed as specified in the CC-BY-4.0 license.
|
2304.02106 | eSSVI Surface Calibration | In this work I test two calibration algorithms for the eSSVI volatility
surface. The two algorithms are (i) the robust calibration algorithm proposed
in Corbetta et al. (2019) and (ii) the calibration algorithm in Mingone (2022).
For the latter I considered two types of weights in the objective function. I
fitted 108 end-of-month SPXW options chains from the period 2012-2022. The
option data come from FactSet. In addition to this empirical part, this paper
contains also a theoretical contribution which is a sharpening of the
Hendriks-Martini proposition about the existence of crossing points between two
eSSVI slices. | Leo Pasquazzi | 2023-04-04T20:22:44Z | http://arxiv.org/abs/2304.02106v3 | # eSSVI Surface Calibration
## Abstract
In this work I test two calibration algorithms for the eSSVI volatility surface. The two algorithms are (i) the robust calibration algorithm proposed in Corbetta _et al._ (2019) and (ii) the calibration algorithm in Mingone (2022). For the latter I considered two types of weights in the objective function. I fitted 108 end-of-month SPXW options chains from the period 2012-2022. The option data come from FactSet. In addition to this empirical part, this paper contains also a theoretical contribution which is a sharpening of the Hendriks-Martini proposition about the existence of crossing points between two eSSVI slices.
**Keywords:** eSSVI volatility surface, Calibration, Arbitrage free interpolation
## 1 Introduction
At the Global Derivatives & Risk Management conference in Madrid, Gatheral (2004) presented the SVI parametrization for the implied variance smile \(w(k)\) at a fixed maturity. This parametrization reads
\[w(k)=a+b\left\{\rho(k-m)+\sqrt{(k-m)^{2}+\sigma^{2}}\right\}\]
where \(k=\ln K/F\) is the natural logarithm of the ratio between the strike and the forward price of the underlying, and where \(a\in\mathbb{R}\), \(b\geq 0\), \(|\rho|<1\), \(m\in\mathbb{R}\) and \(\sigma>0\) are parameters governing the shape and position of the smile.
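For later reference, the SVI smile is straightforward to evaluate numerically. The sketch below (with arbitrary parameter values) also verifies the minimum level \(\inf_{k}w(k)=a+b\sigma\sqrt{1-\rho^{2}}\) quoted in the next paragraph:

```python
import numpy as np

def svi_total_variance(k, a, b, rho, m, sigma):
    """Raw SVI: w(k) = a + b * (rho * (k - m) + sqrt((k - m)**2 + sigma**2))."""
    k = np.asarray(k, dtype=float)
    return a + b * (rho * (k - m) + np.sqrt((k - m) ** 2 + sigma ** 2))

a, b, rho, m, sigma = 0.02, 0.4, -0.4, 0.1, 0.3   # arbitrary parameter values
k = np.linspace(-2.0, 2.0, 100001)
w = svi_total_variance(k, a, b, rho, m, sigma)
print(w.min())                               # numerical minimum over the grid
print(a + b * sigma * np.sqrt(1 - rho**2))   # closed form a + b*sigma*sqrt(1-rho^2)
```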
The following appealing features of the SVI parametrization are well known:
1. the SVI smiles are asymptotically linear in \(k\) as \(|k|\to\infty\) and therefore consistent with Roger Lee's moment formula (Lee, 2004) and
2. the large maturity limit of the implied variance smile of a Heston model with correlation parameter \(\rho\) is SVI with the same value of \(\rho\) (Gatheral and Jacquier, 2011).
However, it is also well known that the SVI parametrization, in its full generality, is not arbitrage free. For example, it is not difficult to show that \(\inf_{k}w(k)=a+b\sigma\sqrt{1-\rho^{2}}\) and hence it should always be required that \(a+b\sigma\sqrt{1-\rho^{2}}\geq 0\). However, the latter condition is not enough to rule out butterfly arbitrage as can be seen from a well known counterexample of Axel Vogt (see Gatheral and Jacquier 2014). Moreover, fitting the SVI parametrization to more than a single maturity date may produce slices that cross over each other which is equivalent to the existence of calendar spread arbitrage opportunities. To overcome these issues, Gatheral and Jacquier (2014) introduced the SSVI parametrization which is a global parametrization for the whole implied total variance _surface_ where the fixed maturity slices are restricted to a subfamily of the SVI parametrization (the first "S" in front of SSVI stands for "_surface_"). In the SSVI parametrization, the implied total variance surface is given by
\[w_{t}(k)=\frac{\theta_{t}}{2}\left\{1+\rho\varphi(\theta_{t})k+\sqrt{(\varphi( \theta_{t})k+\rho)^{2}+(1-\rho^{2})}\right\} \tag{1}\]
where \(\theta_{t}\) is the ATM implied total variance at maturity \(t\), \(|\rho|<1\) and \(\varphi(\theta_{t})\) is a smooth function from \(\mathbb{R}_{+}^{*}\) to \(\mathbb{R}_{+}^{*}\) such that the limit \(\lim_{t\to\infty}\theta_{t}\varphi(\theta_{t})\) exists in \(\mathbb{R}\). According to Theorems 4.1 and 4.2 in Gatheral and Jacquier (2014), the SSVI surface (1) is free of calendar spread arbitrage if and only if
1. \(\partial_{t}\theta_{t}\geq 0\) for all \(t\geq 0\)
2. and \(0\leq\partial_{\theta}\left(\theta\varphi(\theta)\right)\leq\frac{1}{\rho^{2} }\left(1+\sqrt{1-\rho^{2}}\right)\varphi(\theta)\) for all \(\theta>0\) (the upper bound is infinite when \(\rho=0\))
and it is free of butterfly arbitrage if for all \(\theta>0\) the following two conditions are both satisfied:
1. \(\theta\varphi(\theta)(1+|\rho|)<4\),
2. \(\theta\varphi(\theta)^{2}(1+|\rho|)\leq 4\).
The latter conditions are quite close to necessary as well. In fact, Lemma 4.2 in Gatheral and Jacquier (2014) says that absence of butterfly arbitrage holds only if \(\theta\varphi(\theta)(1+|\rho|)\leq 4\) for all \(\theta>0\) (which is only slightly weaker than B1) and that condition B2 becomes necessary when \(\theta\varphi(\theta)(1+|\rho|)=4\).
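A minimal sketch of an SSVI slice (1) together with the butterfly checks B1-B2; the power-law \(\varphi(\theta)=\eta\theta^{-\gamma}\) is used purely as an illustrative choice of \(\varphi\):

```python
import numpy as np

def ssvi_total_variance(k, theta, rho, phi):
    """SSVI slice (1): w(k) = theta/2 * (1 + rho*phi*k + sqrt((phi*k + rho)^2 + 1 - rho^2))."""
    k = np.asarray(k, dtype=float)
    return 0.5 * theta * (1 + rho * phi * k
                          + np.sqrt((phi * k + rho) ** 2 + 1 - rho ** 2))

def butterfly_ok(theta, rho, phi):
    """Sufficient no-butterfly conditions: B1) theta*phi*(1+|rho|) < 4 and
    B2) theta*phi^2*(1+|rho|) <= 4."""
    return (theta * phi * (1 + abs(rho)) < 4
            and theta * phi ** 2 * (1 + abs(rho)) <= 4)

eta, gamma, rho = 1.0, 0.4, -0.3           # illustrative power-law phi(theta)
for theta in [0.04, 0.09, 0.16]:
    phi = eta * theta ** (-gamma)
    print(theta, phi, butterfly_ok(theta, rho, phi))
```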
In order to make the SSVI surface more flexible, Hendriks and Martini (2019) made the \(\rho\)-parameter maturity dependent as well and called the resulting implied total variance surface model _eSSVI surface_ (the "e" in front of SSVI stands for "_extended_"). Proposition 3.1 in Hendriks and Martini (2019) provides necessary and sufficient conditions for the absence of calendar spread arbitrage between two time slices. In order to state these conditions, we will indicate the parameters of two slices with \(\theta_{i}\), \(\varphi_{i}=\varphi(\theta_{i})\) and \(\rho_{i}\), where the subscript \(i\) takes
on the value 1 or 2 according to whether the closer (\(i=1\)) or farther (\(i=2\)) maturity date is referred to. Proposition 3.1 in Hendriks and Martini (2019) says that two time slices do not cross over each other only if
* \(\frac{\theta_{2}}{\theta_{1}}\geq 1\) and \(\left(\frac{\theta_{2}\varphi_{2}}{\theta_{1}\varphi_{1}}\rho_{2}-\rho_{1} \right)^{2}\leq\left(\frac{\theta_{2}\varphi_{2}}{\theta_{1}\varphi_{1}}-1 \right)^{2}\)
and that condition N along with condition S below is sufficient to rule out the existence of crossing points:
* \(\frac{\varphi_{2}}{\varphi_{1}}\leq 1\) or \((\frac{\theta_{2}\varphi_{2}}{\theta_{1}\varphi_{1}}\rho_{2}-\rho_{1})^{2} \leq(\frac{\theta_{2}}{\theta_{1}}-1)(\frac{\theta_{2}\varphi_{2}^{2}}{\theta _{1}\varphi_{1}^{2}}-1)\)
However, condition N and condition S are not jointly sufficient to rule out the existence of crossing points. In fact, as can be seen from Proposition 4.14 in the appendix of this paper,
* when \(\frac{\theta_{2}}{\theta_{1}}=1\) there are no crossing points if and only if either (i) \(\rho_{1}=\rho_{2}=0\) and \(\varphi_{2}/\varphi_{1}\geq 1\) or (ii) \(\varphi_{2}/\varphi_{1}=\rho_{1}/\rho_{2}\) and \(\rho_{1}^{2}\geq\rho_{2}^{2}\)
* and when \(\frac{\theta_{2}}{\theta_{1}}\neq 1\) there are no crossing points if and only if condition S holds jointly with condition
* N') \(\frac{\theta_{2}}{\theta_{1}}>1\) and \(1-\frac{\theta_{2}\varphi_{2}}{\theta_{1}\varphi_{1}}\leq\frac{\theta_{2} \varphi_{2}}{\theta_{1}\varphi_{1}}\rho_{2}-\rho_{1}\leq\frac{\theta_{2} \varphi_{2}}{\theta_{1}\varphi_{1}}-1\).
Almost all of the proof of Proposition 4.14 in the appendix of this paper is built on the main ideas of the proof of Proposition 3.1 in Hendriks and Martini (2019). As far as I know, the only novelties are the result about tangency points in Lemma 4.11 and the two final Lemmas 4.12 and 4.13.
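For a given pair of slices the conditions above are easy to test numerically; the sketch below encodes N, S, and the sharpened condition N') (slice 1 is the closer maturity, and the example parameter values are arbitrary):

```python
def no_crossing_conditions(theta1, phi1, rho1, theta2, phi2, rho2):
    """Evaluate conditions N, S and N' for two eSSVI slices (1 = closer maturity)."""
    r = (theta2 * phi2) / (theta1 * phi1)
    N = theta2 / theta1 >= 1 and (r * rho2 - rho1) ** 2 <= (r - 1) ** 2
    S = (phi2 / phi1 <= 1
         or (r * rho2 - rho1) ** 2
         <= (theta2 / theta1 - 1) * (theta2 * phi2 ** 2 / (theta1 * phi1 ** 2) - 1))
    N_prime = theta2 / theta1 > 1 and (1 - r <= r * rho2 - rho1 <= r - 1)
    return {"N": N, "S": S, "N'": N_prime}

print(no_crossing_conditions(0.04, 3.6, -0.30, 0.09, 2.6, -0.35))
```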
The next section describes the calibration algorithms which I tested: the robust algorithm of Corbetta _et al._ (2019) and the algorithm of Mingone (2022). The test results are summarized in Section 3.
## 2 Two calibration algorithms for the eSSVI surface
### The robust algorithm of Corbetta _et al._ (2019)
The robust calibration algorithm of Corbetta _et al._ (2019) gives rise to eSSVI surfaces which satisfy conditions B1, B2, N and the first inequality of condition S. The key ingredient of this algorithm is a reparametrization of the SSVI slices which forces them to go through the data point \((k^{*},\theta^{*})\) closest to the money, i.e. the observed point whose log-moneyness \(k^{*}\) is closest to zero and whose ordinate \(\theta^{*}\) approximates the ATM forward implied total variance. To enforce this restriction the parameter \(\theta\) is expressed in terms of the parameters \(\rho\), \(\varphi\) and the data-driven pair \((k^{*},\theta^{*})\), i.e. \(\theta\) is taken to be
\[\theta=\theta^{*}-\rho\theta\varphi k^{*}:=\theta^{*}-\rho\psi k^{*}\]
which is a first order approximation to the solution of the equation \(w(k^{*})=\theta^{*}\), where \(w(k)\) is defined as in (1).
What are the allowed values for the new parameter \(\psi:=\theta\varphi\)? Of course, the non-negativity constraints on \(\theta^{*}\), \(\theta\) and \(\varphi(\theta)=\varphi\) translate immediately to condition
R0)
i) \(0\leq\psi\) if \(\rho k^{*}\leq 0\)
ii) \(0\leq\psi\leq\theta^{*}/(\rho k^{*})\) if \(\rho k^{*}>0\)
and condition B1 translates to
R1) \(\psi<\frac{4}{1+|\rho|}\)
Moreover, using \(\theta=\theta^{*}-\rho\psi k^{*}\) it is easily seen that condition B2 translates to
R2) \(\psi^{-}\leq\psi\leq\psi^{+}\) where
\[\psi^{\pm}:=\frac{-2\rho k^{*}}{1+|\rho|}\pm\sqrt{\frac{4\rho^{2}(k^{*})^{2}}{ (1+|\rho|)^{2}}+\frac{4\theta^{*}}{1+|\rho|}}.\]
Next, consider the conditions N and S for preventing calendar spread arbitrage. As in Section 1, the subscript \(i=1\) will always refer to the closer maturity date. Keeping this in mind, it is not difficult to verify that condition N translates to
R3) \(\theta_{2}^{*}-\rho_{2}\psi_{2}k_{2}^{*}\geq\theta_{1}^{*}-\rho_{1}\psi_{1}k_ {1}^{*}\) and \(\psi_{2}\geq\psi_{1}\max\left\{\frac{1+\rho_{1}}{1+\rho_{2}},\frac{1-\rho_{1} }{1-\rho_{2}}\right\}\)
and that the first inequality in condition S becomes
R4) \(\psi_{2}(\theta_{1}^{*}-\rho_{1}\psi_{1}k_{1}^{*})\leq\psi_{1}(\theta_{2}^{*} -\rho_{2}\psi_{2}k_{2}^{*})\).
The algorithm suggested in Corbetta _et al._ (2019) neglects the second inequality in S, which could hold when the first one fails and could therefore allow for more flexibility.
Now, in order to describe the algorithm, we first observe that it is a sequential algorithm which starts from the closest maturity date and works forward in time, finding optimal values of the parameters \(k_{i}^{*}\), \(\theta_{i}^{*}\), \(\psi_{i}\) and \(\rho_{i}\) for each maturity date \(i=1,2,\ldots,n\) given the optimal values for the previous maturity date. It is based on the observation that given the parameter values for maturity date \(i-1\), and given an arbitrary value of \(\rho_{i}\), one can use the constraints R0 - R4 to find an interval of admissible values for \(\psi_{i}\). The bounds of this interval depend on the sign of \(\rho_{i}k_{i}^{*}\) and in order to write them down it is convenient to define
\[B_{1i}=\frac{4}{1+|\rho_{i}|}\qquad B_{2i}^{\pm}:=\psi_{i}^{\pm}:=\frac{-2\rho_{i}k_{i}^{*}}{1+|\rho_{i}|}\pm\sqrt{\frac{4\rho_{i}^{2}(k_{i}^{*})^{2}}{(1+|\rho_{i}|)^{2}}+\frac{4\theta_{i}^{*}}{1+|\rho_{i}|}},\]
\[B_{3i}:=\frac{\theta_{i}^{*}-\theta_{i-1}^{*}+\rho_{i-1}\psi_{i-1}k_{i-1}^{*}} {\rho_{i}k_{i}^{*}}\qquad B_{4i}:=\psi_{i-1}\max\left\{\frac{1+\rho_{i-1}}{1+ \rho_{i}},\frac{1-\rho_{i-1}}{1-\rho_{i}}\right\},\]
\[B_{5i}:=\frac{\psi_{i-1}\theta_{i}^{*}}{\theta_{i-1}^{*}-\psi_{i-1}(\rho_{i-1} k_{i-1}^{*}-\rho_{i}k_{i}^{*})}.\]
for \(i=1,2,\ldots,n\), where \(\theta_{0}^{*}\), \(k_{0}^{*}\), \(\rho_{0}\) and \(\psi_{0}\) are all defined to be zero. Now, it is not difficult to see that the constraints R0 - R4 are equivalent to the following bounds for \(\psi_{i}\) for \(i=1,2,\ldots,n\):
* If \(\rho_{i}k_{i}^{*}>0\), we get the lower bound \(L_{i}:=\max\left\{B_{2i}^{-},B_{4i}\right\}\) and the upper bound \(U_{i}:=\min\left\{B_{1i},B_{2i}^{+},B_{3i},B_{5i}\right\}\).
* If \(\rho_{i}k_{i}^{*}=0\), we get the lower bound \(L_{i}:=B_{4i}\) and the upper bound \(U_{i}:=\min\left\{B_{1i},B_{2i}^{+},B_{5i}\right\}\); in this case we must also check whether \(\theta_{i}^{*}\geq\theta_{i-1}^{*}-\rho_{i-1}\psi_{i-1}k_{i-1}^{*}\), in order to make sure that the first inequality in condition R3 be satisfied.
* If \(\rho_{i}k_{i}^{*}<0\), we get the lower bound \(L_{i}:=\max\left\{B_{2i}^{-},B_{3i},B_{4i}\right\}\) and the upper bound \[U_{i}:=\begin{cases}\min\left\{B_{1i},B_{2i}^{+},B_{5i}\right\}&\text{ if }B_{5i}>0\\ \min\left\{B_{1i},B_{2i}^{+}\right\}&\text{ if }B_{5i}\leq 0.\end{cases}\]
Note that the bounds \(L_{i}\) and \(U_{i}\) depend on \(k_{i-1}^{*}\), \(k_{i}^{*}\), \(\theta_{i}^{*}\), \(\theta_{i-1}^{*}\), \(\psi_{i-1}\), \(\rho_{i-1}\) and \(\rho_{i}\). In order to compute these bounds we must therefore know the optimal values of \(\psi_{i-1}\) and \(\rho_{i-1}\) which refer to the previous maturity date, and we must also guess a value for the parameter \(\rho_{i}\) which refers to the current maturity date \(i\). As we will see in a moment, the robust algorithm of Corbetta _et al._ (2019) deals with this problem by calibrating the SSVI slices sequentially from the closest to farthest maturity date. As pointed out in Corbetta _et al._ (2019), one could actually adapt the algorithm and change the order in which the slices are calibrated, but usually there are good reasons not to do so.
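A direct transcription of these bounds into code may look as follows (a sketch; the extra check on the first inequality of R3 in the \(\rho_{i}k_{i}^{*}=0\) case is left to the caller, as in the text):

```python
import math

def psi_bounds(rho, k_star, theta_star, prev=None):
    """Admissible interval (L, U) for psi_i given rho_i and the data pair
    (k_i^*, theta_i^*); prev = (k, theta, rho, psi) of the previous slice,
    or None for the first maturity (all previous quantities set to zero)."""
    k0, t0, r0, p0 = prev if prev is not None else (0.0, 0.0, 0.0, 0.0)
    B1 = 4.0 / (1.0 + abs(rho))
    root = math.sqrt(4 * rho**2 * k_star**2 / (1 + abs(rho))**2
                     + 4 * theta_star / (1 + abs(rho)))
    B2m = -2 * rho * k_star / (1 + abs(rho)) - root
    B2p = -2 * rho * k_star / (1 + abs(rho)) + root
    B4 = p0 * max((1 + r0) / (1 + rho), (1 - r0) / (1 - rho))
    B5 = (p0 * theta_star / (t0 - p0 * (r0 * k0 - rho * k_star))
          if p0 > 0 else math.inf)
    x = rho * k_star
    if x == 0:
        return B4, min(B1, B2p, B5)          # plus the extra R3 check on theta
    B3 = (theta_star - t0 + r0 * p0 * k0) / x
    if x > 0:
        return max(B2m, B4), min(B1, B2p, B3, B5)
    U = min(B1, B2p, B5) if B5 > 0 else min(B1, B2p)
    return max(B2m, B3, B4), U
```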
Of course, if \(L_{i}>U_{i}\), there will not exist any value of \(\psi_{i}\) such that \((k_{i}^{*},\theta_{i}^{*},\rho_{i},\psi_{i})\) gives rise to an SSVI slice which is free of butterfly arbitrage and/or calendar spread arbitrage with respect to the optimal slice for maturity \(i-1\). For the given value of \(\rho_{i}\) it is therefore impossible to avoid arbitrage. In this case the objective function to be minimized should be set equal to infinity in order to force the algorithm to consider other \(\rho_{i}\) values for which \(L_{i}\leq U_{i}\). For some maturity date \(i\) it might also happen that \(\theta_{i}^{*}<\theta_{i-1}^{*}-\rho_{i-1}\psi_{i-1}k_{i-1}^{*}\), in which case R3 cannot be satisfied with \(\rho_{i}=0\). If \(k_{i}^{*}\neq 0\), R3 can however be satisfied for every nonzero value of \(\rho_{i}\). Of course, nothing can be done in the remote case (which never occurred with the data I used) where \(k_{i-1}^{*}=k_{i}^{*}=0\) and \(\theta_{i}^{*}<\theta_{i-1}^{*}-\rho_{i-1}\psi_{i-1}k_{i-1}^{*}\): in this case we must increase the value of \(\theta_{i}^{*}\) and make it equal to \(\theta_{i-1}^{*}-\rho_{i-1}\psi_{i-1}k_{i-1}^{*}\) or perhaps a little larger. Note that in the first case the algorithm might produce a slice which is affected by calendar spread arbitrage (see the case \(\Theta=1\) in Proposition 4.14 in the appendix). Keeping in mind these issues, we can now describe the algorithm:
* Choose an objective function to minimize. In this work I used the sum of the absolute values of the differences between the observed option mid prices and their theoretical counterparts which would obtain if \(w_{i}(k)\) was the Black and Scholes implied variance. Set \(i=1\), i.e. start from the closest maturity date which is identified by the subscript \(i=1\).
1. Set \(\zeta=0\) (the role of \(\zeta\) will become clear in a moment) and choose \(r\) values for \(\rho_{i}\in(-1,1)\) spaced apart at equal distances. These values will be denoted by \(\rho_{i,j}\), \(j=1,2,\dots r\). For each value of \(\rho_{i,j}\) compute the corresponding bounds \(L_{i,j}:=L_{i}\) and \(U_{i,j}:=U_{i}\). In my empirical investigation I set \(r=100\). At first sight this may seem a large value, but this choice is motivated by the fact that with smaller values of \(r\) I obtained \(L_{i,j}>U_{i,j}\) for every \(j\) in some option chains.
2. For each value of \(\rho_{i,j}\), search for the optimal value of \(\psi_{i}\) within the corresponding interval \((L_{i,j},U_{i,j})\). The optimal value of \(\psi_{i}\) corresponding to \(\rho_{i,j}\) will be denoted \(\psi_{i,j}\). In my empirical investigation I used the Brent method as implemented by the fminbound function from the scipy.optimize library for the minimum search. I set the maximum number of objective function evaluations to 1000 and set xtol to 1e-8.
3. Compare the minima of the objective function obtained for the considered \(\rho_{i,j}\)-values and pick out \(\rho_{i}^{*}:=\rho_{i,j}\) and \(\psi_{i}^{*}:=\psi_{i,j}\) which give rise to the smallest minimum.
4. Increment \(\zeta\) by one unit and choose \(r\) values for \(\rho_{i}\) in the interval \((\rho_{i}^{*}-1.2/r^{\zeta},\rho_{i}^{*}+1.2/r^{\zeta})\). Denote these values by \(\rho_{i,j}\), \(j=1,2,\dots r\) and go back to step 2) until \(2\times 1.2\times r^{-\zeta}\) is smaller than a small value \(\varepsilon\). In this work I used \(\varepsilon=10^{-5}\).
5. Define the parameters of the optimal slice at maturity date \(i\) by setting \(\rho_{i}:=\rho_{i}^{*}\) and \(\psi_{i}:=\psi_{i}^{*}\). If the current \(i\) corresponds to the last maturity date stop. Otherwise, increment the index \(i\) and go back to step 1.
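The following Python sketch illustrates the bookkeeping of this sequential search. It is a minimal illustration only: the callables `bounds` (returning \((L_{i},U_{i})\) for a candidate \(\rho_{i}\)) and `objective` stand in for the computations described above and are not part of any library.

```python
# Minimal sketch of the robust sequential slice calibration (steps 1-5 above).
import numpy as np
from scipy.optimize import fminbound

def calibrate_slice(bounds, objective, r=100, eps=1e-5):
    zeta = 0
    rho_grid = np.linspace(-1.0, 1.0, r + 2)[1:-1]  # r interior points of (-1, 1)
    best_val, best_rho, best_psi = np.inf, None, None
    # assumes at least one rho in the grid has L <= U (cf. the choice r = 100)
    while True:
        for rho in rho_grid:
            L, U = bounds(rho)
            if L > U:                     # no arbitrage-free psi for this rho:
                continue                  # treat the objective as +infinity
            psi = fminbound(lambda p: objective(rho, p), L, U,
                            xtol=1e-8, maxfun=1000)
            val = objective(rho, psi)
            if val < best_val:
                best_val, best_rho, best_psi = val, rho, psi
        zeta += 1
        half_width = 1.2 / r ** zeta
        if 2.0 * half_width < eps:        # stopping rule of step 4
            return best_rho, best_psi
        rho_grid = np.clip(np.linspace(best_rho - half_width,
                                       best_rho + half_width, r),
                           -1.0 + 1e-12, 1.0 - 1e-12)
```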
Once the algorithm is done we know the parameters \((\theta_{i},\varphi_{i},\rho_{i})\) which identify an optimal SSVI slice for each maturity date for which we have observed option prices. In order to extend these slices to a continuous variance surface we can employ the interpolation scheme described in Corbetta _et al._ (2019). Note that the calibrated surface is free of arbitrage of any kind.
### The algorithm of Mingone (2022)
The algorithm proposed in Mingone (2022) is an attempt to overcome the sequential feature of the robust algorithm of Corbetta _et al._ (2019) which may cause a poor fit at large maturity dates. To overcome this problem, Mingone introduces a reparametrization of the eSSVI surface which leads to a rectangular parameter space for which the conditions B1, B2, N and the first inequality in condition S are all satisfied. Actually, Mingone proposes also a second rectangular reparametrization where conditions B1, N and the first inequality in condition S are still satisfied, but where only a weaker condition than B2 holds. The second reparametrization accounts for the necessary and sufficient no butterfly arbitrage conditions which are given in Martini and Mingone (2022). However, it is more difficult to implement than the first one and for this reason I tested
only the calibration algorithm which is based on the first one. In order to describe the latter, it is convenient to recast conditions B1, B2, N and the first inequality in condition S in terms of \(\theta\), \(\psi:=\theta\varphi\) and \(\rho\). Doing so we get the conditions
* (G1) \(\psi<\frac{4}{1+|\rho|}\),
* (G2) \(\psi^{2}\leq\frac{4\theta}{1+|\rho|}\),
* (G3) \(\theta_{2}\geq\theta_{1}\) and \(\psi_{2}\geq\psi_{1}\max\left\{\frac{1+\rho_{1} }{1+\rho_{2}},\frac{1-\rho_{1}}{1-\rho_{2}}\right\}\),
* (G4) \(\psi_{2}\leq\psi_{1}\theta_{2}/\theta_{1}\).
Now, in order to get a rectangular parameter space, Mingone introduces the auxiliary quantities
\[p_{i} :=\max\left\{\frac{1+\rho_{i-1}}{1+\rho_{i}},\frac{1-\rho_{i-1}}{1-\rho_{i}}\right\} \text{for }i>1,\] \[f_{i} :=\min\left\{\frac{4}{1+|\rho_{i}|},\sqrt{\frac{4\theta_{i}}{1+|\rho_{i}|}}\right\} \text{for }i\geq 1,\] \[A_{\psi_{1}} :=0 \text{for }i=1,\] \[A_{\psi_{i}} :=\psi_{i-1}p_{i} \text{for }i>1,\] \[C_{\psi_{1}} :=\min\left\{f_{1},\frac{f_{2}}{p_{2}},\frac{f_{3}}{p_{2}p_{3}}, \ldots,\frac{f_{n}}{\prod_{j=2}^{n}p_{j}}\right\} \text{for }i=1,\] \[C_{\psi_{i}} :=\min\left\{\frac{\psi_{i-1}}{\theta_{i-1}}\theta_{i},f_{i}, \frac{f_{i+1}}{p_{i+1}},\frac{f_{i+2}}{p_{i+1}p_{i+2}},\ldots,\frac{f_{n}}{ \prod_{j=i+1}^{n}p_{j}}\right\} \text{for }i>1.\]
and substitutes the old parameters \(\theta_{2}\), \(\theta_{3}\),..., \(\theta_{n}\) and \(\psi_{1}\), \(\psi_{2}\),..., \(\psi_{n}\) with the new parameters
\[a_{i}:=\theta_{i}-\theta_{i-1}p_{i},\quad i>1,\]
and
\[c_{i}:=\frac{\psi_{i}-A_{\psi_{i}}}{C_{\psi_{i}}-A_{\psi_{i}}},\quad i\geq 1,\]
respectively. Then she shows that for every choice of
\[(\rho_{1},\ldots,\rho_{n},\theta_{1},a_{2},\ldots,a_{n},c_{1},\ldots,c_{n}) \in(-1,1)^{n}\times(0,\infty)^{n}\times(0,1)^{n}\]
the conditions G1 - G4, with strict inequalities in place of weak ones, are all satisfied.
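A small Python sketch of this change of variables may be helpful; it maps a point of the rectangular domain back to the natural quantities \((\theta_{i},\psi_{i})\). The function name and array layout are mine, and the definition of \(p_{i}\) follows the display above; this is an illustration, not Mingone's code.

```python
# Sketch: map the rectangular parameters (rho_1..rho_n, theta_1, a_2..a_n,
# c_1..c_n) to the eSSVI quantities (theta_i, psi_i).
import numpy as np

def rectangular_to_essvi(rho, theta1, a, c):
    rho = np.asarray(rho, dtype=float)
    n = len(rho)
    p = np.ones(n)                        # p[0] unused: p_i is defined for i > 1
    for i in range(1, n):
        p[i] = max((1 + rho[i - 1]) / (1 + rho[i]),
                   (1 - rho[i - 1]) / (1 - rho[i]))
    theta = np.empty(n)
    theta[0] = theta1
    for i in range(1, n):
        theta[i] = theta[i - 1] * p[i] + a[i - 1]   # inverts the a_i definition
    f = np.minimum(4 / (1 + np.abs(rho)),
                   np.sqrt(4 * theta / (1 + np.abs(rho))))
    psi = np.empty(n)
    for i in range(n):
        A = 0.0 if i == 0 else psi[i - 1] * p[i]
        caps = [f[j] / np.prod(p[i + 1:j + 1]) for j in range(i, n)]
        if i > 0:
            caps.append(psi[i - 1] * theta[i] / theta[i - 1])
        C = min(caps)                     # the construction keeps C above A
        psi[i] = A + c[i] * (C - A)       # inverts the c_i definition
    return theta, psi
```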
As emphasized by Mingone, her algorithm aims to overcome the problems caused by the sequential feature of the robust algorithm of Corbetta _et al._ (2019). For this reason she considers a global objective function of the form
\[\sum_{i=1}^{n}\sum_{j}[C(K_{j},i)-\widehat{C}(K_{j},i)]^{2}w(k_{j},i),\]
where \(C(K_{j},i)\) is the observed option mid price at maturity \(i\) and strike \(K_{j}\), \(j=1,2,\ldots,J_{i}\), \(\widehat{C}(K_{j},i)\) is the theoretical option price which is obtained by applying the Black and Scholes formula to the suitable variance from the SSVI slice for maturity \(i\), and where the \(w(k_{j},i)\)'s are positive weights. For the latter Mingone suggests to use the inverses of the squared market Black and Scholes vegas in order to achieve calibration in implied volatilities at the first order. In this work I followed this suggestion. For the sake of fair comparison with the robust algorithm, I also tried to use constant weights. For minimizing the objective function I used the Trust Region Reflective algorithm as implemented by the least_squares function from the scipy.optimize library. I set the maximum number of objective function evaluations to 500 and used the default value 1e-8 for the termination conditions ftol, xtol and gtol. As initial parameter vector for the optimization algorithm I used the parameter vector corresponding to the optimal SSVI slices from the robust algorithm. This choice is also suggested by Mingone.
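As an illustration, the least_squares call might look as follows; `model` (mapping the parameter vector to the theoretical option prices) and the data arrays are placeholders for code not shown here, and `x0_robust` denotes the initial vector built from the robust optimum as just described.

```python
# Sketch of the global calibration call. Dividing the residuals by the market
# vegas makes least_squares minimize sum((C - C_hat)^2 / vega^2), i.e. the
# inverse squared vega weighted objective; for constant weights drop the division.
from scipy.optimize import least_squares

def residuals(params, model, prices_market, vegas):
    return (prices_market - model(params)) / vegas

result = least_squares(residuals, x0_robust,
                       args=(model, prices_market, vegas),
                       method="trf",      # Trust Region Reflective
                       max_nfev=500,
                       ftol=1e-8, xtol=1e-8, gtol=1e-8)
```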
## 3 Results
This section describes the results of my empirical investigation. The data I used refer to SPXW options and are provided by FactSet. In order to cover a possibly wide range of market situations, I considered the option chains from the last trading date of each month starting from June 2012 until July 2022 (122 option chains). As usual in calibration exercises, I excluded some options from the analysis. To identify the excluded options, I first computed an implied forward price for the underlying. In order to do so, I considered all put-call pairs from which I computed an implied dividend yield according to the formula
\[q_{imp}:=\frac{1}{t}\left[\ln S-\ln(C-P+Ke^{-rt})\right]\]
where \(t\) is time to maturity, \(C\) and \(P\) are the call and put mid-prices, \(K\) is the strike price, \(S\) is the underlying price and \(r\) is the risk-free rate. To determine the latter I interpolated the US treasury yield curve provided by FactSet. Once I got the implied dividend yields, I averaged them for each maturity date and used the average values in order to compute forward prices for the underlying. Then I discarded all forward ITM options and all options with bid-ask percentage spread larger than 5%. From the remaining options I computed forward ATM implied volatilities for each maturity date \(t\). I did this only for maturity dates where I was left with at least one option with forward-to-strike ratio larger than 1 and at least one with forward-to-strike ratio smaller than 1. In this case, I considered the Black and Scholes implied volatility of the option whose forward-to-strike ratio is closest to 1 as the forward ATM implied volatility. Otherwise, I excluded all options with the given maturity date from the subsequent analysis.
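A compact sketch of this preprocessing step (illustrative names; `C`, `P` and `K` are aligned arrays of call mid prices, put mid prices and strikes for a single maturity):

```python
# Implied dividend yield per put-call pair, averaged, then the forward price.
import numpy as np

def implied_forward(S, C, P, K, r, t):
    q_imp = (np.log(S) - np.log(C - P + K * np.exp(-r * t))) / t
    q_bar = q_imp.mean()                  # average implied dividend yield
    return S * np.exp((r - q_bar) * t)    # implied forward for this maturity
```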
Having applied the above filtering operations, I was left with zero options for the 14 chains of 2012-09-28, 2018-01-31, 2013-05-31, 2017-05-31, 2013-06-28, 2012-10-31, 2018-03-29, 2012-11-30, 2012-12-31, 2018-05-31, 2013-01-31, …-11-29, 2017-04-28 and 2012-08-31. Thus, my analysis refers to the remaining \(122-14=108\) option chains. The two graphs in Figure 1 show the number of available SPXW options and the number of available maturity dates in each option chain before and after the filtering operations.
For evaluating the fit of the calibrated eSSVI surfaces I computed four measures:
1. the ratio \[F_{1}:=\frac{N_{w}}{N_{tot}},\] where \(N_{w}\) is the number of theoretical option prices \(\widehat{C}(K_{j},i)\) which lie between the observed bid and ask prices and \(N_{tot}\) is the total number of options used for calibration;
2. the average of the absolute pricing errors \[F_{2}:=\frac{1}{N_{tot}}\sum_{i=1}^{n}\sum_{j}|C(K_{j},i)-\widehat{C}(K_{j},i)|;\]
3. the average of the squared pricing errors \[F_{3}:=\frac{1}{N_{tot}}\sum_{i=1}^{n}\sum_{j}\left[C(K_{j},i)-\widehat{C}(K_{j},i)\right]^{2};\]
4. the weighted average of the squared pricing errors \[F_{4}:=\frac{1}{\sum_{i=1}^{n}\sum_{j}w(k_{j},i)}\sum_{i=1}^{n}\sum_{j}\left[C(K_{j},i)-\widehat{C}(K_{j},i)\right]^{2}w(k_{j},i),\] with the weights \(w(k_{j},i)\) given by the inverses of the squared market Black and Scholes vegas.
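In code, the four measures could be computed along these lines (a sketch; `C`, `C_hat`, `bid`, `ask` and `w` are flat arrays over all options used for calibration, with `w` the inverse squared vegas):

```python
import numpy as np

def fit_measures(C, C_hat, bid, ask, w):
    inside = (C_hat >= bid) & (C_hat <= ask)
    F1 = inside.mean()                               # share within bid-ask
    F2 = np.abs(C - C_hat).mean()                    # mean absolute error
    F3 = ((C - C_hat) ** 2).mean()                   # mean squared error
    F4 = np.sum((C - C_hat) ** 2 * w) / np.sum(w)    # vega-weighted MSE
    return F1, F2, F3, F4
```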
Of course, none of these measures on its own is perfectly fair for comparing the algorithms. In fact, one might expect \(F_{1}\) to be in favor of the global algorithm with the quadratic inverse vega weighted objective function, since bid-ask spreads and vegas are usually positively correlated (see panel (a) in Figure 2). However, as can be seen from panels (a) and (b) in Figure 3, the global algorithm with constant weights seems to achieve smaller \(F_{1}\) values on average. Also, in the comparison between the two global algorithms and the robust algorithm one might expect \(F_{1}\) to be more helpful for the latter since bid-ask spreads and time to maturity are usually positively correlated (see panel (b) in Figure 2) and the robust algorithm tends to have problems to fit the larger maturity dates in chains with many different maturity dates. As can be seen from panel (a) in Figure 3, evidence for the latter conjecture is ambiguous.
Consider now the \(F_{2}\) measure of fit. Also this measure can be expected to favor the robust algorithm since this algorithm aims to minimize sequentially, one after another, the inner sums in the definition of \(F_{2}\). Moreover, in the comparison between the two implementations of the global algorithm \(F_{2}\) should
be in favor of the implementation with constant weights. The results shown in panel (c) and panel (d) of Figure 3 are slightly supportive for these conjectures.
Finally, as for the \(F_{3}\) and \(F_{4}\) measures, the results are as expected. The global algorithm with constant weights performs best w.r.t. the \(F_{3}\) measure, and the global algorithm with quadratic inverse vega weights is the best one w.r.t. \(F_{4}\).
To complete the picture I will say a few words about computational aspects. As already mentioned, in both implementations of the global algorithm I used the optimal parameters from the robust algorithm as initial values. With this choice of initial values the least_squares function was quite efficient in finding minima for the objective function. In fact, for the squared inverse vega weighted objective function the minimization process terminated before 500 objective function evaluations for 93 of the 108 option chains because either the ftol, gtol or xtol termination condition was satisfied. For the objective function with constant weights the least_squares function was able to find a minimum before 500 function evaluations in 85 of the 108 option chains. However, my implementation of the robust algorithm could be too slow for some purposes. In fact, on average over all 2023 calibrated eSSVI slices, it took about 5523 objective function evaluations to find the optimal parameters of a single slice. Motivated by this fact, I also tested the less data-driven initial values suggested in Mingone (2022) which are much faster to compute (these initial values are given by \(\rho_{i}=0\), \(a_{i}=\max\{\theta_{i}^{*}-\theta_{i-1}^{*},0\}\) and \(c_{i}=0.5\) for \(i=1,2,\ldots,n\), where the \(\theta_{i}^{*}\)'s are the closest to ATM forward implied variances as defined in Section 2.1). As can be seen from Figure 4, the results of this test are not very encouraging: the less data-driven initial values often lead to local minima where the \(F_{1}\) measure of fit is much worse than what can be achieved through the robust initial values.
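For reference, these less data-driven initial values are trivial to construct (a sketch; `theta_star` holds the closest-to-ATM forward implied variances, and the ordering of the output vector is my own convention):

```python
import numpy as np

def initial_values(theta_star):
    n = len(theta_star)
    rho0 = np.zeros(n)                            # rho_i = 0
    a0 = np.maximum(np.diff(theta_star), 0.0)     # a_i = max(theta*_i - theta*_{i-1}, 0)
    c0 = np.full(n, 0.5)                          # c_i = 0.5
    return np.concatenate([rho0, [theta_star[0]], a0, c0])
```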
## 4 Appendix: Revised proof of Proposition 3.1 in Hendriks and Martini (2019)
Consider two eSSVI slices which we shall denote by
\[w_{i}(k)=\frac{\theta_{i}}{2}\left\{1+\rho_{i}\varphi_{i}k+\sqrt{\varphi_{i}^{ 2}k^{2}+2\varphi_{i}\rho_{i}k+1}\right\},\quad i=1,2.\]
As in the main text, assume that the subscript \(i=1\) refers to the closer maturity date. Then there is absence of calendar spread arbitrage if and only if \(w_{1}(k)\leq w_{2}(k)\) for all \(k\in\mathbb{R}\).
Note that
\[w_{i}^{\prime}(k) =\frac{1}{2}\theta_{i}\varphi_{i}\left(\frac{k\varphi_{i}+\rho_ {i}}{\sqrt{\varphi_{i}^{2}k^{2}+2\varphi_{i}\rho_{i}k+1}}+\rho_{i}\right)\] \[w_{i}^{\prime\prime}(k) =\frac{\theta_{i}\left(1-\rho_{i}^{2}\right)\varphi_{i}^{2}}{2( \varphi_{i}^{2}k^{2}+2\varphi_{i}\rho_{i}k+1)^{3/2}}\]
so that \(w_{i}^{\prime\prime}(k)>0\) for all \(k\in\mathbb{R}\). Since \(w_{i}^{\prime}(k)=0\) if and only if \(k=k_{i}^{*}:=-\frac{2\rho_{i}}{\varphi_{i}}\), we conclude that
\[\inf_{k}w_{i}(k)=w_{i}(k_{i}^{*})=\theta_{i}(1-\rho_{i}^{2}).\]
By combining this result with the fact that \(w_{i}(0)=\theta_{i}\), we see that absence of calendar spread arbitrage implies
\[\Theta:=\frac{\theta_{2}}{\theta_{1}}\geq\max\left\{1,\frac{1-\rho_{1}^{2}}{1 -\rho_{2}^{2}}\right\}. \tag{2}\]
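The closed-form minimum derived above is easy to confirm numerically; as a quick sanity check (not part of the proof, parameter values arbitrary):

```python
import numpy as np

theta, phi, rho = 0.04, 1.5, -0.6
w = lambda k: 0.5 * theta * (1 + rho * phi * k
                             + np.sqrt(phi**2 * k**2 + 2 * phi * rho * k + 1))
k = np.linspace(-10, 10, 200001)
assert abs(w(k).min() - theta * (1 - rho**2)) < 1e-8      # inf w = theta*(1 - rho^2)
assert abs(k[np.argmin(w(k))] - (-2 * rho / phi)) < 1e-4  # attained at k* = -2*rho/phi
```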
Another necessary condition may be obtained by considering the asymptotes of the two slices. Since
\[2w_{i}(k)\sim\begin{cases}\theta_{i}\varphi_{i}(1+\rho_{i})k&\text{ if }k\to \infty,\\ \theta_{i}\varphi_{i}(1-\rho_{i})k&\text{ if }k\to-\infty,\end{cases}\]
we conclude that absence of calendar spread arbitrage also implies
\[\Theta\Phi:=\frac{\theta_{2}\varphi_{2}}{\theta_{1}\varphi_{1}}\geq\max\left\{ \frac{1+\rho_{1}}{1+\rho_{2}},\frac{1-\rho_{1}}{1-\rho_{2}}\right\} \tag{3}\]
The latter condition is satisfied if and only if
\[\Theta\Phi\geq 1\quad\text{ and }\quad(\Theta\Phi\rho_{2}-\rho_{1})^{2}\leq( \Theta\Phi-1)^{2}. \tag{4}\]
Of course, in the argument leading to the necessary condition (2) we are tacitly assuming that \(\varphi_{1}\), \(\varphi_{2}\) and \(\theta_{1}\) are all strictly positive and in this case it follows from (2) that \(\theta_{2}\geq\theta_{1}\), i.e. that \(\Theta\geq 1\). Note that if \(\varphi_{1}=0\) and/or \(\theta_{1}=0\), then \(w_{1}(k)=\theta_{1}\) for all \(k\in\mathbb{R}\), and in this case we have absence of calendar spread arbitrage if and only if \(\theta_{2}\geq\theta_{1}\) or \(\theta_{2}(1-\rho_{2}^{2})\geq\theta_{1}\) according to whether \(\varphi_{2}\) is also zero or not. On the other hand, if \(\varphi_{2}=0\), then we have \(w_{2}(k)=\theta_{2}\) for all \(k\in\mathbb{R}\), and in this case it follows from the asymptotic behavior of \(w_{1}(k)\) that we have absence of calendar spread arbitrage if and only if \(\varphi_{1}=0\) and \(\theta_{1}\leq\theta_{2}\). **In what follows we rule out these trivial cases by assuming that \(\Phi:=\varphi_{2}/\varphi_{1}\) and \(\Theta:=\theta_{2}/\theta_{1}\) are well defined (i.e. that their denominators are strictly positive) and that \(\Phi>0\) and \(\Theta\geq 1\).**
**Lemma 4.1**.: _If \(\theta_{1}\), \(\varphi_{1}\) and \(\varphi_{2}\) are all strictly positive, then there is absence of calendar spread arbitrage only if conditions (2) and (3) are both satisfied._
Now the question arises whether the conditions (2) and (3) are sufficient as well. To answer this question we look for conditions under which the graphs of \(w_{1}(k)\) and \(w_{2}(k)\) have at least one point in common. I will proceed as in Hendriks and Martini (2019), but I will try to make some steps more explicit. So let \(x:=\varphi_{1}k\),
\[\alpha:=\alpha(x)=\Theta-1+(\Theta\Phi\rho_{2}-\rho_{1})x,\quad z_{1}:=z_{1}(x )=\sqrt{x^{2}+2\rho_{1}x+1},\]
\[z_{2}:=z_{2}(x)=\sqrt{\Phi^{2}x^{2}+2\rho_{2}\Phi x+1}\]
and note that the two eSSVI slices do have points in common if and only if the equation
\[\alpha(x)+\Theta z_{2}(x)=z_{1}(x)\]
has real solutions. Squaring twice yields the quartic polynomial
\[P(x):=4\alpha^{2}\Theta^{2}z_{2}^{2}-(z_{1}^{2}-\alpha^{2}-\Theta^{2}z_{2}^{2})^ {2}\]
where we have omitted the independent variable \(x\) on the RHS. Note that every root of \(P(x)\) must satisfy one (and only one) of the following conditions:
\[\begin{split}\text{a) }2\alpha\Theta z_{2}&=-(z_{1}^{2}- \alpha^{2}-\Theta^{2}z_{2}^{2})\quad\text{ and }\quad\alpha-\Theta z_{2}=\pm z_{1},\\ \text{b) }2\alpha\Theta z_{2}&=z_{1}^{2}-\alpha^{2}- \Theta^{2}z_{2}^{2}\quad\text{ and }\quad\alpha+\Theta z_{2}=-z_{1},\\ \text{c) }2\alpha\Theta z_{2}&=z_{1}^{2}-\alpha^{2}- \Theta^{2}z_{2}^{2}\quad\text{ and }\quad\alpha+\Theta z_{2}=z_{1}.\end{split} \tag{5}\]
Of course, a root of \(P(x)\) is a point where the two slices intersect if and only if it satisfies condition c). To explore the existence of such roots we first observe that \(P(x)=x^{2}Q(x)\), where
\[\begin{split} Q(x):=&\left[\left(\Theta\Phi\rho_{2} -\rho_{1}\right)^{2}-\left(\Theta\Phi-1\right)^{2}\right]\left[\left(\Theta \Phi+1\right)^{2}-\left(\Theta\Phi\rho_{2}-\rho_{1}\right)^{2}\right]x^{2}\\ &+4\Theta\left[\rho_{1}\left(-\Theta^{2}\Phi^{2}+(\Theta-2) \Theta\rho_{2}^{2}\Phi^{2}+2\Theta\Phi^{2}-1\right)+\right.\\ &\qquad\qquad\left.+\rho_{2}\Phi\left(\Theta^{2}\rho_{2}^{2}\Phi^ {2}-\Theta^{2}\Phi^{2}+2\Theta-1\right)+(1-2\Theta)\rho_{2}\rho_{1}^{2}\Phi+ \rho_{1}^{3}\right]x\\ &+4(\Theta-1)\Theta\left(\Theta\Phi^{2}\rho_{2}^{2}-\Theta\Phi^ {2}-\rho_{1}^{2}+1\right).\end{split} \tag{6}\]
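The factorization \(P(x)=x^{2}Q(x)\) and the fact that \(Q\) is (generically) quadratic can be verified symbolically, e.g. with sympy (assuming the package is available):

```python
import sympy as sp

x, T, F, r1, r2 = sp.symbols('x Theta Phi rho1 rho2')
alpha = T - 1 + (T * F * r2 - r1) * x
z1sq = x**2 + 2 * r1 * x + 1
z2sq = F**2 * x**2 + 2 * r2 * F * x + 1
P = 4 * alpha**2 * T**2 * z2sq - (z1sq - alpha**2 - T**2 * z2sq)**2
Q = sp.cancel(sp.expand(P) / x**2)       # exact polynomial division by x^2
assert sp.Poly(Q, x).degree() == 2       # Q is a quadratic in x
assert sp.expand(P - x**2 * Q) == 0      # confirms P(x) = x^2 * Q(x)
```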
Note that \(x=0\) (which is a root of \(P(x)\)) is an intersection point if and only if \(\theta_{1}=\theta_{2}\), i.e. if and only if \(\Theta=1\) (in fact, \(w_{1}(0)=\theta_{1}=\theta_{2}=w_{2}(0)\) if and only if \(\Theta=1\)). Assuming that this is the case, we will now find conditions under which the two slices do cross over each other. To this aim we consider their derivatives. With \(\theta_{1}=\theta_{2}=\theta\) (i.e. with \(\Theta=1\)) we obtain
\[w_{i}^{\prime}(0)=\theta\varphi_{i}\rho_{i}\quad\text{ and }\quad w_{i}^{\prime \prime}(0)=\frac{1}{2}\theta\varphi_{i}^{2}(1-\rho_{i}^{2}).\]
To rule out the possibility that the two slices cross over in \(x=0\), we must therefore impose
\[w_{1}^{\prime}(0)=w_{2}^{\prime}(0)\quad\text{ and }\quad w_{1}^{\prime\prime}( 0)\leq w_{2}^{\prime\prime}(0). \tag{7}\]
If either one of these conditions fails, the two slices cross over in \(x=0\). Since we are assuming that the \(\theta_{i}\)'s and \(\varphi_{i}\)'s are all strictly positive, the conditions (7) can be jointly satisfied only if
* either \(\rho_{1}=\rho_{2}=0\) and \(\varphi_{2}\geq\varphi_{1}\), in which case it is easy to verify that \(w_{2}(k)\geq w_{1}(k)\) for all \(k\in\mathbb{R}\);
* or \(\Phi=\rho_{1}/\rho_{2}\) and \(\rho_{1}^{2}\geq\rho_{2}^{2}\), in which case the constant term and the coefficient of \(x\) in the polynomial \(Q(x)\) do both vanish, and hence the two slices have no intersection points other than \(x=0\).
These considerations prove the following lemma:
**Lemma 4.2**.: _Assume that \(\Phi\) and \(\Theta\) are well defined and that \(\Phi>0\). If \(\Theta=1\), there is no calendar spread arbitrage if and only if either (i) \(\rho_{1}=\rho_{2}=0\) and \(\Phi\geq 1\) or (ii) \(\Phi=\rho_{1}/\rho_{2}\) and \(\rho_{1}^{2}\geq\rho_{2}^{2}\)._
Note that conditions (2) and (3) do not imply condition (i) or (ii) of the previous lemma (take for example \(\Phi=1.2\), \(\rho_{1}=0.9\) and \(\rho_{2}=0.81\)) and the former are therefore not strong enough to rule out calendar spread arbitrage even if we restrict to the case where \(\Theta=1\).
Consider now what happens when \(\Theta>1\). In this case \(w_{1}(0)=\theta_{1}<\theta_{2}=w_{2}(0)\) and \(x=0\) is therefore not an intersection point. To investigate the existence of intersection points we analyze the polynomial \(Q(x)\). We begin with the following lemma:
**Lemma 4.3**.: _Assume that \(\Phi>0\) and \(\Theta>1\). Then \(Q(x)\) is of second degree if and only if_
\[(\Theta\Phi\rho_{2}-\rho_{1})^{2}\neq(\Theta\Phi-1)^{2}. \tag{8}\]
_and in this case its discriminant is given by_
\[D:=16\Theta(\rho_{1}^{2}-\Theta^{2}\Phi^{2}\rho_{2}^{2}+\Theta^{2}\Phi^{2}-1)^ {2}\left[(\Theta\Phi\rho_{2}-\rho_{1})^{2}-(\Theta-1)(\Theta\Phi^{2}-1)\right] \tag{9}\]
Proof.: The coefficient of \(x^{2}\) in \(Q(x)\) vanishes if and only if either
\[(\Theta\Phi\rho_{2}-\rho_{1})^{2}=(\Theta\Phi-1)^{2}\quad\text{ or }\quad( \Theta\Phi\rho_{2}-\rho_{1})^{2}=(\Theta\Phi+1)^{2}.\]
The second condition implies \(\Theta\Phi<0\), and hence we conclude that \(Q(x)\) is of second degree if and only if condition (8) holds. In this case the discriminant of \(Q(x)\) can be written as in expression (9).
From the previous lemma we know that \(Q(x)\) must have real roots if condition (8) holds jointly with
\[\rho_{1}^{2}-\Theta^{2}\Phi^{2}\rho_{2}^{2}+\Theta^{2}\Phi^{2}-1=0\quad\text{ or }\quad(\Theta-1)(\Theta\Phi^{2}-1)\leq(\Theta\Phi\rho_{2}-\rho_{1})^{2} \tag{10}\]
Since we are only concerned with the case where the necessary conditions (2) and (4) hold, we must consider the set of \((\rho_{1},\rho_{2})\)-pairs where the equality in (10) holds and the set \((\rho_{1},\rho_{2})\)-pairs where
\[(\Theta-1)(\Theta\Phi^{2}-1)\leq(\Theta\Phi\rho_{2}-\rho_{1})^{2}\leq(\Theta \Phi-1)^{2}.\]
We denote these two sets by \(H_{\Theta,\Phi}\) and \(R_{\Theta,\Phi}\), respectively:
\[H_{\Theta,\Phi} :=\{(\rho_{1},\rho_{2})\in(-1,1)^{2}:\rho_{1}^{2}-\Theta^{2}\Phi^{ 2}\rho_{2}^{2}+\Theta^{2}\Phi^{2}-1=0\},\] \[R_{\Theta,\Phi} :=\{(\rho_{1},\rho_{2})\in(-1,1)^{2}:(\Theta-1)(\Theta\Phi^{2}-1) \leq(\Theta\Phi\rho_{2}-\rho_{1})^{2}\leq(\Theta\Phi-1)^{2}\}.\]
It will be useful to visualize these two sets. Since we have already dealt with the case \(\Theta=1\), and since we are assuming the necessary condition (4), we need to consider only the case where \(\Theta>1\) and \(\Theta\Phi\geq 1\). Figure 5 shows all possible shapes of \(H_{\Theta,\Phi}\) and \(R_{\Theta,\Phi}\). The set \(H_{\Theta,\Phi}\) is the graph of a hyperbola. It is always symmetric with respect to both axes of the \((\rho_{1},\rho_{2})\)-plane and its prolongation always goes through the four vertices of the square \([-1,1]^{2}\subset\mathbb{R}^{2}\). Moreover,
* if \(\Theta\Phi>1\), \(H_{\Theta,\Phi}\) does not intersect the \(\rho_{1}\) axis and intersects the \(\rho_{2}\) axis in \(\rho_{2}=\pm\sqrt{\frac{\Theta^{2}\Phi^{2}-1}{\Theta^{2}\Phi^{2}}}\);
* if \(\Theta\Phi=1\), \(H_{\Theta,\Phi}\) reduces to the straight lines \(\rho_{1}=\pm\rho_{2}\).
Also for the set \(R_{\Theta,\Phi}\) there are essentially only two possible shapes:
* If \(\Theta\Phi^{2}-1\leq 0\) (since we are assuming \(\Theta>1\), this implies \(\Phi<1\)), the set \(R_{\Theta,\Phi}\) is given by the stripe \[S:=\{(\rho_{1},\rho_{2})\in(-1,1)^{2}:\Theta\Phi\rho_{2}-\Theta\Phi+1\leq\rho_ {1}\leq\Theta\Phi\rho_{2}+\Theta\Phi-1\}.\] The stripe reduces to the line \(\rho_{1}=\rho_{2}\) if \(\Theta\Phi=1\).
* If \(\Theta\Phi^{2}-1>0\) (since we are assuming \(\Theta>1\), this implies \(\Theta\Phi>1\)), the set \(R_{\Theta,\Phi}\) is the union of the two parallel and disjoint stripes \[S_{1}:=\{(\rho_{1},\rho_{2})\in(-1,1)^{2}:\Theta\Phi\rho_{2}+\sqrt{(\Theta-1)( \Theta\Phi^{2}-1)}\leq\rho_{1}\leq\Theta\Phi\rho_{2}+\Theta\Phi-1\}\] and \[S_{2}:=\{(\rho_{1},\rho_{2})\in(-1,1)^{2}:\Theta\Phi\rho_{2}-\Theta\Phi+1\leq \rho_{1}\leq\Theta\Phi\rho_{2}-\sqrt{(\Theta-1)(\Theta\Phi^{2}-1)}\}.\] The two stripes reduce to the lines \(\rho_{1}=\Theta\rho_{2}\pm(\Theta-1)\) if \(\Phi=1\).
From the description of the sets \(H_{\Theta,\Phi}\) and \(R_{\Theta,\Phi}\) we see immediately that \(H_{\Theta,\Phi}\cap R_{\Theta,\Phi}=\{(\rho_{1},\rho_{2})\in(-1,1)^{2}:\rho_{1} =\rho_{2}\}\) when \(\Theta\Phi=1\). Next we prove that except for this special case the intersection is empty.
**Lemma 4.4**.: _If \(\Theta\Phi\neq 1\), then it follows that \(S\cap H_{\Theta,\Phi}=\emptyset\)._
Proof.: If \((\rho_{1},\rho_{2})\in H_{\Theta,\Phi}\) we must have
\[\Theta\Phi(1-\rho_{2}^{2})=1-\rho_{1}^{2}\quad\Rightarrow\quad\Theta\Phi= \sqrt{\frac{1-\rho_{1}^{2}}{1-\rho_{2}^{2}}}.\]
From the latter equality we obtain
\[(\Theta\Phi\rho_{2}-\rho_{1})^{2}-(\Theta\Phi-1)^{2}=2\sqrt{\frac{1-\rho_{1}^{ 2}}{1-\rho_{2}^{2}}}(1-\rho_{1}\rho_{2})-2(1-\rho_{1}^{2}).\]
The quantity on the RHS is positive because
\[\frac{1-\rho_{1}^{2}}{1-\rho_{2}^{2}}(1-\rho_{1}\rho_{2})^{2}>(1-\rho_{1}^{2}) ^{2}\quad\Leftrightarrow\quad(\rho_{1}-\rho_{2})^{2}>0\]
and because we are assuming that \(\Theta\Phi=\sqrt{\frac{1-\rho_{1}^{2}}{1-\rho_{2}^{2}}}\neq 1\). Hence we conclude that \((\rho_{1},\rho_{2})\notin S\), because otherwise we should have
\[(\Theta\Phi\rho_{2}-\rho_{1})^{2}-(\Theta\Phi-1)^{2}\leq 0.\]
From now on we consider roots of \(Q(x)\) as functions of \(\rho_{1}\) and \(\rho_{2}\). Any root of \(Q(x)\) will be denoted by \(x_{(\rho_{1},\rho_{2})}\). Of course the subset of the \((\rho_{1},\rho_{2})\)-plane where such a root exists depends on \(\Theta\) and \(\Phi\). We denote this set with \(Q_{\Theta,\Phi}\). Since we are only interested in \(\rho_{i}\) values within the interval \((-1,1)\), we consider \(Q_{\Theta,\Phi}\) as a subset of the open square \((-1,1)^{2}\). Note that \(x_{(\rho_{1},\rho_{2})}\) must be a continuous function of \((\rho_{1},\rho_{2})\in Q_{\Theta,\Phi}\) and that \((\rho_{1},\rho_{2})\in Q_{\Theta,\Phi}\) only if condition (10) holds.
As we have already seen, a root \(x_{(\rho_{1},\rho_{2})}\) must be of one of the three types in (5) and only roots of type c) are intersection points. In order to determine which type applies, we will need the following functions:
\[(\rho_{1},\rho_{2})\mapsto\alpha(x_{(\rho_{1},\rho_{2})}):=\Theta-1+(\Theta \Phi\rho_{2}-\rho_{1})x_{(\rho_{1},\rho_{2})}\]
and
\[(\rho_{1},\rho_{2})\mapsto Z(x_{(\rho_{1},\rho_{2})}) :=z_{1}^{2}(x_{(\rho_{1},\rho_{2})})-\Theta^{2}z_{2}^{2}(x_{(\rho _{1},\rho_{2})})-\alpha^{2}(x_{(\rho_{1},\rho_{2})})\] \[=x_{(\rho_{1},\rho_{2})}^{2}+2\rho_{1}x_{(\rho_{1},\rho_{2})}+1+\] \[\quad-\Theta^{2}\left(\Phi^{2}x_{(\rho_{1},\rho_{2})}^{2}+2\Phi \rho_{2}x_{(\rho_{1},\rho_{2})}+1\right)+\] \[\quad-\left[\Theta-1+(\Theta\Phi\rho_{2}-\rho_{1})x_{(\rho_{1}, \rho_{2})}\right]^{2}.\]
Note that these functions must all be continuous functions of \((\rho_{1},\rho_{2})\in Q_{\Theta,\Phi}\).
**Lemma 4.5**.: _Assume that \(\Phi>0\) and \(\Theta>1\). Then \(\alpha(x_{(\rho_{1},\rho_{2})})Z(x_{(\rho_{1},\rho_{2})})=0\) only if \((\rho_{1},\rho_{2})\in H_{\Theta,\Phi}\)._
Proof.: The proof is essentially the same as the proof of Lemma A.2 in Hendriks and Martini (2019). Since \(x_{(\rho_{1},\rho_{2})}\) must satisfy one of the two equalities
\[2\alpha(x_{(\rho_{1},\rho_{2})})\Theta z_{2}(x_{(\rho_{1},\rho_{2})})=\pm Z(x _{(\rho_{1},\rho_{2})})\]
(otherwise \(x_{(\rho_{1},\rho_{2})}\) is not a root of \(P(x)\) and hence neither a root of \(Q(x)\)) and since \(z_{2}(x)>0\) for all \(x\in\mathbb{R}\), we conclude that \(\alpha(x_{(\rho_{1},\rho_{2})})Z(x_{(\rho_{1},\rho_{2})})=0\) if and only if \(\alpha(x_{(\rho_{1},\rho_{2})})\) and \(Z(x_{(\rho_{1},\rho_{2})})\) do both vanish. In this case we must have
\[\begin{split}&\alpha(x_{(\rho_{1},\rho_{2})})=0\quad\Rightarrow \quad\Theta\left(1+\rho_{2}\Phi x_{(\rho_{1},\rho_{2})}\right)=1+\rho_{1}x_{( \rho_{1},\rho_{2})}\quad\Rightarrow\\ &\Theta^{2}\left(1+2\Phi\rho_{2}x_{(\rho_{1},\rho_{2})}+\Phi^{2} \rho_{2}^{2}x_{(\rho_{1},\rho_{2})}^{2}\right)=1+2\rho_{1}x_{(\rho_{1},\rho_{2 })}+\rho_{1}^{2}x_{(\rho_{1},\rho_{2})}^{2}\end{split} \tag{11}\]
and we must also have
\[\begin{split}&\Theta^{2}z_{2}^{2}(x_{(\rho_{1},\rho_{2})})=z_{1}^{2 }(x_{(\rho_{1},\rho_{2})})\quad\Rightarrow\\ \Rightarrow\quad\Theta^{2}\left(\Phi^{2}x_{(\rho_{1},\rho_{2})}^{2 }+2\Phi\rho_{2}x_{(\rho_{1},\rho_{2})}+1\right)=x_{(\rho_{1},\rho_{2})}^{2}+2 \rho_{1}x_{(\rho_{1},\rho_{2})}+1,\end{split} \tag{12}\]
Subtracting equation (11) from equation (12) yields
\[x_{(\rho_{1},\rho_{2})}^{2}\left(\Theta^{2}\Phi^{2}-\Theta^{2}\Phi^{2}\rho_{2}^{ 2}-1+\rho_{1}^{2}\right)=0\]
Since \(x_{(\rho_{1},\rho_{2})}\) must be different from zero (otherwise we would have \(\alpha(x_{(\rho_{1},\rho_{2})})=\Theta-1>0\) contrary to our assumption), this implies that \(\Theta^{2}\Phi^{2}-\Theta^{2}\Phi^{2}\rho_{2}^{2}-1+\rho_{1}^{2}=0\) which is equivalent to \((\rho_{1},\rho_{2})\in H_{\Theta,\Phi}\)
**Corollary 4.6**.: _Assume that \(\Phi>0\), \(\Theta>1\) and that \(A\) is a connected subset of \(Q_{\Theta,\Phi}\) which does not intersect the set_
\[H_{\Theta,\Phi}:=\{(\rho_{1},\rho_{2})\in(-1,1)^{2}:\Theta^{2}\Phi^{2}(1-\rho_{ 2}^{2})=(1-\rho_{1}^{2})\}.\]
_Then it follows that function \((\rho_{1},\rho_{2})\mapsto\alpha(x_{(\rho_{1},\rho_{2})})Z(x_{(\rho_{1},\rho_{ 2})})\) does not change sign on \(A\)._
The previous corollary will be useful to distinguish whether a given root \(x_{(\rho_{1},\rho_{2})}\) is of type a) rather than of type b) or c). Once we know that it is not of type a), we will apply the next lemma in order to find out whether it is of type b) or c).
**Lemma 4.7**.: _Let \(A\) be a connected subset of \(Q_{\Theta,\Phi}\) such that \(\alpha(x_{(\rho_{1},\rho_{2})})Z(x_{(\rho_{1},\rho_{2})})>0\) for all \((\rho_{1},\rho_{2})\in A\). Then it follows that the function \((\rho_{1},\rho_{2})\mapsto\alpha(x_{(\rho_{1},\rho_{2})})+\Theta z_{2}(x_{( \rho_{1},\rho_{2})})\) does not change sign on \(A\)._
Proof.: Under the assumptions of the lemma \(x_{(\rho_{1},\rho_{2})}\) must be a root of either type b) or c) in (5). Hence we must have either
\[\alpha(x_{(\rho_{1},\rho_{2})})+\Theta z_{2}(x_{(\rho_{1},\rho_{2})})=z_{1}(x_ {(\rho_{1},\rho_{2})})\]
or
\[\alpha(x_{(\rho_{1},\rho_{2})})+\Theta z_{2}(x_{(\rho_{1},\rho_{2})})=-z_{1}(x_ {(\rho_{1},\rho_{2})}).\]
The conclusion of the lemma follows now from the fact that \(\alpha(x_{(\rho_{1},\rho_{2})})+\Theta z_{2}(x_{(\rho_{1},\rho_{2})})\) is continuous and that \(z_{1}(x)>0\) for all \(x\in\mathbb{R}\).
Now we are finally ready to investigate about the existence of intersection points. We start from the special cases where \(\Phi=1\) or \(\Theta\Phi=1\).
**Lemma 4.8**.: _Assume that \(\Theta>1\) and that either \(\Phi=1\) or \(\Theta\Phi=1\). Then \((\rho_{1},\rho_{2})\in R_{\Theta,\Phi}\) implies \(w_{1}(k)<w_{2}(k)\) for all \(k\in\mathbb{R}\)._
Proof.: If \(\Theta>1\), we must have \(w_{1}(0)<w_{2}(0)\) and thus there exist intersection points only if the polynomial \(Q(x)\) has real roots different from \(x=0\). Now, consider first the case \(\Phi=1\). Since we are assuming that \((\rho_{1},\rho_{2})\in R_{\Theta,\Phi}\), it follows that \(\rho_{1}=\Theta\rho_{2}\pm(\Theta-1)\) (see the description of the set \(R_{\Theta,\Phi}\) for the special case where \(\Phi=1\)). However, it can be verified that in this case we must have
\[Q(x)=-4\Theta^{2}(\Theta-1)^{2}\left(\rho_{2}\pm 1\right)^{2}<0\]
which has no roots at all.
Next, consider the case \(\Theta\Phi=1\). In this case we must have \(\rho_{1}=\rho_{2}\) (see the description of the set \(R_{\Theta,\Phi}\) for the special case where \(\Theta\Phi=1\)). Substituting \(\rho_{1}=\rho_{2}=\rho\) and \(\Phi=1/\Theta\) in the coefficients of \(Q(x)\) shows that
\[Q(x)=4(\Theta-1)^{2}\left(1-\rho^{2}\right)>0\]
which has no roots at all.
Next, we deal with the case where \(\Phi\) is strictly smaller than \(1\) and different from \(1/\Theta\) (i.e. \(\Theta\Phi\neq 1\)). The inequality \(\Theta\Phi\geq 1\), which is necessary by condition (4), allows then only for values of \(\Phi\) in the range \(1/\Theta<\Phi<1\). Note that for \(\Phi\leq 1\) the necessary condition (2) is already implied by condition (4) and therefore we do not need to assume condition (2) explicitly.
**Lemma 4.9**.: _Assume that \(\Theta>1\), \(\Theta\Phi>1\) and \(\Theta\Phi^{2}\leq 1\) (this implies \(\Phi<1\)). Then \((\rho_{1},\rho_{2})\in R_{\Theta,\Phi}\) implies \(w_{1}(k)<w_{2}(k)\) for all \(k\in\mathbb{R}\)._
Proof.: Once again, if \(\Theta>1\) we must have \(w_{1}(0)<w_{2}(0)\) and there exist points \(k\in\mathbb{R}\) where \(w_{1}(k)\geq w_{2}(k)\) if and only if intersection points exist, i.e. if and only if the polynomial \(Q(x)\) has at least one real root which satisfies condition c) in (5). From Lemma 4.3 we know that \(Q(x)\) must have roots if \(\Theta>1\), \(\Theta\Phi>1\), \(\Theta\Phi^{2}\leq 1\) and if \((\rho_{1},\rho_{2})\) belongs to the interior of \(R_{\Theta,\Phi}\). Since under the present conditions \(R_{\Theta,\Phi}\) is connected and does not intersect \(H_{\Theta,\Phi}\) (see Lemma 4.4), we may apply Corollary 4.6 to check whether the roots in \(R_{\Theta,\Phi}\) are of type a). This will be the case if there exists a single \((\rho_{1},\rho_{2})\in int(R_{\Theta,\Phi})\) such that \(\alpha(x_{(\rho_{1},\rho_{2})})Z(x_{(\rho_{1},\rho_{2})})<0\). Under the present conditions the origin belongs to \(int(R_{\Theta,\Phi})\). Hence we use \((\rho_{1},\rho_{2})=(0,0)\) as test point. Of course, \(\alpha(x_{(0,0)})=\Theta-1>0\). Moreover, it is easy to check that
\[x_{(0,0)}=2\frac{\sqrt{\Theta(\Theta-1)\left(1-\Theta\Phi^{2}\right)}}{\Theta ^{2}\Phi^{2}-1}\]
so that
\[Z(x_{(0,0)})=-\frac{2(\Theta-1)\Theta\left[\Theta^{2}\Phi^{2}-1+2(1-\Theta\Phi ^{2})\right]}{\Theta^{2}\Phi^{2}-1}<0.\]
We conclude that \(x_{(\rho_{1},\rho_{2})}\) must be of type a) whenever \((\rho_{1},\rho_{2})\in R_{\Theta,\Phi}\). Since roots of type a) are not intersection points, the two slices never intersect and the claim follows.
In the previous lemma we have assumed that \(\Theta\Phi^{2}\leq 1\) which forces \(\Phi<1\). To apply the same method of proof for the case where \(\Theta\Phi^{2}>1\) we must however _assume_ that \(\Phi<1\).
**Lemma 4.10**.: _Assume \(\Theta>1\), \(\Theta\Phi^{2}>1\) and \(\Phi<1\) (note that \(\Theta>1\) and \(\Theta\Phi^{2}>1\) implies \(\Theta\Phi>1\)). Then \((\rho_{1},\rho_{2})\in R_{\Theta,\Phi}\) implies \(w_{1}(k)<w_{2}(k)\) for all \(k\in\mathbb{R}\)._
Proof.: The proof is similar to the proof of the previous lemma. However, in the present case we must deal with the fact that the set \(R_{\Theta,\Phi}\) is not connected but only the union of the two connected sets \(S_{1}\) and \(S_{2}\). In each one of these two sets we must therefore find a point \((\rho_{1},\rho_{2})\) such that \(\alpha(x_{(\rho_{1},\rho_{2})})\) and \(Z(x_{(\rho_{1},\rho_{2})})\) are of opposite sign. To locate these points, note that the \(\rho_{2}\)-axis intersects both sets and hence we choose the \((\rho_{1},\rho_{2})\)-points with \(\rho_{1}=0\) and
\[\rho_{2}=\rho_{2}^{\pm}:=\pm\frac{\sqrt{(\Theta-1)(\Theta\Phi^{2}-1)}}{\Theta \Phi}.\]
This choice is convenient because it makes the discriminant of \(Q(x)\) vanish. According to the sign in \(\rho_{2}^{\pm}\), it gives rise to the roots
\[x_{(0,\rho_{2}^{\pm})}=\pm\frac{2\sqrt{\left(\Theta-1\right)\left(\Theta\Phi^{2}- 1\right)}}{\Theta(1-\Phi^{2})} \tag{13}\]
which, regardless of the sign, yields
\[\alpha\left(x_{(0,\rho_{2}^{\pm})}\right)=\frac{(\Theta-1)(\Theta\Phi^{2}+ \Theta-2)}{\Theta(1-\Phi^{2})} \tag{14}\]
and
\[Z\left(x_{(0,\rho_{2}^{\pm})}\right)=-\frac{2(\Theta-1)\left(\Theta\Phi^{2}+ \Theta-2\right)^{2}}{\Theta\left(\Phi^{2}-1\right)^{2}}. \tag{15}\]
Note that \(Z\left(x_{(0,\rho_{2}^{\pm})}\right)<0\) regardless of the value of \(\Phi\) (provided that \(\Phi\neq 1\)), but to make sure that \(\alpha\left(x_{(0,\rho_{2}^{\pm})}\right)>0\) we need to assume \(\Phi<1\).
Lemma 4.8, Lemma 4.9 and Lemma 4.10 show that the necessary condition (4) along with \(\Theta>1\) and \(\Phi\leq 1\) are jointly sufficient to rule out calendar spread arbitrage. The next lemma deals with the condition
\[(\Theta-1)(\Theta\Phi^{2}-1)\geq(\Theta\Phi\rho_{2}-\rho_{1})^{2} \tag{16}\]
which allows for values of \(\Phi\) larger than \(1\).
**Lemma 4.11**.: _Assume that \(\Theta>1\) and that condition (16) holds (note that these conditions jointly imply the necessary condition (4)). Then it follows that \(w_{1}(k)\leq w_{2}(k)\) for all \(k\in\mathbb{R}\). Under the assumptions of this lemma there exist tangency points (i.e. values of \(k\) where \(w_{1}(k)=w_{2}(k)\)) if and only if \(\Phi>1\) and condition (16) holds with equality sign. In that case there must exist exactly one tangency point._
Proof.: If \(\Theta>1\) and condition (16) holds, we must have \(\Theta\Phi^{2}\geq 1\) and hence \(\Theta\Phi>1\) (otherwise there would not exist any \((\rho_{1},\rho_{2})\)-pair for which (16) holds). Consider first what happens when \(\Theta\Phi^{2}=1\). In this case we must have \(\Phi<1\) and for \(\Phi\leq 1\) we have already proved that \(w_{1}(k)<w_{2}(k)\) for all \(k\in\mathbb{R}\).
Consider now what happens when \(\Phi>1\). Since we are assuming that \(\Theta>1\) (and hence we must have \(\Theta\Phi^{2}>1\)), we must have \(R_{\Theta,\Phi}=S_{1}\cup S_{2}\) and the assumed inequality (16) is satisfied if and only if the \((\rho_{1},\rho_{2})\)-pair belongs to the area between the two stripes \(S_{1}\) and \(S_{2}\) or to one of the inner boundaries of \(S_{1}\) or \(S_{2}\). We indicate this set of \((\rho_{1},\rho_{2})\)-pairs with \(S_{3}\). Note that \(S_{3}\) must be a proper subset of \(S\) since we are assuming that \(\Phi>1\). Since \(\Phi>1\) implies \(\Theta\Phi>1\), we can apply Lemma 4.4 and conclude that \(S_{3}\cap H_{\Theta,\Phi}\subset S\cap H_{\Theta,\Phi}=\emptyset\). Now it follows from Lemma 4.3 that \(Q(x)\) has no real roots when \((\rho_{1},\rho_{2})\) belongs to the interior of \(S_{3}\), i.e. if the inequality (16) is strict. Since the two slices
can intersect only if \(Q(x)\) has real roots, we conclude that \(w_{1}(k)<w_{2}(k)\) for all \(k\in\mathbb{R}\) in this case. On the other hand, if the \((\rho_{1},\rho_{2})\)-pair belongs to the boundary of \(S_{3}\), then it must also belong to the inner boundary of one of the two stripes \(S_{1}\) or \(S_{2}\). In other words, there must be equality in (16) which means that
\[\rho_{2}=\rho_{2}^{\pm}(\rho_{1}):=\frac{1}{\Theta\Phi}\left(\rho_{1}\pm\sqrt{( \Theta-1)(\Theta\Phi^{2}-1)}\right)\]
and that the discriminant of \(Q(x)\) must be zero (see Lemma 4.3). Hence \(Q(x)\) must have a root and this root must be unique. As usual we write \(x_{(\rho_{1},\rho_{2})}\) to indicate the root. Since we are assuming that \(\Theta\) and \(\Phi\) are both larger than \(1\), we can apply Lemma 4.5 and conclude that \(\alpha(x_{(\rho_{1},\rho_{2})})Z(x_{(\rho_{1},\rho_{2})})\neq 0\) when the \((\rho_{1},\rho_{2})\)-pair belongs to \(R_{\Theta,\Phi}=S_{1}\cup S_{2}\) and hence that \(\alpha(x_{(\rho_{1},\rho_{2})})Z(x_{(\rho_{1},\rho_{2})})\neq 0\) for all \((\rho_{1},\rho_{2})\)-pairs which belong to the boundary of \(S_{3}\) where a root \(x_{(\rho_{1},\rho_{2})}\) must exist and must be unique. Since \(S_{1}\) and \(S_{2}\) are two disjoint and connected sets, \(\alpha(x_{(\rho_{1},\rho_{2})})Z(x_{(\rho_{1},\rho_{2})})\) does not change sign on each of these two sets. We will now show that the sign of \(\alpha(x_{(\rho_{1},\rho_{2})})Z(x_{(\rho_{1},\rho_{2})})\) is positive on both sets. This can be done by proving that the sign of \(\alpha(x_{(\rho_{1},\rho_{2})})Z(x_{(\rho_{1},\rho_{2})})\) is positive at a single point in each of the two sets (see Corollary 4.6). As in the proof of Lemma (4.10) we use the \((\rho_{1},\rho_{2})\)-pairs with \(\rho_{1}=0\) and \(\rho_{2}=\rho_{2}^{\pm}:=\pm\frac{1}{\Theta\Phi}\sqrt{(\Theta-1)(\Theta\Phi^{ 2}-1)}\) as test points (note that these points also belong to the boundary of \(S_{3}\)). With this choice we still get the expressions in (13), (14) and (15) for \(x_{(0,\rho_{2}^{\pm})}\), \(\alpha(x_{(0,\rho_{2}^{\pm})})\) and \(Z(x_{(0,\rho_{2}^{\pm})})\). However, since we are now assuming that \(\Phi>1\), we see that \(\alpha(x_{(0,\rho_{2}^{\pm})})<0\) and not \(\alpha(x_{(0,\rho_{2}^{\pm})})>0\) as in the proof of Lemma 4.10 (of course, \(Z(x_{(0,\rho_{2}^{\pm})})\) remains still negative). We conclude that the roots we are considering now must be either of type b) or c) in (5). Hence we must have \(\alpha(x_{(\rho_{1},\rho_{2})})+\Theta z_{2}(x_{(\rho_{1},\rho_{2})})=\pm z_{1 }(x_{(\rho_{1},\rho_{2})})\). In order to prove that the roots correspond to intersection points, we first note that the mapping \((\rho_{1},\rho_{2})\mapsto\alpha(x_{(\rho_{1},\rho_{2})})+\Theta z_{2}(x_{( \rho_{1},\rho_{2})})\) does not change sign on each of the two sets \(S_{1}\) and \(S_{2}\) (use Lemma 4.7). However, as far as we know by now, the sign of \(\alpha(x_{(\rho_{1},\rho_{2})})+\Theta z_{2}(x_{(\rho_{1},\rho_{2})})\) might be different according to whether \((\rho_{1},\rho_{2})\) belongs to \(S_{1}\) or to \(S_{2}\). Thus, if there exists a single point \((\rho_{1},\rho_{2})\in S_{i}\) such that the sign of \(\alpha(x_{(\rho_{1},\rho_{2})})+\Theta z_{2}(x_{(\rho_{1},\rho_{2})})\) is positive, we can conclude that \(x_{(\rho_{1},\rho_{2})}\) is of type c) and hence that \(w_{1}(x_{(\rho_{1},\rho_{2})}/\varphi_{1})=w_{2}(x_{(\rho_{1},\rho_{2})}/ \varphi_{1})\) for every \((\rho_{1},\rho_{2})\in S_{i}\) (\(i=1,2\)). Again, we use the two \((\rho_{1},\rho_{2})\)-pairs with \(\rho_{1}=0\) and \(\rho_{2}=\rho_{2}^{\pm}:=\pm\frac{1}{\Theta\Phi}\sqrt{(\Theta-1)(\Theta\Phi^{ 2}-1)}\) as test points. For these points we get the same expressions of \(x_{(0,\rho_{2}^{\pm})}\) and \(\alpha(x_{(0,\rho_{2}^{\pm})})\) as in equations (13) and (14), while for \(z_{2}(x_{(0,\rho_{2}^{\pm})})\) we get the expression
\[z_{2}(x_{(0,\rho_{2}^{\pm})})=\frac{\Theta\Phi^{2}+\Theta-2}{\Theta(\Phi^{2}-1)}.\]
Hence we conclude that
\[\alpha(x_{(0,\rho_{2}^{\pm})})+\Theta z_{2}(x_{(0,\rho_{2}^{\pm})})=\frac{ \Theta\Phi^{2}+\Theta-2}{\Theta(\Phi^{2}-1)}>0.\]
This argument shows that every root \(x_{(\rho_{1},\rho_{2})}\) with \((\rho_{1},\rho_{2})\in S_{1}\cup S_{2}\) is an intersection point and that there must be a unique intersection point when \((\rho_{1},\rho_{2})\in(S_{1}\cup S_{2})\cap S_{3}\). It is not difficult to show that in the latter case the intersection point must be a tangency point. In fact, if it was a crossing point, there should exist one further crossing point because under our present assumptions the left and right asymptotes of \(w_{2}(k)\) are both steeper than those of \(w_{1}(k)\) (recall that we are assuming \(\Theta>1\) and \(\Phi>1\): on \((S_{1}\cup S_{2})\cap S_{3}\) condition (4) must therefore hold with strict inequality sign).
As far as I know, the results in the next two lemmas are new.
**Lemma 4.12**.: _If_
\[\Theta>1,\quad\Phi>1,\quad\text{ and }\quad(\Theta-1)(\Theta\Phi^{2}-1)<( \Theta\Phi\rho_{2}-\rho_{1})^{2}<(\Theta\Phi-1)^{2}, \tag{17}\]
_there must exist exactly two points where the slices \(w_{1}(k)\) and \(w_{2}(k)\) cross over each other._
Proof.: From Lemma 4.3 and Lemma 4.4 we know that under condition (17) there must exist two roots \(x_{(\rho_{1},\rho_{2})}\). Moreover, from the proof of Lemma 4.11 we know that both these roots must be intersection points. The chained inequality in (17) says that both asymptotes of \(w_{2}(k)\) are steeper than those of \(w_{1}(k)\). Therefore only two cases can occur: either (i) both intersection points are tangency points, or (ii) both intersection points are crossing points. It is not difficult to see that case (i) is impossible. In fact, if both intersection points were tangency points, any increase of \(\Theta\) should lead to \(w_{1}(k)<w_{2}(k)\) for all \(k\in\mathbb{R}\). However, given fixed values of \(\Phi\), \(\rho_{1}\) and \(\rho_{2}\), a small enough increase of \(\Theta\) does not lead to a violation of condition (17) which implies the existence of two intersection points.
Now, it remains to see what happens when
\[\Theta>1,\quad\Phi>1,\quad\text{ and }\quad(\Theta\Phi\rho_{2}-\rho_{1})^{2}=( \Theta\Phi-1)^{2} \tag{18}\]
i.e. when the left or right asymptote of \(w_{2}(k)\) is the same as the corresponding asymptote of \(w_{1}(k)\).
**Lemma 4.13**.: _Assume condition (18) holds. Then there must exist exactly one point where the slices \(w_{1}(k)\) and \(w_{2}(k)\) cross over each other._
Proof.: Define \(B_{\Theta,\Phi}\) as the subset of the \((\rho_{1},\rho_{2})\)-plane where the equality in condition (18) holds. \(B_{\Theta,\Phi}\) is then the boundary of \(S\), i.e. the subset of the \((\rho_{1},\rho_{2})\)-plane where
\[\rho_{1}=\rho_{1}^{+}(\rho_{2}):=\Theta\Phi\rho_{2}+\Theta\Phi-1\quad\text{ and }\quad-1<\rho_{2}<\min\left\{\frac{2}{\Theta\Phi}-1,1\right\}\]
or
\[\rho_{1}=\rho_{1}^{-}(\rho_{2}):=\Theta\Phi\rho_{2}+1-\Theta\Phi\quad\text{ and }\quad\max\left\{1-\frac{2}{\Theta\Phi},-1\right\}<\rho_{2}<1.\]
On \(B_{\Theta,\Phi}\) the polynomial \(Q(x)\) reduces to
\[Q^{\pm}(x)=-4\Theta^{2}\left(\rho_{2}\pm 1\right)\Phi\left[(\Theta-1)^{2}\Phi \rho_{2}\pm(\Theta-1)(\Theta\Phi+\Phi-2)-2x(\Phi-1)(\Theta\Phi-1)\right]\]
and the only root of \(Q^{\pm}(x)\) is given by
\[x_{(\rho_{1}^{\pm}(\rho_{2}),\rho_{2})}=\frac{(\Theta-1)^{2}\Phi\rho_{2}\pm( \Theta-1)(\Theta\Phi+\Phi-2)}{2(\Phi-1)(\Theta\Phi-1)}.\]
We will show that this root must be a crossing point. To this aim note that \(B_{\Theta,\Phi}\) is the union of two disjoint connected sets which we denote with \(B_{\Theta,\Phi}^{\pm}\). From Lemma 4.4 it follows that \(B_{\Theta,\Phi}\cap H_{\Theta,\Phi}=\emptyset\). Hence we may apply Corollary 4.6 and conclude that \(\alpha(x_{(\rho_{1}^{\pm},\rho_{2})})Z(x_{(\rho_{1}^{\pm},\rho_{2})})\) does not change sign on each one of the two connected components of \(B_{\Theta,\Phi}=B_{\Theta,\Phi}^{+}\cup B_{\Theta,\Phi}^{-}\). In order to show that \(\alpha(x_{(\rho_{1}^{\pm},\rho_{2})})\) and \(Z(x_{(\rho_{1}^{\pm},\rho_{2})})\) are of the same sign, it is therefore sufficient to find a single point \((\rho_{1},\rho_{2})\) in each of the two sets \(B_{\Theta,\Phi}^{+}\) and \(B_{\Theta,\Phi}^{-}\) for which \(\alpha(x_{(\rho_{1}^{\pm},\rho_{2})})\) and \(Z(x_{(\rho_{1}^{\pm},\rho_{2})})\) are of the same sign. As test points we choose the points \((\rho_{1},\rho_{2})=(\rho_{1}^{\pm}(\rho_{2}),\rho_{2})\) where \(\rho_{1}^{\pm}(\rho_{2})=0\). It is easily seen that these points are \((0,\pm\rho_{*})\) where \(\rho_{*}=\frac{1}{\Theta\Phi}-1\). Substituting in the formula for the root we get
\[x_{(0,\pm\rho_{*})}=\pm\frac{2\Theta^{2}\Phi-\Theta^{2}-2\Theta\Phi+1}{2\Theta (\Phi-1)(\Theta\Phi-1)}.\]
Regardless of the sign in \(\rho_{2}=\rho_{*}\), this root yields
\[\alpha(x_{(0,\pm\rho_{*})})=-\frac{(\Theta-1)^{2}}{2\Theta(\Phi-1)}\]
and
\[Z(x_{(0,\pm\rho_{*})})=-\frac{(\Theta-1)^{2}(\Theta^{2}\Phi+2\Theta\Phi^{2}-4 \Theta\Phi-\Phi+2)}{2\Theta(\Phi-1)^{2}(\Theta\Phi-1)}.\]
Of course \(\alpha(x_{(0,\pm\rho_{*})})<0\). As for \(Z(x_{(0,\pm\rho_{*})})\), its sign depends on the sign of
\[\Theta^{2}\Phi+2\Theta\Phi^{2}-4\Theta\Phi-\Phi+2\]
which is positive whenever \(\Theta>1\) and \(\Phi>1\) (we omit the details of the proof of this assertion). Hence also \(Z(x_{(0,\pm\rho_{*})})<0\) and thus we conclude that \(x_{(0,\pm\rho_{*})}\) is a root of either type b) or c) in (5). To find out which type applies, we must determine the sign of \(\alpha(x_{(0,\pm\rho_{*})})+\Theta z_{2}(x_{(0,\pm\rho_{*})})\). It is not difficult to verify that
\[z_{2}(x_{(0,\pm\rho_{*})})=\frac{\Theta^{2}\Phi+2\Theta\Phi^{2}-4\Theta\Phi- \Phi+2}{2\Theta(\Phi-1)(\Theta\Phi-1)}\]
and hence we get
\[\alpha(x_{(0,\pm\rho_{*})})+\Theta z_{2}(x_{(0,\pm\rho_{*})})=\frac{2\Theta^{2 }\Phi^{2}-2\Theta^{2}\Phi+\Theta^{2}-2\Theta\Phi+1}{2\Theta(\Phi-1)(\Theta\Phi -1)}.\]
The numerator in this expression can be written as
\[\Theta^{2}(\Phi-1)^{2}+(\Theta\Phi-1)^{2}\]
and therefore we must have \(\alpha(x_{(0,\pm\rho_{*})})+\Theta z_{2}(x_{(0,\pm\rho_{*})})>0\). By Lemma 4.7 we conclude that \(x_{(\rho_{1},\rho_{2})}\) must be an intersection point for every \((\rho_{1},\rho_{2})\in B_{\Theta,\Phi}\).
To complete the proof it remains to show that for every \((\rho_{1},\rho_{2})\in B_{\Theta,\Phi}\) the corresponding root \(x_{(\rho_{1},\rho_{2})}\) is a crossing point. To this aim we apply the argument in the proof of Lemma 4.12 once again: if \(x_{(\rho_{1},\rho_{2})}\) was a tangency point, then by increasing \(\Theta\) a little bit we should have no intersection points at all. However, if we increase \(\Theta\) a little bit while leaving \(\Phi\), \(\rho_{1}\) and \(\rho_{2}\) unchanged, we pass from condition (18) to condition (17) which implies the existence of two crossing points.
Combining the statements in Lemma 4.1, Lemma 4.2, Lemma 4.8, Lemma 4.9, Lemma 4.10, Lemma 4.11, Lemma 4.12 and Lemma 4.13 yields a corrected and sharper version of Proposition 3.1 in Hendriks and Martini (2019). The minor corrections concern
* the special case where \(\Theta=1\) and
* the fact that Proposition 3.1 in Hendriks and Martini (2019) seems to imply that with \(\Theta>1\), \((\Theta\Phi\rho_{2}-\rho_{1})^{2}\leq(\Theta\Phi-1)^{2}\) and \(\Phi<1\) it would be possible to have absence of calendar spread arbitrage even if \(\Theta\Phi<1\), which goes against the necessary condition (4). However, the preprint version of the article contains a slightly different version of the proposition which is not subject to this problem but where the two necessary conditions are a little too strong due to strict inequality signs instead of weak ones (see Proposition 3.5 in Hendriks and Martini (2017)).
The sharper (and corrected) statement of the Hendriks-Martini proposition is given below. To make it more concise, the necessary condition (3) will be stated as in (19).
**Proposition 4.14**.: _Assume that \(\theta_{1}\) and \(\varphi_{1}\) are both strictly positive and let \(\Theta:=\theta_{2}/\theta_{1}\) and \(\Phi:=\varphi_{2}/\varphi_{1}\). Then, there is absence of calendar spread arbitrage (i.e. \(w_{1}(k)\leq w_{2}(k)\) for all \(k\in\mathbb{R}\)) only if \(\Theta\geq 1\) and_
\[1-\Theta\Phi\leq\Theta\Phi\rho_{2}-\rho_{1}\leq\Theta\Phi-1 \tag{19}\]
_Moreover,_
* _when_ \(\Theta=1\) _there is absence of calendar spread arbitrage if and only if either (i)_ \(\rho_{1}=\rho_{2}=0\) _and_ \(\Phi\geq 1\) _or (ii)_ \(\Phi=\rho_{1}/\rho_{2}\) _and_ \(\rho_{1}^{2}\geq\rho_{2}^{2}\)_;_
* _when_ \(\Theta>1\) _there is absence of calendar spread arbitrage if and only if condition (_19_) holds jointly with_ \[\Phi\leq 1\quad\text{ or }\quad(\Theta\Phi\rho_{2}-\rho_{1})^{2}\leq(\Theta-1)( \Theta\Phi^{2}-1);\]
* _when_ \(\Theta>1\) _and condition (_19_) holds jointly with_ \[\Phi\leq 1\quad\text{ or }\quad(\Theta\Phi\rho_{2}-\rho_{1})^{2}<(\Theta-1)( \Theta\Phi^{2}-1)\] _there are no intersection points (i.e._ \(w_{1}(k)<w_{2}(k)\) _for all_ \(k\in\mathbb{R}\)_);_
* _when_ \(\Theta>1\)_,_ \(\Phi>1\) _and_ \((\Theta\Phi\rho_{2}-\rho_{1})^{2}=(\Theta-1)(\Theta\Phi^{2}-1)\) _the two slices have exactly one intersection point which is a tangency point;_
* _when_ \(\Theta>1\)_,_ \(\Phi>1\) _and_ \((\Theta-1)(\Theta\Phi^{2}-1)<(\Theta\Phi\rho_{2}-\rho_{1})^{2}<(\Theta\Phi-1)^{2}\) _there must exist exactly two points where the slices_ \(w_{1}(k)\) _and_ \(w_{2}(k)\) _cross over each other._
* _when_ \(\Theta>1\)_,_ \(\Phi>1\) _and_ \((\Theta\Phi\rho_{2}-\rho_{1})^{2}=(\Theta\Phi-1)^{2}\) _there must exist exactly one point where the slices_ \(w_{1}(k)\) _and_ \(w_{2}(k)\) _cross over each other._
|
2310.19334 | Scalable Two-Minute Feedback: Digital, Lecture-Accompanying Survey as a
Continuous Feedback Instrument | Detailed feedback on courses and lecture content is essential for their
improvement and also serves as a tool for reflection. However, feedback methods
are often only used sporadically, especially in mass courses, because
collecting and analyzing feedback in a timely manner is often a challenge for
teachers. Moreover, the current situation of the students or the changing
workload during the semester are usually not taken into account either. For a
holistic investigation, the article used a digital survey format as formative
feedback which attempts to measure student stress in a quantitative part and to
address the participants' reflection in a qualitative part, as well as to
collect general suggestions for improvement (based on the so-called One-Minute
Paper) at two educational institutions. The feedback during the semester is
evaluated qualitatively and discussed on a meta-level and special features
(e.g. reflections on student work ethic or other courses) are addressed. The
results show a low, but constant rate of feedback. Responses mostly cover
topics of the lecture content or organizational aspects and were intensively
used to report issues within the lecture. In addition, artificial intelligence
(AI) support in the form of a large language model was tested and showed
promising results in summarizing the open-ended responses for the teacher.
Finally, the experiences from the lecturers are reflected upon and the results
as well as possibilities for improvement are discussed. | Armin Egetenmeier, Sven Strickroth | 2023-10-30T08:14:26Z | http://arxiv.org/abs/2310.19334v2 | # Scalable Two-Minute Feedback:
###### Abstract
Detailed feedback on courses and lecture content is essential for their improvement and also serves as a tool for reflection. However, feedback methods are often only used sporadically, especially in mass courses, because collecting and analyzing feedback in a timely manner is often a challenge for teachers. Moreover, the current situation of the students or the changing workload during the semester are usually not taken into account either. For a holistic investigation, the article used a digital survey format as formative feedback which attempts to measure student stress in a quantitative part and to address the participants' reflection in a qualitative part, as well as to collect general suggestions for improvement (based on the so-called One-Minute Paper) at two educational institutions. The feedback during the semester is evaluated qualitatively and discussed on a meta-level and special features (e.g. reflections on student work ethic or other courses) are addressed. The results show a low, but constant rate of feedback. Responses mostly cover topics of the lecture content or organizational aspects and were intensively used to report issues within the lecture. In addition, artificial intelligence (AI) support in the form of a large language model was tested and showed promising results in summarizing the open-ended responses for the teacher. Finally, the experiences from the lecturers are reflected upon and the results as well as possibilities for improvement are discussed.
action research, automatic text summarization, content analysis, formative feedback, one-minute paper
## I Introduction
Feedback is an essential aspect of successful teaching-learning processes [1]. Most often the focus is on feedback for students, although teachers also need or want feedback. Universities use a variety of evaluation and quality assurance measures to support teachers and ensure high-quality teaching. Institutionalized forms consist of standardized questionnaires (paper or digital) and rarely consider the students' context. In practice, these often make only a limited contribution to improving teaching due to the one-off nature of an end-of-term survey and its standardization [2, 3]. The aim of this kind of evaluation is usually the long-term assurance of teaching quality or the fulfillment of accreditation requirements. Hence, the gathered feedback is particularly relevant for subsequent cohorts [4]. For short-term adjustments in teaching, a more targeted, continuous survey over the term is required. Here, feedback such as students' current open questions or level of stress can also be helpful and incorporated into subsequent teaching units. Particularly in the first semester, being able to react to students' needs and issues can ease the transition to university. It is therefore useful to supplement the end-of-term evaluation with additional, regular surveys on one's own course, so that it can be tailored to the current needs of the students and one gets quick feedback on (newly) used teaching methods [3]. In this way, teaching innovations can be consolidated, further developed, or discarded more quickly. However, courses in the first semesters are often attended by several hundred students at universities, making frontal lectures a last resort [5]. For this reason, social interactions, discussions and feedback become rather limited. Only a few people give direct feedback to the teachers about the course, as new students are more reluctant to give feedback and to suggest improvements [6]. Therefore, collecting feedback is complex, and with a large number of responses a (manual) evaluation can often only be done selectively or takes a significant amount of time [7]. Consequently, technological support is essential for scaling up and for use in digital or hybrid formats.
This article describes a digital, formative feedback approach that was used continuously during the term at two different educational institutions. It was implemented as a short, weekly survey and is referred to in the text as a two-minute feedback survey (2MF survey). The feedback is based on typical questions of the "Minute Paper" [8] with additional close-ended questions and aims to be completed within two minutes. The feedback received is intended to enable the lecturer to efficiently get feedback on the course and to gain insight into the learning progress and workload of the students. The following research questions (RQ) are investigated using three case studies: (RQ1) How do motivation to attend, workload and perceived stress affect the subjective understanding of being able to follow the lecture, how does this vary over the semester, and are there differences between educational institutions (or subject areas)? (RQ2) What free text answers do students give as feedback to open-ended questions? What content clusters can be formed and do they change over time? (RQ3) How well can large language models (i.e., ChatGPT) summarize the feedback given by students?
The contributions of this article are twofold: First, it is examined how this tool was used throughout the semester, what topics the feedback covers, and how subjective factors of the students affect their assessment of their ability to follow the course. Only a few studies have qualitatively analyzed the responses to explore common topics; this helps instructors to design and optimize surveys. The advantages of a digital implementation for teachers are also discussed. Second, it presents the results of a follow-up investigation on how large language models (LLM) can help to summarize feedback to meet the needs of large classes in a timely manner. This is particularly important to scale the feedback analysis and make it easier to grasp for teachers.
## II Related Work
Feedback methods that are easy to use and implement, such as hand signals or audience response systems, can give a first impression of the learning progress in the courses. They are, however, often not sufficient for a deeper insight and offer little opportunity to get students' suggestions for improvement. A suitable method for this is the "One-Minute Paper" (OMP) [8], which can support learners and teachers as formative feedback. Short questions to learners are used to reflect on the learning progress, to formulate open questions about the course and, if necessary, to collect general suggestions for improvement. The implementation includes a survey with two or three short reflection questions, usually at the end of a lesson, which allows for timely and concrete feedback [9]. Typical questions relate to content, materials, or specific points of interest to the teacher [9]. These may relate to the use of technology [10], teaching methods and style [7], newly learned concepts [11], or expectations of the course [6]. In the literature, a combination of the questions "What was the most important thing you learned today?" and "What question(s) remained unanswered?" is often used (cf. [4, 10, 12]). The focus is usually on the learner's reflection on the learning content and thus on the learning progress of the group [4]. The teacher evaluates the written feedback and can address ambiguities, misunderstandings, or misconceptions [4, 13]. Responses about possible suggestions for improvement can provide teachers with concrete ideas on how to better adapt the course to the group's needs [10, 11]. The uncomplicated feedback of the OMP has been shown to encourage active engagement [4], even in large or mostly reserved groups [14], and to establish an atmosphere of trust [15]. Especially when used during a lecture, students can be re-engaged [9]. Choosing the right time and frequency to use the OMP is still a point of dispute [16]. Previous studies have shown the greatest effect when it is used at the beginning of the term [12], or for targeted course feedback [6]. In contrast, this paper examines the effects when the feedback is collected continuously throughout the whole semester.
Evaluating many written responses is time-consuming [8], especially when they are collected on paper [4]. Despite the scalability of a digital version for collecting the responses, there are only a few studies dealing with purely digital implementations or investigating (continuous) use in mass courses (e.g., [7, 10, 17]). Digital versions of the OMP seem to be mainly used in courses with a significant digital component or in fully virtual courses (e.g., [11, 17, 18]). As technology becomes more important in teaching, face-to-face courses are also using digital OMP formats [18]. However, qualitative analyses of the responses to explore common topics are rare (cf. [11, 13]). This paper investigates the responses of a large course. Additionally, the (manual) analysis of a large number of responses to open-ended questions is still an open issue (cf. [19]). Here, automatic text summarization approaches (e.g., [20]) or upcoming (easier-to-use) LLMs such as ChatGPT might help and need to be investigated. The latter is addressed in this paper to provide practical insights for teachers and lecturers.
OMPs rarely consider students' current situation, such as the workload or perceived stress during the semester. For instance, self-estimated workload shows a correlation with the received grades [17], but this information is not used within the semester (e.g. to support the students). Studies dealing with students' perceptions of stress (cf. [21, 22]) often focus on the causal dimensions or on the use of external stress-regulating help. Usually, extensive, established questionnaires from psychology are used as a basis, such as the Perceived Stress Questionnaire (PSQ) with 30 items [23]. There seems to be less focus on using the results to adjust a course. A digital implementation of the OMP combined with a short, quantitative part for a rough assessment of the students' current situation during the semester has not yet been investigated in the literature.
## III Study Design and Methodology
An experimental case study was conducted with the aim of implementing and evaluating a scalable, digital feedback process. The 2MF survey allows continuous, low-threshold collection and quick analysis for reflection and adaptation of teaching, and provides a first insight into the student workload, even in large courses. At the same time, the feedback allows students to reflect on the course content. The data were collected in parallel at two different German educational institutions during the winter term 2022/2023 by administering the 2MF survey weekly in an introductory computer science course at the Ludwig-Maximilians-Universität München (LMU Munich, referred to as Uni in the following) and an introductory mathematics course at the Aalen University of Applied Sciences (UAS). To investigate the feedback topics, a content analysis with an inductively developed categorization (based on ideas from [11]) was carried out by one researcher on the given responses. Starting with general topics based on randomly selected answers (e.g. "teacher related" or "organization"), the researcher used an iterative process to add categories for emerging responses, such as "generic answers" or "self-reflection", or more detailed ones due to repeated mention (e.g. "slides/script" or "exercises"). Thus, answers could fall into several categories. The topics were used in both case studies. The closed questions were examined for statistical correlation. The feedback of the students and the identified topics are then used as a basis for investigating the completeness of the feedback from three weeks as summarized by ChatGPT.
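For illustration, a toy keyword-based assignment of responses to such categories might look as follows. Note that this is not the manual inductive coding performed in the study, and the keyword lists are invented for the example:

```python
# Toy keyword-based categorization -- NOT the manual inductive coding used
# in the study; category keywords are invented for illustration only.
CATEGORIES = {
    "lecture content": ["slide", "script", "content", "topic", "concept"],
    "organization":    ["room", "schedule", "structure", "procedure"],
    "exercises":       ["exercise", "assignment", "submission"],
    "generic answers": ["everything", "nothing", "yes", "no"],
}

def categorize(response: str) -> list[str]:
    """Return all matching categories; answers may fall into several."""
    text = response.lower()
    hits = [cat for cat, kws in CATEGORIES.items()
            if any(kw in text for kw in kws)]
    return hits or ["uncategorized"]

print(categorize("The slides on recursion were unclear"))  # ['lecture content']
```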
### _The 2MF Survey_
The 2MF survey is structured in two parts: The first part consists of six closed questions, namely four about the student's situation ("I feel stressed." and "I feel overwhelmed by my studies."), motivation ("I am motivated to attend the lecture."), and understanding ("I can follow the content of the lecture well."), each on a five-point Likert scale (with -1 for abstention, 1 for disagreement and 5 for agreement), and two yes/no questions regarding attendance ("I attended/watched the lecture/exercises last week."). The second part is strongly based on the OMP and includes a question about unclear content, a question about suggestions for improvement, and a reflection question about what the students liked most in the lecture last week. The survey was implemented in the exercise system GATE [24] (Uni) or as a protected website (UAS) with single sign-on, so that the data could be pseudonymized and collected once a week per person. The front end for the students is kept simple and only shows a form with the survey questions in order to keep the response burden as low as possible. In contrast to typical reflection and evaluation surveys, the first two questions (stress and feeling overwhelmed) ask for an overall subjective assessment of the student's situation. The teacher receives the feedback in a dashboard, which contains all collected free text answers and pie charts for each closed question per week (see Fig. 1), supplemented by an overview of the quantitative results over the last weeks (see Fig. 2).
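To make the structure concrete, the following is a minimal sketch of the 2MF questionnaire as a configuration object. The question texts follow the description above, while the field names and encoding are illustrative assumptions rather than the actual GATE implementation:

```python
# Hypothetical sketch of the 2MF survey structure described above.
# Question texts follow the paper; keys and the encoding (-1 = abstain,
# 1-5 Likert) are illustrative assumptions, not the GATE implementation.
LIKERT = {"type": "likert", "scale": [1, 2, 3, 4, 5], "abstain": -1}
YES_NO = {"type": "yes/no"}

TWO_MINUTE_FEEDBACK = {
    "closed": [
        {"id": "stress", "text": "I feel stressed.", **LIKERT},
        {"id": "overwhelmed", "text": "I feel overwhelmed by my studies.", **LIKERT},
        {"id": "motivation", "text": "I am motivated to attend the lecture.", **LIKERT},
        {"id": "follow", "text": "I can follow the content of the lecture well.", **LIKERT},
        {"id": "att_lecture", "text": "I attended/watched the lecture last week.", **YES_NO},
        {"id": "att_exercise", "text": "I attended the exercises last week.", **YES_NO},
    ],
    "open": [  # the OMP-based part
        {"id": "unclear", "text": "What remained unclear?"},
        {"id": "liked", "text": "What did you like most in the lecture last week?"},
        {"id": "improve", "text": "What suggestions for improvement do you have?"},
    ],
}
```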
## IV Context of the Case Studies
The first case study was conducted in the compulsory first-semester course "Fundamentals of Business Mathematics" at the Aalen University of Applied Sciences. The course is divided into a lecture part and two separate exercise groups, each of which is led by an experienced lecturer. There was no supplementary tutorial offered by student teaching assistants. Around 80 students were enrolled in the course. The lectures took place every second week with irregular durations, whereas the exercises took place in almost every week of the semester. The individual lessons were held either face-to-face or virtually (via a video conferencing tool). The voluntary 2MF survey was explained in the first lecture, mentioned several times in the following lectures, and regularly referred to in (virtual) announcements. The information and link were provided in the Learning Management System (LMS). Questions raised in the responses were taken up, repeated and answered at the beginning of the next lecture. The suggestions were commented on and implemented as far as possible.
The second case study took place as part of the first-semester course "Introduction to Programming" at LMU Munich. This course introduces fundamental concepts of computer science using the programming language Java and consists of four hours of lectures (usually 2 hours theory and 2 hours practice session) and two hours of exercises each week. The lecture was attended by around 900 students and offered in hybrid form; recordings were made available online. A reminder for the weekly survey was displayed (using a QR code link) at the end of the last lecture of each week. In addition, there was a link in the LMS that was displayed above the assignments if the survey had not yet been completed in the current week. Raised questions and comments were taken up and answered within the lecture as far as time permitted.
## V Findings of the Case Studies
### _Descriptive Evaluation_
In total, 274 (Uni) and 14 (UAS) distinct students provided feedback in the two courses. A total of 726 (Uni) and 18 (UAS) responses were recorded from these individuals (2.6 and 1.3 responses per person, respectively). A total of 486 free text answers were received at the Uni and 24 at the UAS. Whereas the suggestions for improvement slightly predominate at the Uni (approx. 30 more answers), the answers at the UAS are distributed more or less evenly (between 7-9 answers) (see Fig. 3, bottom). At the UAS, 3 people took part in the survey repeatedly (at most twice). A total of 140 people from the Uni took part repeatedly (at most 14 times; 25 people more than 7 times). It should be noted that the feedback is distributed over almost all days of the term (Uni) and was also received during the lecture session; the feedback from the UAS all came afterwards. At the Uni, consistent participation in the 2MF survey can be seen throughout all weeks, whereas at the UAS there are individual weeks with no participation despite courses being taught (see Fig. 3, top). Over time, a small core (of 25 students) developed at the Uni who gave feedback almost continuously. After week 50 (before the last week of the course), no more feedback was received at the UAS. According to the 2MF survey, a total of 673 (Uni) and 16 (UAS) responses confirmed attendance of the lecture in the corresponding previous week, and 445 (Uni) and 17 (UAS) attendance of the exercises during the term. There are 31 data sets (1x UAS, 30x Uni) from students who attended neither the exercise nor the lecture in the corresponding previous week.
Fig. 1: Detailed evaluation for “understanding” Uni, week 44 (1=do not agree; 5=fully agree; -1=abstain).
Fig. 2: Semester overview (median) for “stress” and feeling of being “overwhelmed”.
Table I shows an excerpt of the data relating to the first nine weeks for the Uni (starting at week 42) and ten weeks for the UAS (starting at week 41). In addition to the number of participants, the number of responses to the questions "What remained unclear" and "I liked" and to the suggestions for improvement, as well as four identified categories of the content analysis, are displayed. The most common content of the coding at the UAS falls into the areas of "lecture content", "structure/procedure" (organization) and "generic answers" (general); for the Uni it falls into "lecture content", "exercises", and organization or general. In both cases, there is a large coverage of the categories. Allocation of answers to several subject areas is possible.
### _Analysis of time and content_
The first part of the 2MF survey is about the students' assessment of their current state of health. This part was answered more frequently during the term than the open-ended questions. The number of participants steadily decreased over the term (see Fig. 3 and Tab. I). While no linear correlation can be determined for the UAS, there is a statistically significant linear correlation with time for the Uni (Pearson's r(14)=-.88, p<.00). This is also reflected in the submission of weekly assignments (r(8)=-.97, p<.00).
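As a sketch of how such a time trend can be computed, assuming the weekly response counts are available as arrays (the numbers below are placeholders, not the study data):

```python
# Minimal sketch: Pearson correlation between week index and response count.
# The counts below are invented placeholders, not the actual study data.
from scipy.stats import pearsonr

weeks = list(range(1, 17))                    # 16 survey weeks -> r(14), i.e. 14 df
responses = [90, 75, 70, 60, 55, 50, 45, 40,  # invented, decreasing over the term
             38, 35, 30, 28, 25, 24, 22, 20]

r, p = pearsonr(weeks, responses)
print(f"r({len(weeks) - 2}) = {r:.2f}, p = {p:.3f}")
```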
Initially, placeholder symbols such as "-." were frequently used as responses (only Uni). There are also more generic comments ("general") in the first few weeks, such as "everything", "nothing", "yes", "no", which were used much less frequently towards the end of the term. Questions about lecture content ("unclear") were asked throughout the whole term (see Tab. I). The median comment length is 6 words in the first six weeks of the semester (Uni) and increases towards the end of the term (median 9.5 words). At the UAS, the median for the entire term is 8.5 words. Criticism is usually formulated in more detail. There is a particularly large amount of feedback on the organization of the course at the beginning of the term at the Uni (37x in week 44, see Tab. I). The vast majority referred to technical and organizational issues that arose during the practice session. In this case, the lecturer (Uni) was only able to give a regular lecture starting from week 45 due to illness, so in week 44 there was only a practice session with a substitute teacher and no theory lecture. The number of responses also shows that there were no more lectures at the UAS from week 51 onwards (but only exercises).
### _Special features and content (meta) analysis_
A closer look at the content reveals that there are repeated entries, i.e. a statement was copied into all free text fields to emphasize it (Uni, weeks 47 and 48). These were of a more critical nature (e.g. "Do you actually read the feedback?"), combined with a suggestion for improvement. The feedback (Uni) indicated problems at an early stage, e.g. a tense mood in a lecture (week 44) or spatial challenges (missing power sockets, week 45), as well as technical and didactic subtleties ("Repeat questions of the audience for online participants", week 50). Problems with accompanying exercises could be identified and tackled this way (week 45). Also conspicuous were some answers that extended the actual question in an interpretative manner ("[This course] is great, I'm stressed out by Analysis and Algebra :(" Uni, week 45, in category "unclear", or "So far I only had positive emotions about the lecture" Uni, week 46, in category "improve"), or that took up completely different points ("everything was understandable" Uni, week 48, "unclear"). Some students drew comparisons with other courses and related them to this one. Interestingly, the responses included (self-)reflections by individual students on their own work ethic ("My mistake because I didn't attended the lecture regularly" Uni, week 3, "unclear"), or general insights regarding live programming submissions ("Forty submissions from almost 900 participants [...] are not a good result [...] a submission rate of 20 % [is] pathetic..." Uni, week 48, "improve").
### _Data linking and correlation (Uni only)_
If the responses to the closed questions are examined for correlations, there is a statistically significant positive linear connection between "stressed" and "overwhelmed" (r(720)=.75, p=.00). There is also a slightly negative linear relationship between "stressed"/"overwhelmed" and "could follow" (r(704)=-.20, p=.00 and r(706)=-.27, p=.00). Furthermore, "overwhelmed" seems to have a slightly negative effect on "motivation" (r(704)=-.14, p=.00), whereas "could follow" has a positive effect (r(699)=.46, p=.00).
Fig. 3: Number of responses per feedback category and week for Uni (top) and for UAS (bottom).
A total of 713 students (241 of whom provided feedback) consented to the use of their data for research. A statistically significant difference in the number of exercise sheets submitted in the LMS could be found: the median of submissions with feedback is 7 and without feedback 4 (U-test: U=39,938, p=.00).
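For reproducibility, the reported group comparison can be computed along the following lines, assuming per-student submission counts for both groups; the arrays below are invented placeholders, not the study data:

```python
# Sketch of the reported group comparison (Mann-Whitney U test) between
# submission counts of students with and without feedback.
# The data below are invented placeholders, not the study data.
from scipy.stats import mannwhitneyu

with_feedback    = [7, 8, 6, 9, 7, 5, 10, 7]   # exercise sheets submitted
without_feedback = [4, 3, 5, 4, 2, 6, 4, 3]

u, p = mannwhitneyu(with_feedback, without_feedback, alternative="two-sided")
print(f"U = {u}, p = {p:.3f}")
```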
## VI Reflection on the results by the teachers
In the UAS case study, the number of responses seems to indicate that this form of continuous feedback is not (yet) desired on a broad basis or that its added value is not (yet) recognized. Nevertheless, from the teacher's point of view, the feedback on content that students had not understood provided a good starting point for repetition in the next session. Issues did not have to be laboriously elicited or anticipated from the exercises. The added (anonymous and) direct communication channel is welcome despite the low level of participation, since existing options (e.g. forum posts, email inquiries) were used even less. Due to the lack of data, insights into the general workload of the individual students could not be evaluated.
In the Uni case study, there were several responses that offered exciting insights from the teacher's point of view, e.g. the request to repeat audience questions for the live stream. This suggests that students apparently do not dare to raise such issues directly in the chat. Also, not all questions or comments can always be addressed, e.g. those regarding the speed of the lecture or material discussed several weeks ago. It remains unclear why questions on the content were not discussed in the exercises - maybe no satisfactory answer was given there. Nevertheless, such feedback is time-consuming not only for students, but also for teachers. Dealing with it costs extra time, which is worth it, but it would be nice to reduce the time needed even further.
## VII Artificial intelligence (AI) Support
The third case study involves the use of AI to extract the key insights from a large number of responses in order to improve scalability. After the term and the manual coding, the responses of the weeks with the most answers (Uni, weeks 43 and 44) and of a week with a high number of words per answer (Uni, week 47) were summarized using the LLM ChatGPT (May 24 Version, on June 1, 2023). Each free text field of the 2MF survey (unclear, liked, improve) was investigated individually, and the summarization was conducted three times so as not to rely on a single result. As all responses were in German, German was also used for the AI prompts. The goal was to investigate whether this can be used to quickly summarize (many) responses and to find a proper categorization of the main topics (cf. Sect. 3).
First, the summarization of the responses was investigated using the prompt "Summarize the main points of the following answers in the form of notes." (translated). The median number of answers decreased from 26 to 11, the average number of words from 270 to approx. 100 (see Tab. II). So, in most cases the number of words could be reduced by more than half without losing the essence of the answers. Responses such as the request for more testing options, which had been emphasized as a copied response (cf. Sect. 5.3), were still present in the summary, as were the "desire to speak more slowly" (a single response) and "difficulties with the practice session" (multiple responses). Thus, ChatGPT summarized all the relevant information, and no new topics were invented through hallucination.
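For readers who want to reproduce this step, the following is a minimal sketch assuming the OpenAI Python client (openai>=1.0); the model name and the prompt wording (re-translated to English here) are illustrative assumptions, not the exact setup used in the study:

```python
# Sketch of the summarization step, assuming the OpenAI Python client
# (openai>=1.0). Model name and prompt wording are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarize(responses: list[str]) -> str:
    prompt = ("Summarize the main points of the following answers "
              "in the form of notes:\n\n"
              + "\n".join(f"- {r}" for r in responses))
    completion = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return completion.choices[0].message.content

print(summarize(["Please speak more slowly.", "More testing options, please."]))
```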
Next, the focus was on the categorization of the results using the prompt "Summarize the most important points of the following statements in bullet points in categories, use as many suitable categories as necessary. Also provide a concise, bullet-type overview of the most important, possible main categories" (translated). Besides allowing a comparison with the first summary, this prompt should offer an even more reduced form of the main topics, which can be compared to the manually developed categorization (see Sect. 3). As expected, the summary produced results similar to those mentioned above, with minor variations (such as "Please speak more slowly") - easy to understand and useful for teachers. However, the categorization by the LLM showed interesting results: Some categories were similar to the general categorization (e.g. "practical orientation" or "communication and interaction"), while others were mostly ignored by the LLM (such as generic answers or placeholder symbols). In most cases, the LLM offered more detail in the topics. For instance, the LLM used "Sympathy and positive characteristics of the lecturer" or "pace and motivation of the lecturer" instead of a general "lecturer related".
Finally, the prompts above were used on the responses from the UAS course (cf. Tab. I, week 42). The number of comments and words stayed identical or even increased with the use of the AI: the already low number of responses encouraged the LLM to elaborate further. The categorization showed results similar to those above.
## VIII Discussion
The development of clear and quickly answerable questions for the 2MF survey was difficult, especially with regard to subjective assessments. The selection of questions quickly raised ethical implications. To increase reliability, the use of established and tested questionnaires is desirable, but these are often quite long and originate from psychology (cf. PSQ [23]). Hence, one quickly ends up dealing with health data that are under the extra protection of the General Data Protection Regulation. Dealing with such information is also tricky: If, for example, the data of a student is conspicuous, should one intervene to provide help, or would addressing the issue be inappropriate, since lecturers cannot make a diagnosis? Hence, a few self-developed questions were used here. Asking for suggestions for improvement is also not unproblematic, as it raises students' expectations that a change will then occur [9]. Furthermore, the case studies show that there are apparently only a few incentives for (continuous) participation. The added value of participation (e.g. problem solving) is not recognized by all students, although questions and comments were answered promptly (cf. Sect. 5.4).
The implementation as a digital 2MF survey provides the desired, simplified collection of feedback for the lecturer and creates a more flexible feedback option for the students. Not only could more students be reached, such as non-participants from the previous week, but feedback was also given ubiquitously (cf. Sect. 5.1, [16]). Nevertheless, in relation to the respective course sizes (UAS: 80, Uni: 900 people), the number of responses is rather low overall. In relation to the group sizes considered, the 2MF survey reached around 17.5 % (UAS) and 30.4 % (Uni) of the students, but not consistently. Such low numbers have already been identified for interactive tools, e.g. for participation in discussion forums (cf. [25]), and seem to be a general problem (at least in Germany). At first sight, it may seem questionable to display the QR code at the end of the lecture, as entering free text answers is not ideal on smartphones. However, today's students are quite experienced in typing on smartphones, especially for short, chat-like posts, due to the intensive use of messengers etc. Also, the feedback was given at different points in time and not only during the lectures. The feedback (Uni, Fig. 3) in weeks 51, 52, and 1 stands out, because there are no events at the universities at the turn of the year; nevertheless, more than 20 people gave feedback. It is an interesting question why there is a peak for stress and the feeling of being overwhelmed at this time (see Fig. 2).
The study shows that the initial number of responses decreases as the term progresses and levels off at a constant but low level (cf. [12]). There was an increased number of responses at the beginning of the term and especially in the case of (acute) problems relating to the course (see Tab. I, Uni, week 44). Other communication channels (mail, forums, etc.) were not used to the same extent, which supports the use of a low-threshold feedback offer (cf. [12]). In particular, more engaged students seem to give (and demand) feedback, which is indicated by their higher number of submissions. The positive correlation of feedback and submissions with learning outcome is also seen in other research (cf. [14, 17]). A significant change in stress over the term (RQ1) can hardly be proven, because there is too little feedback (UAS) or because the evaluation of individual data (e.g., perceived stress level) leads to ethical implications. Nevertheless, the data provides a first impression of the context (at least for the Uni). The partly linear connection between the feeling of being overwhelmed, stress, and the perception "could follow" also points to this (cf. Sect. 5.4). An influence of the workload (Fig. 1) on the delivery of solutions is not recognizable. The results and clusters of the content analysis are consistent with other studies (cf. [11]). The content analysis (RQ2) nevertheless revealed surprising aspects such as the self-reflection of individual students or the comparison with other courses. This seems to strengthen the authors' assumption that evaluations of their courses depend to some extent on other courses students attend and that a holistic view of the student group therefore makes sense.
Using an LLM (here ChatGPT, RQ3) has already proven to be a good way of summarizing, providing a very good overview of the content, especially with a larger number of responses. The amount of feedback from the case studies was still manageable "by hand" (even in the Uni case study). If more feedback is given by students, the chance of missing important responses when the teacher merely skims them will increase. The main aspects of the free text responses were found by the LLM with a simple prompt instruction (cf. Sect. 7) - even single (important) responses were reflected in the summary. The LLM collects requests and reports all critical findings such as difficulties or issues (Uni, week 44), which seems suitable for lecturers. In addition, detailed categories could be discovered by the AI without any specifications (e.g. "motivation of the lecturer"). Yet, some aspects (e.g. self-reflection) remained hidden. Using AI on a small number of comments seems to offer limited benefit. These results suggest that the use of an LLM can be a starting point for identifying relevant topics (from scratch). However, it should be used and evaluated with caution, as some categories may remain unseen by the AI; in this case a scientific evaluation with a content analysis comes in handy. From a teacher's point of view, this easy-to-use AI black box already offers a practical application for summarizing during the term - without hallucinating topics.
Surveys of this type are limited by the self-selection of the participants and their willingness to provide feedback. For non-participants, no conclusions can be drawn about the reasons for the lack of feedback, nor can their responses be predicted. Therefore, an absence of feedback does not mean that there are no issues. Another limitation can be the content analysis itself. The inductive categorization was only carried out by one of the authors at this stage, which is why a subjective view cannot be ruled out. Also, due to the rather low participation and the ethical aspects, an individual assessment of the students' workload is not possible.
## IX Conclusion and Outlook
In this article, a low-threshold, digital offer for continuous feedback during the semester, called the two-minute feedback survey, was proposed and analyzed at two educational institutions. This survey consists of open-ended and closed questions asking for further information on the context of the students, such as workload and motivation. The digital implementation enables a scalable, ubiquitous and efficient way of data collection and evaluation, which enables formative feedback even in mass courses with little effort. There is added value for students and teachers alike, with manageable effort. The study results show that formative feedback for teachers can be used to better adapt teaching to the student group, especially at the beginning of the semester. Feedback on unclear teaching content not only allows learners to reflect, but also provides a suitable basis for in-class discussion and a starting point for revising the content or teaching format. In particular, the digital collection and evaluation enriches these options enormously, especially regarding learning analytics (visualizations in dashboards). The exemplary use of an LLM proved suitable for summarizing the open-ended feedback for teachers, even if the AI used is more of a black box. It facilitates scaling even with a large number of responses, despite issues when there are only a few responses to summarize. This low-effort use of AI to evaluate (continuous) feedback can address teachers' concerns about being overwhelmed by feedback in large classes. Hence, the proposed approach shows how teachers can use such feedback approaches efficiently, even in their large classes.
There is a need for further research on several levels: This primarily concerns increasing regular participation in the surveys, whether through external incentives (e.g., gamification) or dashboards for students for self-reflection (e.g., workload and group comparison). Next, even if ChatGPT already showed a promising performance, more research on automatic summarization or clustering of the open-ended answers is desirable, so that central points can be seen quickly regardless of the number of responses. Furthermore, knowledge of the content of the feedback can be a basis for developing technological enhancements (e.g. chat bots) that can provide immediate responses to students (and teachers). Also, there could be proactive notifications to the lecturer based on the responses if specific aspects are mentioned. In this case, the categorization can be used as a starting point for implementing personalized alerts in advance.
## Conflict of Interest
The authors declare no conflict of interest.
## Author Contributions
Armin Egetenmeier conducted the content analysis and executed the case study involving AI support; Sven Strickroth came up with the ideas for the analysis and implemented the prototypes. Both authors contributed equally to the analysis of the case studies and the writing of the paper. All authors approved the final version.
## Funding
This research is part of the project AIM@LMU funded by the German Federal Ministry of Education and Research (BMBF) under the grant number 16DHBKI013. The responsibility for the content of this publication lies with the authors.
|
2308.03126 | Missing digits, and good approximations | James Maynard has taken the analytic number theory world by storm in the last
decade, proving several important and surprising theorems, resolving questions
that had seemed far out of reach. He is perhaps best known for his work on
small and large gaps between primes (which were discussed, hot off the press,
in my 2015 CEB lecture). In this article we will discuss two other Maynard
breakthroughs: -- Mersenne numbers take the form $2^n-1$ and so appear as
$111\dots 111$ in base 2, having no digit `$0$'. It is a famous conjecture that
there are infinitely many such primes. More generally it was, until Maynard's
work, an open question as to whether there are infinitely many primes that miss
any given digit, in any given base. We will discuss Maynard's beautiful ideas
that went into partly resolving this question. -- In 1926, Khinchin gave
remarkable conditions for when real numbers can usually be ``well
approximated'' by infinitely many rationals. However Khinchin's theorem
regarded 1/2, 2/4, 3/6 as distinct rationals and so could not be easily
modified to cope, say, with approximations by fractions with prime
denominators. In 1941 Duffin and Schaeffer proposed an appropriate but
significantly more general analogy involving approximation only by reduced
fractions (which is much more useful). We will discuss its recent resolution by
Maynard together with Dimitris Koukoulopoulos. | Andrew Granville | 2023-08-06T14:27:33Z | http://arxiv.org/abs/2308.03126v2 | # Missing Digits, and Good Approximations
###### Abstract.
James Maynard has taken the analytic number theory world by storm in the last decade, proving several important and surprising theorems, resolving questions that had seemed far out of reach. He is perhaps best known for his work on small [27] and large [28] gaps between primes (which were discussed, hot off the press, in [15]). In this article we will discuss two other Maynard breakthroughs:
-- Mersenne numbers take the form \(2^{n}-1\) and so appear as \(111\ldots 111\) in base 2, having no digit '0'. It is a famous conjecture that there are infinitely many such primes. More generally it was, until Maynard's work, an open question as to whether there are infinitely many primes that miss any given digit, in any given base. We will discuss Maynard's beautiful ideas that went into partly resolving this question [29].
-- In 1926, Khinchin gave remarkable conditions for when real numbers can usually be "well approximated" by infinitely many rationals. However Khinchin's theorem regarded 1/2, 2/4, 3/6 as distinct rationals and so could not be easily modified to cope, say, with approximations by fractions with prime denominators. In 1941 Duffin and Schaeffer proposed an appropriate but significantly more general analogy involving approximation only by reduced fractions (which is much more useful). We will discuss its recent resolution by Maynard together with Dimitris Koukoulopoulos [25].
Thanks to Dimitris Koukoulopoulos, Sun-Kai Leung and Cihan Sabuncu for their comments on a draft of this article, and to James Maynard for sharing his graphics. The author is partially supported by NSERC of Canada, both by a Discovery Grant and by a CRC.
Footnote †: In my forthcoming textbook about the distribution of primes, starting from the basics, about one-sixth of the book is dedicated to various Maynard theorems. This, in one of the oldest and most venerable subjects of mathematics.
This year's Current Events Bulletin highlights the work of the 2022 Fields medalists. In James Maynard's case there are a surprising number of quite different breakthroughs that could be discussed.1 In my 2014 CEB lecture I described the work of Yitang Zhang [38] on bounded gaps between primes and noted that a first-year postdoc, James Maynard, had taken a different, much simpler but related approach, to also get bounded gaps [27] (and a similar proof had been found, independently, by Terry Tao, and given on his blog). Versions of both Zhang's proof and the Maynard-Tao proof appear in my article [15], where it is also announced that Maynard had within months made another spectacular breakthrough, this time on the largest known gaps between consecutive primes [28] (and a rather different proof [11] had been found by Ford, Green, Konyagin and Tao, the two proofs combining to give an even better result [12]). It has been like this ever since with Maynard, many breakthrough results, some more suitable for a broad audience, some of primary importance for the technical improvements. Rather than attempt
to summarize these all, I have selected two quite different topics, in both of which Maynard proved spectacular breakthroughs on questions that had long been stuck.
### Part 1. Primes missing digits
Most integers have many of each of the digits, \(0\) through \(9\), in their decimal expansion, so integers missing a given digit, or digits, are rare, making them hard to analyze. For example, there are \(3^{k}\) integers up to \(10^{k}\) having only \(7,8\) and \(9\) in their decimal expansion as there are \(3\) possibilities for each of the \(k\) digits in the expansion.2 When we begin to explore we find the primes
Footnote 2: So there are about \(x^{\alpha}\) integers up to \(x\) having only \(7,8\) and \(9\) in their decimal expansion, where \(\alpha:=\frac{\log 3}{\log 10}=0.4771\dots\)
\[7,79,89,97,787,797,887,977,997,\dots\]
having only the digits \(7,8\) and \(9\) in their decimal expansions. Are there infinitely many such primes? It seems likely given how many we have already found but this question, and questions like it, have long been wide open, researchers finding it difficult to find a method to plausibly attack such problems (as we will discuss below). Indeed it was only recently that researchers succeeded on the following related but seemingly less difficult problems:
- In 2010 Mauduit and Rivat [32] finally resolved Gelfond's problem that the sum of the base-\(q\) digits of prime numbers are equidistributed in arithmetic progressions, for all \(q>2\).
- In 2015 Bourgain [3] showed that there are the expected number of primes with \(k\) binary digits, for which \([ck]\) of those digits have preassigned values (and see Swaenepoel [34] for base-\(q\)).
Maynard simplified and (in some aspects) sharpened the tools used in these proofs but also added a perspective, and a technical confidence, that allowed him to surmount some of the established technical barriers. Here we will sketch his proof giving an asymptotic for the number of primes up to large \(x\), missing _one_ given digit in base \(q\) (for \(q\) sufficiently large), though his proof can be extended to counting the number of primes missing no more than \(\frac{1}{5}\,q^{2/5}\) base-\(q\) digits (again, for \(q\) sufficiently large). His proof works best if the allowed digits lie in an interval, and in that case he was able to count the number of primes whose digits come from any sub-interval of \([0,q-1]\) of length \(\gg q^{4/5}\log q\).
We begin by discussing where we should expect to find primes, and how many there are:
## 1. Primes in arithmetic sequences
We believe that an arithmetically natural set \(\mathcal{A}\) of integers contains infinitely many primes unless there is an obvious reason why not (like say, if \(\mathcal{A}\) is the set of even integers, or the set of values of a reducible polynomial). Well known examples include,
* \(\mathcal{A}\) is the set of all integers;
* \(\mathcal{A}\) is the set of all integers in a given arithmetic progression (like \(a\pmod{q}\) with \((a,q)=1\));
* \(\mathcal{A}=\{p+2:p\text{ is prime}\}\), which is a way to ask for twin primes;
* \(\mathcal{A}=\{n^{2}+1:n\in\mathbb{Z}\}\).
The first two questions are resolved and we even know an asymptotic estimate for how many such primes there are up to a given \(x\), while the second two questions are (wide) open.
### Guessing at the number of primes in \(\mathcal{A}\)
The prime number theorem asserts that there are \(\sim\frac{x}{\log x}\) primes \(\leq x\) (so roughly \(1\) in \(\log x\) of the integers around \(x\) are prime).3 As a first guess we might think that the primes are equidistributed amongst the arithmetic progressions mod \(q\) and so the answer to the second question is \(\sim\frac{1}{q}\cdot\frac{x}{\log x}\); however \((a,q)\) divides any element of \(a\pmod{q}\) and so if \((a,q)>1\) then this arithmetic progression contains at most one prime. Therefore we should restrict our attention to \(a\) with \((a,q)=1\). There are \(\phi(q)\) such progressions, and so we should adjust our guess so that if \((a,q)=1\) then there are \(\sim\frac{1}{\phi(q)}\cdot\frac{x}{\log x}\) primes \(\leq x\) that are \(\equiv a\pmod{q}\). This is the _prime number theorem for arithmetic progressions_.4
Footnote 3: To prove such a result it helps to include a weight \(\log p\) at each prime \(p\) and prove instead that \(\sum_{p\text{ prime},p\leq x}\log p\sim x\), since \(x\) is a more natural function to work with than \(\int_{2}^{x}\frac{dt}{\log t}\) (which is a more precise approximation than \(\frac{x}{\log x}\)). The prime number theorem can be deduced by the technique of “partial summation” which allows one to multiply or divide the summand by smooth weights.
Footnote 4: First claimed by de la Vallee Poussin in 1899 based on ideas from his proof of the prime number theorem, and Dirichlet’s proof of the infinitude of primes in arithmetic progressions. Thanks to Siegel and Walfisz this can be given, when \(x\) is large enough compared to \(q\), as follows: Fix reals \(A,B>0\). If \(q\leq(\log x)^{A}\) then the number of primes \(\equiv a\pmod{q}\) up to \(x\),
\[\pi(x;q,a)=\frac{\pi(x)}{\phi(q)}\bigg{(}1+O\bigg{(}\frac{1}{(\log x)^{B}} \bigg{)}\bigg{)}\text{ whenever }(a,q)=1.\]
Let \(\mathcal{A}(x)\) be the set of integers in \(\mathcal{A}\) up to \(x\), and \(\pi_{\mathcal{A}}(x)\) be the number of primes in \(\mathcal{A}(x)\). If the elements of \(\mathcal{A}\) are as likely to be prime as random integers (roughly \(1\) in \(\log x\) around \(x\)) then we'd guess that \(\pi_{\mathcal{A}}(x)\approx\frac{|\mathcal{A}(x)|}{\log x}\). This can be wrong since we have not accounted for any obvious biases in the set \(\mathcal{A}\); for example, if \(\mathcal{A}\) is the set of integers in an arithmetic progression mod \(q\) then much depends on whether the progression is coprime with \(q\). So we adjust our guess by a factor which is the probability that a random integer in \(\mathcal{A}\) is coprime with \(q\), divided by the probability, \(\phi(q)/q\) that a random integer is coprime with \(q\). This then yields the guess
\[\pi_{\mathcal{A}}(x)\approx\frac{1}{\phi(q)/q}\cdot\frac{|\mathcal{A}(x)|}{ \log x}\sim\frac{1}{\phi(q)}\cdot\frac{x}{\log x},\]
when \((a,q)=1\) (and \(0\) when \((a,q)>1\)) so we recover the correct prediction. This suggests a general strategy for guessing at \(\pi_{\mathcal{A}}(x)\).
### Sparse sets of primes
The first three questions above involve sets \(\mathcal{A}\) that are quite dense amongst the integers. Our well-worn methods usually have limited traction with sets \(\mathcal{A}\) that are _sparse_ such as
* \(\mathcal{A}=\{n\in(x,x+x^{.99}]\}\);
* \(\mathcal{A}=\{n\equiv a\pmod{q}:n\leq x:=q^{100}\}\) for given integer \(q\) and \((a,q)=1\);
* \(\mathcal{A}=\{n\leq x:\alpha n\pmod{1}\in[0,x^{-.01}]\}\) for a given real, irrational \(\alpha\).
In each of these examples, \(|\mathcal{A}|\sim x^{.99}\), a rather sparse set. Each was shown to have more-or-less the expected number of primes over 50 years ago (Theorems of Hoheisel, Linnik and Vinogradov, respectively), though all known proofs are rather
difficult. Moreover if we change "\(.99\)" to an exponent \(<\frac{1}{2}\) then these questions are far beyond our current state of knowledge.5
Footnote 5: The sparsest sets known in these questions to contain primes are \((x,x+x^{.525}],x=q^{5}\) and \(\alpha n\pmod{1}\in[0,x^{-\frac{1}{3}+\epsilon}]\) due to [1, 37, 26] respectively.
A family of sparse arithmetic sequences is given by the sets of values of polynomials (perhaps in several variables). Examples of sparse sets of values for which infinitely many primes have been found include
\(\mathcal{A}=\{c^{2}+d^{4}:c,d\geq 1\}\) which has \(|\mathcal{A}(x)|\asymp x^{3/4}\) (see [13]); and
\(\mathcal{A}=\{a^{3}+2b^{3}:a,b\geq 1\}\) which has \(|\mathcal{A}(x)|\asymp x^{2/3}\) (see [20]).
This last set is an example of the set of values of a _norm-form_ as \(a^{3}+2b^{3}\) is the norm of an element, \(a+2^{1/3}b\), of the ring of integers of \(\mathbb{Q}(2^{1/3})\):
For a number field \(K/\mathbb{Q}\), with ring of integers \(\mathbb{Z}[\omega_{1},\ldots,\omega_{d}]\) the norm
\[\mathrm{Norm}_{K/\mathbb{Q}}(x_{1}\omega_{1}+\ldots+x_{d}\omega_{d})\in \mathbb{Z}[x_{1},\ldots,x_{d}]\]
is a degree \(d\) polynomial in the \(d\) variables \(x_{1},\ldots,x_{d}\). For example, \(a+2^{1/3}b+2^{2/3}c\) has norm \(a^{3}+2b^{3}+4c^{3}-6abc\). The prime ideal theorem implies that norm-forms take on infinitely many prime values with the \(x_{i}\)s all integers, provided that this polynomial has no fixed prime factor. These sequences are not so sparse (since they represent something like \(x/(\log x)^{C}\) different integers up to \(x\)). However, in the last example we know that there are roughly the expected number of prime values of the norm-form for \(a+2^{1/3}b+2^{2/3}c\) even when we fix \(c=0\) (in which case one obtains a sparse set of integer values).
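This norm computation is easy to verify with a computer algebra system: since \(x^{3}-2\) is monic, the norm of \(f(2^{1/3})\) equals the resultant \(\mathrm{Res}_{x}(x^{3}-2,f(x))\). A minimal sympy sketch:

```python
# Verify Norm(a + 2^(1/3) b + 2^(2/3) c) = a^3 + 2b^3 + 4c^3 - 6abc.
# Since x^3 - 2 is monic, the norm of f(omega) is Res_x(x^3 - 2, f(x)).
from sympy import symbols, resultant, expand

a, b, c, x = symbols('a b c x')
norm = resultant(x**3 - 2, a + b*x + c*x**2, x)
print(expand(norm))  # a**3 - 6*a*b*c + 2*b**3 + 4*c**3
```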
There are infinitely many prime values of the norm, \(m^{2}+n^{2}\), of \(m+in\) for integers \(m,n\), but if we fix \(n=1\) we get the open question of primes of the form \(m^{2}+1\). In 2002 Heath-Brown and Moroz [21] proved that any cubic norm form with one of the variables equal to \(0\) (as long as the new form is irreducible) takes roughly the expected number of prime values. Moreover in 2018, Maynard [30] proved such a result for norms of
\[\sum_{i=1}^{r}x_{i}\omega^{i}\in\mathbb{Z}[\omega]\text{ where }[\mathbb{Q}( \omega):\mathbb{Q}]\leq\frac{4}{3}r.\]
Other than primes in short intervals, in short arithmetic progressions, and amongst polynomial values perhaps the best known questions involving primes are those without some explicitly named digit or digits in their decimal or binary expansion:
## 2. Primes with missing digits
How many primes only have the digits \(1,\ 3\) and \(4\) in their decimal expansions? When we start searching we find many:
\[3,11,13,31,41,43,113,131,311,313,331,431,433,443,\ldots\]
and our guess is that there are infinitely many such primes. To guess how many up to \(x\), we can follow the above recipe: Here \(|\mathcal{A}(10^{k})|=3^{k}\), and so \(|\mathcal{A}(x)|\asymp x^{\alpha}\) where \(\alpha=\frac{\log 3}{\log 10}\).6 We expect that the elements of \(\mathcal{A}\) are independently equi-distributed modulo every prime \(p\) except perhaps for those dividing the base \(10\): Since the last digit of an element of \(\mathcal{A}\) is \(1,3\) or \(4\), it is coprime with \(10\) with probability \(\frac{2}{3}\)
whereas regular integers are coprime with 10 with probability \(\frac{1}{2}\cdot\frac{4}{5}=\frac{2}{5}\), and so we guess that
\[\pi_{\mathcal{A}}(x)\sim\frac{2/3}{2/5}\cdot\frac{|\mathcal{A}(x)|}{\log x}= \frac{5}{3}\cdot\frac{|\mathcal{A}(x)|}{\log x}.\]
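This prediction can be probed numerically; the following is a quick sanity-check sketch (not part of any proof), using the smoothed sum \(\sum_{a\in\mathcal{A}(x)}1/\log a\) in place of \(|\mathcal{A}(x)|/\log x\):

```python
# Compare the count of primes with all decimal digits in {1,3,4} up to 10^k
# against the heuristic (5/3) * sum_{a in A, a>1} 1/log(a).
from itertools import product
from math import log
from sympy import isprime

k = 6
A = [int("".join(ds)) for r in range(1, k + 1)
     for ds in product("134", repeat=r)]      # 3 + 9 + ... + 3^k numbers

actual = sum(isprime(a) for a in A)
predicted = (5 / 3) * sum(1 / log(a) for a in A if a > 1)
print(actual, round(predicted, 1))
```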
### General prediction
If \(\mathcal{A}\) is the set of integers \(n\) which have only digits from \(\mathcal{D}\subset\{0,1,\ldots,q-1\}\) in their base \(q\) expansion let \(\mathcal{D}_{q}=\{d\in\mathcal{D}:(d,q)=1\}\) and then we predict that
\[\pi_{\mathcal{A}}(x)\sim\frac{|\mathcal{D}_{q}|/|\mathcal{D}|}{\phi(q)/q} \cdot\frac{|\mathcal{A}(x)|}{\log x},\]
via the same reasoning. Maynard proved this [29, 31] for certain general families of sparse sets \(\mathcal{A}\). His most spectacular result [29] yields (close to) the above with \(q=10\) and \(|\mathcal{D}|=9\); that is, Maynard proved that there are roughly the expected number of primes that are missing _one_ given digit in decimal.7 His methods give a lot more (as we will describe). His methods can't handle sets as sparse as \(\mathcal{D}=\{1,3,4\}\) with \(q=10\); that is for another day.8 We will sketch slightly more than the easier argument from [31] which gives many results of this type though only for bases that are significantly larger than 10.
Footnote 7: “Roughly” meaning “up to a multiplicative constant” rather than an asymptotic.
Footnote 8: Moreover there may be other, as yet undiscovered, reasons why there might not be any primes for a given \(\mathcal{D}\). The one obstruction I know about is when every element of \(\mathcal{D}\) is divisible by some given prime \(p\), which implies that all the elements of \(\mathcal{A}\) are also divisible by \(p\).
### Who cares?
Is this a silly question? It is certainly diverting to wonder whether there are infinitely many primes with given missing digits, but how does that impact any other serious questions in mathematics? This is a case of "the proof of the pudding is in the eating"; that is, its real value can be judged only from the beautiful mathematics that unfolds. The story is two-fold. The relevant Fourier coefficients have an extraordinary structure that allows Maynard to import ideas from Markov processes, and so prove such theorems in bases \(>20000\). To get the base down to 10, Maynard develops his ideas with a virtuosity in all sorts of deep techniques that spin an extraordinary (though technical) tale.
## 3. The circle method
### Fourier analysis
We use the identity
\[\int_{0}^{1}e(n\theta)d\theta=1_{=0}(n):=\begin{cases}1&\text{ if }n=0;\\ 0&\text{ otherwise,}\end{cases}\]
where \(e(t):=e^{2i\pi t}\) for any real \(t\), and its discrete analog
\[\frac{1}{N}\sum_{j=0}^{N-1}e\bigg{(}\frac{jn}{N}\bigg{)}=1_{=0}(n)\text{ whenever }|n|<N,\]
obtained by summing the geometric series.
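Both identities are elementary; for instance, the discrete one can be checked numerically in a few lines (a quick sanity check, not needed for the argument):

```python
# Numerical check of (1/N) * sum_j e(jn/N) = 1 if n == 0, else 0, for |n| < N.
from cmath import exp, pi

def e(t):  # e(t) = exp(2*pi*i*t)
    return exp(2j * pi * t)

N = 16
for n in (0, 1, -5, 7):
    s = sum(e(j * n / N) for j in range(N)) / N
    print(n, round(abs(s), 12))   # 1.0 for n = 0, essentially 0 otherwise
```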
Let \(\mathcal{P}\) denote the set of primes and \(\mathcal{A}\) the set of integers missing some given digit or digits in base-\(q\). To identify whether prime \(p\) equals some \(a\in\mathcal{A}\) we can take
the above identities with \(n=p-a\) and sum over all \(a\in\mathcal{A}(N)\) and \(p\in\mathcal{P}(N)\), to obtain, in the discrete case,
\[\pi_{\mathcal{A}}(N)=\sum_{p\leq N}\sum_{a\in\mathcal{A}(N)}\frac{1}{N}\sum_{j= 0}^{N-1}e\bigg{(}\frac{j(p-a)}{N}\bigg{)}=\frac{1}{N}\sum_{j=0}^{N-1}S_{ \mathcal{P}}\bigg{(}\frac{j}{N}\bigg{)}S_{\mathcal{A}}\bigg{(}\frac{-j}{N} \bigg{)} \tag{3.1}\]
where, for a given set of integers \(T\), we define the _exponential sum_ (or the _Fourier transform_ of \(T(N)\)) by
\[S_{T}(\theta):=\sum_{n\in T(N)}e(n\theta)\text{ for any real }\theta.\]
Similarly, in the continuous case,
\[\pi_{\mathcal{A}}(N)=\sum_{p\leq N}\sum_{a\in\mathcal{A}(N)}\int_{0}^{1}e((p- a)\theta)d\theta=\int_{0}^{1}S_{\mathcal{P}}(\theta)S_{\mathcal{A}}(-\theta)d\theta.\]
One can work with either version, depending on whether discrete or continuous seems more convenient in the particular argument.9 Writing
Footnote 9: Some of this discussion will make more sense to the novice if they think about the continuous version (though the discussion also applies to the discrete version).
\[\pi_{\mathcal{A}}(N)=\sum_{n\leq N}1_{\mathcal{P}}(n)1_{\mathcal{A}}(n)\]
one can also obtain (3.1), and the continuous analog, from the Parseval-Plancherel identity in Fourier analysis.
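For small parameters, identity (3.1) can be confirmed numerically; the following sketch takes \(q=10\), \(k=3\) and \(\mathcal{D}=\{7,8,9\}\):

```python
# Numerical check of (3.1) with q = 10, k = 3: A is the set of integers up to
# N = 10^3 whose digits all lie in {7, 8, 9}, and P the primes up to N.
from cmath import exp, pi
from itertools import product
from sympy import primerange

N = 1000
A = [int("".join(d)) for r in (1, 2, 3) for d in product("789", repeat=r)]
P = list(primerange(2, N + 1))

def S(T, theta):  # the exponential sum S_T(theta)
    return sum(exp(2j * pi * n * theta) for n in T)

rhs = sum(S(P, j / N) * S(A, -j / N) for j in range(N)) / N
direct = len(set(A) & set(P))   # count primes in A directly
print(round(rhs.real), direct)  # the two counts agree
```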
### The circle method
To establish a good estimate for \(\pi_{\mathcal{A}}(N)\) using (3.1) one needs to identify those \(j\) for which the summand on the right-hand side is large; for example, \(S_{T}(0)=|T|\) and so the \(j=0\) term in (3.1) yields
\[\frac{1}{N}|\mathcal{A}(N)|\cdot\pi(N)\sim\frac{|\mathcal{A}(N)|}{\log N}\]
which is the expected order of magnitude of our main term (though it may be out by a multiplicative constant). Other terms where \(\frac{j}{N}\) is small, or is close to a rational with small denominator often also contribute to the main term, whereas we hope that the combined contribution of all of the other terms is significantly smaller. At first sight this seems unlikely since we only have the trivial bound \(|S_{T}(\theta)|\leq|T|\) for the other terms, but the trick is to use the Cauchy-Schwarz inequality followed by Parseval's identity so that
\[\frac{1}{N}\sum_{j=0}^{N-1}|S_{T}(\tfrac{j}{N})|\leq\bigg{(}\frac{1}{N}\sum_{ j=0}^{N-1}|S_{T}(\tfrac{j}{N})|^{2}\bigg{)}^{1/2}=|T|^{1/2}.\]
This implies for example that a typical term in the sum on the right-hand side of (3.1) has size \(\sqrt{|\mathcal{A}(N)|}\cdot\sqrt{\pi(N)}\) which is a little bigger than the main term but certainly not so egregiously as would happen if we used the trivial bound.
We have just described the basic thinking behind the _circle method_ used when one sums or integrates over the values of an exponential sum as the variable rotates around the unit circle (that is, \(e(\frac{j}{N})\) for \(0\leq j\leq N-1\), or \(e(\theta)\) for \(0\leq\theta<1\)). When trying to estimate the sum on the right-hand side of (3.1), we are most interested in those \(\theta=\frac{j}{N}\) for which \(S_{\mathcal{P}}(\theta)S_{\mathcal{A}}(-\theta)\) is "large". Experience shows that with
arithmetic problems, the exponential sums can typically only be large when \(\theta\) is close to a rational with small denominator. So we cut the circle up into the _major arcs_, usually those \(\theta\) near to a rational with small denominator, and the _minor arcs_, the remaining \(\theta\); we bound the contribution from the minor arcs, and are as precise as possible on the major arcs to obtain the main terms.
Fourier analysis/the circle method is most successful when one has the product of at least three exponential sums to play with. For example the ternary Goldbach problem was more-or-less resolved by Vinogradov 85 years ago, whereas the binary Goldbach problem remains open.10
Footnote 10: It is known that _almost all_ integers \(n\) can be written as the sum of two primes in the expected number of ways, since by counting over all integers \(n\), one can estimate the variance via an integral involving three exponential sums.
### The ternary Goldbach problem
The number of representations of odd \(N\) as the sum of three primes is given by
\[\int_{0}^{1}e(-N\theta)S_{\mathcal{P}(N)}(\theta)^{3}d\theta,\]
and the arc of width \(\asymp\frac{1}{N}\) around \(0\) yields a main term of size \(\asymp\frac{N^{2}}{(\log N)^{3}}\). We have the trivial bound \(|S_{\mathcal{P}(N)}(\theta)|\leq\pi(N)\) and we will define here the minor arcs to be
\[\mathfrak{m}:=\{\theta\in[0,1]:\ |S_{\mathcal{P}(N)}(\theta)|\leq\pi(N)/(\log N )^{2}\}.\]
(Since the typical size of \(|S_{\mathcal{P}(N)}(\theta)|\) is \(\sqrt{\pi(N)}<N^{1/2}\) we expect that all but a tiny subset of the \(\theta\) belong to these minor arcs.) Then
\[\bigg{|}\int_{\theta\in\mathfrak{m}}e(-N\theta)S_{\mathcal{P}(N )}(\theta)^{3}d\theta\bigg{|} \leq\int_{\theta\in\mathfrak{m}}|S_{\mathcal{P}(N)}(\theta)|^{3}d\theta\] \[\leq\frac{\pi(N)}{(\log N)^{2}}\cdot\int_{\theta\in[0,1)}|S_{ \mathcal{P}(N)}(\theta)|^{2}d\theta\] \[=\frac{\pi(N)^{2}}{(\log N)^{2}}\sim\frac{N^{2}}{(\log N)^{4}}\]
which is significantly smaller than the main term. Thus if we can identify which \(\theta\) belong to \(\mathfrak{m}\), then we can focus on evaluating \(S_{\mathcal{P}(N)}(\theta)\) on the major arcs \(\mathfrak{M}:=[0,1)\setminus\mathfrak{m}\). There are strong bounds known for \(S_{\mathcal{P}(N)}(\theta)\), as we will see later, so these ambitions can all be achieved in practice.
### Major and minor arcs
The usual way to dissect the circle is to pick a parameter \(1<M<N\) and recall that, by Dirichlet's Theorem (see the discussion in Part II), for every \(\alpha\in[0,1]\) there exists a reduced fraction \(r/s\) with \(s\leq M\) for which
\[\bigg{|}\alpha-\frac{r}{s}\bigg{|}\leq\frac{1}{sM}\]
(and the right-hand side is \(\leq 1/s^{2}\)). Therefore we may cover \([0,1]\) (and so cover the circle, by mapping \(t\to e(t)\)) with the intervals (arcs),
\[\bigcup_{s\leq M}\bigcup_{\begin{subarray}{c}0\leq r\leq s\\ (r,s)=1\end{subarray}}\bigg{[}\frac{r}{s}-\frac{1}{sM},\ \frac{r}{s}+\frac{1}{sM}\bigg{]}.\]
The arcs with \(s\) small are usually the major arcs, those with \(s\) large are the minor arcs.
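This covering is easy to verify computationally for small \(M\); a quick sketch using exact rational arithmetic:

```python
# Enumerate the arcs [r/s - 1/(sM), r/s + 1/(sM)] over reduced fractions r/s
# with s <= M, and verify that together they cover [0, 1].
from fractions import Fraction
from math import gcd

M = 8
arcs = sorted((Fraction(r, s) - Fraction(1, s * M),
               Fraction(r, s) + Fraction(1, s * M))
              for s in range(1, M + 1)
              for r in range(s + 1) if gcd(r, s) == 1)

covered_up_to = Fraction(0)
for lo, hi in arcs:            # greedy sweep over arcs sorted by left endpoint
    if lo <= covered_up_to:
        covered_up_to = max(covered_up_to, hi)
print(covered_up_to >= 1)      # True: the arcs cover [0, 1]
```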
In our problem the partition of major and minor arcs will be a bit more complicated. The major arcs will be given by
\[\bigcup_{s\leq(\log N)^{A}}\bigcup_{\begin{subarray}{c}0\leq r\leq s\\ (r,s)=1\end{subarray}}\ \biggl{[}\frac{r}{s}-\frac{(\log N)^{A}}{N},\ \frac{r}{s}+\frac{(\log N)^{A}}{N}\biggr{]},\]
and the main term will be obtained from those major arcs for which the prime factors of \(s\) all divide \(q\). The minor arcs will be obtained from the arcs above with \(M=[\sqrt{N}]\) by removing the major arcs.
Of course there is far more to say on the circle method than the brief discussion in this article. The reader should look into the two classic books on the subject [6, 36] for much more detail, and for applications to a wide variety of interesting questions.
## 4. The missing digit problem
Throughout let \(\mathcal{A}\) be the set of integers whose digits come from the set \(\mathcal{D}\subset\{0,1,\ldots,q-1\}\). Our aim is to estimate \(\pi_{\mathcal{A}}(N)\), and it will be convenient to let \(N=q^{k}\) for some large even integer \(k\).11
Footnote 11: For other large \(N\) the key ideas are the same, but dull technicalities arise.
The major arcs are typically given by the points \(\theta\in[0,1)\) for which the integrand is large.12 If \(S_{\mathcal{P}}(\theta)S_{\mathcal{A}}(-\theta)\) is large then \(S_{\mathcal{P}}(\theta)\) and \(S_{\mathcal{A}}(-\theta)\) must both individually be large. As we will see, Vinogradov proved that \(S_{\mathcal{P}}(\theta)\) is only large when \(\theta\) is near to a rational with small denominator. \(S_{\mathcal{A}}(\theta)\) behaves differently; it is only large when there are many \(0\)'s and \(q-1\)'s in the base-\(q\) expansion of \(\theta\). The simplest \(\theta\) that satisfy both criteria take the form \(\theta=\frac{i}{q^{\ell}}\) for some small \(\ell\), perhaps with \(\ell=1\) or, if \(\ell>1\), with \(\frac{i}{q^{\ell}}=\frac{r}{s}\) reduced, so that all the prime factors of \(s\) must divide \(q\). We therefore split the major arcs into three parts: Those \(\frac{j}{N}=\frac{j}{q^{k}}\) with
Footnote 12: That is the goal, but one may have to include other points that one cannot easily exclude.
\[\left|\frac{j}{N}-\frac{r}{s}\right|\leq\frac{(\log N)^{A}}{N}\text{ for some }0\leq r\leq s\leq(\log N)^{A}\text{ with }(r,s)=1,\]
for some fixed \(A>1\) where
-- \(s\) divides \(q\), which contributes the main term;
-- \(s\) only has prime factors which divide \(q\) (excluding the \(\frac{j}{N}\) from the first case);
-- \(s\) is divisible by a prime not dividing \(q\).
We remark that \(|\frac{j}{N}-\frac{r}{s}|\leq\frac{(\log N)^{A}}{N}\) if and only if \(|j-\frac{r}{s}\,N|\leq(\log N)^{A}\).
### The primary major arcs
Surprisingly the main term (in the discrete formulation) is obtained by simply taking those \(\theta=\frac{j}{q^{k}}\) for which \(\theta=\frac{\ell}{q}\) for some integer \(\ell\) (where \(\ell\) and \(q\) are not necessarily coprime). The contribution of such
points to the above sum is
\[q^{-k}\sum_{\ell=0}^{q-1}S_{\mathcal{P}}\biggl{(}\frac{\ell}{q} \biggr{)}S_{\mathcal{A}}\biggl{(}\frac{-\ell}{q}\biggr{)} =q^{-k}\sum_{a\in\mathcal{A},a\leq q^{k}}\sum_{p\text{ prime},\leq q^{k}}\sum_{\ell=0}^{q-1}e\biggl{(} \frac{\ell}{q}(p-a)\biggr{)}\] \[=q^{1-k}\sum_{a\in\mathcal{A},a\leq q^{k}}\pi(q^{k};q,a).\]
Now if a prime \(p\) does not divide \(q\) and has last digit \(d\) in base \(q\) then \((d,q)=1\), and if \(d\equiv p\equiv a\pmod{q}\) then \(d\in\mathcal{D}\) so that \(d\in\mathcal{D}_{q}\). There are \(|\mathcal{D}|^{k-1}\) integers \(a\in\mathcal{A},a\leq q^{k}\) with \(a\equiv d\pmod{q}\), and so this sum becomes, using the prime number theorem for arithmetic progressions and as \(|\mathcal{A}(q^{k})|=|\mathcal{D}|^{k}\),
\[q^{1-k}\sum_{d\in\mathcal{D}_{q}}|\mathcal{D}|^{k-1}\pi(q^{k};q,d) \sim\frac{q^{1-k}\cdot|\mathcal{A}(q^{k})|}{|\mathcal{D}|}\sum_{d \in\mathcal{D}_{q}}\frac{1}{\phi(q)}\frac{q^{k}}{\log q^{k}} \tag{4.1}\] \[=\frac{|\mathcal{D}_{q}|/|\mathcal{D}|}{\phi(q)/q}\cdot\frac{| \mathcal{A}(N)|}{\log N},\]
which is precisely the prediction we had for \(\pi_{\mathcal{A}}(N)\) above.
The asymptotic for \(\pi_{\mathcal{A}}(N)\) now follows provided we can show that
\[\frac{1}{N}\sum_{\begin{subarray}{c}0\leq j\leq N-1\\ \frac{j}{N}\neq\frac{r}{q},\ 0\leq r\leq q-1\end{subarray}}\biggl{|}S_{ \mathcal{P}}\biggl{(}\frac{j}{N}\biggr{)}\biggr{|}\cdot\biggl{|}S_{\mathcal{A} }\biggl{(}\frac{-j}{N}\biggr{)}\biggr{|}\ll\frac{|\mathcal{A}(N)|}{(\log N)^{A}} \tag{4.2}\]
for some \(A>1\). That is, we will be looking only at the absolute values of the exponential sums \(S_{\mathcal{P}}(\frac{j}{N})\) and \(S_{\mathcal{A}}(\frac{-j}{N})\) and not trying to detect any surprising identities or cancelations based on angles.
### Other major arcs, when all prime factors of \(s\) divide \(q\)
Throughout this subsection we assume that if prime \(p\) divides \(s\) then it divides \(q\) so that \(s\) divides \(N=q^{k}\) for all sufficiently large \(k\), and so \(r/s\) may be written as \(j/N\) for some integer \(j\). We also assume that \(s\leq(\log N)^{A}\).
For these arcs we will find a strong upper bound on the values of \(|S_{\mathcal{P}}(\frac{j}{N})|\), and only bound \(|S_{\mathcal{A}}(\frac{j}{N})|\leq A(N)\), trivially: The prime number theorem for arithmetic progressions gives, if \((r,s)=1\),
\[S_{\mathcal{P}}\biggl{(}\frac{r}{s}\biggr{)} =\sum_{p\leq N}e\biggl{(}\frac{pr}{s}\biggr{)}=\sum_{a:(a,s)=1}e \biggl{(}\frac{ar}{s}\biggr{)}\pi(N;s,a)+O(1)\] \[=\frac{\pi(N)}{\phi(s)}\sum_{a:(a,s)=1}e\biggl{(}\frac{ar}{s} \biggr{)}+O\biggl{(}\frac{\pi(N)}{(\log N)^{B}}\biggr{)} \tag{4.3}\] \[=\pi(N)\biggl{(}\frac{\mu(s)}{\phi(s)}+O\biggl{(}\frac{1}{(\log N )^{B}}\biggr{)}\biggr{)}\]
as \(\sum_{a:(a,s)=1}e(\frac{ar}{s})=\sum_{b:(b,s)=1}e(\frac{b}{s})=\mu(s)\) (an identity often credited to Ramanujan). Therefore, by partial summation, if \(i\) is a non-zero integer with \(|i|\leq(\log N)^{A}\), or if \(i=0\) and \(\mu(s)=0\),
\[S_{\mathcal{P}}\biggl{(}\frac{r}{s}+\frac{i}{q^{k}}\biggr{)}=\pi(N)\frac{\mu( s)}{\phi(s)}\int_{0}^{N}e\biggl{(}\frac{it}{N}\biggr{)}dt+O\biggl{(}\frac{i\pi(N)}{( \log N)^{B}}\biggr{)}\ll\frac{\pi(N)}{(\log N)^{B-A}}.\]
(The main term here vanishes: the integral is \(0\) for any non-zero integer \(i\), and when \(i=0\) we have assumed \(\mu(s)=0\), so only the error term survives.)
We will write \(\frac{j}{N}=\frac{r}{s}+\frac{i}{q^{k}}\) so that \(|i|\leq(\log N)^{A}\) if and only if \(|j-\frac{r}{s}N|\leq(\log N)^{A}\). Therefore, since \(|S_{\mathcal{A}}(\frac{-j}{N})|\leq|\mathcal{A}(N)|\) trivially, taking \(B=4A-1\) with \(A\geq 2\) we obtain
\[\frac{1}{N}\sum_{\begin{subarray}{c}s\leq(\log N)^{A}\\ p|s\Rightarrow p|q\end{subarray}}\ \sum_{\begin{subarray}{c}0\leq r<s\\ (r,s)=1\end{subarray}}\ \sum_{j:\ \mu(s)^{2}\leq|j-\frac{r}{s}N|\leq(\log N)^{A}}\bigg{|}S_{\mathcal{ P}}\bigg{(}\frac{j}{N}\bigg{)}S_{\mathcal{A}}\bigg{(}\frac{-j}{N}\bigg{)} \bigg{|}\ll\frac{|\mathcal{A}(N)|}{(\log N)^{A}}\]
since there are \(\ll(\log N)^{A}\) terms in each of the sums. This upper bound is much smaller than the main term in (4.1).
The only \(r/s\) which are not accounted for here are those where \(s\) is squarefree and all of its prime factors divide \(q\). But this implies that \(s\) divides \(q\), and these terms were already included in the sum in the previous subsection that led to (4.1). Therefore the calculations in this and the previous subsection account for the contributions to the sum in (3.1) of the "\(q\)-smooth" major arcs
\[\bigcup_{\begin{subarray}{c}s\leq(\log N)^{A}\\ p|s\Rightarrow p|q\end{subarray}}\bigcup_{\begin{subarray}{c}0\leq r\leq s\\ (r,s)=1\end{subarray}}\bigg{[}\frac{r}{s}-\frac{(\log N)^{A}}{N},\ \frac{r}{s}+\frac{(\log N)^{A}}{N}\bigg{]}.\]
Before finishing with the major arcs we will need to introduce a key perspective for working with the exponential sums \(|S_{\mathcal{A}}(\alpha)|\).
## 5. What makes restricted digit problems tractable?
From Parseval we know that for a given set \(T\), we typically have \(|S_{T}(\alpha)|\ll T(N)^{1/2}\) (and for most \(T\), we expect that \(|S_{T}(\alpha)|\asymp T(N)^{1/2}\) for almost all \(\alpha\)). Therefore using Parseval we have, for most \(\alpha\),
\[|S_{\mathcal{A}}(\alpha)|\cdot|S_{\mathcal{P}}(\alpha)|\ll(A(N)\cdot\pi(N))^{ 1/2}\asymp N^{1-\delta+o(1)}\]
where we define \(\delta>0\) by \(|\mathcal{D}|=q^{1-2\delta}\). However this is _much bigger_ than the main term \(\frac{|\mathcal{A}(N)|}{\log N}=N^{1-2\delta+o(1)}\), and so the circle method approach to digit sum problems has long seemed hopeless, since the sum of the absolute values of the contributions from the minor arcs seems likely to be so much larger than the main terms.
However, Maynard observed that the values of \(|S_{\mathcal{A}}(\alpha)|\) are quite unusual in that they are not typically of size \(A(N)^{1/2}\) but rather they are usually much smaller, as we shall see. Therefore restricted digit problems in base \(q\) are more tractable because the structure of \(\mathcal{A}\) leads to an unusual distribution of the sizes of its corresponding exponential sums, and so the contributions from the minor arcs are typically surprisingly small.
### The extraordinary structure of these exponential sums
If \(\mathcal{A}\) is the set of integers whose base-\(q\) digits come only from the set \(\mathcal{D}\subset\{0,1,\ldots,q-1\}\), and \(N=q^{k}\), then we can write
\[\mathcal{A}(N)=\bigg{\{}n=\sum_{i=0}^{k-1}a_{i}q^{i}:\text{ Each }a_{i}\in \mathcal{D}\bigg{\}}.\]
Since \(e(n\theta)=\prod_{i=0}^{k-1}e(a_{i}q^{i}\theta)\), we have
\[S_{\mathcal{A}}(\theta) =\sum_{\text{Each }a_{i}\in\mathcal{D}}\prod_{i=0}^{k-1}e(a_{i}q^{i} \theta)=\prod_{i=0}^{k-1}\bigg{(}\sum_{a_{i}\in\mathcal{D}}e(a_{i}q^{i}\theta) \bigg{)} \tag{5.1}\] \[=\prod_{i=0}^{k-1}\bigg{(}\frac{e(q^{i+1}\theta)-1}{e(q^{i}\theta )-1}-e(bq^{i}\theta)\bigg{)}\]
where we have assumed that \(\mathcal{D}=\{0,1,\ldots,q-1\}\setminus\{b\}\) only in the last displayed line. It is very unusual for an exponential sum of interest to be a product of much simpler exponential sums like this. If the exponential sums in the product were independent of each other then we could focus on each \(i\) separately and get best possible results; however the value of \(q^{i}\theta\mod 1\) can be used to determine \(q^{i+1}\theta\mod 1\) and so these are not independent. However, in practice, especially if \(q\) is large, they will be independent enough to get some surprisingly strong upper bounds on \(|S_{\mathcal{A}}(\theta)|\) for most \(\theta\).
We define
\[F_{\mathcal{D}}(\phi):=\bigg{|}\sum_{n\in\mathcal{D}}e(n\phi)\bigg{|}\leq| \mathcal{D}|,\]
so that
\[|S_{\mathcal{A}}(\theta)|=|\mathcal{A}(N)|\cdot\prod_{i=0}^{k-1}\frac{1}{| \mathcal{D}|}\,F_{\mathcal{D}}(q^{i}\theta)\]
since \(|\mathcal{A}(N)|=|\mathcal{D}|^{k}\).
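Formula (5.1) is easy to verify numerically. The following sketch in Python (the values of \(q\), \(k\), \(b\) and \(\theta\) are arbitrary test choices) compares the direct sum over \(\mathcal{A}(q^{k})\) with the product of \(k\) short exponential sums:

```python
import cmath
from itertools import product

def e(x):
    return cmath.exp(2j * cmath.pi * x)

q, k, b = 7, 4, 3        # arbitrary test values; D omits the single digit b
D = [d for d in range(q) if d != b]
theta = 0.321            # an arbitrary test point

# direct evaluation: sum over all |D|^k integers of A(q^k)
direct = sum(e(sum(a * q ** i for i, a in enumerate(digits)) * theta)
             for digits in product(D, repeat=k))

# the product formula (5.1)
prod = 1
for i in range(k):
    prod *= sum(e(a * q ** i * theta) for a in D)

print(abs(direct - prod))   # ~1e-10: the two expressions agree
```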
### First upper bounds when \(\mathcal{D}=\{0,1,\ldots,q-1\}\setminus\{b\}\)
Taking absolute values and using the triangle inequality we have
\[F_{\mathcal{D}}(\phi) =\bigg{|}\frac{e(q\phi)-1}{e(\phi)-1}-e(b\phi)\bigg{|}\leq 1+ \frac{|e(q\phi)-1|}{|e(\phi)-1|} \tag{5.2}\] \[\leq 1+\frac{2}{|e(\phi)-1|}=1+\frac{1}{\sin(\pi\|\phi\|)}\]
where \(\|t\|=\min_{n\in\mathbb{Z}}|t-n|\), and therefore
\[F_{\mathcal{D}}(\phi)\leq 1+\frac{1}{2\|\phi\|} \tag{5.3}\]
since \(\sin(\pi\|t\|)\geq 2\|t\|\).
Now if \(\theta=\sum_{j\geq 1}\frac{t_{j-1}}{q^{j}}\) in base \(q\) (with the \(t_{i}\in\{0,1,\ldots,q-1\}\)) then
\[q^{i}\theta\mod 1=\frac{t_{i}}{q}+\frac{t_{i+1}}{q^{2}}+\cdots=\frac{t_{i}+(q^{i+ 1}\theta\mod 1)}{q},\]
and so \(q^{i}\theta\mod 1\in[\frac{t_{i}}{q},\frac{t_{i}+1}{q})\). This implies that \(\|q^{i}\theta\|\geq\min\{\frac{t_{i}}{q},1-\frac{t_{i}+1}{q}\}\) and so, by (5.2),
\[F_{\mathcal{D}}(q^{i}\theta)\leq\min\bigg{\{}q-1,1+\frac{1}{\min\{\sin(\pi\frac {t_{i}}{q}),\sin(\pi\frac{q-1-t_{i}}{q})\}}\bigg{\}},\]
and we obtain, in (5.1),
\[|S_{\mathcal{A}}(\theta)|\leq\prod_{i=0}^{k-1}\min\bigg{\{}q-1,1+\frac{1}{\min \{\sin(\pi\frac{t_{i}}{q}),\sin(\pi\frac{q-1-t_{i}}{q})\}}\bigg{\}}. \tag{5.4}\]
In particular if, as is typical, \(q^{2/3}<t_{i}<q-q^{2/3}\) then the \(i\)th term in (5.1) is \(\ll q^{1/3}\), a big improvement over the Parseval bound \(\sqrt{q-1}\).
In fact for almost all \(\theta\) the \(t_{i}\) are uniformly distributed in \([0,q-1]\), that is \(\#\{i\in[1,k]:t_{i}=r\}\sim k/q\) for all \(r\in[0,q-1]\), and so
\[|S_{\mathcal{A}}(\theta)|\ll\left(q\prod_{1\leq r\leq q/2}\left(1+\frac{1}{ \sin(\pi\frac{r}{q})}\right)\right)^{\{2+o(1)\}k/q}=(C+o(1))^{k}\]
where \(C:=\exp(\frac{4}{\pi}L(2,(\frac{-4}{\cdot})))\approx 3.209912300\). This is much smaller than \(q^{k/2}\) for large \(k\). As promised we have shown that the \(|S_{\mathcal{A}}(\theta)|\), where \(\mathcal{A}\) is the set of integers missing one particular digit in base \(q\), have a very different distribution from the \(|S_{T}(\theta)|\) for a typical set of integers \(T\). This distribution indeed implies that the set of \(\theta\) for which \(|S_{\mathcal{A}}(\theta)|\) is not very small, has tiny measure. We follow Maynard's argument to exploit this.
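As a sanity check on this value: \(L(2,(\frac{-4}{\cdot}))\) is Catalan's constant, so \(C\) can be computed from the alternating series (a quick sketch):

```python
import math

# L(2, (-4/.)) = sum_{n >= 0} (-1)^n / (2n+1)^2, Catalan's constant
G = sum((-1) ** n / (2 * n + 1) ** 2 for n in range(10 ** 6))
print(math.exp(4 / math.pi * G))   # 3.2099123..., matching the value above
```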
### Major arcs, where \(s\) has a prime factor that does not divide \(q\)
A weaker bound on the \(i\)th term, but which is easier to work with, comes from noting that
\[|e(a\phi)+e((a+1)\phi)|^{2}=2+2\cos(2\pi\phi)<4\exp(-2\|\phi\|^{2}),\]
so that \(|e(a\phi)+e((a+1)\phi)|\leq 2\exp(-\|\phi\|^{2})\). If \(q>3\) then there are two consecutive integers in \(\mathcal{D}\) and so
\[\bigg{|}\sum_{a\in\mathcal{D}}e(a\phi)\bigg{|}\leq q-3+2\exp(-\|\phi\|^{2})\leq(q-1)\exp\bigg{(} -\frac{\|\phi\|^{2}}{q}\bigg{)},\]
and therefore, by (5.1),
\[|S_{\mathcal{A}}(\theta)|\leq|\mathcal{A}(N)|\exp\bigg{(}-\frac{1}{q}\sum_{i= 0}^{k-1}\|q^{i}\theta\|^{2}\bigg{)} \tag{5.5}\]
We use this not very good upper bound to deal with the (few) remaining possible major arcs, though these arguments, and so (5.5), can easily be sharpened.
Suppose that prime \(p|s\) but \(p\not|q\). Then \(p\) divides the denominator of the reduced fraction for \(q^{i}\cdot\frac{r}{s}\) so that \(\|q^{i}\cdot\frac{r}{s}\|\geq\frac{1}{p}\). Moreover if \(|\theta-\frac{r}{s}|\leq\frac{1}{2pN^{1/2}}\) and \(i\leq\frac{k}{2}\) then
\[\|q^{i}\theta\|\geq\|q^{i}\cdot\frac{r}{s}\|-q^{i}|\theta-\frac{r}{s}|\geq \frac{1}{p}-\frac{q^{k/2}}{2pN^{1/2}}=\frac{1}{2p}.\]
Now if \(\|q^{i}\theta\|<\frac{1}{2q}\) then \(\|q^{i+1}\theta\|=q\|q^{i}\theta\|\). Therefore, for every integer \(i\) there exists an integer \(j,i\leq j\leq i+\lfloor\frac{\log p}{\log q}\rfloor\) for which \(\|q^{j}\theta\|\geq\frac{1}{2q}\), which implies that
\[\sum_{i=0}^{k-1}\|q^{i}\theta\|^{2}\geq\sum_{i=0}^{k/2}\|q^{i}\theta\|^{2}\geq \frac{1}{4q^{2}}\#\{j\in[0,\tfrac{k}{2}):\|q^{j}\theta\|\geq\tfrac{1}{2q}\} \geq\frac{1}{4q^{2}}\frac{\log q^{k/2}}{\log pq}\geq\frac{k}{8mq^{2}}\]
for \(s\leq q^{m}\) and \(m\in\mathbb{Z}\), since then \(\lfloor\frac{\log p}{\log q}\rfloor\leq m-1\). Here we let \(m=\lfloor\sqrt{k}/9q^{3}\rfloor\) and assume that \(k\geq 100q^{6}\).
Thus \(|S_{\mathcal{A}}(\theta)|\leq|\mathcal{A}(N)|\exp(-\frac{k}{8mq^{3}})\) by (5.5), and \(|S_{\mathcal{P}}(\theta)|\leq\pi(N)\) trivially, so that as \(2q^{2m}\leq N^{1/2}\) then
\[\frac{1}{N}\sum_{\begin{subarray}{c}s\leq q^{m}\\ \exists p|s,\ p\nmid q\end{subarray}}\sum_{\begin{subarray}{c}0\leq r<s\\ (r,s)=1\end{subarray}}\sum_{j:|j-\frac{r}{s}N|\leq q^{m}}\bigg{|}S_{\mathcal{P} }\bigg{(}\frac{j}{N}\bigg{)}S_{\mathcal{A}}\bigg{(}\frac{-j}{N}\bigg{)}\bigg{|} \ll\frac{|\mathcal{A}(N)|}{\log N}q^{3m}\exp\bigg{(}-\frac{k}{8mq^{3}}\bigg{)}\] \[\ll\frac{|\mathcal{A}(N)|}{\log N}e^{-\sqrt{k}},\]
which is much smaller than the main term in (4.1).
This subsection accounts for the major arcs,
\[\bigcup_{\begin{subarray}{c}s\leq q^{m}\\ \exists p|s\text{ such that }p\nmid q\end{subarray}}\bigcup_{\begin{subarray}{c}0 \leq r\leq s\\ (r,s)=1\end{subarray}}\bigg{[}\frac{r}{s}-\frac{q^{m}}{N},\ \frac{r}{s}+\frac{q^{m}}{N}\bigg{]},\]
where \(q^{m}=c_{q}^{\sqrt{k}}\) for some \(c_{q}>1\), which is much larger than \((\log N)^{A}\) for \(k\) sufficiently large.
## 6. The remaining challenge: the minor arcs
When dealing with each of the second two types of major arcs we bounded one of the exponential sums trivially; we will have no such luxury when bounding the contribution of the minor arcs. We obtain the minor arcs \(\mathfrak{m}\), for \(M=\lfloor\sqrt{N}\rfloor=q^{k/2}\), from subtracting the major arcs from a partition of the unit circle:
\[\bigcup_{\begin{subarray}{c}0\leq r\leq s\leq M\\ (r,s)=1\end{subarray}}\bigg{[}\frac{r}{s}-\frac{1}{sM},\ \frac{r}{s}+\frac{1}{sM}\bigg{]}\ \setminus\bigcup_{ \begin{subarray}{c}0\leq r\leq s\leq(\log N)^{A}\\ (r,s)=1\end{subarray}}\bigg{[}\frac{r}{s}-\frac{(\log N)^{A}}{N},\ \frac{r}{s}+\frac{(\log N)^{A}}{N}\bigg{]}.\]
We can further partition these arcs according to the sizes of \(s\):13
Footnote 13: \(x\asymp X\) means \(x\) runs through the integers or reals (as appropriate) in the interval \((X,qX]\).
\[s\asymp S\text{ with }1\leq S=q^{i}\leq M/q\]
where \(i\geq 0\) is an integer, with \(i\leq k/2\) (where \(k\) is even); and the size of \(\|s\theta\|\) for \(\theta=\frac{j}{N}\):
\[\bigg{|}\frac{j}{N}-\frac{r}{s}\bigg{|}\leq\frac{1}{N},\text{ or }\bigg{|} \frac{j}{N}-\frac{r}{s}\bigg{|}\asymp\frac{B}{N}\text{ with }1\leq B=q^{\ell}\]
where \(\ell\geq 0\) is an integer, and so that
\[B=q^{\ell}\leq\frac{N}{q^{2}SM}\]
since \(|\frac{j}{N}-\frac{r}{s}|<\frac{1}{sM}\); that is, \(i+\ell\leq\frac{k}{2}-2\). This also implies that \(\|s\frac{j}{N}\|=s\|\frac{j}{N}\|\leq\frac{1}{M}\).
The major arcs took account of the cases in which both \(B,S\ll(\log N)^{A}\), and so for the minor arcs we have \(BS\gg(\log N)^{A}\), so that
\[(\log N)^{A}\ll BS\leq\frac{N}{q^{2}M}\]
(that is, \(\log k\ll_{q}i+\ell\leq\frac{k}{2}-2\)).
### Well-known bounds on \(S_{\mathcal{P}}(\theta)\)
Vinogradov's estimate for exponential sums ([5], pg 142) gives that if \(\alpha=\frac{j}{N}=\frac{r}{s}+\beta\) with \((r,s)=1\) and \(|\beta|<\frac{1}{s^{2}}\) then
\[|S_{\mathcal{P}}(\alpha)|\ll\bigg{(}N^{4/5}+(sN)^{1/2}+\frac{N}{s^{1/2}} \bigg{)}(\log N)^{4}\ll\bigg{(}N^{4/5}+\frac{N}{S^{1/2}}\bigg{)}(\log N)^{4}\]
since \((sN)^{1/2}\leq(MN)^{1/2}\leq N^{4/5}\) as \(M\leq N^{3/5}\) and as \(s\asymp S\). We use this in the first range above.
In the second range above we have \(\|s\frac{j}{N}\|\asymp\frac{BS}{N}\). By a slight modification of Vinogradov's proof, we also have the bound
\[|S_{\mathcal{P}}(\alpha)| \ll\bigg{(}N^{4/5}+\frac{N^{1/2}}{\|s\alpha\|^{1/2}}+\|s\alpha\|^ {1/2}N\bigg{)}(\log N)^{4} \tag{6.1}\] \[\ll\bigg{(}N^{4/5}+\frac{N}{(BS)^{1/2}}\bigg{)}(\log N)^{4}\]
since \(\|s\frac{j}{N}\|^{1/2}N\asymp(BSN)^{1/2}\ll\frac{N}{M^{1/2}}\leq N^{4/5}\) as \(M\geq N^{2/5}\).
### The end-game
Our main goal in this section is to show that if \(q\geq 133359\) and \(\mathcal{D}=\{0,1,\ldots,q-1\}\setminus\{b\}\) then
\[\sum_{\begin{subarray}{c}0\leq r<s\leq S\\ (r,s)=1\end{subarray}}\sum_{j:|j-q^{k}\cdot\frac{r}{s}|\leq B}\bigg{|}S_{ \mathcal{A}}\bigg{(}\frac{j}{q^{k}}\bigg{)}\bigg{|}\ll_{q}|\mathcal{A}(N)|(BS^ {2})^{\frac{1}{5}-\eta}, \tag{6.2}\]
for some \(\eta>0\), where the "\(\ll\)" depends only on \(q\).14 Using the bound in (6.1) we then deduce that
Footnote 14: In this case, \(A(N)=N^{1-\delta_{q}}\) for \(N=q^{k}\) where \(\delta_{q}=\frac{\log(1+\frac{1}{q-1})}{\log q}\), so that the bigger that \(q\) gets, the more (Hausdorff)-dense \(\mathcal{A}\) is. This is why these arguments work better as \(q\) gets larger.
\[\sum_{\begin{subarray}{c}0\leq r<s\leq S\\ (r,s)=1\end{subarray}}\sum_{\begin{subarray}{c}|\frac{j}{N}-\frac{r}{s}|\leq \frac{1}{N}\text{ or }\\ |\frac{j}{N}-\frac{r}{s}|\asymp\frac{B}{N}\end{subarray}}\bigg{|}S_{\mathcal{P} }\bigg{(}\frac{j}{N}\bigg{)}\cdot S_{\mathcal{A}}\bigg{(}\frac{-j}{N}\bigg{)} \bigg{|}\ll_{q}|\mathcal{A}(N)|\bigg{(}N^{1-\eta}+\frac{N}{(BS)^{\frac{1}{10}}} \bigg{)}(\log N)^{4}\]
since \(BS^{2}\leq BSM\ll N\) and \(BS^{2}\leq(BS)^{2}\). Now we sum this bound over all \(B=q^{\ell},S=q^{i}\) where \(i\) and \(\ell\) are integers \(\geq 0\), with \((\log N)^{A}\ll BS=q^{i+\ell}\leq N\) (so that there are \(\ll(\log N)^{2}\) such pairs \(i,\ell\)). Therefore we obtain
\[\frac{1}{N}\sum_{j:\frac{j}{N}\in\mathfrak{m}}\bigg{|}S_{\mathcal{P}}\bigg{(} \frac{j}{N}\bigg{)}\cdot S_{\mathcal{A}}\bigg{(}\frac{-j}{N}\bigg{)}\bigg{|} \ll_{q}\frac{|\mathcal{A}(N)|}{(\log N)^{C}}\]
provided \(A\geq 10(C+4)\). (We can therefore define our arcs using any fixed \(A>50\), and then select \(C\) with \(A=10(C+4)\), ensuring that \(C>1\).) Therefore Maynard's result, that we have asymptotically the predicted number of primes missing some given digit in base \(q\), follows for all bases \(q\geq 133359\).
### The mean value of \(|S_{\mathcal{A}}(\alpha)|\)
For any real \(\theta\), the first \(k\) base-\(q\) digits of the numbers
\[\bigg{\{}\theta+\frac{j}{q^{k}}\mod 1:0\leq j\leq q^{k}-1\bigg{\}}\]
run once through each tuple \((t_{0},\ldots,t_{k-1})\in\{0,1,\ldots,q-1\}^{k}\). Therefore, by (5.4),
\[\sum_{j=0}^{q^{k}-1}\bigg{|}S_{\mathcal{A}}\bigg{(}\theta+\frac{j}{q^{k}}\bigg{)} \bigg{|}\leq\prod_{i=0}^{k-1}\sum_{t_{i}=0}^{q-1}\min\bigg{\{}q-1,1+\frac{1}{ \min\{\sin(\pi\frac{t_{i}}{q}),\sin(\pi\frac{q-1-t_{i}}{q})\}}\bigg{\}}. \tag{6.3}\]
Now
\[\sum_{t=0}^{q-1}\min\bigg{\{}q-1,1+\frac{1}{\min\{\sin(\pi\frac{t }{q}),\sin(\pi\frac{q-1-t}{q})\}}\bigg{\}}\] \[\qquad=3q-4+2\sum_{1\leq t<\frac{q-1}{2}}\frac{1}{\sin(\pi\frac{ t}{q})}+\frac{1_{2|q-1}}{\sin(\pi\frac{q-1}{2q})}.\]
The value of this sum is \(\frac{2}{\pi}q\log q+O(q)\), but for our application we need the much weaker but fully explicit upper bound
\[\leq(q-1)q^{\tau}\text{ for all }q\geq 133359\]
where \(\tau=\frac{1}{5}-\eta\) and \(\eta=10^{-9}\). The exponent "\(\frac{1}{5}\)" here is critical because of the \(N^{4/5}\) in (6.1). Substituting this into (6.3) we deduce that
\[\sum_{j=0}^{q^{k}-1}\bigg{|}S_{\mathcal{A}}\bigg{(}\theta+\frac{j}{q^{k}} \bigg{)}\bigg{|}\leq(q-1)^{k}q^{k\tau}. \tag{6.4}\]
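The explicit inequality used here is a finite computation for any particular \(q\); the following sketch checks it at the stated threshold (the cutoff \(q=133359\) is taken from the text, and near it the two sides nearly coincide, which is why the threshold sits where it does):

```python
import math

def digit_sum_bound(q):
    # 3q - 4 + 2 * sum_{1 <= t < (q-1)/2} 1/sin(pi t/q), plus the middle
    # term when q - 1 is even, as in the closed form displayed above
    total = 3 * q - 4 + 2 * sum(1 / math.sin(math.pi * t / q)
                                for t in range(1, (q - 1) // 2))
    if (q - 1) % 2 == 0:
        total += 1 / math.sin(math.pi * (q - 1) / (2 * q))
    return total

q, tau = 133359, 1 / 5 - 1e-9
print(digit_sum_bound(q), (q - 1) * q ** tau)   # the two sides nearly coincide
```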
Therefore the average value of \(|S_{\mathcal{A}}(\alpha)|\) is given by
\[\int_{0}^{1}|S_{\mathcal{A}}(\alpha)|d\alpha=\int_{0}^{q^{-k}}\sum_{j=0}^{q^{ k}-1}\bigg{|}S_{\mathcal{A}}\bigg{(}\theta+\frac{j}{q^{k}}\bigg{)}\bigg{|}d\theta \leq\big{(}\tfrac{q-1}{q}\big{)}^{k}\cdot q^{k\tau}. \tag{6.5}\]
This is \(<q^{k/5}=N^{1/5}\), much smaller than the \(N^{1/2}\) obtained from the mean square, which is what is important in this argument. But it is also much larger than \((C+o(1))^{k}\), the bound we obtained for \(|S_{\mathcal{A}}(\theta)|\) for the typical \(\theta\) (that is, \(\theta\) whose base-\(q\) digits are equidistributed), and it is feasible that one can do significantly better than we do here with cleverer arguments better exploiting the typical \(\theta\).
### The mean value of \(|S_{\mathcal{A}}^{\prime}(\alpha)|\)
For \(n=\sum_{j=0}^{k-1}a_{j}q^{j}\) we have
\[\frac{d}{d\theta}e(n\theta)=2i\pi\cdot n\,e(n\theta)=2i\pi\cdot\sum_{j=0}^{k-1} a_{j}q^{j}e(a_{j}q^{j}\theta)\prod_{i\neq j}e(a_{i}q^{i}\theta).\]
We can modify the above argument from bounds for a sum of \(|S_{\mathcal{A}}(\cdot)|\)-values to a sum of \(|S_{\mathcal{A}}^{\prime}(\cdot)|\)-values, by bounding the contribution of the \(j\)th term in the product by \(q^{j}\) times
\[\bigg{|}\sum_{a=0}^{q-1}a\,e(a\phi)-b\,e(b\phi)\bigg{|} \leq\min\bigg{\{}\frac{q(q-1)}{2},b+\bigg{|}\frac{\sum_{j=1}^{q-1 }e(j\phi)-(q-1)e(q\phi)}{1-e(\phi)}\bigg{|}\bigg{\}}\] \[\leq(q-1)\min\bigg{\{}\frac{q}{2},1+\frac{1}{\sin(\pi\|\phi\|)} \bigg{\}}\] \[\leq(q-1)\min\bigg{\{}q-1,1+\frac{1}{\sin(\pi\|\phi\|)}\bigg{\}}\]
with \(\phi=q^{j}\theta\). Therefore, as \((q-1)\sum_{j=0}^{k-1}q^{j}<q^{k}\) we obtain
\[\int_{0}^{1}|S^{\prime}_{\mathcal{A}}(\alpha)|d\alpha\leq 2\pi(q-1)^{k}q^{k\tau}. \tag{6.6}\]
### Bounds on \(|S_{\mathcal{A}}(\theta_{i})|\) at well spread-out points
One can bound a differentiable function \(f(\cdot)\) at a point by its values in a neighbourhood by the classical inequality
\[|f(\theta)|\leq\frac{1}{2\Delta}\int_{\theta-\Delta}^{\theta+\Delta}|f(\phi)|d \phi+\frac{1}{2}\int_{\theta-\Delta}^{\theta+\Delta}|f^{\prime}(\phi)|d\phi,\]
which follows from \(f(\theta)=f(\phi)-\int_{\theta}^{\phi}f^{\prime}(t)dt\) upon averaging over \(\phi\in(\theta-\Delta,\theta+\Delta)\) (each \(t\) is covered by at most half of the \(\phi\)).
We can sum this over a set of points (on the unit circle), \(\theta_{1},\dots,\theta_{m}\) where \(|\theta_{i}-\theta_{j}|\geq 2\Delta\) if \(i\neq j\) so the integrals above do not overlap, to obtain
\[\sum_{i=1}^{m}|f(\theta_{i})|\leq\frac{1}{2\Delta}\int_{0}^{1}|f(\phi)|d\phi+ \frac{1}{2}\int_{0}^{1}|f^{\prime}(\phi)|d\phi. \tag{6.7}\]
Our choice of points is a bit complicated: The \(\theta_{i}\) will be selected within \(\Delta=\frac{1}{4S^{2}}\) of the fractions \(\frac{r}{s}\) with \(0\leq r<s\leq S\) and \((r,s)=1\), displaced by a fixed quantity \(\xi\). The fractions are distinct so any two differ by \(|\frac{r}{s}-\frac{r^{\prime}}{s^{\prime}}|\geq\frac{1}{ss^{\prime}}\geq\frac{1}{S^{2}}\), and therefore the points differ by \(\geq\frac{1}{S^{2}}-2\Delta=2\Delta\) and so
\[\sum_{s\leq S}\sum_{\begin{subarray}{c}0\leq r<s\\ (r,s)=1\end{subarray}}\max_{|\eta|\leq\Delta}\left|f\bigg{(}\frac{r}{s}+\xi+ \eta\bigg{)}\right|\leq 2S^{2}\int_{0}^{1}|f(\phi)|d\phi+\frac{1}{2}\int_{0}^{1}|f^{ \prime}(\phi)|d\phi.\]
We now apply this with \(f=S_{A}\) and use (6.5) and (6.6) to obtain
\[\sum_{\begin{subarray}{c}0\leq r<s\leq S\\ (r,s)=1\end{subarray}}\max_{|\eta|\leq\frac{1}{4S^{2}}}\left|S_{A}\bigg{(} \frac{r}{s}+\xi+\eta\bigg{)}\right|\leq(2S^{2}q^{-k}+\pi)(q-1)^{k}q^{k\tau}. \tag{6.8}\]
### Hybrid estimate
We need notation that reflects that our sum is up to \(q^{k}\), since we will now vary \(k\). So let
\[\widehat{A_{k}}(\theta):=S_{\mathcal{A}}(\theta)=\sum_{n\in\mathcal{A}(q^{k}) }e(n\theta)\]
Our formula (5.1) implies that if \(\ell\leq k\) then
\[\widehat{A_{k}}(\theta)=\widehat{A_{k-\ell}}(\theta)\widehat{A_{\ell}}(q^{k- \ell}\theta).\]
For \(m\leq k-\ell\) replace \(k\) by \(k-\ell\) and \(k-\ell\) by \(m\) so that
\[\widehat{A_{k-\ell}}(\theta)=\widehat{A_{m}}(\theta)\widehat{A_{k-\ell-m}}(q^ {m}\theta),\]
and therefore
\[\widehat{A_{k}}(\theta)=\widehat{A_{m}}(\theta)\widehat{A_{k-\ell-m}}(q^{m} \theta)\widehat{A_{\ell}}(q^{k-\ell}\theta).\]
Since \(|\widehat{A_{k-\ell-m}}(q^{m}\theta)|\leq(q-1)^{k-\ell-m}\) this yields
\[|\widehat{A_{k}}(\theta)|\leq(q-1)^{k-\ell-m}|\widehat{A_{m}}(\theta)|\cdot| \widehat{A_{\ell}}(q^{k-\ell}\theta)|,\]
and so
\[\begin{split}\left|\widehat{A_{k}}\bigg{(}\frac{j}{q^{k}}\bigg{)} \right|&\leq(q-1)^{k-\ell-m}\bigg{|}\widehat{A_{m}}\bigg{(}\frac{j} {q^{k}}\bigg{)}\bigg{|}\cdot\bigg{|}\widehat{A_{\ell}}\bigg{(}\frac{j}{q^{\ell }}\bigg{)}\bigg{|}\\ &\leq(q-1)^{k-\ell-m}\bigg{|}\widehat{A_{\ell}}\bigg{(}\frac{j}{q ^{\ell}}\bigg{)}\bigg{|}\cdot\max_{i:|i-q^{k}\cdot\frac{r}{s}|\leq B}\bigg{|} \widehat{A_{m}}\bigg{(}\frac{i}{q^{k}}\bigg{)}\bigg{|}.\end{split}\]
provided \(|j-q^{k}\cdot\frac{r}{s}|\leq B\).
We let \(B=q^{\ell}\) and \(S^{2}=q^{m}\) so that \(2S^{2}/q^{m}+\pi\ll 1\) and \(q^{m}=S^{2}\leq SM\ll N/B\ll q^{k-\ell}\). We have
\[\begin{split}&\sum_{s\leq S}\sum_{\begin{subarray}{c}0\leq r<s \\ (r,s)=1\end{subarray}}\sum_{j:|j-q^{k}\cdot\frac{r}{s}|\leq B}\bigg{|}S_{\mathcal{ A}}\bigg{(}\frac{j}{q^{k}}\bigg{)}\bigg{|}\\ &\leq(q-1)^{k-\ell-m}\sum_{\begin{subarray}{c}0\leq r<s\leq S\\ (r,s)=1\end{subarray}}\max_{i:|i-q^{k}\cdot\frac{r}{s}|\leq B}\bigg{|}\widehat{A _{m}}\bigg{(}\frac{i}{q^{k}}\bigg{)}\bigg{|}\cdot\sum_{j:|j-q^{k}\cdot\frac{r}{ s}|\leq B}\bigg{|}\widehat{A_{\ell}}\bigg{(}\frac{j}{q^{\ell}}\bigg{)} \bigg{|}.\end{split}\]
We extend the final sum to a sum over all \(j\pmod{q^{\ell}}\) so that
\[\sum_{j:|j-q^{k}\cdot\frac{r}{s}|\leq B}\bigg{|}\widehat{A_{\ell}}\bigg{(} \frac{j}{q^{\ell}}\bigg{)}\bigg{|}\leq(q-1)^{\ell}q^{\tau\ell}\]
by (6.4), and therefore
\[\begin{split}\sum_{\begin{subarray}{c}0\leq r<s\leq S\\ (r,s)=1\end{subarray}}\sum_{j:|j-q^{k}\cdot\frac{r}{s}|\leq B}\bigg{|}S_{ \mathcal{A}}\bigg{(}\frac{j}{q^{k}}\bigg{)}\bigg{|}\leq(q-1)^{k-m}q^{\tau\ell }\sum_{\begin{subarray}{c}0\leq r<s\leq S\\ (r,s)=1\end{subarray}}\max_{i:|i-q^{k}\cdot\frac{r}{s}|\leq B}\bigg{|}\widehat{A _{m}}\bigg{(}\frac{i}{q^{k}}\bigg{)}\bigg{|}.\end{split}\]
For the next sum we use that \(B\leq N/q^{2}SM\) and \(S\leq M/q\) so that \(B/N\leq 1/q^{3}S^{2}\). Therefore
\[\max_{i:|i-q^{k}\cdot\frac{r}{s}|\leq B}\bigg{|}\widehat{A_{m}}\bigg{(}\frac{ i}{q^{k}}\bigg{)}\bigg{|}\leq\max_{|\eta|\leq\frac{B}{q^{k}}}\bigg{|} \widehat{A_{m}}\bigg{(}\frac{r}{s}+\eta\bigg{)}\bigg{|}\leq\max_{|\eta|\leq \frac{1}{4S^{2}}}\bigg{|}\widehat{A_{m}}\bigg{(}\frac{r}{s}+\eta\bigg{)} \bigg{|}\]
and so the internal sum above is
\[\begin{split}\sum_{\begin{subarray}{c}0\leq r<s\leq S\\ (r,s)=1\end{subarray}}\max_{i:|i-q^{k}\cdot\frac{r}{s}|\leq B}\bigg{|}\widehat{A _{m}}\bigg{(}\frac{i}{q^{k}}\bigg{)}\bigg{|}&\leq\sum_{ \begin{subarray}{c}0\leq r<s\leq S\\ (r,s)=1\end{subarray}}\max_{|\eta|\leq\frac{1}{4S^{2}}}\bigg{|}\widehat{A_{m}} \bigg{(}\frac{r}{s}+\eta\bigg{)}\bigg{|}\\ &\ll q^{O(1)}(q-1)^{m}q^{\tau m}\end{split}\]
by (6.8). Therefore
\[\sum_{s\leq S}\sum_{\begin{subarray}{c}0\leq r<s\\ (r,s)=1\end{subarray}}\sum_{j:|j-q^{k}\cdot\frac{r}{s}|\leq B}\bigg{|}S_{ \mathcal{A}}\bigg{(}\frac{j}{q^{k}}\bigg{)}\bigg{|}\ll_{q}|\mathcal{A}(N)|q^{( \ell+m)\tau}.\]
This implies that (6.2) holds, since \(q^{\ell+m}=BS^{2}\).
## 7. Reducing \(q\)
We have proved Maynard's Theorem, for primes missing one digit in base \(q\), for all \(q\geq 133359\). The goal is base \(q=10\), so we need to find ways to improve the above argument to significantly reduce the size of \(q\) to which it applies.
### More calculation
Now that we only have to work with the finite set of integers \(q<133359\), and the finite set of values \(b\in\{0,1,\ldots,q-1\}\), we can do a separate calculation tailored more carefully to each individual case. For example, instead of using the bound \(F_{\mathcal{D}}(\phi)\leq 1+\frac{1}{\sin(\pi\|\phi\|)}\) we might instead work with the definition of \(F_{\mathcal{D}}\), so that if \(\phi\in[\frac{t}{q},\frac{t+1}{q})\) with \(t\in\mathbb{Z}\) then
\[F_{\mathcal{D}}(\phi)\leq\max_{0\leq\eta<1}\bigg{|}\frac{e(\eta)-1}{e(\frac{t +\eta}{q})-1}-e(b\cdot\tfrac{t+\eta}{q})\bigg{|}.\]
Therefore we can replace the calculation after (6.3), bounding the sum for each \(i\), by the more precise
\[\max_{0\leq b\leq q-1}\sum_{t=0}^{q-1}\max_{0\leq\eta<1}\bigg{|}\frac{e(\eta)- 1}{e(\frac{t+\eta}{q})-1}-e(b\cdot\tfrac{t+\eta}{q})\bigg{|}.\]
For example if \(q=101\), this improves the previous bound of \(\leq 602.82\dots\) to something like \(\leq 497\), but requires substantially more calculation. Using this type of bound one gets weaker bounds for some \(b\)-values than for others, for a given \(q\), and this ends up requiring more elaborate though stronger arguments.
### A new cancelation
By (3.1) we have
\[|S_{\mathcal{A}}(\theta)|\leq\prod_{i=0}^{k-1}\min\bigg{\{}q-1,1+\bigg{|} \frac{e(q^{i+1}\theta)-1}{e(q^{i}\theta)-1}\bigg{|}\bigg{\}}.\]
The second bound, \(\leq 1+\frac{1}{\sin(\pi\|q^{i}\theta\|)}\), gives the minimum if \(1\leq t_{i}\leq q-2\).
In section 6 we proceeded by bounding the \(i\)th term of the product on average for each \(i\), treating different \(i\) independently (and so our upper bounds give the "worst case" for each \(i\)). We did so by simply using that \(q^{i}\theta\mod 1\in[\frac{t_{i}}{q},\frac{t_{i}+1}{q})\), and bounding \(|e(q^{i+1}\theta)-1|\leq 2\).
This ignored the fact that \(\|q^{i+1}\theta\|\) can be determined given \(\|q^{i}\theta\|\). If we use the more precise \(q^{i}\theta\mod 1\in[\frac{t_{i}+\frac{t_{i+1}}{q}}{q},\frac{t_{i}+\frac{t_{i +1}+1}{q}}{q})\) then the upper bound (5.2) for the \(i\)th and \((i+1)\)st terms are
\[\leq 1+\frac{1}{\sin(\pi\|\frac{t_{i}+\frac{t_{i+1}+\dots}{q}}{q}\|)}\text{ and } \leq 1+\frac{1}{\sin(\pi\|\frac{t_{i+1}+\dots}{q}\|)}\]
respectively, which are not independent but the dependence here is not so complicated, and we will be able to work with this level of dependence.
The idea is that we will obtain better upper bounds on \(|S_{\mathcal{A}}(\theta)|\) by taking each two consecutive terms of the product together. For example,
\[|S_{\mathcal{A}}(\theta)|\leq q\prod_{j=0}^{k/2-1}R_{2j}\]
where we take the \(i\)th and \((i+1)\)st terms together, and
\[R_{i}=\min\bigg{\{}q-1,1+\bigg{|}\frac{e(q^{i+1}\theta)-1}{e(q^{i}\theta)-1} \bigg{|}\bigg{\}}\cdot\min\bigg{\{}q-1,1+\bigg{|}\frac{e(q^{i+2}\theta)-1}{e( q^{i+1}\theta)-1}\bigg{|}\bigg{\}}.\]
Now if \(1\leq t_{i},t_{i+1}\leq q-2\) then
\[R_{i} \leq\bigg{(}1+\bigg{|}\frac{e(q^{i+1}\theta)-1}{e(q^{i}\theta)-1} \bigg{|}\bigg{)}\cdot\bigg{(}1+\bigg{|}\frac{e(q^{i+2}\theta)-1}{e(q^{i+1} \theta)-1}\bigg{|}\bigg{)}\] \[\leq 1+\bigg{|}\frac{e(q^{i+2}\theta)-1}{e(q^{i}\theta)-1}\bigg{|}+ \bigg{|}\frac{e(q^{i+1}\theta)-1}{e(q^{i}\theta)-1}\bigg{|}+\bigg{|}\frac{e(q ^{i+2}\theta)-1}{e(q^{i+1}\theta)-1}\bigg{|}\] \[\leq 1+\frac{2+|e(q^{i+1}\theta)-1|}{|e(q^{i}\theta)-1|}+\frac{2} {|e(q^{i+1}\theta)-1|}\] \[\leq 1+\frac{1+\max\{\sin(\pi\frac{t_{i+1}+1}{q}),\sin(\pi\frac{ q-t_{i+1}}{q})\}}{\min\{\sin(\pi\frac{t_{i}}{q}),\sin(\pi\frac{q-1-t_{i}}{q})\}}+ \frac{1}{\min\{\sin(\pi\frac{t_{i+1}}{q}),\sin(\pi\frac{q-1-t_{i+1}}{q})\}}.\]
Summing our bounds over \(0\leq t_{i},t_{i+1}\leq q-1\) (using the upper bound \(q-1\) on the \(i\)th term whenever \(t_{i}\) equals \(0\) or \(q-1\), and similarly for the \((i+1)\)st term) we get
\[(3q-4)^{2}+\left(2\sum_{1\leq t<\frac{q-1}{2}}\frac{1}{\sin(\pi\frac{t}{q})}+ \frac{1_{2|q-1}}{\sin(\pi\frac{q-1}{2q})}\right)\left(6q-8+2\sum_{2\leq u\leq \frac{q}{2}}\sin(\pi\frac{u}{q})+1_{2|q-1}\sin(\pi\frac{q+1}{2q})\right)\]
which is \(<(q-1)^{2}q^{2/5}\) for \(q\geq 18647\) (by a computer calculation), and therefore we have proved the claimed result for such \(q\).
We can combine this improvement with that of the previous subsection and the two ideas together should improve the bound on \(q\) further.
By taking two consecutive \(i\)-values together we have improved our lower bound on \(q\) by a factor of more than \(7\), so we can probably get further improvements if we multiply together three consecutive \(i\)-values, or more. When we do this, it is natural to ask how to keep track of useful cancelations, like the
\[\frac{e(q^{i+2}\theta)-1}{e(q^{i+1}\theta)-1}\cdot\frac{e(q^{i+1}\theta)-1}{e( q^{i}\theta)-1}=\frac{e(q^{i+2}\theta)-1}{e(q^{i}\theta)-1}\]
used above, and when do we chose to use the trivial upper bound "2" on the numerator? Maynard's surprising idea is to keep track of all this by regarding the different terms of the product, averaged over all possible sets of \(t_{i}\)'s, as transition probabilities in a Markov process.
### A Markov process

Better bounds can be obtained by making this idea precise. For a suitable function \(F(t,u)\) we have \(|S_{\mathcal{A}}(\theta)|\approx\prod_{i=0}^{k-1}F(t_{i},t_{i+1})\), and we define the matrix \(M\) with entries
\(M_{a,b}:=\frac{F(a,b)}{q-1}\) for \(0\leq a,b\leq q-1\). Then for \(t_{0},t_{k}\in\{0,\dots,q-1\}\)
\[(q-1)^{k}M_{t_{0},t_{k}}^{k}=\sum_{t_{1},\dots,t_{k-1}\in\{0,\dots,q-1\}}\prod_{ i=0}^{k-1}F(t_{i},t_{i+1})\approx\sum_{\begin{subarray}{c}t_{1},\dots,t_{k-1} \in\{0,\dots,q-1\}\\ \theta=\sum_{i=0}^{k}t_{i}/q^{i+1}\end{subarray}}|S_{\mathcal{A}}(\theta)|.\]
Summing this over all \(t_{0},t_{k}\in\{0,\dots,q-1\}\) gives the complete sum over the \(\theta=j/q^{k}\); that is,
\[(q-1)^{-k}\sum_{j=0}^{q^{k}-1}|S_{\mathcal{A}}(\tfrac{j}{q^{k}})|\approx(1,1,\dots,1)M^{k}(1,1,\dots,1)^{T}\leq c_{M}|\lambda_{M}|^{k}\]
where \(\lambda_{M}\) is the largest eigenvalue of \(M\) and \(c_{M}>0\) is some computable constant.15
Footnote 15: We need to change the “\(\approx\)” in \(|S_{\mathcal{A}}(\theta)|\approx\prod_{i=0}^{k-1}F(t_{i},t_{i+1})\) above to a precise inequality, like \(|S_{\mathcal{A}}(\theta)|\leq\prod_{i=0}^{k-1}F(t_{i},t_{i+1})\), where \(F(t,u):=\max_{0\leq\eta\leq 1}\left|\frac{e(\frac{u+\eta}{q})-1}{e( \frac{t}{q}+\frac{u+\eta}{q^{2}})-1}-e(b(\frac{t}{q}+\frac{u+\eta}{q^{2}}))\right|\)
Our proof of the bounds for the minor arcs can be modified in a straightforward way, and then the result follows provided
\[\lambda_{M}<q^{1/5}.\]
With our earlier proved results we can assume that \(q<18647\); in particular we can compute the matrix in each case and determine the largest eigenvalue.
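One can approximate \(\lambda_{M}\) numerically from this description (equivalently, from \(G\) with \(\ell=1\) in the next subsection). In the sketch below the maximum over each cell is taken on a finite grid, so the computed entries — and hence the computed eigenvalue — slightly underestimate the truth; the values of \(q\) and \(b\) are arbitrary test choices:

```python
import numpy as np

def largest_eigenvalue(q, b, grid=64):
    # M[t, u] = F(t, u)/(q - 1), where F(t, u) is the maximum of
    # |sum_{a in D} e(a*theta)| over theta whose first two base-q
    # digits are (t, u); the grid only approximates that maximum.
    D = np.array([a for a in range(q) if a != b])
    eta = np.linspace(0.0, 1.0, grid) / q ** 2
    M = np.empty((q, q))
    for t in range(q):
        for u in range(q):
            theta = t / q + u / q ** 2 + eta
            F = np.abs(np.exp(2j * np.pi * np.outer(D, theta)).sum(axis=0)).max()
            M[t, u] = F / (q - 1)
    return np.abs(np.linalg.eigvals(M)).max()

q, b = 101, 0   # arbitrary test values
print(largest_eigenvalue(q, b), q ** 0.2)   # the criterion asks for first < second
```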
### A more general Markov process
But this is far from the end of the story, since we can be more precise by replacing the transition from the first two terms of the expansion of \(q^{i}\theta\), \(t_{i},t_{i+1}\), to the next two, \(t_{i+1},t_{i+2}\), in our "Markov process", by the transition from the first \(\ell\) terms of the expansion of \(q^{i}\theta\) to the next \(\ell\). This yields a \(q^{\ell}\)-by-\(q^{\ell}\) transition matrix \(M=M^{(\ell)}\) which is indexed by \(\ell\) digits in base \(q\) and \((M^{(\ell)})_{I,J}\) can only be non-zero if
\[I=(t_{1},\dots,t_{\ell}),J=(t_{2},\dots,t_{\ell+1})\text{ for some base-$q$ digits $t_{1},\dots,t_{\ell+1}$.}\]
Therefore each row and column is supported at only \(q\) entries.
If \(\theta=\sum_{i=1}^{\ell+1}t_{i}/q^{i}\) then the corresponding entry of \(M^{(\ell)}\) is \(G(t_{1},\dots,t_{\ell+1})\) where
\[G(t_{1},\dots,t_{\ell+1}):=\max_{0\leq\eta\leq 1/q^{\ell+1}}\frac{1}{| \mathcal{D}|}F_{\mathcal{D}}(\theta+\eta).\]
If \(\lambda_{\ell}\) is the largest eigenvalue of \(M^{(\ell)}\) in absolute value then
\[\sum_{j=0}^{q^{k}-1}\left|S_{\mathcal{A}}\bigg{(}\frac{j}{q^{k}}\bigg{)} \right|\ \ll|\mathcal{D}|^{k}\cdot|\lambda_{\ell}|^{k},\]
and therefore if \(|\lambda_{\ell}|<q^{1/5}\) for some \(\ell\geq 1\) then there are indeed the expected number of primes with base-\(q\) digits in the set \(\mathcal{D}\).
Since these are truncations of the true Markov process on a Hilbert space (with infinitely many dimensions) we have that \(|\lambda_{1}|>|\lambda_{2}|>\dots\) and so our bounds improve as \(\ell\) gets larger. These tend to a (positive) limit \(|\lambda_{\infty}|\) which gives the solution to the eigenvalue problem for the matrices in this Hilbert space. However,
numerical approximation shows that \(|\lambda_{\infty}|\) is not as small as would be needed to resolve the base \(10\) problem.
### Using the Markov process to remove generic minor arcs in small bases
Maynard's next idea for small \(q\) was to "remove" as many "generic" minor arcs as possible. He does so by using a simple moment argument: For any \(\sigma>0\) we have
\[\#\bigg{\{}j\in[0,N):\bigg{|}S_{\mathcal{A}}\bigg{(}\frac{j}{q^{k}}\bigg{)} \bigg{|}\geq\frac{A(N)}{T}\bigg{\}}\leq\bigg{(}\frac{T}{A(N)}\bigg{)}^{\sigma} \sum_{j=0}^{N-1}\bigg{|}S_{\mathcal{A}}\bigg{(}\frac{j}{q^{k}}\bigg{)}\bigg{|}^{ \sigma}, \tag{7.1}\]
so now we are interested in bounding the \(\sigma\)th moment of \(|S_{A}|\). To do this we work with the matrix \(M^{(\ell,\sigma)}\) where \((M^{(\ell,\sigma)})_{I,J}=(M^{(\ell)})_{I,J}^{\sigma}\), so that if \(\lambda_{\ell,\sigma}\) is the largest eigenvalue of \(M^{(\ell,\sigma)}\) in absolute value then
\[\frac{1}{A(N)^{\sigma}}\sum_{j=0}^{q^{k}-1}\bigg{|}S_{\mathcal{A}}\bigg{(} \frac{j}{q^{k}}\bigg{)}\bigg{|}^{\sigma}\ \ll|\lambda_{\ell,\sigma}|^{k}.\]
On the other hand
\[\#\bigg{\{}j\in[0,N):\bigg{|}S_{\mathcal{P}}\bigg{(}\frac{j}{q^{k}}\bigg{)} \bigg{|}\geq U\bigg{\}}\leq U^{-2}\sum_{j=0}^{N-1}\bigg{|}S_{\mathcal{P}} \bigg{(}\frac{j}{q^{k}}\bigg{)}\bigg{|}^{2}=U^{-2}N\pi(N)\sim\frac{N^{2}}{U^{2 }\log N}.\]
Therefore if \(\mathcal{E}=\{j\in[0,N):|S_{\mathcal{A}}(\frac{j}{q^{k}})|\geq\frac{A(N)}{T}\) or \(|S_{\mathcal{P}}(\frac{j}{q^{k}})|\geq U\}\) then
\[\frac{1}{N}\sum_{\begin{subarray}{c}j=0\\ j\not\in\mathcal{E}\end{subarray}}^{N-1}\bigg{|}S_{\mathcal{A}}\bigg{(}\frac{ j}{q^{k}}\bigg{)}\cdot S_{\mathcal{P}}\bigg{(}\frac{j}{q^{k}}\bigg{)}\bigg{|}\leq \frac{A(N)}{(\log N)^{2}}\]
taking \(U=T/(\log N)^{2}\). Now if \(|\lambda_{\ell,\sigma}|<q^{\rho}\) then
\[|\mathcal{E}|\ll T^{\sigma}N^{\rho}+\frac{N^{2}(\log N)^{3}}{T^{2}}<N^{\frac{ 2+\rho+\rho\sigma}{2+\sigma}+o(1)},\]
selecting \(T=N^{\frac{2-\rho}{2+\sigma}}\).
Karwatowski [23] used the fact that the eigenvalues of a matrix are bounded in absolute value by the largest sum of the absolute values of the elements in a row of the matrix, to numerically prove the bounds
\[\lambda_{4,1}<q^{\frac{27}{77}}\text{ and }\lambda_{4,\frac{235}{154}}<q^{ \frac{50}{433}}\]
for all \(q\geq 10\) (Maynard had already shown these inequalities hold for \(q=10\).) The moment method with \(\sigma=\frac{235}{154}\) then implies \(|\mathcal{E}|\ll N^{2/3}\) arguing as above, and therefore one can focus on the exceptional \(j\)-values.
To make the base-\(10\) argument unconditionally doable, Maynard developed delicate sieve methods. In effect this allowed him to replace needing to understand how often primes are written with the digits from \(\mathcal{D}\) in base \(q\), to understanding when integers composed of a product of a few large primes in certain given intervals are so represented. Maynard could therefore improve on the upper bounds for exponential sums over primes (as in (6.1)) when appropriately weighted, since now he was working with a more malleable set of the integers. He was able to restrict attention to a set \(\mathcal{E}\subset\mathfrak{m}\) of exceptional integers \(j\) with \(|\mathcal{E}|\ll N^{.36}\).
### The exceptional minor arcs
If \(j/q^{k}\in\mathcal{E}\) has an important effect on our sum, then the fraction \(j/q^{k}\) will have to simultaneously have several surprising Diophantine features, which Maynard proves are mostly incompatible (when \(q=10\)). The techniques are too complicated to discuss here. Figure 1, an outline of the steps used to prove that there are primes with missing digits, exhibits the tools used in the whole proof, especially those needed when dealing with these exceptional arcs.
## 8. Generalizations
Our argument for sufficiently large \(q\) generalizes to a given set \(\mathcal{D}\), if \(\mathcal{D}\) contains two consecutive integers (for section 5.3), and if
\[\sum_{t=0}^{q-1}\max_{0\leq\eta<\frac{1}{q}}F_{\mathcal{D}}\bigg{(}\frac{t}{q} +\eta\bigg{)}<|\mathcal{D}|\,q^{1/5}\]
(The contributions of the \(\sum_{a\in\mathcal{D}}ae(a\phi)\) in section 6.4 are \(q^{O(1)}\) and so are not relevant.) Now if \(\mathcal{D}=\{0,\ldots,q-1\}\setminus\mathcal{R}\) for a set \(\mathcal{R}\) with \(r\) elements then
\[\bigg{|}\sum_{a\in\mathcal{D}}e(a\phi)\bigg{|}\leq\bigg{|}\sum_{b\in\mathcal{ R}}e(b\phi)\bigg{|}+\bigg{|}\sum_{a=0}^{q-1}e(a\phi)\bigg{|}\leq r+\frac{1}{ \sin(\pi\|\phi\|)},\]
so that
\[\sum_{t=0}^{q-1}\max_{0\leq\eta<\frac{1}{q}}F_{\mathcal{D}}\bigg{(}\frac{t}{q} +\eta\bigg{)}\leq(q-1)r+q\log q+O(q)<(q-r)q^{1/5}\]
if \(r<(1-\epsilon)q^{1/5}\) for \(q\) sufficiently large. We can improve this using (6.7), first summing over the points with \(t\) even, then those with \(t\) odd; this gives
\[\leq q\int_{0}^{1}\bigg{|}\sum_{b\in\mathcal{R}}e(b\phi)\bigg{|}d\phi+\int_{0 }^{1}\bigg{|}\sum_{b\in\mathcal{R}}be(b\phi)\bigg{|}d\phi<2q\sqrt{r}\]
since, for any coefficients \(c_{b}\)
\[\bigg{(}\int_{0}^{1}\bigg{|}\sum_{b\in\mathcal{R}}c_{b}e(b\phi)\bigg{|}d\phi\bigg{)} ^{2}\leq\int_{0}^{1}\bigg{|}\sum_{b\in\mathcal{R}}c_{b}e(b\phi)\bigg{|}^{2}d\phi =\sum_{b\in\mathcal{R}}|c_{b}|^{2}\]
by the Cauchy-Schwarz inequality. Therefore
_There are roughly the expected number of primes whose base-\(q\) digits come from the set \(\mathcal{D}\) whenever \(|\mathcal{D}|\geq q-\frac{1}{5}q^{2/5}\), for \(q\) sufficiently large._
Another idea is to let \(\mathcal{D}\) be a set of \(r\) consecutive integers; we can see that
\[\bigg{|}\sum_{a\in\mathcal{D}}e(a\phi)\bigg{|}\leq\min\bigg{\{}r,\frac{1}{\sin (\pi\|\phi\|)}\bigg{\}}\]
so that
\[\sum_{t=0}^{q-1}\max_{0\leq\eta<\frac{1}{q}}F_{\mathcal{D}}\bigg{(}\frac{t}{q} +\eta\bigg{)}\ll q\log r\]
and this is \(<rq^{1/5}=|\mathcal{D}|q^{1/5}\) provided \(r\gg q^{4/5}\log q\). Therefore
_There are roughly the expected number of primes whose base-\(q\) digits come from any set \(\mathcal{D}\) of \(\gg q^{4/5}\log q\) consecutive integers, for \(q\) sufficiently large._
The "\(\frac{4}{5}\)" was improved to "\(\frac{3}{4}\)" in [31], and even to "\(\frac{57}{80}\)" if one just wants a lower bound of the correct order of magnitude.
## Part 2. Approximations by reduced fractions
## 9. Approximating most real numbers
We begin by reducing the real numbers modulo the integers; that is, given \(\theta\in\mathbb{R}\) we consider the equivalence class \((\theta)\) of real numbers that differ from \(\theta\) by an integer (and so each such \((\theta)\) is represented by a unique real number in \((-\frac{1}{2},\frac{1}{2}]\)).
Dirichlet observed that if \(\alpha\in[0,1)\) then the representations of
\[(0),(\alpha),(2\alpha),\cdots,(N\alpha)\]
all belong to an interval of length \(1\) so two of them \((i\alpha)\) and \((j\alpha)\) must differ by \(<\frac{1}{N}\), by the pigeonhole principle.16 Now if \(n=|j-i|\) then \(n\leq N\) and \(n=\pm(j-i)\), so that
Footnote 16: Moreover, by embedding the interval onto the circle by the map \(t\to e(t):=e^{2i\pi t}\) we see that they must differ by \(<\frac{1}{N+1}\).
\[\pm(n\alpha)\equiv\pm n\alpha=(j-i)\alpha\equiv(j\alpha)-(i\alpha)\mod 1.\]
Therefore there exists an integer \(m\) for which \(|n\alpha-m|<\frac{1}{N}\) which we rewrite as
\[\left|\alpha-\frac{m}{n}\right|<\frac{1}{nN}\leq\frac{1}{n^{2}}.\]
This is a close approximation to \(\alpha\) by rationals, and one wonders whether one can do much better. In general, no: the continued fraction of the golden ratio \(\phi:=\frac{1+\sqrt{5}}{2}\) implies that the best approximations to \(\phi\) are given by \(F_{n+1}/F_{n},n\geq 1\), where \(F_{n}\) is the \(n\)th Fibonacci number: One can show that
\[\left|\phi-\frac{F_{n+1}}{F_{n}}\right|\sim\frac{1}{\sqrt{5}}\cdot\frac{1}{F_ {n}^{2}},\]
and so all approximations to \(\phi\) by rationals \(p/q\) miss by \(\geq\{1+o(1)\}\frac{1}{\sqrt{5}}\cdot\frac{1}{q^{2}}\).
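This limiting behaviour is easy to observe numerically: Binet's formula gives \(|\phi-\frac{F_{n+1}}{F_{n}}|=\frac{1}{\phi^{n}F_{n}}\) exactly, so the normalized error below tends to \(1\) (a quick sketch):

```python
import math

phi = (1 + math.sqrt(5)) / 2
F = [1, 1]
while len(F) <= 30:
    F.append(F[-1] + F[-2])

# |phi - F_{n+1}/F_n| * sqrt(5) * F_n^2 tends to 1
for n in (5, 10, 20, 30):
    err = abs(phi - F[n] / F[n - 1])   # F[n]/F[n-1] is F_{n+1}/F_n
    print(n, err * math.sqrt(5) * F[n - 1] ** 2)
```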
This led researchers at the end of the 19th century to realize that if the partial quotients in the continued fraction for irrational \(\alpha\) are bounded, say by \(B\) (note that \(\phi=[1,1,1,\dots]\)) then there exists a constant \(c=c_{B}>0\) such that \(|\alpha-\frac{m}{n}|\geq\frac{c_{B}}{n^{2}}\). However there are very few such \(\alpha\) under any reasonable measure. If the partial quotients aren't bounded then how good can approximations be? And how well can one approximate famous irrationals like \(\pi\)? (still a very open question).17
Footnote 17: If \(\alpha\) has continued fraction \([a_{0},a_{1},\dots]\) and \(|\alpha-\frac{b}{q}|<\frac{1}{2q^{2}}\) then \(\frac{b}{q}\) is a convergent of the continued fraction, say the \(j\)th convergent, and then \(|\alpha-\frac{b_{j}}{q_{j}}|\asymp\frac{1}{a_{j}q_{j}^{2}}\); that is, we get better approximations the larger the \(a_{j}\) in the continued fractions (especially in comparison to the \(q_{j}\)). However we do not understand the continued fractions of most real numbers \(\alpha\) well enough to be able to assert that the problem is resolved, so we have transferred the difficulty of the problem into a seemingly different domain. See appendix 11B of [16] for more on continued fractions.
An easy argument shows that the set of \(\alpha\in[0,1)\) with infinitely many rational approximations \(\frac{m}{n}\) for which \(|\alpha-\frac{m}{n}|\leq\frac{1}{n^{3}}\) has measure \(0\). Indeed if there are infinitely many such rational approximations then there is one with \(n>x\) (an integer). Now for each \(n\) the measure of \(\alpha\in[0,1)\) with \(|\alpha-\frac{m}{n}|\leq\frac{1}{n^{3}}\) is \(\frac{1}{n^{3}}\) for \(m=0\) or \(n\), \(\frac{2}{n^{3}}\) for \(1\leq m\leq n-1\) and \(0\) otherwise, a total of \(\frac{2}{n^{2}}\), and summing that over all \(n>x\) gives \(\sum_{n>x}\frac{2}{n^{2}}<\int_{x}^{\infty}\frac{2}{t^{2}}dt=\frac{2}{x}\). Letting \(x\to\infty\) we see that the measure is \(0\). Obviously the analogous result holds for \(|\alpha-\frac{m}{n}|\leq\frac{1}{(n\log n)^{2}}\), and any other such bounds that lead to convergence of the infinite sum.
More generally we should study, for a given function \(\psi:\mathbb{Z}_{\geq 1}\to\mathbb{R}_{\geq 0}\), the set \(\mathcal{L}(\psi)\) which contains those \(\alpha\in[0,1)\) for which there are infinitely many rationals \(m/n\) for which
\[\left|\alpha-\frac{m}{n}\right|\leq\frac{\psi(n)}{n^{2}}.\]
We have seen that \(\mathcal{L}(1)=[0,1)\) whereas if \(c<1/\sqrt{5}\) then \(\phi-1\not\in\mathcal{L}(c)\) so \(\mathcal{L}(c)\neq[0,1)\). Moreover if \(\sum_{n}\psi(n)/n\) is convergent then \(\mu(\mathcal{L}(\psi))=0\) where \(\mu(\cdot)\) is the Lebesgue measure. In each case that we have worked out, \(\mu(\mathcal{L}(\psi))=0\) or \(1\), and Cassels [4] showed that this is always true (using the Birkhoff Ergodic Theorem)! So we need only decide between these two cases.
The first great theorem in _metric Diophantine approximation_ was due to Khinchin who showed that if \(\psi(n)\) is a decreasing function then
\[\mu(\mathcal{L}(\psi))=\begin{cases}0\\ 1\end{cases}\quad\text{ if and only if }\quad\sum_{n\geq 1}\frac{\psi(n)}{n}\ \text{ is }\ \begin{cases}\text{convergent}\\ \text{divergent}\end{cases}.\]
Thus measure \(1\) of reals \(\alpha\) have approximations \(\frac{m}{n}\) with \(|\alpha-\frac{m}{n}|\leq\frac{1}{n^{2}\log n}\), and measure \(0\) with \(|\alpha-\frac{m}{n}|\leq\frac{1}{n^{2}(\log n)^{1+\varepsilon}}\).
The hypothesis "\(\psi(n)\) is decreasing" is too restrictive since, for example, one can't determine anything from this about rational approximations where the denominator is prime. So can we do without it? Our proof above that if \(\sum_{n\geq 1}\frac{\psi(n)}{n}\) is convergent then \(\mu(\mathcal{L}(\psi))=0\), works for general \(\psi\). Indeed we follow the usual proof of the first Borel-Cantelli lemma: Let \(E_{n}\) be the event that \(\alpha\in[\frac{m}{n}-\frac{\psi(n)}{n^{2}},\frac{m}{n}+\frac{\psi(n)}{n^{2}} ]\cap[0,1]\) for some \(m\in\{0,1,\ldots,n\}\), where we have selected \(\alpha\) randomly from \([0,1]\), and we established that \(\sum_{n}\mathbb{P}(E_{n})=\sum_{n}\frac{\psi(n)}{n}<\infty\). Then, almost surely, only finitely many of the \(E_{j}\) occur, and so \(\mu(\mathcal{L}(\psi))=0\).
The second Borel-Cantelli lemma states that if the \(E_{n}\) are independent and \(\sum_{n}\mathbb{P}(E_{n})\) diverges then almost surely infinitely many of the \(E_{j}\) occur. Our \(E_{n}\) are far from independent (indeed compare \(E_{n}\) with \(E_{2n}\)) but this nonetheless suggests that perhaps with the right notion of independence it is feasible that Khinchin's theorem holds without the decreasing condition.
### Duffin and Schaefer's example
Duffin and Schaefer constructed a (complicated) example of \(\psi\) for which \(\sum_{n\geq 1}\frac{\psi(n)}{n}\) diverges but \(\mu(\mathcal{L}(\psi))=0\). Their example uses many representations like \(\frac{1}{3}=\frac{2}{6}\), that is, non-reduced fractions:
We begin with \(\psi_{0}\) where \(\psi_{0}(q)=0\) unless \(q=q_{\ell}:=\prod_{p\leq\ell}p\) is the product of the primes up to some prime \(\ell\), in which case \(\psi_{0}(q_{\ell})=\frac{q_{\ell}}{\ell\log\ell}\). Therefore
\[\sum_{q}\frac{\psi_{0}(q)}{q}=\sum_{\ell}\frac{1}{\ell\log\ell}\]
which converges by the prime number theorem, and so \(\mu(\mathcal{L}(\psi_{0}))=0\) as we just proved in the last subsection.
Now we construct a new \(\psi\): if \(q\) is a squarefree integer with largest prime factor \(\ell\) (so that \(q\) divides \(q_{\ell}\)), then \(\psi(q)=q^{2}/(q_{\ell}\ell\log\ell)\), and \(\psi(q)=0\) otherwise. Now if \(|x-\frac{a}{q}|\leq\frac{\psi(q)}{q^{2}}\) then for \(A=a(q_{\ell}/q)\) we have
\[\left|x-\frac{A}{q_{\ell}}\right|=\left|x-\frac{a}{q}\right|\leq\frac{\psi(q )}{q^{2}}=\frac{\psi(q_{\ell})}{q_{\ell}^{2}}=\frac{\psi_{0}(q_{\ell})}{q_{ \ell}^{2}}\]
so that \(\mathcal{L}(\psi)=\mathcal{L}(\psi_{0})\) which has measure \(0\). On the other hand
\[\sum_{q}\frac{\psi(q)}{q}=\sum_{\ell}\frac{1}{\ell\log\ell}\sum_{\ell|q|q_{\ell} }\frac{q}{q_{\ell}}=\sum_{\ell}\frac{1}{\ell\log\ell}\prod_{p<\ell}\left(1+ \frac{1}{p}\right)\gg\sum_{\ell}\frac{1}{\ell}\]
by Mertens' Theorem, which diverges.
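The inner identity used in the last display, \(\sum_{\ell|q|q_{\ell}}\frac{q}{q_{\ell}}=\prod_{p<\ell}(1+\frac{1}{p})\), is a finite computation for each prime \(\ell\) and can be checked directly (a sketch with a small test value of \(\ell\)):

```python
from fractions import Fraction
from itertools import combinations

def primes_upto(n):
    return [p for p in range(2, n + 1)
            if all(p % d for d in range(2, int(p ** 0.5) + 1))]

ell = 13                        # a small test prime
ps = primes_upto(ell)           # q_ell = product of the primes up to ell
q_ell = 1
for p in ps:
    q_ell *= p
smaller = [p for p in ps if p < ell]

# sum of q/q_ell over squarefree q divisible by ell with q | q_ell
total = Fraction(0)
for r in range(len(smaller) + 1):
    for combo in combinations(smaller, r):
        q = ell
        for p in combo:
            q *= p
        total += Fraction(q, q_ell)

prod = Fraction(1)
for p in smaller:
    prod *= 1 + Fraction(1, p)
print(total == prod)            # True
```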
### A revised conjecture
Duffin and Schaefer's example uses many representations like \(\frac{1}{3}=\frac{2}{6}\), which suggests that we should restrict attention to _reduced fractions_\(\frac{m}{n}\) with \((m,n)=1\). We let \(E_{n}^{*}\) be the event that \(\alpha\in[\frac{m}{n}-\frac{\psi(n)}{n^{2}},\frac{m}{n}+\frac{\psi(n)}{n^{2}}] \cap[0,1]\) for some \(m\in\{0,1,\ldots,n\}\) with \((m,n)=1\).
Therefore Duffin and Schaefer defined \(\mathcal{L}^{*}(\psi)\) to be those \(\alpha\in[0,1)\) with infinitely many reduced fractions \(m/n\) for which
\[\left|\alpha-\frac{m}{n}\right|\leq\frac{\psi(n)}{n^{2}},\]
and conjectured
\[\mu(\mathcal{L}^{*}(\psi))=\begin{cases}0\\ 1\end{cases}\quad\text{ if and only if }\quad\sum_{n\geq 1}\frac{\phi(n)}{n}\cdot\frac{\psi(n)}{n}\ \text{ is }\ \begin{cases}\text{convergent}\\ \text{divergent}\end{cases}.\]
Here \(\phi(n)=\#\{\frac{m}{n}\in[0,1):(m,n)=1\}\). Now if \(\sum_{n}\mathbb{P}(E_{n}^{*})=\sum_{n}\frac{\phi(n)}{n}\cdot\frac{\psi(n)}{n}<\infty\), then almost surely, only finitely many of the \(E_{j}^{*}\) occur, and so \(\mu(\mathcal{L}^{*}(\psi))=0\). We can therefore assume that \(\sum_{n\geq 1}\frac{\phi(n)}{n}\cdot\frac{\psi(n)}{n}\) is divergent.
Gallagher [14] (in a slight variant of Cassels' result [4]) showed that \(\mu(\mathcal{L}^{*}(\psi))\) always equals either \(0\) or \(1\). Therefore we only need to show that \(\mu(\mathcal{L}^{*}(\psi))>0\) to deduce that \(\mu(\mathcal{L}^{*}(\psi))=1\).
Duffin and Schaefer themselves proved the conjecture in the case that there are arbitrarily large \(Q\) for which
\[\sum_{q\leq Q}\frac{\phi(q)}{q}\cdot\frac{\psi(q)}{q}\gg\sum_{q\leq Q}\frac{ \psi(q)}{q};\]
which more-or-less implies that the main weight of \(\psi(q)\) should not be focussed on integers \(q\) with many small prime factors (which are extremely rare), since that is what forces
\[\frac{\phi(q)}{q}=\prod_{p|q}\left(1-\frac{1}{p}\right)\text{ to be small.}\]
Thus for example, the conjecture follows if we only allow prime \(q\) (that is, if \(\psi(q)=0\) whenever \(q\) is composite), or if we only allow integers \(q\) which have no prime factors \(<\log q\).
In 2021, Koukoulopoulos and Maynard [25] showed that this Duffin-Schaefer conjecture is true, the end of a long saga. The proof is a blend of number theory, probability theory, combinatorics, ergodic theory, and graph theory combined with considerable ingenuity.
### Probability
Assuming that \(\sum_{n\geq 1}\frac{\phi(n)}{n}\cdot\frac{\psi(n)}{n}\) is divergent, we want to show that almost surely, infinitely many of the \(E_{j}^{*}\) occur, where \(E_{q}^{*}\) is the event that \(\alpha\) belongs to
\[[0,1)\cap\bigcup_{(a,q)=1}\bigg{[}\frac{a}{q}-\frac{\psi(q)}{q^{2}},\frac{a}{q} +\frac{\psi(q)}{q^{2}}\bigg{]}.\]
The \(E_{q}^{*}\) are not "independent", but were they independent enough, say if
\[\mu(E_{q}^{*}\cap E_{r}^{*})=(1+o_{q,r\to\infty}(1))\,\mu(E_{q}^{*})\,\mu(E_{r }^{*}),\]
then we could prove our result; however one can easily find counterexamples to this, for example when \(r=2q\). On the other hand, since we only need to show that \(\mu(\mathcal{L}^{*}(\psi))>0\), we will only need to establish a very weak quasi-independence, on average, like
\[\sum_{Q\leq q\neq r<R}\mu(E_{q}^{*}\cap E_{r}^{*})\leq 10^{6}\bigg{(}\sum_{Q \leq q<R}\mu(E_{q}^{*})\bigg{)}^{2} \tag{9.1}\]
for arbitrarily large \(Q\) and certain \(R\): To prove this note that since \(\sum_{q\geq Q}\mu(E_{q}^{*})=2\sum_{q\geq Q}\frac{\phi(q)}{q}\frac{\psi(q)}{q}\) diverges, we may select \(R\geq Q\) for which \(1\leq\sum_{Q\leq q<R}\mu(E_{q}^{*})\leq 2\). Now let \(N=\sum_{Q\leq q<R}1_{E_{q}^{*}}\) so that \(\mathbb{E}[N]=\sum_{Q\leq q<R}\mu(E_{q}^{*})\) and so
\[1\leq\bigg{(}\sum_{Q\leq q<R}\mu(E_{q}^{*})\bigg{)}^{2} =\mathbb{E}[N]^{2}=\mathbb{E}[1_{N>0}\cdot N]^{2}\leq\mu\bigg{(} \bigcup_{Q\leq q<R}E_{q}^{*}\bigg{)}\cdot\mathbb{E}[N^{2}]\] \[=\mu\bigg{(}\bigcup_{Q\leq q<R}E_{q}^{*}\bigg{)}\sum_{Q\leq q,r<R }\mu(E_{q}^{*}\cap E_{r}^{*})\]
by the Cauchy-Schwarz inequality. Therefore
\[\mu\bigg{(}\bigcup_{q\geq Q}E_{q}^{*}\bigg{)}\geq\mu\bigg{(}\bigcup_{Q\leq q <R}E_{q}^{*}\bigg{)}\geq 10^{-6}\]
by (9.1). But this is true for arbitrarily large \(Q\) and so \(\mu(\mathcal{L}^{*}(\psi))\geq 10^{-6}\), which implies that \(\mu(\mathcal{L}^{*}(\psi))=1\).
Following Pollington and Vaughan [33] we study \(\mu(E_{q}^{*}\cap E_{r}^{*})\), assuming \((q,r)=1\) for convenience: If \(\alpha\in[\frac{a}{q}-\frac{\psi(q)}{q^{2}},\frac{a}{q}+\frac{\psi(q)}{q^{2}}] \cap[\frac{b}{r}-\frac{\psi(r)}{r^{2}},\frac{b}{r}+\frac{\psi(r)}{r^{2}}]\) with \((a,q)=(b,r)=1\) then \(|\frac{a}{q}-\frac{b}{r}|\leq\frac{\psi(q)}{q^{2}}+\frac{\psi(r)}{r^{2}}\leq 2\Delta\) where \(\Delta:=\max\{\frac{\psi(q)}{q^{2}},\frac{\psi(r)}{r^{2}}\}\) and the overlap will have size \(\leq 2\delta\) where \(\delta:=\min\{\frac{\psi(q)}{q^{2}},\frac{\psi(r)}{r^{2}}\}\). Now the \(\frac{a}{q}-\frac{b}{r}\) are in 1-to-1 correspondence with the \(\frac{n}{qr}\) as \(n\) runs through the reduced residue classes mod \(qr\). Therefore, by the small sieve,
\[\mu(E_{q}^{*}\cap E_{r}^{*}) \leq 2\delta\#\{n:|n|\leq 2\Delta qr\text{ and }(n,qr)=1\}\ll\delta\Delta qr\prod_{ \begin{subarray}{c}p|qr\\ p\leq\Delta qr\end{subarray}}\bigg{(}1-\frac{1}{p}\bigg{)}\] \[\leq\frac{\phi(q)\psi(q)}{q^{2}}\cdot\frac{\phi(r)\psi(r)}{r^{2}} \cdot\exp\bigg{(}\sum_{\begin{subarray}{c}p|qr\\ p>\Delta qr\end{subarray}}\frac{1}{p}\bigg{)}\ll\mu(E_{q}^{*})\mu(E_{r}^{*}) \exp\bigg{(}\sum_{\begin{subarray}{c}p|qr\\ p>\Delta qr\end{subarray}}\frac{1}{p}\bigg{)}.\]
(If \((q,r)>1\) then we need only alter this by taking \(p|qr/(q,r)^{2}\) instead of \(p|qr\) in the sum over \(p\) on the far right of the previous displayed equation.)
Using this one can easily deduce the Duffin-Schaeffer conjecture provided \(\psi(\cdot)\) does not behave too wildly. For example Erdos and Vaaler [10, 35] proved the Duffin-Schaeffer conjecture provided the \(\psi(n)\) are bounded. Key to this is to note that there are \(\ll e^{-y}x\) integers \(n\leq x\) for which
\[\sum_{\begin{subarray}{c}p\mid n\\ p>y\end{subarray}}\frac{1}{p}\geq 1.\]
Therefore we obtain good enough bounds on \(\mu(E_{q}^{*}\cap E_{r}^{*})\) in the previous displayed equation unless \((q,r)\) is large, and unless \(q\) and \(r\) are each divisible by a lot of different small prime factors. This reduces the problem to one in the _anatomy_ of integers (a concept that is brought to life in the graphic novel [17]).
### The anatomy of integers
By partitioning \([Q,R]\) into dyadic intervals and studying the contribution of the integers in such intervals to the total we find ourselves drawn towards the following
**Model Problem**_Fix \(\eta\in(0,1]\). Suppose that \(S\) is a set of \(\gg\eta Q/B\) integers in \([Q,2Q]\) for which there are at least \(\eta|S|^{2}\) pairs \(q,r\in S\) such that \((q,r)\geq B\). Must there be an integer \(g\geq B\) which divides \(\gg_{\eta}Q/B\) elements of \(S\)?_
The model problem is false but a technical variant, which takes account of the \(\phi(q)/q\)-weights, is true.18 Using this one can reduce the problem to the Erdos-Vaaler argument, by anatomy of integers arguments, and prove the theorem.
Footnote 18: Let \(Q=\prod_{p\leq 2y}p\) and \(S:=\{Q/p:y<p\leq 2y\}\). If \(q=Q/p,r=Q/\ell\in S\) then \((q,r)=Q/p\ell\geq B:=Q/4y^{2}\), but any integer \(\geq B\) divides no more than two elements of \(S\). (This is adapted from an idea of Sam Chow.)
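This counterexample is easy to check numerically. The following Python sketch (our own illustration; the choice \(y=10\) is ours) verifies both properties of \(S\): all pairwise gcds are at least \(B\), yet no integer \(\geq B\) divides three elements.

```python
# Verify the footnote's counterexample: Q = product of primes <= 2y,
# S = {Q/p : y < p <= 2y}, B = Q/(4y^2).
from math import gcd, prod
from itertools import combinations

def primes_upto(N):
    sieve = [True] * (N + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(N ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = [False] * len(sieve[i * i::i])
    return [i for i, is_p in enumerate(sieve) if is_p]

y = 10
ps = primes_upto(2 * y)
Q = prod(ps)
S = [Q // p for p in ps if p > y]
B = Q / (4 * y * y)

# Every pair has gcd(Q/p, Q/l) = Q/(p*l) >= B ...
assert all(gcd(a, b) >= B for a, b in combinations(S, 2))
# ... but any g dividing three elements divides Q/(p1*p2*p3), which is < B.
assert all(gcd(gcd(a, b), c) < B for a, b, c in combinations(S, 3))
print(f"|S| = {len(S)}, B = {B:.1f}: counterexample confirmed")
```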
To attack the (variant of the) Model Problem, Koukoulopoulos and Maynard view it as a question in graph theory:
### Graph Theory
Consider the graph \(G\), with vertex set \(S\) and edges between vertices representing pairs of integers with \(\gcd>B\).

Figure 2: Vertices = the integers in our set; Edges = pairs of integers with a large gcd.
Beginning with such a graph for which the edge density is \(\eta\), we wish to prove that there is a "dense subgraph" \(H\) whose vertices are each divisible by a fixed integer \(\geq B\). To locate this structured subgraph \(H\), Koukoulopoulos and Maynard use an iterative "compression" argument, inspired by the papers of Erdos-Ko-Rado [9] and Dyson [8]: with each iteration, they pass to a smaller graph but with more information about which primes divide the vertices. This is all complicated by the weights \(\phi(q)/q\). The details are intricate (see a rough sketch in the next subsection), and the reader is referred to [24], where the original proof of [25] is better understood in light of more recent explorations of Green and Walker [18], who gave an elegant proof of the following important variant:
_If \(R\subset[X,2X]\) and \(S\subset[Y,2Y]\) are sets of integers for which \((r,s)\geq B\) for at least \(\delta|R||S|\) pairs \((r,s)\in R\times S\) then \(|R||S|\ll_{\epsilon}\delta^{-2-\epsilon}XY/B^{2}\)._
Although this has a slightly different emphasis from the model problem, it addresses the key question of how large such sets can get and takes account of the example of footnote 18 (unlike the model problem).
### Iteration and graph weights
The key to such an iteration argument is to develop a measure which exhibits how close one is getting to the final goal, which can require substantial ingenuity. In their paper Koukoulopoulos and Maynard [25] begin with two copies of \(S\) and construct a bipartite graph \(V_{0}\times W_{0}\) with edges between \(q\in V_{0}=S\) and \(r\in W_{0}=S\) if \((q,r)\geq B\). The idea is to select distinct primes \(p_{1},p_{2},\ldots\) and then \(V_{j}=\{v\in V_{j-1}:p_{j}\text{ divides }v\}\) or \(V_{j}=\{v\in V_{j-1}:p_{j}\text{ does not divide }v\}\), and similarly \(W_{j}\), so that \(p_{j}\) divides all \((v_{j},w_{j}),v_{j}\in V_{j},w_{j}\in W_{j}\) or none. If we terminate at step \(J\) then there are integers \(a_{J},b_{J}\), constructed out of the \(p_{j}\), such that \(a_{J}\) divides every element of \(V_{J}\) and \(b_{J}\) divides every element of \(W_{J}\). The goal is to proceed so that \((v_{J},w_{J})\geq B\) for some \(J\), for all \(v_{J}\in V_{J},w_{J}\in W_{J}\), such that all of the prime divisors of any \((v_{J},w_{J})\) appear amongst the \(p_{j}\). Hence, if say all the integers in \(S\) are squarefree, then \((a_{J},b_{J})=(v_{J},w_{J})\geq B\). So how do we measure progress in this algorithm?
One key measure is \(\delta_{j}\), the proportion of pairs \(v_{j}\in V_{j},w_{j}\in W_{j}\) with \((v_{j},w_{j})\geq B\); another is the size of the sets \(V_{j}\) and \(W_{j}\). Finally we want to measure how much of \(a_{j}b_{j}\) is accounted for by prime divisors not dividing \((a_{j},b_{j})\), which we can measure using \(\frac{a_{j}b_{j}}{(a_{j},b_{j})^{2}}\). Koukoulopoulos and Maynard [25] found, after some trial and error, that the measure
\[\delta_{j}^{10}\cdot|V_{j}|\cdot|W_{j}|\cdot\frac{a_{j}b_{j}}{(a_{j},b_{j})^{ 2}}\]
fits their needs, allowing them eventually to restrict their attention to \(v,w\in S\) for which \(a_{J}\) divides \(v\), \(b_{J}\) divides \(w\) and
\[\sum_{\begin{subarray}{c}p|vw/(v,w)\\ p>y\end{subarray}}\frac{1}{p}\approx 1.\]
Koukoulopoulos and Maynard then finish the proof by applying a relative version of the Erdos-Vaaler argument to the pairs \((v/a_{J},w/b_{J})\).
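To make the shape of this iteration concrete, here is a toy Python sketch (our own illustration, not the actual Koukoulopoulos-Maynard procedure): it performs one compression step on the bipartite GCD graph, splitting each side by divisibility by a chosen prime and keeping the combination with the best value of a simplified quality measure, using plain edge density in place of the weighted measure above.

```python
# Toy compression step on the bipartite GCD graph (illustrative only).
from math import gcd
from itertools import product

def density(V, W, B):
    """Fraction of pairs (v, w) in V x W with gcd(v, w) >= B."""
    return sum(gcd(v, w) >= B for v, w in product(V, W)) / (len(V) * len(W))

def split(X, p):
    """Two halves of X: the multiples of p, and the non-multiples."""
    return [x for x in X if x % p == 0], [x for x in X if x % p != 0]

def compress(V, W, p, B):
    """Keep the (V-half, W-half) combination of maximal edge density."""
    candidates = [(Vs, Ws) for Vs in split(V, p) for Ws in split(W, p)
                  if Vs and Ws]
    return max(candidates, key=lambda c: density(c[0], c[1], B))

S = [q for q in range(100, 200) if q % 2 == 0 or q % 5 == 0]
V1, W1 = compress(S, S, 5, B=10)
print(f"density: {density(S, S, 10):.3f} -> {density(V1, W1, 10):.3f}")
```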
### Hausdorff dimension
If \(\sum_{n\geq 1}\phi(n)\cdot(\psi(n)/n^{2})\) is convergent then \(\mu(\mathcal{L}^{*}(\psi))=0\) so we would like to get some idea of the true size of \(\mathcal{L}^{*}(\psi)\). Using a result of Beresnevich and Velani [2], one can deduce that the Hausdorff dimension of \(\mathcal{L}^{*}(\psi)\)
is given by the infimum of the real \(\beta>0\) for which
\[\sum_{n\geq 1}\phi(n)\cdot\left(\frac{\psi(n)}{n^{2}}\right)^{\beta}\text{ is convergent.}\]
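As a quick worked example of this criterion (our own illustration): take \(\psi(n)=n^{2-\tau}\) for some \(\tau>2\), so that the intervals around \(a/n\) have radius \(\psi(n)/n^{2}=n^{-\tau}\). Then

\[\sum_{n\geq 1}\phi(n)\cdot\left(\frac{\psi(n)}{n^{2}}\right)^{\beta}=\sum_{n\geq 1}\phi(n)\,n^{-\tau\beta}\asymp\sum_{n\geq 1}n^{1-\tau\beta},\]

which converges if and only if \(\tau\beta>2\). The infimum of such \(\beta\) is \(2/\tau\), so the Hausdorff dimension of \(\mathcal{L}^{*}(\psi)\) is \(2/\tau\), recovering the classical Jarnik-Besicovitch theorem.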
|
2304.05682 | Automated Information Flow Analysis for Integrated Computing-in-Memory
Modules | Novel non-volatile memory (NVM) technologies offer high-speed and
high-density data storage. In addition, they overcome the von Neumann
bottleneck by enabling computing-in-memory (CIM). Various computer
architectures have been proposed to integrate CIM blocks in their design,
forming a mixed-signal system to combine the computational benefits of CIM with
the robustness of conventional CMOS. Novel electronic design automation (EDA)
tools are necessary to design and manufacture these so-called neuromorphic
systems. Furthermore, EDA tools must consider the impact of security
vulnerabilities, as hardware security attacks have increased in recent years.
Existing information flow analysis (IFA) frameworks offer an automated
tool-suite to uphold the confidentiality property for sensitive data during the
design of hardware. However, currently available mixed-signal EDA tools are not
capable of analyzing the information flow of neuromorphic systems. To
illustrate the shortcomings, we develop information flow protocols for NVMs
that can be easily integrated in the already existing tool-suites. We show the
limitation of the state-of-the-art by analyzing the flow from sensitive signals
through multiple memristive crossbar structures to potential untrusted
components and outputs. Finally, we provide a thorough discussion of the merits
and flaws of the mixed-signal IFA frameworks on neuromorphic systems. | Lennart M. Reimann, Felix Staudigl, Rainer Leupers | 2023-04-12T08:10:08Z | http://arxiv.org/abs/2304.05682v1 | # Automated Information Flow Analysis for Integrated Computing-in-Memory Modules
###### Abstract
Novel non-volatile memory (NVM) technologies offer high-speed and high-density data storage. In addition, they overcome the von Neumann bottleneck by enabling computing-in-memory (CIM). Various computer architectures have been proposed to integrate CIM blocks in their design, forming a mixed-signal system to combine the computational benefits of CIM with the robustness of conventional CMOS. Novel electronic design automation (EDA) tools are necessary to design and manufacture these so-called neuromorphic systems. Furthermore, EDA tools must consider the impact of security vulnerabilities, as hardware security attacks have increased in recent years. Existing information flow analysis (IFA) frameworks offer an automated tool-suite to uphold the confidentiality property for sensitive data during the design of hardware. However, currently available mixed-signal EDA tools are not capable of analyzing the information flow of neuromorphic systems. To illustrate the shortcomings, we develop information flow protocols for NVMs that can be easily integrated in the already existing tool-suites. We show the limitation of the state-of-the-art by analyzing the flow from sensitive signals through multiple memristive crossbar structures to potential untrusted components and outputs. Finally, we provide a thorough discussion of the merits and flaws of the mixed-signal IFA frameworks on neuromorphic systems.
information flow analysis, neuromorphic computing, confidentiality
## I Introduction
Non-volatile memory (NVM) technologies, such as spin-torque transfer memory (STT-RAM/MRAM), phase-change random access memory (PCRAM), or redox-based random access memory (ReRAM), are promising candidates to substitute traditional RAM. NVMs offer dense storage with low leakage power, and enable computing-in-memory.
Combining conventional CMOS with NVMs results in complex high-performance designs referred to as neuromorphic systems. In modern design processes, electronic design automation (EDA) tools are used to assist the designer in the intricate implementation. However, the EDA tools need to be equipped to facilitate mixed-signal designs incorporating NVM-based accelerators. Furthermore, novel technologies introduce new security vulnerabilities (see Fig. 1) [1, 2, 3]. These new vulnerabilities are particularly worrying, because neuromorphic systems are a promising candidate for future applications, such as autonomous driving. Consequently, EDA tools need to be adapted to enforce security properties for mixed-signal designs, enabling a security-aware design flow in both the digital and analog domain. Availability, confidentiality, and integrity are the three cornerstones of hardware security that must be considered during the design process. While most research focuses on the integrity property [4, 5], we concentrate in our work on the confidentiality property. Information flow analysis (IFA) is the state-of-the-art technique to enforce the confidentiality and integrity property in a design. IFA can track the flow of information from sensitive sources, such as encryption keys, to untrusted components, such as outputs or third-party intellectual property (IP). The analysis can be conducted statically to ensure confidentiality of a signal for every possible input combination. Although some work has been published supporting mixed-signal designs, no work has been released to the best of our knowledge that discusses the information flow for NVMs. Consequently, our work extends currently available IFA tools for neuromorphic mixed-signal systems and discusses its merits and shortcomings.
The major contributions of this paper are: (1) Development of information flow policies for NVM components. (2) An introduction of a crossbar driver masking scheme to _forbid_ sneak paths in hardware. (3) A demonstration of the limitations of current mixed-signal IFA tools for neuromorphic systems.
## II Background
### _Information Flow Analysis_
Information flow analysis represents the state-of-the-art technique to enforce the integrity and confidentiality property in a hardware design. The security analysis requires the hardware to be divided and labeled into different security classes. For instance, third-party IP, shared resources, or the output ports of the design are labeled untrustworthy. IFA determines whether information from high-security parts affects lower-security areas. The analysis relies on the non-interference property, aiming to prove that a change in sensitive values does not lead to an observable change in the untrustworthy components; in other words, the sensitive signals do not interfere with insecure components. Enforcing the properties during every step of the design
Fig. 1: Exemplary data leakage paths in a SoC using a CIM module (1T1R crossbar). Sensitive data flows from the trusted source (RISC-V) to untrusted peripherals or shared memories.
process avoids security vulnerabilities that threaten sensitive data, such as encryption keys or user data.
### _VeriCoq-IFT_
The majority of IFA frameworks are designed to handle digital hardware. In contrast, the VeriCoq-IFT framework [6] introduces the capabilities to process mixed-signal designs when analyzing the information flow [7]. The analysis indicates whether sensitive information is leaked to the design's output signals. First, all output ports of the design must be labeled untrustworthy. Second, the user marks the sensitive signal in the design description and assigns it a sensitivity score. The conservative approach of VeriCoq-IFT propagates the sensitivity score of a signal at every signal assignment in Verilog. A variable receives the highest sensitivity score of all its inputs. Furthermore, operations can be labeled a _sensitivity reducer_, so that every time the sensitive signal passes the designated operation, the sensitivity score is reduced. If the signal reaches an output before it reaches a sensitivity score of zero, a data leakage is detected. The score system can be used to enforce that, e.g., the plaintext passes an AES round at least 12 times before it reaches the design's output as the ciphertext. Nevertheless, information flow rules for memristors have not been introduced yet.
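The propagation rule itself is simple; the following Python sketch (our own minimal illustration, not the VeriCoq-IFT implementation) mimics it: every assignment gives the target the maximum score of its inputs, designated operations reduce the score by one, and any output that still carries a positive score is flagged as a leakage point.

```python
# Minimal sketch of sensitivity-score propagation (illustrative only).
REDUCERS = {"aes_round"}      # ops designated as sensitivity reducers

def propagate(assignments, scores, outputs):
    for target, inputs, op in assignments:        # assumed topological order
        score = max(scores.get(sig, 0) for sig in inputs)
        if op in REDUCERS and score > 0:
            score -= 1                            # reducer lowers the score
        scores[target] = score
    return [o for o in outputs if scores.get(o, 0) > 0]   # leakage points

design = [
    ("state",  ["plaintext", "key"], "xor"),
    ("state",  ["state"],            "aes_round"),
    ("cipher", ["state"],            "buf"),
]
print(propagate(design, {"plaintext": 2, "key": 2}, ["cipher"]))
# ['cipher']: a score of 2 reduced once still reaches the output.
```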
### _Non-volatile memories (NVMs)_
NVMs represent a novel memory technology that takes advantage of memristors. A memristor is, next to the capacitor, the resistor, and the inductor, the fourth fundamental electrical component; it stores information in the form of resistance. The resistance, which can be set or reset using voltage pulses, represents different states, called the low resistive state (LRS) and the high resistive state (HRS). To achieve high densities, memristors are organized in crossbar structures consisting of horizontal word and vertical bit lines, with a memristive cell at each cross point. These so-called passive crossbars suffer from sneak-path currents based on parasitic effects between the memristive cells, limiting their reliability and retention. Consequently, more advanced cells have been proposed incorporating an active component, i.e., a transistor, to connect/isolate the memristor from the remaining crossbar. However, as these novel devices have not been discussed in the VeriCoq-IFT framework yet [7], information flow policies need to be developed and implemented to enable the analysis of information flows of neuromorphic systems.
## III Threat Model
The developed framework aims to identify undesired information leakages in neuromorphic mixed-signal designs. The static analysis is conducted on register transfer level for the digital components, and transistor-level for the analog parts. We assume the attacker has access to the complete hardware description and intends to leak information via a direct flow of information at the primary outputs, no matter whether those outputs lie in the analog or digital domain. Side-channels are not considered. We assume the hardware vulnerabilities are already present at the design stage. It is not considered whether the observations of the primary outputs are obtained via remote access or physical access.
## IV Related Work
Khan et al. elaborate on possible attacks on information leaks on emerging non-volatile memories by using side channels caused by supply noise when writing and reading sensitive data [8]. Furthermore, current research has shown that data-dependent write latencies can be exploited as a side-channel to leak sensitive information. By observing the time to access a memristor, information about the current content can be derived. The analysis regarding this vulnerability has also been conducted manually [2]. In addition to side-channels through the supply noise and the write latency, the supply current can be observed to gather information about sensitive signals [1].
Although multiple vulnerabilities in NVMs have been identified, no work has been presented to automate the identification of such vulnerabilities. Automated security-aware EDA tools are required to assist a hardware designer, inexperienced in hardware security, in identifying security vulnerabilities while maintaining a competitive design process.
## V Framework
To this end, we develop information flow rules for NVMs and integrate them into known IFA frameworks. Fig. 2 illustrates the direction of voltage and current for the three operational modes of a memristor: _set_, _read_ and _reset_. For _set_ and _read_ the memristor is accessed from the terminal \(ae\), so the current and voltage directions aim at the other terminal, called \(oe\) in this work. Fundamentally, the memristor requires a fourth mode to initialize the device after manufacturing. However, we do not consider this mode in our work because of its limited attack surface compared to the remaining operational modes.
As stated in [7], for analog components, information is carried by voltage _and_ current. Therefore, when setting the voltage at \(ae\), the information can be read at both terminals of the memristor via the current. Due to the bidirectional behavior of a memristor [9], the information flow behaves bidirectionally, as illustrated with the red arrows in Fig. 2 for the three access modes.
Fig. 3: Functionality of VeriCoq-IFT [6] and the CoqIDE [10] in this work.
Fig. 2: Information flow (red arrows) in a memristor for different operational modes.
We integrated the derived policies in VeriCoq-IFT and combined the framework directly with the CoqIDE [10] to provide an automated IFA framework. The tool flow of the combined VeriCoq-IFT and CoqIDE framework used in this work for the evaluation is depicted in Fig. 3. Although VeriCoq-IFT was introduced for third-party IP as proof-carrying hardware IP, it is solely used for the IFA in this work. VeriCoq-IFT has two operational modes: 1) It processes the Verilog-A/MS description of the complete design _or_ 2) The designer provides a Verilog description of the digital domain, combined with Verilog modules of the analog modules that mimic the information flow of the device. The latter is required for the early design stages, when no Verilog-A/MS model of the memristor is yet available. Then, Verilog modules mimicking the information flow of a memristor need to be introduced. The model does not depict the actual behavior of an NVM device, but models the information flow. Fig. 4 depicts the Verilog module mimicking the information flow of a memristor. Both terminals \(ae\) and \(oe\) are labeled 'inout'-ports. Additionally, each of the two ports depends on both terminals (line 6 & 7). The type of operation performed on the right side of the Verilog assignment is irrelevant, as only a flow of information needs to be modeled, not the functionality. This allows the framework to handle analog memristors in the digital domain.
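The effect of this modeling choice can be reproduced outside Verilog as well. The Python sketch below (our own illustration) treats every memristor as an undirected edge between its terminals \(ae\) and \(oe\) and propagates a taint label to a fixpoint; this bidirectional rule is exactly what later surfaces as sneak paths in the crossbar experiments of Section VI.

```python
# Sketch: bidirectional information flow through memristor terminals.
def tainted_nodes(edges, sources):
    tainted = set(sources)
    changed = True
    while changed:                        # fixpoint iteration
        changed = False
        for a, b in edges:                # each memristor taints both ways
            if a in tainted and b not in tainted:
                tainted.add(b); changed = True
            if b in tainted and a not in tainted:
                tainted.add(a); changed = True
    return tainted

# A 2x2 passive (1R) crossbar: word lines w0, w1 and bit lines b0, b1.
crossbar = [("w0", "b0"), ("w0", "b1"), ("w1", "b0"), ("w1", "b1")]
print(tainted_nodes(crossbar, {"w0"}))    # taint reaches every line
```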
Secondly, modern EDA tools allow the export of mixed-signal designs into the language Verilog-A/MS. The analog and digital behavior of the components is embedded into a single description. A small number of Verilog-A/MS devices are available online [11]. VeriCoq-IFT processes the design to generate a design description in the language Coq, and theorems and proofs of the information flow rules. These rules are generated for all signals labeled sensitive in the design description. In this work, _all sensitivity labels are set to 1 and no sensitivity reducers are instantiated_, enforcing the non-interference property [12]. Therefore, every output port that can be influenced by the sensitive signal is labeled a leakage point. The three auxiliary files are forwarded to the theorem prover in the CoqIDE. If the theorems and proofs pass for the hardware description in Coq, no leakage is detected; otherwise, VeriCoq-IFT identifies undesired flows of information.
## VI Demonstration
In this work, we demonstrate the functionality and limitations of the presented framework using three individual integrated NVM-based mixed-signal designs. We assume the NVM block is integrated on a system-on-chip (SoC) accelerator. We focus our evaluation on the analog domain to illustrate the capabilities and shortcomings of the implemented framework. Fig. 5 illustrates the analog domain of the SoC and marks one identified leakage path in red. In this example design, the surrounding circuitry enables the orchestration of both passive and active crossbar arrays. In the following, we conduct three experiments to highlight the obstacles of applying IFA to neuromorphic systems and ultimately the shortcomings of the IFA framework. The internals of both crossbar structures are shown in Fig. 5 (b) and (d). The two crossbar structures can each be integrated into the circuitry (Fig. 5 (a)) by replacing the blue abstract crossbar. Furthermore, we propose a masking mechanism that enforces the intended usage of the NVM module (see Fig. 5 (c)), which would replace the drivers shown in Fig. 5 (a).
### _1R Crossbar Accelerator_
The memory cell of a passive crossbar consists of a single memristor. Hence, the crossbar itself acts like a network of resistances, allowing a bidirectional information flow. The digital domain limits the information flow based on the implemented output signals, i.e., to compute a vector-matrix multiplication, or to communicate the result by an input signal. Fig. 5 (b) exemplifies in red one possible information leakage path in addition to the intended flow of information: besides the intended flow between the marked terminals, the undesired sneak paths are identified too [13]. Overall, the information flow in passive crossbars is considered complex, since there is no clear direction and the information can be transferred alternating between the analog and digital domain.
### _1T1R Crossbar Accelerator_
While passive crossbars introduce sneak path currents limiting their usability, active crossbars aim to solve this by extending the NVM cell with an active component [13]. Fig. 5 (d) illustrates an active crossbar using a transistor as a selector component to separate unselected cells from the crossbar. However, our experiments show that the information flow, determined by VeriCoq-IFT, of an active crossbar matches the flow of a passive crossbar. _VeriCoq-IFT performs a static analysis of the design which does not take into account the "intended" usage of the selector transistors. Consequently, the framework classifies the NVM module as leaky._
### _Active Crossbar Accelerator with Access Mask_
To secure the information flow through crossbar arrays, we propose a hard-wired _access mask_ enforcing the intended usage of the selector transistors. We implement this _access mask_ within the driver circuitry of the NVM block. The mask allows only a fixed set of driver voltages to be applied to the crossbar, as shown in Table I and is implemented with a lookup table (see Fig. 5 (c)). The lookup table forbids operational modes that can simultaneously write to multiple rows, thus omitting sneak paths. For instance, if the memristor in row \(m\) and column \(n\) (green) is accessed, all other word and bit lines are set to \(GND\) (orange), blocking potential sneak paths.
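A minimal Python sketch of such a lookup table is shown below (our own illustration of Table I; the voltage names follow the table): for a requested operation on cell \((k,l)\) it emits the single permitted driver assignment, grounding every other word, select, and bit line.

```python
# Access-mask LUT for an (m, n) crossbar: only the drivers of cell (k, l)
# are raised; all other lines are forced to GND, blocking sneak paths.
def access_mask(op, k, l, m, n):
    assert op in ("SET", "RESET") and 0 <= k < m and 0 <= l < n
    wl = ["GND"] * m                    # word lines
    sl = ["GND"] * n                    # select lines
    bl = ["GND"] * n                    # bit lines
    wl[k] = "V_SET" if op == "SET" else "V_RES"
    sl[l] = "V_GAT"                     # opens the selector transistor
    bl[l] = "GND"                       # accessed bit line stays grounded
    return {"WL": wl, "SL": sl, "BL": bl}

print(access_mask("SET", 1, 2, 4, 4))
```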
Fig. 4: High-level definition of a Verilog memristor module modeling the information flow.
The results of the VeriCoq-IFT analysis are illustrated with the red leakage paths (Fig. 5 (d)), which are the same as for the previous two hardware designs.
## VII Discussion
The VeriCoq-IFT analysis of the 1R crossbar accurately depicts the behavior of the sneak paths, so that the framework identifies possible data leakages. On the contrary, active crossbars eliminate the influence of sneak path currents by incorporating active components, such as transistors. As a static analysis cannot depict the difference between active and passive crossbars, the information flow did not change. Although an access mask in the control circuitry should lead to a change in the identified information flow, VeriCoq-IFT's conservative analysis is not capable of differentiating between the three designs' flows, leading to false positive identifications. Specifically, SoC designs with a CIM module, which allows continuous information flow between the analog and digital domain, require a less conservative approach to reduce the high number of false positives. Less conservative approaches are already available for the digital domain [14], but are yet missing for the analog domain. These more accurate frameworks consider inter-signal dependencies, the actual functionality of an operation, and the accurate value of a signal, which is not done by VeriCoq-IFT. Thus, a framework needs to be developed that considers the mentioned features in the analog domain by processing the information in the Verilog-A/MS description.
## VIII Conclusion
This work presented an evaluation of the state-of-the-art information flow analysis framework VeriCoq-IFT for integrated CIM modules. As demonstrated, derived Verilog models could be implemented to enable an early stage IFA for trending memristor crossbars, a crucial building block for neuromorphic systems. The functionality of the mixed-signal IFA was demonstrated using three system designs. However, the conservative nature of the current analog information flow theorems leads to many false positives, which make a practical static analysis of the confidentiality property in CIM modules infeasible. In future work, the framework could be extended to allow a less conservative analysis of the information flow [14] or even a quantification of the information flow, so that negligible flows can be ignored [15, 16].
## Acknowledgment
The work was funded by the German Federal Ministry of Education and Research (BMBF) within the project NEU-ROTEC II under contract no. 16ME0399. The VeriCoq framework was provided by the TRELA laboratory at UT Dallas.
\begin{table}
\begin{tabular}{c|c}
**SET** & **RESET** \\ \hline \(V_{WL,k}=V_{SET}\) & \(V_{WL,k}=V_{RES}\) \\ \(V_{SL,l}=V_{GAT}\) & \(V_{SL,l}=V_{GAT}\) \\ \(V_{BL,l}=GND\) & \(V_{BL,l}=GND\) \\ \(V_{WL,1:k-1}=GND\) & \(V_{WL,1:k-1}=GND\) \\ \(V_{SL,1:l-1}=GND\) & \(V_{SL,1:l-1}=GND\) \\ \(V_{BL,1:l-1}=GND\) & \(V_{BL,1:l-1}=GND\) \\ \(V_{WL,k+1:m}=GND\) & \(V_{WL,k+1:m}=GND\) \\ \(V_{SL,l+1:n}=GND\) & \(V_{SL,l+1:n}=GND\) \\ \(V_{BL,l+1:n}=GND\) & \(V_{BL,l+1:n}=GND\) \\ \end{tabular}
\end{table} TABLE I: Rules for the allowed access masks to set or reset the crossbar cell at (\(k\),\(l\)) when using a (\(m\),\(n\))-crossbar; all lines other than those driving the selected cell are grounded.
Fig. 5: The demonstration setup: (a) The computing-in-memory module and its interface to the digital domain with a generic crossbar (blue), (b) a 1R crossbar that can be integrated in the CIM on the left for the generic crossbar, (c) a LUT for the drivers that forbids certain voltage combinations, and (d) a 1T1R crossbar that can be integrated in the CIM module on the left. The red lines indicate exemplary leakages for a sensitive signal in the digital domain, identified by our VeriCoq-IFT setup. |
2310.11768 | On the Classification of Weierstrass Elliptic Curves over $\mathbb{Z}_n$ | The development of secure cryptographic protocols and the subsequent attack
mechanisms have been placed in the literature with the utmost curiosity.
While sophisticated quantum attacks bring a concern to the classical
cryptographic protocols present in the applications used in everyday life, the
necessity of developing post-quantum protocols is felt primarily.
In post-quantum cryptography, elliptic curve-base protocols are exciting to
the researchers.
While the comprehensive study of elliptic curves over finite fields is well
known, the extended study over finite rings is still missing.
In this work, we generalize the study of Weierstrass elliptic curves over
finite ring $\mathbb{Z}_n$ through classification.
Several expressions to compute critical factors in studying elliptic curves
are conferred.
An all-around computational classification on the Weierstrass elliptic curves
over $\mathbb{Z}_n$ for rigorous understanding is also attached to this work. | Param Parekh, Paavan Parekh, Sourav Deb, Manish K Gupta | 2023-10-18T07:55:39Z | http://arxiv.org/abs/2310.11768v1 | # On the Classification of Weierstrass Elliptic Curves over \(\mathbb{Z}_{n}\)
###### Abstract
The development of secure cryptographic protocols and the subsequent attack mechanisms have been placed in the literature with the utmost curiosity. While sophisticated quantum attacks bring a concern to the classical cryptographic protocols present in the applications used in everyday life, the necessity of developing post-quantum protocols is felt primarily. In post-quantum cryptography, elliptic curve-base protocols are exciting to the researchers. While the comprehensive study of elliptic curves over finite fields is well known, the extended study over finite rings is still missing. In this work, we generalize the study of Weierstrass elliptic curves over finite ring \(\mathbb{Z}_{n}\) through classification. Several expressions to compute critical factors in studying elliptic curves are conferred. An all-around computational classification on the Weierstrass elliptic curves over \(\mathbb{Z}_{n}\) for rigorous understanding is also attached to this work.
Elliptic curves, Weierstrass equation, Classification, Counting, Computational data
## I Introduction
In the digital era, data security has become a major concern. The classical scenario of Alice, Bob, and Eve in cryptographic protocols has become extremely critical as time has passed. The two revolutionary algorithms given by Peter Shor [1] have diverted the interest of researchers towards cryptographic protocols that are more robust and preventive against possible quantum attacks. Such protocols, usually known as Post-Quantum (PQ) cryptographic protocols, are categorized mainly into six significant families; among the related approaches, Elliptic Curve Cryptography (ECC) caught the eye due to its enhanced protection of security, smaller key sizes, bandwidth savings, and quicker deployment compared to most everyday used or potential cryptographic protocols. The recent application of elliptic curve cryptography in WhatsApp [2] and the selection of the finalists in NIST's search for protocol standards [3] are evidence of the effectiveness of ECC.
The study of elliptic curves blends many branches of pure mathematics, such as algebra and number theory, and shows significant impacts in cryptography. Due to their rich algebraic nature, elliptic curves have been at the center of the focus of mathematics from Diophantus's Arithmetica to this date in Bitcoin. The first known occurrence of the elliptic curve is in the _Arithmetica_ book by Diophantus, written in the second or third century A.D. He formulated the following elliptic curve problem in his book: "To divide a given number into two numbers such that their product is cube minus its side." The corresponding curve is \(y(a-y)=x^{3}-x\). Diophantus solved the polynomial for \(a=6\) corresponding to the cubic equation \(y^{2}=x^{3}-x+9\) by giving appropriate algebraic treatment. Successively, in the thirteenth century, elliptic curves appeared in Fibonacci's congruent number problem. Though elliptic curves are, in general, simply cubic curves, the separate nomenclature _elliptic curves_ is justified by their appearance in computing the arc length of ellipses. Later in the nineteenth century, Jacobi and Weierstrass linked these cubic polynomials to elliptic integrals and functions. In \(1901\), Poincare [4] first demonstrated the underlying group structure of points in an elliptic curve concerning the chord-and-tangent addition (with the point at infinity as identity). In 1948, Sir Andre Weil generalized the conjecture that Artin gave in \(1924\), which deals with the count of the points on a curve of genus \(g\) over the finite field \(\mathbb{F}_{q}\) \((q=p^{r})\) for a prime \(p\) and a positive integer \(r\). For elliptic curves (genus \(g=1\)), the elegant solution is given by the Hasse-Weil bound \(|\#E(\mathbb{F}_{q})-q-1|\leq 2\sqrt{q}\), where \(\#E(\mathbb{F}_{q})\) denotes the number of points on the elliptic curve \(E\) [5]. The article [6] by Lenstra Jr. showed the possibility of using elliptic curves in cryptography by providing an integer factoring algorithm based on elliptic curves. In \(1985\), N. Koblitz and V. Miller independently proposed an elliptic curve cryptosystem using the group of points on an elliptic curve defined over a finite field. In the same year, Rene Schoof [7] gave the deterministic polynomial time algorithm for finding the number of points on elliptic curves over a finite field. In \(1992\), ECDSA (Elliptic Curve Digital Signature Algorithm) was proposed by Johnson et al. [8], which later underpinned the development of Bitcoin. Due to the use of elliptic curves in the proof of Fermat's Last Theorem in \(1995\) [9], elliptic curves gained further popularity among researchers, offering a different perspective on the hardness underlying ECC. In \(2005\), the recommendation of ECC by NIST placed the study of elliptic curves at the heart of today's research in security.
Elliptic curve cryptography over finite fields was first introduced by Neal Koblitz [10] and Victor S. Miller [11] independently in \(1985\). They both defined the group operation in terms of "point addition" on the curve, which allows two points to be combined to create a third point. This operation forms an abelian group used in various cryptographic applications. The group operation on elliptic curves is central to the security of elliptic curve cryptography and has been extensively studied and analyzed over the years. The problem of bounding the number of points on an elliptic curve over a finite field was also extensively studied: the mathematicians Hasse [12] and Weil [13] proved the Hasse
Weil bound, which gives an upper bound on the number of points on an elliptic curve over a finite field in terms of the size of the field. This bound is used in cryptographic applications to ensure the security of elliptic curve cryptography. There are several algorithms for counting points on elliptic curves, including the baby step giant step algorithm [14], the Schoof algorithm, the SEA (Schoof-Elkies-Atkin) algorithm [15][16], and Satoh's algorithm [17]. These algorithms are based on the theory of elliptic curves and involve computations over finite fields.
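As a baseline illustration (our own sketch, far slower than the Schoof-type methods above), the count over a prime field follows directly from the quadratic character: each \(x\) contributes \(1+\chi(x^{3}+ax+b)\) affine points, plus the point at infinity.

```python
# Naive point count for y^2 = x^3 + ax + b over F_p (odd prime p):
# #E(F_p) = p + 1 + sum_x legendre(x^3 + ax + b).
def legendre(u, p):
    if u % p == 0:
        return 0
    return 1 if pow(u, (p - 1) // 2, p) == 1 else -1

def count_points(a, b, p):
    return p + 1 + sum(legendre(x**3 + a * x + b, p) for x in range(p))

p, a, b = 23, 1, 1
N = count_points(a, b, p)
assert abs(N - p - 1) <= 2 * p ** 0.5        # Hasse bound
print(N)
```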
Elliptic curves can also be defined over a ring \(\mathcal{R}\). Elliptic curves over rings were first used in the integer factoring algorithm by H. Lenstra [6]. Elliptic curves over rings that satisfy some conditions are well studied in [18]. The points of curves defined over such rings also form a group under the special _chord-and-tangent rule_. A bound on the number of points on the elliptic curve over these rings is also given there. Cryptosystems based on elliptic curves over rings include the Koyama, Maurer, Okamoto, and Vanstone cryptosystem [19], the Meyer-Muller cryptosystem [20], and the Paillier schemes [21].
Knowing whether the chosen curves are isomorphic and how many non-isomorphic curves are available when picking curves over a given field \(\mathbb{K}\) is helpful. After determining the isomorphism classes, we may choose a representation that might lead to a more effective group addition implementation. Because of this, we are inspired to research the isomorphism classes of elliptic curves. Several additional curves may be used to describe elliptic curves, including the Legendre, Hessian, Quartic, Montgomery, Weierstrass, and Edward curves. Counting the number of distinct elliptic curves over \(\mathbb{F}_{q}\) up to isomorphism is a natural question. This has been done for Weierstrass curves [22, 7, 23], and various alternate models of elliptic curves [24, 25, 26, 27]. The number of isomorphism classes of hyperelliptic curves over finite fields has also been of interest [28, 29, 30, 31].
Driven by the importance of elliptic curves in diverse applications, we present a rigorous algebraic classification of the Weierstrass elliptic curves over the finite rings \(\mathbb{Z}_{n}\). Several results are derived to characterize the elliptic curves isomorphic to a given nonsingular generalized or reduced curve via a particular coordinate transformation map. In most cases, exact expressions are given, validated by extensive computational searches over the set of all possible curves over the ring \(\mathbb{Z}_{n}\) for a particular \(n\). The thorough dataset obtained by the computational approach is referred to for a complete understanding of the proposed classification. One should note that this work considers the general setting, \(i.e.\), the finite ring \(\mathbb{Z}_{n}\), as the focus. However, the algebraic complexity of finding the roots of a given polynomial over \(\mathbb{Z}_{n}\) occasionally confines the proposed results to the closest possible general assumptions, with fitting explanations in such cases. Moreover, all the enumeration of the points over the elliptic curves is done by omitting the point at infinity.
This work is organized in the following manner. Section II deals with the fundamental definitions and the notations used in this work. Section III discusses several results from the existing literature that contribute centrally to classifying the Weierstrass elliptic curves over a finite field \(\mathbb{F}_{q}\), providing insight into the approaches underlying the original results in this work. The independent results of this work are included in Section IV, where existing results are extended to the generalized as well as the reduced Weierstrass elliptic curves over \(\mathbb{Z}_{n}\) in a dual manner that fits theoretical results along with their computational counterparts. In Section V, we present the relevant computational information in tabular form to validate the proposed results of Section IV, together with a dedicated HTML page for the overall classification database [32]. Section VII concludes this paper.
## II Preliminaries
In this section, we revisit the basic definitions and properties of the elliptic curves. Every algebraic curve can be described in terms of the projective plane and affine plane. For the field \(\mathbb{K}\), consider the algebraic closure denoted by \(\overline{\mathbb{K}}\). The Weierstrass equation is a homogeneous polynomial of degree \(3\) over \(\mathbb{K}\) of the form,
\[E:Y^{2}Z+a_{1}XYZ+a_{3}YZ^{2}=X^{3}+a_{2}X^{2}Z+a_{4}XZ^{2}+a_{6}Z^{3} \tag{1}\]
The elliptic curve \(E\) is defined over \(\mathbb{K}\) and represented as \(E/\mathbb{K}\), and the set of \(\mathbb{K}\)-valued points of \(E/\mathbb{K}\) is denoted as \(E(\mathbb{K})\). The given polynomial is said to be nonsingular if for all projective points \(P(X,Y,Z)\in P^{2}(\overline{\mathbb{K}})\) satisfying \(F(X,Y,Z)=Y^{2}Z+a_{1}XYZ+a_{3}YZ^{2}-X^{3}-a_{2}X^{2}Z-a_{4}XZ^{2}-a_{6}Z^{3}=0\), at least one of the three partial derivatives \(\frac{\partial F}{\partial X},\frac{\partial F}{\partial Y},\frac{\partial F}{\partial Z}\) is non-zero at \(P\). The particular point \((0,1,0)\) in \(E\) is called the point at infinity, and we denote it by \(\mathcal{O}\). An equivalent definition states that an elliptic curve \(E\) is the set of all solutions in \(P^{2}(\overline{\mathbb{K}})\) of a non-singular Weierstrass equation.
For simplicity, we rewrite the Weierstrass equation for an elliptic curve in non-homogeneous (affine) coordinates. By adjusting \(x=X/Z,y=Y/Z\) in _Equation 1_, we further incorporate the equivalent form,
\[y^{2}+a_{1}xy+a_{3}y=x^{3}+a_{2}x^{2}+a_{4}x+a_{6} \tag{2}\]
The above-said transformation leads to the affine plane \(A^{2}(\overline{\mathbb{K}})=\overline{\mathbb{K}}\times\overline{\mathbb{K}}\), which contains the set of solutions of _Equation 2_; the curve then consists of these affine solutions together with the point at infinity \(\mathcal{O}\).
\[b_{2} =a_{1}^{2}+4a_{2} \tag{3}\] \[b_{4} =2a_{4}+a_{1}a_{3}\] \[b_{6} =a_{3}^{2}+4a_{6}\] \[b_{8} =a_{1}^{2}a_{6}+4a_{2}a_{6}-a_{1}a_{3}a_{4}+a_{2}a_{3}^{2}-a_{4}^ {2}\] \[c_{4} =b_{2}^{2}-24b_{4}\] \[\Delta =-b_{2}^{2}b_{8}-8b_{4}^{3}-27b_{6}^{2}+9b_{2}b_{4}b_{6}\] \[j(E)=c_{4}^{3}/\Delta\]
Then _Equation 2_ represents the Weierstrass elliptic curve \(E/\mathbb{K}\) with the condition that it is non-singular, \(i.e.\)\(\Delta\neq 0\). It is well known that the points over the elliptic curve form
an abelian group under the chord-and-tangent rule as the group operation, along with the point at infinity \(\mathcal{O}\) as the identity element (_Theorem 2.3, [22]_). Two elliptic curves \(E_{1}/\mathbb{K}\) and \(E_{2}/\mathbb{K}\) are isomorphic over the algebraic closure \(\overline{\mathbb{K}}\) if and only if \(j(E_{1})=j(E_{2})\). Over \(\mathbb{K}\) itself, two elliptic curves given by the generalized Weierstrass equations are isomorphic if one curve can be obtained from the other using the coordinate transformation \(\tau:(x,y)\rightarrow(u^{2}x+r,u^{3}y+u^{2}sx+t),\ u\in\mathbb{K}^{*}\), \(r,s,t\in\mathbb{K}\).
While the generalized Weierstrass equation can be used over any field with random characteristic, the reduced Weierstrass equation is crucial in the case of \(char(\mathbb{K})\neq 2,3\), by selecting the transformation mapping as \((x,y)\rightarrow(x,y-\frac{a_{1}}{2}x-\frac{a_{3}}{2})\) and \((x,y)\rightarrow(\frac{x-3b_{2}}{36},\frac{y}{216})\)[7, 33] over \(\mathbb{K}\) as,
\[E:y^{2}=x^{3}+ax+b,\ char(\mathbb{K})\neq 2,3. \tag{4}\]
The discriminant and j-invariant for the reduced Weierstrass equation can be obtained as:
\[\Delta=-16(4a^{3}+27b^{2}) \tag{5}\] \[j(E)=-1728\frac{(4a)^{3}}{\Delta}\]
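These quantities are immediate to compute in practice; the short Python helper below (our own sketch, for a prime modulus \(p>3\)) evaluates \(\Delta\) and \(j(E)\) of a reduced curve and reports singular inputs.

```python
# Discriminant and j-invariant of y^2 = x^3 + ax + b modulo a prime p > 3.
def disc_and_j(a, b, p):
    delta = (-16 * (4 * a**3 + 27 * b**2)) % p
    if delta == 0:
        return delta, None                      # singular: j(E) undefined
    j = (-1728 * pow(4 * a, 3, p) * pow(delta, -1, p)) % p
    return delta, j

print(disc_and_j(1, 1, 23))
```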
For an elliptic curve \(E\) over \(\mathbb{K}\), we denote the automorphism group of \(E\) by \(Aut(E)\), which consists of all the isomorphisms from \(E\) to itself, and subsequently, \(|Aut(E)|\) denotes the cardinality of the automorphism group of \(E\). The order of \(Aut(E)\) takes different possible values based on the coefficients of the curve \(E\) defined over \(\mathbb{K}\). The automorphism group \(Aut(E)\) plays a crucial role in characterizing the elliptic curves in classes, where each class contains a primary curve or the _Class Leader_ that corresponds to the other class members through isomorphisms.
## III Classification of Weierstrass elliptic curves over \(\mathbb{F}_{q}\)
In this section, we explore the algebraic structure of the elliptic curves corresponding to reduced and generalized Weierstrass equations over finite fields that contribute fundamentally to this work.
### _Results on reduced Weierstrass elliptic curves over \(\mathbb{F}_{q}\)_
The following results are presented concisely in [6, 22]. However, we elaborate on crucial findings by providing detailed explanations and proofs to facilitate a comprehensive understanding of the fundamentals.
**Theorem 1**.: _For \(char(\mathbb{F}_{q})\neq 2,3\), the number of non-singular reduced Weierstrass elliptic curves over \(\mathbb{F}_{q}\) is \(q^{2}-q\)._
Proof.: Consider the reduced Weierstrass elliptic curve over \(\mathbb{F}_{q}\), \(E:y^{2}=x^{3}+ax+b\) along with the discriminant \(\Delta=-16(4a^{3}+27b^{2})\). To count the non-singular curves, we should obtain those pairs \((a,b)\) such that \(\Delta\neq 0\) over \(\mathbb{F}_{q}\). In order to count the pairs \((a,b)\) such that \(\Delta=0\), we have \(4a^{3}+27b^{2}=0\). Now every solution of this equation is of the form \(a=-3c^{2}\) and \(b=2c^{3}\) for some \(c\) in \(\mathbb{F}_{q}\), where \(c\) is uniquely determined as \(c=-3b/2a\) when \(a\neq 0\) (and \(c=0\) when \(a=b=0\)). Hence there are exactly \(q\) singular pairs \((a,b)\), one for each \(c\in\mathbb{F}_{q}\), and the number of non-singular curves is \(q^{2}-q\). Therefore, the result follows.
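The count and the parametrization of the singular pairs are easy to confirm by brute force for small primes (our own sketch).

```python
# Brute-force check of Theorem 1 for small primes p > 3.
def n_nonsingular(p):
    # -16 is a unit mod p, so Delta != 0 iff 4a^3 + 27b^2 != 0 (mod p)
    return sum((4 * a**3 + 27 * b**2) % p != 0
               for a in range(p) for b in range(p))

for p in (5, 7, 11, 13):
    assert n_nonsingular(p) == p * p - p
    # the singular pairs are exactly (a, b) = (-3c^2, 2c^3), c in F_p
    singular = {((-3 * c * c) % p, (2 * c**3) % p) for c in range(p)}
    assert len(singular) == p
print("Theorem 1 verified for p = 5, 7, 11, 13")
```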
**Theorem 2**.: _The number of unique reduced Weierstrass elliptic curves isomorphic to given curve \(E/\mathbb{F}_{q}\) will be \(\frac{q-1}{|Aut(E)|}\), where \(char(\mathbb{F}_{q})\neq 2,3\)._
Proof.: By counting the possibilities of \(u\in\mathbb{F}_{q}^{*}\) in the transformation function \(\tau:(x,y)\rightarrow(u^{2}x,u^{3}y)\), the total number of transformations taking the given curve \(E/\mathbb{F}_{q}\) to an isomorphic curve is \(q-1\). These \(q-1\) outcomes comprise the automorphisms that map \(E\) to \(E\) itself and the transformations that map \(E\) to different curves.
Consider \(Aut(E)=\{\tau_{1},\tau_{2},...,\tau_{t}\}\), the set of unique mappings such that \(\tau_{i}(E)=E\) where \(i\in\{1,2,3,...,t\}\), and \(|Aut(E)|=t\). Now for a different curve \(E^{\prime}\), if it exists, a transformation \(\tau^{\prime}\neq\tau_{i}\) leads to \(\tau^{\prime}(E)=E^{\prime}\), and then for the transformations, we have \(\tau^{\prime}\circ\tau_{i}(E)=\tau^{\prime}(E)=E^{\prime},\ \forall\ i\in\{1,2,3,...,t\}\). Moreover, these \(\tau^{\prime}\circ\tau_{i}\) are distinct, since \(\tau^{\prime}\circ\tau_{i}=\tau^{\prime}\circ\tau_{j}\) leads to \((\tau^{\prime})^{-1}\circ\tau^{\prime}\circ\tau_{i}=(\tau^{\prime})^{-1}\circ\tau^{\prime}\circ\tau_{j}\), and finally, \(\tau_{i}=\tau_{j}\). So, \(Aut(E)\) simply characterizes the automorphism group \(Aut(E^{\prime})\) of \(E^{\prime}\), and hence we obtain \(|Aut(E^{\prime})|=t\). Since \(E^{\prime}\) is arbitrary, it is clear that \((q-1)\) is a multiple of \(t\), and so the number of unique elliptic curves that are isomorphic to the given curve \(E/\mathbb{F}_{q}\) is \(\frac{q-1}{|Aut(E)|}\).
**Theorem 3**.: _For \(char(\mathbb{F}_{q})\neq 2,3\), the Number of isomorphism classes \((\sigma)\) of reduced Weierstrass elliptic curves over \(\mathbb{F}_{q}\), will be_
\[\sigma=\left\{\begin{array}{ll}2q+6&when\quad q\equiv 1\mod 12\\ 2q+2&when\quad q\equiv 5\mod 12\\ 2q+4&when\quad q\equiv 7\mod 12\\ 2q&when\quad q\equiv 11\mod 12\end{array}\right.\]
Proof.: Let \(E_{1}/\mathbb{F}_{q}:y^{2}=x^{3}+ax+b\) and \(E_{2}/\mathbb{F}_{q}:y^{2}=x^{3}+\bar{a}x+\bar{b}\) be two isomorphic elliptic curves over \(\mathbb{F}_{q}\) and additionally follow the relations \(u^{4}\bar{a}=a\) and \(u^{6}\bar{b}=b\), \(u\in\mathbb{F}_{q}^{*}\). To count the number of isomorphism classes of elliptic curves over \(\mathbb{F}_{q}\), we first focus on \(|Aut(E)|\). From _Theorem 2.6, [22]_, we can ensure that there exists a solution \(u^{\prime}\in\mathbb{F}_{q}^{*}\) to the equations \(\bar{a}=u^{-4}a\) and \(\bar{b}=u^{-6}b\). Moreover, it is evident that \(\bar{a}=0\) iff \(a=0\) and \(\bar{b}=0\) iff \(b=0\). Therefore, three particular cases are of our interest,
1. For \(a\neq 0\) and \(b\neq 0\) (\(j(E_{1})\neq 0,1728\)): The relations \(u^{4}\bar{a}=a\) and \(u^{6}\bar{b}=b\) lead to \(u^{2}=\frac{\bar{a}b}{a\bar{b}}\). Subsequently, the solutions are \(u=u^{\prime}\) or \(u=-u^{\prime}\), _i.e._, \(|Aut(E_{1})|=2\).
2. \(a=0\) and \(b\neq 0\) (\(j(E_{1})=0\)): As \(a=0\), we have \(\bar{a}=0\). So we are left with only one relation \(u^{6}\bar{b}=b\). From Elementary Group Theory, it is evident that if there is an element \(\alpha\) of order \(3\), then the solution set for \(u\) will be \(\{u^{\prime},\alpha u^{\prime},\alpha^{2}u^{\prime},-u^{\prime},-\alpha u^{\prime},-\alpha^{2}u^{\prime}\}\). Otherwise, \(\{u^{\prime},-u^{\prime}\}\) will be the only set of solutions in this case. Then, we have \(|Aut(E_{1})|=2\) or \(|Aut(E_{1})|=6\).
3. \(a\neq 0\) and \(b=0\) (\(j(E_{1})=1728\)) As \(b=0\), we have \(\bar{b}=0\). So we are left with only one relation \(u^{4}\bar{a}=a\). From Elementary Group Theory, it is evident that if there is an element \(\beta\) of order \(4\), then the
solution set for \(u\) will be \(\{u^{\prime},\beta u^{\prime},\beta^{2}u^{\prime},\beta^{3}u^{\prime}\}\). Otherwise, \(\{u^{\prime},-u^{\prime}\}\) will again be the only set of solutions in this case. Hence we obtain \(|Aut(E_{1})|=2\) or \(|Aut(E_{1})|=4\).
Now, combining _Theorem 1_ and _Theorem 2_, the following relation
\[\sum_{E_{k}}\frac{q-1}{|Aut(E_{k})|}=q^{2}-q \tag{6}\]
holds, where the summation is taken over the set of isomorphism class representatives \(E_{k}\)s defined over \(\mathbb{F}_{q}\). Also \(\gcd(q,6)=1\) results in \(q\equiv 1,5,7,11\mod 12\).
Let us consider the scenario where \(q\equiv 1\mod 12\).
The assumption \(q\equiv 1\mod 12\) gives \(12\mid q-1\), so \(\mathbb{F}_{q}^{*}\) contains elements of order \(3\) and of order \(4\), which further shows that \(|Aut(E_{k})|=2,4\) or \(6\).
Suppose \(|Aut(E_{k})|=6\). Then according to Case II, we have \(a=0\) and \(b\neq 0\), \(i.e.\), \(\Delta=-16(27b^{2})\). Since there are only \(q-1\) possibilities left for nonzero \(b\), the number of elliptic curves over \(\mathbb{F}_{q}\) with \(|Aut(E_{k})|=6\) is \(q-1\). Then, using _Equation 6_, we have the total number of isomorphism classes to be \(6\). Similar calculations yield that the total number of isomorphism classes will be \(4\) when \(|Aut(E_{k})|=4\).
Now we have \(q^{2}-q\) non-singular curves in total, out of which there are \(2(q-1)\) curves that have either \(|Aut(E_{k})|=4\) or \(|Aut(E_{k})|=6\). So the total number of non-singular curves with \(|Aut(E_{k})|=2\) will be \(q^{2}-q-2(q-1)=(q-1)(q-2)\) and hence the number of isomorphism classes will be \(2q-4\), using _Equation 6_.
Therefore, the overall isomorphism classes of elliptic curves over \(\mathbb{F}_{q}\) will be \(2q-4+6+4=2q+6\) when \(q\equiv 1\mod 12\).
Using a similar argument, one can derive the other cases over \(q\). Hence, the result follows.
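Theorem 3 can be confirmed by explicit orbit enumeration for small primes (our own sketch; the orbit of \((a,b)\) under \(u\in\mathbb{F}_{p}^{*}\) is \(\{(u^{4}a,u^{6}b)\}\), the inverse form of the relations used above).

```python
# Count isomorphism classes of y^2 = x^3 + ax + b over F_p by orbits of
# (a, b) -> (u^4 a, u^6 b), u in F_p^*, and compare with Theorem 3.
def n_classes(p):
    seen, classes = set(), 0
    for a in range(p):
        for b in range(p):
            if (4 * a**3 + 27 * b**2) % p == 0 or (a, b) in seen:
                continue
            classes += 1
            seen |= {(pow(u, 4, p) * a % p, pow(u, 6, p) * b % p)
                     for u in range(1, p)}
    return classes

formula = {1: lambda p: 2 * p + 6, 5: lambda p: 2 * p + 2,
           7: lambda p: 2 * p + 4, 11: lambda p: 2 * p}
for p in (5, 7, 11, 13, 17, 19, 23):
    assert n_classes(p) == formula[p % 12](p)
print("Theorem 3 verified")
```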
### _Results on generalized Weierstrass elliptic curves over \(\mathbb{F}_{q}\)_
Using computational tools, we analyzed the number of non-singular generalized Weierstrass elliptic curves over \(\mathbb{F}_{q}\). Supporting data for the same is given in _Table I_. From the computational result, we arrived at the following conjecture.
**Conjecture 1**.: _The number of nonsingular generalized Weierstrass elliptic curves over \(\mathbb{F}_{q}\) is \(q^{5}-q^{4}\)._
**Theorem 4**.: _The number of unique generalized Weierstrass elliptic curves isomorphic to given curve \(E/\mathbb{F}_{q}\) is \(\frac{q^{4}-q^{3}}{|Aut(E)|}\)._
Proof.: Considering the transformation function \(\tau:(x,y)\rightarrow(u^{2}x+r,u^{3}y+u^{2}sx+t),u\in\mathbb{F}_{q}^{*}\) and _Theorem 2_, the result follows directly.
In \(1969\), W.C. Waterhouse [34] proposed a formula for classifying elliptic curves over the finite field \(\mathbb{F}_{q}\) by counting the curves isomorphic to a given elliptic curve, which serves as a strong motivation behind this work.
**Theorem 5**.: _The number of isomorphism classes of generalized Weierstrass elliptic curves over \(\mathbb{F}_{q}\) is \(N_{q}=2q+3+\left(\frac{-4}{q}\right)+2\left(\frac{-3}{q}\right)\), where \((.)\) denotes the Jacobi symbol._
Proof.: For detailed proof, readers are referred to _Proposition \(5.7\), [7]_.
For the rest of the paper, we represent the number of nonsingular generalized Weierstrass elliptic curves and the number of nonsingular reduced Weierstrass elliptic curves by \(N_{G}(\mathcal{R})\) and \(N_{R}(\mathcal{R})\) over the ring \(\mathcal{R}\). Similarly, \(C_{G}(\mathcal{R})\) and \(C_{R}(\mathcal{R})\) denote the number of isomorphism classes in the generalized and reduced scenarios over \(\mathcal{R}\) respectively. Furthermore, \(\mathbb{E}_{k}\) represents the class leaders in each isomorphism class.
In the following section, we present the classification of elliptic curves, some results on the \(\mathbb{Z}_{n}\)-classification of elliptic curves, and the exact formula for the number of \(\mathbb{Z}_{n}\)-isomorphism classes of elliptic curves.
## IV Classification of Weierstrass elliptic curves over \(\mathbb{Z}_{n}\)
Elliptic curves over finite rings are studied similarly to the finite field case. However, due to the general nature of rings, finding the roots of a polynomial of degree \(n\) becomes a fundamental obstacle. In this work, we are explicitly interested in devising a method to classify the elliptic curves over \(\mathbb{Z}_{n}\) based on the underlying fundamental characteristics that closely align with the computational data.
The generalized Weierstrass equation of elliptic curves over \(\mathbb{Z}_{n}\) can be represented as
\[\begin{split} E:y^{2}+a_{1}xy+a_{3}y=& x^{3}+a_{2}x^{2}+a_{4}x+a_{6}\\ &\text{where }a_{i}\in\ \mathbb{Z}_{n}\end{split} \tag{7}\]
The corresponding discriminant and j-invariant for the generalized Weierstrass equation can be obtained as,
\[\begin{split}\Delta&=-b_{2}^{2}b_{8}-8b_{4}^{3}-27b_{ 6}^{2}+9b_{2}b_{4}b_{6}\\ j(E)&=c_{4}^{3}/\Delta\\ c_{4}&=b_{2}^{2}-24b_{4}\\ b_{2}&=a_{1}^{2}+4a_{2}\\ b_{4}&=2a_{4}+a_{1}a_{3}\\ b_{6}&=a_{3}^{2}+4a_{6}\\ b_{8}&=a_{1}^{2}a_{6}+4a_{2}a_{6}-a_{1}a_{3}a_{4}+a_{ 2}a_{3}^{2}-a_{4}^{2}\end{split}\]
where the discriminant \(\Delta\in\mathbb{Z}_{n}^{*}\) (Section \(3\), [35]). Relating the transformation mapping as stated in the case of \(\mathbb{F}_{q}\), we can classify the Weierstrass elliptic curves with the general
\begin{table}
\begin{tabular}{|c|c|} \hline \(q\) & Number of non-singular curves over \(\mathbb{F}_{q}\) \\ \hline \(2\) & \(16\) \\ \hline \(4\) & \(768\) \\ \hline \(8\) & \(28672\) \\ \hline \(16\) & \(983040\) \\ \hline \(32\) & \(32505856\) \\ \hline \(64\) & \(1056964608\) \\ \hline \(128\) & \(34091302912\) \\ \hline \(256\) & \(1.0952167\times 10^{12}\) \\ \hline \(512\) & \(3.5115653\times 10^{13}\) \\ \hline \(1024\) & \(1.1248004\times 10^{15}\) \\ \hline \end{tabular}
\end{table}
Table I: Computational results on the number of nonsingular generalized Weierstrass elliptic curves over \(\mathbb{F}_{2^{m}},1\leq m\leq 10\).
transformation \(\tau:(x,y)\rightarrow(u^{2}x+r,u^{3}y+u^{2}sx+t),u\in\mathbb{Z}_{n}^{*},r,s,t\in \mathbb{Z}_{n}\). Moreover, we are interested in the reduced equations of the elliptic curve over \(\mathbb{Z}_{n}\), where \(\gcd(n,6)=1\).
The reduced Weierstrass equation of elliptic curves over \(\mathbb{Z}_{n}\) can be obtained by applying \((x,y)\rightarrow(x,y-\frac{a_{1}}{2}x-\frac{a_{3}}{2})\) and \((x,y)\rightarrow(\frac{x-3b_{2}}{36},\frac{y}{216})\) as admissible change of variables to Equation 7:
\[E:y^{2}=x^{3}+ax+b,\ \ a,b\in\mathbb{Z}_{n}\ \text{and}\ \gcd(6,n)=1 \tag{8}\]
Subsequently, the corresponding discriminant and \(j\)-invariant for the reduced Weierstrass equation will be in the form as,
\[\Delta=-16(4a^{3}+27b^{2})\in\mathbb{Z}_{n}^{*}\] \[j(E)=-1728\frac{(4a)^{3}}{\Delta}\]
For classifying these forms of the Weierstrass equation, the transformation function \(\tau:(x,y)\rightarrow(u^{2}x,u^{3}y),\ u\in\mathbb{Z}_{n}^{*}\), is deployed.
We summarize the findings of this work in the following theorems. The corresponding computational data is also presented accordingly.
### _Results on reduced Weierstrass elliptic curves over \(\mathbb{Z}_{n}\)_
Computational data through which we have arrived at these results are given in [32]. Here, we set the central focus on deriving the number of nonsingular reduced Weierstrass elliptic curves over \(\mathbb{Z}_{n}\). Explicit proofs for \(n=p\) and \(n=p_{1}p_{2}\ldots p_{k}\) are outlined in Theorem 6 and Theorem 7, respectively. The lower bound (\(N_{R}^{\prime}(\mathbb{Z}_{n})\)) and the upper bound (\(N_{R}^{\prime\prime}(\mathbb{Z}_{n})\)) for \(N_{R}(\mathbb{Z}_{n})\), \(n=p^{m}\), are described under Proposition 1 and Conjecture 4, respectively.
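Before the formal treatment, the quantities in question are easy to explore experimentally; the Python sketch below (our own, for small \(n\) with \(\gcd(n,6)=1\)) counts the curves with unit discriminant and their classes under \((a,b)\mapsto(u^{4}a,u^{6}b)\), \(u\in\mathbb{Z}_{n}^{*}\), and checks CRT multiplicativity for \(n=35\).

```python
# Brute-force N_R(Z_n) and the number of classes for small n, gcd(n, 6) = 1.
from math import gcd

def classify(n):
    units = [u for u in range(1, n) if gcd(u, n) == 1]
    curves = [(a, b) for a in range(n) for b in range(n)
              if gcd((-16 * (4 * a**3 + 27 * b**2)) % n, n) == 1]
    seen, classes = set(), 0
    for a, b in curves:
        if (a, b) in seen:
            continue
        classes += 1
        seen |= {(pow(u, 4, n) * a % n, pow(u, 6, n) * b % n) for u in units}
    return len(curves), classes

n5, c5 = classify(5)
n7, c7 = classify(7)
n35, c35 = classify(35)
assert n35 == n5 * n7 and c35 == c5 * c7     # CRT multiplicativity
print(classify(5), classify(7), classify(35))
```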
Before proceeding to the later theorem, we present specific number-theoretic valuable results that help characterize the nonsingular reduced elliptic curves over finite rings.
**Lemma 1**.: _[_36_]_ \(x^{k}\equiv a\mod p\) has a solution if and only if \(a^{\frac{p-1}{d}}\equiv 1\mod p\), where \(d=\gcd(k,p-1)\). If the congruence equation has a solution, it has exactly \(d\) incongruent solutions modulo \(p\)._
**Lemma 2**.: _[_36_]_ _The number of cubic residues (\(CR\)) over \(\mathbb{Z}_{p}\) is \(p-1\) when \(p\equiv 2\mod 3\) and \(\frac{p-1}{3}\) when \(p\equiv 1\mod 3\); each cubic residue occurs as the cube of exactly \(1\) and \(3\) elements in the respective cases._
**Lemma 3**.: _[_37_]_ _The negative of a quadratic residue is a quadratic residue, and the negative of a non-quadratic residue (\(NQR\)) is a non-quadratic residue in \(\mathbb{Z}_{p}\) for \(p\equiv 1\mod 4\), and vice versa for \(p\equiv 3\mod 4\)._
**Lemma 4**.: _The number of cubic residues (\(CR\)s) that are also quadratic residues (\(QR\)s) in \(\mathbb{Z}_{p}\) is \(\frac{p-1}{2}\) when \(p\equiv 2\mod 3\) and \(\frac{p-1}{6}\) when \(p\equiv 1\mod 3\)._
Proof.: The intuition used in this proof is to find the \(QR\)s that are \(CR\)s as well; hence, we need to obtain those elements that are \(6\)th-power residues modulo \(p\). If \(p=3j+1\) then \(d=\gcd(6,3j)=6\), where \(j\in 2\mathbb{Z}\). Now, from _Lemma 1_, we have to prove that \(a^{\frac{p-1}{6}}\equiv 1\mod p\). Lagrange's theorem ensures that if \(m\mid(p-1)\), the congruence \(x^{m}\equiv 1\mod p\) has a complete set of solutions. Therefore, since \(\frac{p-1}{6}\mid(p-1)\), the congruence \(a^{\frac{p-1}{6}}\equiv 1\mod p\) has the full set of \(\frac{p-1}{6}\) solutions.
If \(p=3j+2\), then \(d=\gcd(6,3j+1)=2\), where \(j\) is an odd integer, and from Lagrange's theorem, we can ensure that \(a^{\frac{p-1}{2}}\equiv 1\mod p\) has a complete set of \(\frac{p-1}{2}\) solutions since \(\frac{p-1}{2}\mid(p-1)\).
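The counts in _Lemma 4_ are easy to confirm by direct enumeration. The following C++ sketch (an illustration of ours, independent of the computational data in [32]) lists the quadratic and cubic residues of \(\mathbb{Z}_{p}\) for a few small primes and compares the size of their intersection with \(\frac{p-1}{2}\) or \(\frac{p-1}{6}\):

```cpp
#include <cstdio>
#include <set>

int main() {
    for (int p : {5, 7, 11, 13, 17, 19}) {
        std::set<long long> qr, cr;            // nonzero quadratic/cubic residues
        for (long long x = 1; x < p; ++x) {
            qr.insert(x * x % p);              // x^2 mod p
            cr.insert(x * x % p * x % p);      // x^3 mod p
        }
        int both = 0;
        for (long long u : qr)
            if (cr.count(u)) ++both;           // residues that are QRs and CRs
        int expected = (p % 3 == 1) ? (p - 1) / 6 : (p - 1) / 2;
        std::printf("p=%2d  #(QR and CR)=%2d  expected=%2d\n", p, both, expected);
    }
    return 0;
}
```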
The following theorem represents an approach independent of _Theorem 1_, where the number of nonsingular reduced elliptic curves is considered over the finite field \(\mathbb{Z}_{p}\) for \(p>3\). The necessity of the new approach is based on the occurrence of quadratic and cubic residues over \(\mathbb{Z}_{p}\), and it contributes significantly to the successive results.
**Theorem 6**.: _For prime \(p>3\), the number of nonsingular reduced Weierstrass elliptic curves over \(\mathbb{Z}_{p}\), \(N_{R}(\mathbb{Z}_{p})=\Phi(p^{2})\), where \(\Phi\) is the Euler totient function._
Proof.: Consider the reduced Weierstrass elliptic curve over \(\mathbb{Z}_{p}\), \(E:y^{2}=x^{3}+ax+b\), along with the discriminant \(\Delta=-16(4a^{3}+27b^{2})\), \(a,b\in\mathbb{Z}_{p}\). To count the non-singular curves, we should exclude those pairs \((a,b)\) such that \(\Delta=0\). Now, a simple calculation on \(\Delta\equiv 0\mod p\) leads to the relation \((3^{-1})^{3}a^{3}\equiv-(2^{-1})^{2}b^{2}\mod p\), since \(27\) and \(4\) are cubic and quadratic residues over \(\mathbb{Z}_{p}\) and \(2,3\) are units in \(\mathbb{Z}_{p}\). Assuming \((3^{-1}a,2^{-1}b)=(s_{1},s_{2})\), we finally obtain
\[\begin{split} s_{1}^{3}+s_{2}^{2}&\equiv 0\mod p\\ \implies s_{1}^{3}&\equiv-s_{2}^{2}\mod p\end{split} \tag{9}\]
Now observing the occurrence of the \(QR\)s and \(CR\)s over \(\mathbb{Z}_{p}\), the proof is partitioned into the following four cases:
1. If \(p\equiv 2\mod 3\) and \(p\equiv 1\mod 4\), then from _Lemma 3_ and _Lemma 4_, there are \(\frac{p-1}{2}\) elements that are \(QR\)s as well as \(CR\)s and can solve _Equation 9_. Therefore, taking the occurrences of the \(QR\)s and \(CR\)s over \(\mathbb{Z}_{p}\) to be \(2\) and \(1\), respectively, there will be \((\frac{p-1}{2}\times 1\times 2)+1=p\) solutions (including \(0\)) in total for the above equation.
2. If \(p\equiv 1\mod 3\) and \(p\equiv 1\mod 4\), using an argument similar to that in i), we obtain in total \((\frac{p-1}{6}\times 3\times 2)+1=p\) solutions (including \(0\)). In this case, one should note that the occurrence of \(CR\)s is \(3\).
3. If \(p\equiv 2\mod 3\) and \(p\equiv 3\mod 4\), then from _Lemma 4_ and _Lemma 3_, we observe that \(s_{1}\) being an \(NQR\) gives a solution of _Equation 9_. There are \(\frac{p-1}{2}\) elements that are \(NQR\)s as well as \(CR\)s and can solve _Equation 9_. Therefore, taking the occurrences of the \(NQR\)s and \(CR\)s over \(\mathbb{Z}_{p}\) to be \(2\) and \(1\), respectively, there will be \((\frac{p-1}{2}\times 1\times 2)+1=p\) solutions (including \(0\)) in total for the above equation.
4. If \(p\equiv 1\mod 3\) and \(p\equiv 3\mod 4\), using the argument of case iii), we obtain in total \((\frac{p-1}{6}\times 3\times 2)+1=p\) solutions (including \(0\)). In this case, one should note that the occurrence of \(CR\)s is \(3\).
Combining all the cases, the number of singular pairs over \(\mathbb{Z}_{p}\) is \(p\), and hence the number of nonsingular reduced elliptic curves over \(\mathbb{Z}_{p}\), for \(p>3\), will be \(p^{2}-p=\Phi(p^{2})\).
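A brute-force check of _Theorem 6_ is immediate. The sketch below (ours, not the implementation of [32]) counts the pairs \((a,b)\) with \(4a^{3}+27b^{2}\not\equiv 0\mod p\), which suffices since \(-16\) is a unit in \(\mathbb{Z}_{p}\), and compares the count with \(\Phi(p^{2})=p^{2}-p\):

```cpp
#include <cstdio>

int main() {
    for (long long p : {5, 7, 11, 13, 17, 19, 23}) {
        long long nonsingular = 0;
        for (long long a = 0; a < p; ++a)
            for (long long b = 0; b < p; ++b) {
                // Delta = -16(4a^3 + 27b^2); -16 is a unit, so only the bracket matters
                long long d = (4 * a % p * a % p * a + 27 * b % p * b) % p;
                if (d != 0) ++nonsingular;
            }
        std::printf("p=%2lld  N_R=%4lld  Phi(p^2)=%4lld\n", p, nonsingular, p * p - p);
    }
    return 0;
}
```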
As an immediate consequence, the following result reflects the extension of _Theorem 6_ to the odd composite number \(n=p_{1}p_{2}\ldots p_{l}\). Considering the well-established result on the decomposition of elliptic curves given in _Corollary 2.32_ [38], we explore the multiplicative nature of the number of nonsingular reduced Weierstrass elliptic curves over \(\mathbb{Z}_{n}\).
**Theorem 7**.: _For the odd composite number \(n=p_{1}p_{2}\ldots p_{l}\), \(N_{R}(\mathbb{Z}_{n})=\prod\limits_{i=1}^{l}N_{R}(\mathbb{Z}_{p_{i}})\)._
Proof.: For the odd composite integer \(n=p_{1}p_{2}\ldots p_{l}\), we have the decomposition \(E(\mathbb{Z}_{n})\cong E(\mathbb{Z}_{p_{1}})\oplus E(\mathbb{Z}_{p_{2}})\oplus\ldots\oplus E(\mathbb{Z}_{p_{l}})\), as an illustration of the Fundamental Theorem of Finite Abelian Groups [38]. Note that the group law can be defined by applying a reduction function to points on elliptic curves over \(\mathbb{Z}_{n}\), followed by the modular addition over \(\mathbb{Z}_{p}\). The following expression can be established considering the class leaders \(\mathbb{E}_{k}\) over \(\mathbb{Z}_{n}\).
\[\begin{split} N_{R}(\mathbb{Z}_{n})&=\sum_{\mathbb{ E}_{k}}N_{R}^{(k)}(\mathbb{Z}_{n})\\ &=\sum_{\mathbb{E}_{k}}\prod_{i=1}^{l}N_{R}^{(k_{i})}(\mathbb{Z}_ {p_{i}})\end{split} \tag{10}\]
where \(N_{R}^{(k)}(\mathbb{Z}_{n})\) represents the total number of reduced elliptic curves over \(\mathbb{Z}_{n}\) that can be derived from each of the class leader \(\mathbb{E}_{k}\).
Using the decomposition of each \(\mathbb{E}_{k}\) over \(\mathbb{Z}_{p_{i}}\) and the nature of the isomorphism, we further obtain,
\[\begin{split} N_{R}(\mathbb{Z}_{n})&=\prod_{i=1}^{l }\sum_{\mathbb{E}_{k_{i}}}N_{R}^{(k_{i})}(\mathbb{Z}_{p_{i}})\\ N_{R}(\mathbb{Z}_{n})&=\prod_{i=1}^{l}N_{R}( \mathbb{Z}_{p_{i}})\end{split} \tag{11}\]
Hence proved.
Due to the unavailability of an analytic proof of _Conjecture 5_, we focus on the lower and upper bounds of the number of nonsingular elliptic curves over \(\mathbb{Z}_{p^{m}}\).
**Proposition 1**.: _For odd prime \(p\), the lower bound \(N_{R}^{\prime}(\mathbb{Z}_{p^{m}})\) on the number of nonsingular reduced Weierstrass elliptic curves over \(\mathbb{Z}_{p^{m}}\) is \(p^{2m-1}\)._
Proof.: Consider the reduced Weierstrass elliptic curve over \(\mathbb{Z}_{p^{m}}\), \(E:y^{2}=x^{3}+ax+b\), along with the discriminant \(\Delta=-16(4a^{3}+27b^{2})\). To count the nonsingular curves, we should identify those pairs \((a,b)\) such that \(\Delta\) is a non-unit. For the pairs \((a,b)\) such that \(\Delta=0\), we have \(4a^{3}+27b^{2}=0\). Now we can find a trial solution to this equation as \(a=-3c^{2}\) and \(b=2c^{3}\) for some \(c\) in \(\mathbb{Z}_{p^{m}}\), where \(c\) is uniquely determined as \(c=-3b/2a\). This leads to the count for a unique \(c\), precisely by taking \(a\) as a unit and \(b\) as any element in \(\mathbb{Z}_{p^{m}}\). Considering \(\Phi(p^{m})=p^{m}-p^{m-1}\), the result follows.
Utilizing the approach adopted in _Theorem 6_, we extend this proof to the reduced elliptic curves over the finite rings \(\mathbb{Z}_{p^{m}}\), for odd primes \(p\) satisfying \(\gcd(p^{m},6)=1\). As with the number-theoretic results on the distribution of \(CR\)s over the finite field \(\mathbb{Z}_{p}\), we used computational techniques to determine the distribution of \(QR\)s and \(CR\)s over the finite rings \(\mathbb{Z}_{p^{m}}\). The computational data [32] showed that \(QR\)s and \(CR\)s can also be classified based on their occurrences.
For \(u\in\mathbb{Z}_{p^{m}}\) and a positive integer \(k\), we define the set \(A_{u}^{k}(p^{m})=\{x\mid x^{k}\equiv u\mod p^{m}\}\). The order of the set \(A_{u}^{k}(p^{m})\), that is, the number of solutions of the congruence equation, plays a pivotal role in classifying the QRs with their respective occurrences. The subsequent example states the classes of QRs over \(\mathbb{Z}_{25}\) and \(\mathbb{Z}_{3125}\) in detail and hence devises the framework.
**Example 1**.: _Consider the ring \(\mathbb{Z}_{25}\) and the following table._
_Based on the occurrences of the QRs (excluding \(0\)), we present the classes \([i]\) of the QRs over \(\mathbb{Z}_{25}\)[32]._
_Similarly, the classification of the QRs (excluding \(0\)) over \(\mathbb{Z}_{3125},(5^{5}=3125)\) is stated below [32]._
Equivalently, the classifications for the cubic and sixth-order residues are derived and presented in [32].
We have arrived at the following results by summing the results for each class.
**Lemma 5**.: _The number of \(QR\)s over \(\mathbb{Z}_{p^{m}}\), is_
\[(\sum\limits_{i=0}^{\left\lfloor\frac{m-1}{2}\right\rfloor}\frac{1}{2}\Phi(p^{ m-2i}))+1.\]
Note that, for equivalent expressions with some mathematical adjustments on the number of \(QR\)s over \(\mathbb{Z}_{n}\) one can look at [39, 40, A000224], which also contains the separate results for the number of \(QR\)s over \(\mathbb{Z}_{2^{m}}\), \(\mathbb{Z}_{3^{m}}\), \(\mathbb{Z}_{5^{m}}\) and \(\mathbb{Z}_{7^{m}}\).
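For completeness, _Lemma 5_ can also be verified by enumerating the squares of \(\mathbb{Z}_{p^{m}}\) directly; the C++ sketch below (ours) compares the enumeration with the stated sum for \(p=5,7\) and \(m\leq 5\):

```cpp
#include <cstdio>
#include <set>

long long ipow(long long b, int e) { long long r = 1; while (e--) r *= b; return r; }

int main() {
    for (long long p : {5, 7}) {
        for (int m = 1; m <= 5; ++m) {
            long long n = ipow(p, m);
            std::set<long long> qr;                  // distinct squares, 0 included
            for (long long x = 0; x < n; ++x) qr.insert(x * x % n);
            long long formula = 1;                   // the residue 0
            for (int i = 0; m - 2 * i >= 1; ++i) {
                long long pk = ipow(p, m - 2 * i);
                formula += (pk - pk / p) / 2;        // Phi(p^{m-2i}) / 2
            }
            std::printf("p=%lld m=%d  #QR=%lld  formula=%lld\n",
                        p, m, (long long)qr.size(), formula);
        }
    }
    return 0;
}
```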
\begin{table}
\begin{tabular}{|c|c|c|} \hline \(u\) & \(A_{u}^{2}(25)\) & \(|A_{u}^{2}(25)|\) \\ \hline \(0\) & \(\{0,5,10,15,20\}\) & \(5\) \\ \hline \(1\) & \(\{1,24\}\) & \(2\) \\ \hline \(4\) & \(\{2,23\}\) & \(2\) \\ \hline \(6\) & \(\{9,16\}\) & \(2\) \\ \hline \(9\) & \(\{3,22\}\) & \(2\) \\ \hline \(11\) & \(\{6,19\}\) & \(2\) \\ \hline \(14\) & \(\{8,17\}\) & \(2\) \\ \hline \(16\) & \(\{4,21\}\) & \(2\) \\ \hline \(19\) & \(\{12,13\}\) & \(2\) \\ \hline \(21\) & \(\{11,14\}\) & \(2\) \\ \hline \(24\) & \(\{7,18\}\) & \(2\) \\ \hline \end{tabular}
\end{table}
Table II: Analysis of the QRs over \(\mathbb{Z}_{25}\).
\begin{table}
\begin{tabular}{|c|c|c|} \hline Classes & Occurrence of \(u\) & No. of \(u\) with same occurrence \\ \hline \([1]\) & \(2\) & \(1250\) \\ \hline \([2]\) & \(10\) & \(50\) \\ \hline \([3]\) & \(50\) & \(2\) \\ \hline \end{tabular}
\end{table}
Table IV: Classification of QRs over \(\mathbb{Z}_{3125}\).
\begin{table}
\begin{tabular}{|c|c|c|} \hline Classes & Occurrence of \(u\) & No. of \(u\) with same occurrence \\ \hline \([1]\) & \(2\) & \(10\) \\ \hline \end{tabular}
\end{table}
Table III: Classification of QRs over \(\mathbb{Z}_{25}\).
Considering _Example 1_ we arrive at the subsequent claim.
**Conjecture 2**.: _The occurrence of \(QR\)s in class \([i]\) of \(\mathbb{Z}_{p^{m}}\) is \(2p^{i-1}\) where \(1\leq i\leq\left\lfloor\frac{m-1}{2}\right\rfloor+1\)._
**Lemma 6**.: _For \(p\equiv 2\mod 3\), the number of \(CR\)s in \(\mathbb{Z}_{p^{m}}\), is_
\[(\sum_{i=0}^{\left\lfloor\frac{m-1}{3}\right\rfloor}\Phi(p^{m-3i}))+1\]
_and for \(p\equiv 1\mod 3\), the number of cubic residues in \(\mathbb{Z}_{p^{m}}\) is_
\[\frac{\sum_{i=0}^{\left\lfloor\frac{m-1}{3}\right\rfloor}\Phi(p^{m-3i})}{3}+1.\]
Using the computational data [32], the obtained formula coincides with the formula given at [40, A046530], which also contains separate results for the number of cubic residues over \(\mathbb{Z}_{2^{m}}\), \(\mathbb{Z}_{3^{m}}\), \(\mathbb{Z}_{5^{m}}\) and \(\mathbb{Z}_{7^{m}}\).
**Conjecture 3**.: _For \(p\equiv 2\mod 3\), the occurrence of \(CRs\) in class \([i]\) of \(\mathbb{Z}_{p^{m}}\), is \(p^{2(i-1)}\) and for \(p\equiv 1\mod 3\), the occurrence of \(CRs\) in class \([i]\) of \(\mathbb{Z}_{p^{m}}\) is \(3p^{2(i-1)}\) where \(1\leq i\leq\left\lfloor\frac{m-1}{3}\right\rfloor+1\)._
**Lemma 7**.: _For \(p\equiv 2\mod 3\), the number of sixth-power residues (quadratic residues that are also cubic residues) in \(\mathbb{Z}_{p^{m}}\) is_
\[(\sum_{i=0}^{\left\lfloor\frac{m-1}{6}\right\rfloor}\frac{1}{2}\Phi(p^{m-6i}))+1\]
_and for \(p\equiv 1\mod 3\), the number of sixth-power residues in \(\mathbb{Z}_{p^{m}}\) is_
\[(\sum_{i=0}^{\left\lfloor\frac{m-1}{6}\right\rfloor}\frac{1}{6}\Phi(p^{m-6i}))+1.\]
For further analysis, readers can look into the exact formula for the number of \(k^{th}\) residues modulo \(n\)[41], and hence the Lemma 7 is the particular case when \(k=6\) and \(n=p^{m}\).
**Lemma 8**.: _[_42_]_ _Considering \(A_{0}^{k}(p^{m})=\{x\mid x^{k}\equiv 0\mod p^{m}\}\), we have,_
1. \(|A_{0}^{2}|=p^{\lfloor\frac{m}{2}\rfloor}\)_._
2. \(|A_{0}^{3}|=p^{\lfloor\frac{2m}{3}\rfloor}\)_._
_Furthermore, for composite \(n\), the formulae will be multiplicative._
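Since the exponents in _Lemma 8_ are easy to garble, the following sketch (ours) checks them against direct enumeration; for instance, it reproduces \(|A_{0}^{2}(25)|=5\) from _Table II_:

```cpp
#include <cstdio>

long long ipow(long long b, int e) { long long r = 1; while (e--) r *= b; return r; }

int main() {
    for (long long p : {5, 7}) {
        for (int m = 1; m <= 5; ++m) {
            long long n = ipow(p, m), sq = 0, cu = 0;
            for (long long x = 0; x < n; ++x) {
                if (x * x % n == 0) ++sq;            // x^2 = 0 mod p^m
                if (x * x % n * x % n == 0) ++cu;    // x^3 = 0 mod p^m
            }
            std::printf("p=%lld m=%d  |A_0^2|=%lld (expect p^%d)  |A_0^3|=%lld (expect p^%d)\n",
                        p, m, sq, m / 2, cu, 2 * m / 3);
        }
    }
    return 0;
}
```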
Utilizing the abovementioned results, we further intend to outline the bounds on \(N_{R}(\mathbb{Z}_{p^{m}})\). Considering \(\Delta^{i}(n)\) as the number of solutions to \(\Delta\equiv i\mod n\), we state the following claim that complies with the computational data attained separately.
**Conjecture 4**.: _The upper bound \(N_{R}^{\prime\prime}(\mathbb{Z}_{p^{m}})\) on the number of nonsingular reduced Weierstrass elliptic curves over \(\mathbb{Z}_{p^{m}}\) is given by_
\[N_{R}^{\prime\prime}(\mathbb{Z}_{p^{m}})=(p^{2m}-\Delta^{0}(p^{m}))\geq N_{R }(\mathbb{Z}_{p^{m}}) \tag{12}\]
_where \(p\) is odd prime, \(\gcd(p^{m},6)=1\)._
As stated in _Equation 9_, \(\Delta\equiv 0\mod p^{m}\) leads to the simple equation
\[\begin{split} s_{1}^{3}+s_{2}^{2}&\equiv 0\mod(p^{m}) \\ \implies s_{1}^{3}&\equiv-s_{2}^{2}\mod(p^{m}) \end{split} \tag{13}\]
where \((3^{-1}a,2^{-1}b)=(s_{1},s_{2})\) since 27 and 4 are \(CR\) and \(QR\) over \(\mathbb{Z}_{p^{m}}\) respectively and \(2,3\) are units in \(\mathbb{Z}_{p^{m}}\).
To solve _Equation 13_, all the possible residues of each order, with their respective multiplicities (including \(0\)), are considered. Now for \(p\equiv 1\mod 3\) we have,
\[\begin{split}\Delta^{0}(p^{m})&=(\sum_{i=0}^{\left\lfloor\frac{m-1}{6}\right\rfloor}\frac{1}{6}\Phi(p^{m-6i})\cdot 3p^{2i}\cdot 2p^{i})+p^{\left\lfloor\frac{m}{2}\right\rfloor+\left\lfloor\frac{2m}{3}\right\rfloor}\\ &=(\sum_{i=0}^{\left\lfloor\frac{m-1}{6}\right\rfloor}p^{3i}\Phi(p^{m-6i}))+p^{\left\lfloor\frac{m}{2}\right\rfloor+\left\lfloor\frac{2m}{3}\right\rfloor}\\ &=(\sum_{i=0}^{\left\lfloor\frac{m-1}{6}\right\rfloor}\Phi(p^{m-3i}))+p^{\left\lfloor\frac{m}{2}\right\rfloor+\left\lfloor\frac{2m}{3}\right\rfloor}\end{split}\]
where the last term \(p^{\left\lfloor\frac{m}{2}\right\rfloor+\left\lfloor\frac{2m}{3}\right\rfloor}=|A_{0}^{2}(p^{m})|\cdot|A_{0}^{3}(p^{m})|\) counts the pairs with \(s_{1}^{3}\equiv s_{2}^{2}\equiv 0\mod p^{m}\) (_Lemma 8_).
Similarly, one can obtain the same formula for \(p\equiv 2\mod 3\).
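The closed form above can be validated against a direct enumeration of the pairs \((a,b)\) with \(4a^{3}+27b^{2}\equiv 0\mod p^{m}\); the sketch below (ours) reproduces, e.g., \(\Delta^{0}(7^{3})=637\) and hence the upper bound \(117012\) appearing in _Table V_:

```cpp
#include <cstdio>

long long ipow(long long b, int e) { long long r = 1; while (e--) r *= b; return r; }
long long phi_pk(long long p, int k) { return ipow(p, k) - ipow(p, k - 1); }

int main() {
    for (long long p : {5, 7}) {
        for (int m = 1; m <= 3; ++m) {
            long long n = ipow(p, m), direct = 0;
            for (long long a = 0; a < n; ++a)
                for (long long b = 0; b < n; ++b)
                    if ((4 * a % n * a % n * a + 27 * b % n * b) % n == 0) ++direct;
            long long closed = ipow(p, m / 2 + 2 * m / 3);   // pairs with s1^3 = s2^2 = 0
            for (int i = 0; m - 6 * i >= 1; ++i)             // unit term of class [i+1]
                closed += ipow(p, 3 * i) * phi_pk(p, m - 6 * i);
            std::printf("p=%lld m=%d  direct=%lld  closed=%lld\n", p, m, direct, closed);
        }
    }
    return 0;
}
```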
The supporting data is shown in Table V. The corresponding graphical representation of the bounds and the obtained values of the reduced nonsingular Weierstrass elliptic curves is also given in _Figure 1_.
For the non-unit \(v\in\mathbb{Z}_{p^{m}}\), we have the computational data for \(\Delta^{v}(p^{m})\) and the analysis leads to the final result \(N_{R}(\mathbb{Z}_{p^{m}})=\Phi(p^{2m})\).
Based on the computational data [32], a more general statement of _Theorem 6_ over \(\mathbb{Z}_{n}\) is presented below.
**Conjecture 5**.: _For \(gcd(n,6)=1\), the number of nonsingular reduced Weierstrass elliptic curves over \(\mathbb{Z}_{n}\), \(N_{R}(\mathbb{Z}_{n})=\Phi(n^{2})\)._
Finally, the following results provide insight into the classification of the reduced Weierstrass elliptic curves over the finite rings \(\mathbb{Z}_{n}\), in light of _Theorem 2_ and _Theorem 3_, which characterize reduced elliptic curves over \(\mathbb{F}_{q}\) with \(char(\mathbb{F}_{q})\neq 2,3\).
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline & Lower Bound & Actual Value & Upper Bound \\ \hline \hline \(p^{m}\) & \(N_{R}^{\prime}(\mathbb{Z}_{p^{m}})\) & \(N_{R}(\mathbb{Z}_{p^{m}})\) & \(N_{R}^{\prime\prime}(\mathbb{Z}_{p^{m}})\) \\ \hline \(5^{1}\) & \(5\) & \(20\) & \(20\) \\ \hline \(5^{2}\) & \(125\) & \(500\) & \(580\) \\ \hline \(5^{3}\) & \(3125\) & \(12500\) & \(15400\) \\ \hline \(7^{1}\) & \(7\) & \(42\) & \(42\) \\ \hline \(7^{2}\) & \(343\) & \(2058\) & \(2310\) \\ \hline \(7^{3}\) & \(16807\) & \(100842\) & \(117012\) \\ \hline \end{tabular}
\end{table}
Table V: Computational results on lower bound(\(N_{R}^{{}^{\prime}}(\mathbb{Z}_{p^{m}})\)), actual value(\(N_{R}(\mathbb{Z}_{p^{m}})\)), and upper bound(\(N_{R}^{{}^{\prime\prime}}(\mathbb{Z}_{p^{m}})\)) on the number of nonsingular reduced Weierstrass elliptic curves over \(\mathbb{Z}_{p^{m}}\).
**Theorem 8**.: _The number of reduced Weierstrass elliptic curves isomorphic to given curve \(E/\mathbb{Z}_{n}\) is \(\frac{\Phi(n)}{|Aut(E)|}\)._
Proof.: Considering the transformation function \(\tau:(x,y)\rightarrow(u^{2}x,u^{3}y),u\in\ \mathbb{Z}_{n}^{*}\) and _Theorem 2_, the result follows directly.
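The orbit counting behind _Theorem 8_ can be made concrete. The sketch below (ours) computes, over \(\mathbb{Z}_{13}\), the orbit of a curve under \((a,b)\mapsto(u^{4}a,u^{6}b)\), \(u\in\mathbb{Z}_{13}^{*}\), together with its automorphism count, confirming that their product is \(\Phi(13)=12\) for the generic, \(j=0\), and \(j=1728\) cases:

```cpp
#include <cstdio>
#include <set>
#include <utility>

long long pw(long long x, int e, long long p) {
    long long r = 1; while (e--) r = r * x % p; return r;
}

int main() {
    const long long p = 13;
    // generic (a,b), j = 0 (a = 0), and j = 1728 (b = 0) representatives
    long long cases[3][2] = {{1, 1}, {0, 1}, {1, 0}};
    for (auto& c : cases) {
        long long a = c[0], b = c[1];
        std::set<std::pair<long long, long long>> orbit;
        int aut = 0;
        for (long long u = 1; u < p; ++u) {
            long long a2 = pw(u, 4, p) * a % p, b2 = pw(u, 6, p) * b % p;
            orbit.insert({a2, b2});
            if (a2 == a && b2 == b) ++aut;     // u fixing the curve: an automorphism
        }
        std::printf("(a,b)=(%lld,%lld)  orbit=%zu  |Aut|=%d  product=%zu\n",
                    a, b, orbit.size(), aut, orbit.size() * aut);
    }
    return 0;
}
```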
One should note that _Theorem 8_ helps in identifying a representative reduced elliptic curve for each set of reduced curves over \(\mathbb{Z}_{n}\), along with the cardinality of that set. We term the representative curve of each group of curves the "_Class Leader_" in this work.
The subsequent results give an exact formula to count the isomorphism classes of reduced curves over \(\mathbb{Z}_{p^{m}}\) and over \(\mathbb{Z}_{n}\), where \(n\) is a composite number. Combining _Theorem 3_ and these results, one can find the extended outcome for any finite ring \(\mathbb{Z}_{n}\), whichever factorization of \(n\) is considered.
**Theorem 9**.: _The number of isomorphism classes of reduced Weierstrass elliptic curves over finite ring \(\mathbb{Z}_{p^{m}},\ \gcd(p,6)=1\) will be,_
\[C_{R}(\mathbb{Z}_{p^{m}})=\left\{\begin{array}{lcl}2p^{m}+6&when&p\equiv 1 \mod 12\\ 2p^{m}+2&when&p\equiv 5\mod 12\\ 2p^{m}+4&when&p\equiv 7\mod 12\\ 2p^{m}&when&p\equiv 11\mod 12\end{array}\right.\]
Proof.: Consider the isomorphic reduced Weierstrass elliptic curves \(E_{1}/\mathbb{Z}_{p^{m}}:y^{2}=x^{3}+ax+b\) and \(E_{2}/\mathbb{Z}_{p^{m}}:y^{2}=x^{3}+\bar{a}x+\bar{b}\) along with the corresponding relations \(\bar{a}=u^{-4}a\) and \(\bar{b}=u^{-6}b\), \(u\in\mathbb{Z}_{p^{m}}^{*}\) [35]. An analogous approach to _Theorem 3_ leads to the fact that the number of isomorphism classes of elliptic curves over \(\mathbb{Z}_{p^{m}}\) corresponds to \(|Aut(E_{1})|\), and we are left with three cases: **a)** for \(a\neq 0\) and \(b\neq 0\) (\(j(E)\neq 0,1728\)), \(|Aut(E)|=2\); **b)** for \(a=0\) and \(b\neq 0\) (\(j(E)=0\)), either \(|Aut(E)|=6\) or \(|Aut(E)|=2\); and **c)** for \(a\neq 0\) and \(b=0\) (\(j(E)=1728\)), either \(|Aut(E)|=4\) or \(|Aut(E)|=2\).
Subsequently, considering _Theorem 6_ and _Theorem 8_, we can immediately write the following equation,
\[\sum_{E_{k}}\frac{\Phi(n)}{|Aut(E_{k})|}=\Phi(n^{2}) \tag{14}\]
where the summation is taken over the set of isomorphism class representatives of the elliptic curves defined over \(\mathbb{Z}_{p^{m}}\). Since \(\gcd(p,6)=1\), we have \(p\equiv 1,5,7,11\mod 12\).
For the sake of simplicity, we consider the case when \(p\equiv 1\mod 12\), and the rest of the cases will follow similarly.
The elementary number theoretic approach ensures \(p\equiv 1\mod l\), where \(2\leq l\leq 4\). So, we are left with the following subcases.
1. \(p\equiv 1\mod 2\) and hence \(p^{m}\equiv 1\mod 2\), \(i.e.\), \(p=2i+1\) and \(p^{m}=2j+1\), \(i,j\in\mathbb{N}\). A simple calculation shows that \(2\mid\Phi(p^{m})\), and so, using Cauchy's theorem, we can ensure that there exists an element of order \(2\) in \(\mathbb{Z}_{p^{m}}^{*}\).
2. \(p\equiv 1\mod 3\) and hence \(p^{m}\equiv 1\mod 3\), \(i.e.\), \(p=3i+1\) and \(p^{m}=3j+1\), \(i,j\in\mathbb{N}\). Since \(3\mid\Phi(p^{m})\), using Cauchy's theorem, we ensure that there exists an element of order \(3\) in \(\mathbb{Z}_{p^{m}}^{*}\).
3. \(p\equiv 1\mod 4\) and hence \(p^{m}\equiv 1\mod 4\), \(i.e.\), \(p=4i+1\) and \(p^{m}=4j+1\), \(i,j\in\mathbb{N}\). As \(4\mid\Phi(p^{m})\), using Sylow's first theorem, \(\mathbb{Z}_{p^{m}}^{*}\) contains an element of order \(4\).
Therefore, from the above three cases, it can be inferred that the possible values of \(|Aut(E_{1})|\) are \(2\), \(4\), and \(6\).
By fixing \(|Aut(E)|=6\), we get \(\Delta=-16(27b^{2})\), since \(a=0\). Let \(k_{1}\) be the number of isomorphism classes of reduced Weierstrass elliptic curves having the order of the respective automorphism group as \(6\). The number of possibilities for \(b\) to be a unit is \(\Phi(p^{m})\), and therefore we obtain the following relation
\[(\frac{\Phi(p^{m})}{6})k_{1} =\Phi(p^{m})\] \[\implies k_{1} =6\]
Similarly, if \(k_{2}\) represents the number of isomorphism classes of reduced Weierstrass elliptic curves having the order of the respective automorphism group as \(4\), then \(k_{2}=4\). Let \(k_{3}\) represent the number of isomorphism classes of reduced Weierstrass elliptic curves having the order of the respective automorphism group as \(2\). Now we have \(\Phi(p^{2m})\) non-singular curves in total over \(\mathbb{Z}_{p^{m}}\), out of which \(2\Phi(p^{m})\) curves have either \(|Aut(E_{k})|=4\) or \(|Aut(E_{k})|=6\). So the total number of non-singular curves with \(|Aut(E_{k})|=2\) will be \(\Phi(p^{2m})-2\Phi(p^{m})=\Phi(p^{m})(p^{m}-2)\), and hence the number of isomorphism classes will be \(k_{3}=2p^{m}-4\), using _Equation 6_.
Finally, if \(p\equiv 1\mod 12\), then there are \(k_{1}+k_{2}+k_{3}=6+4+2p^{m}-4=2p^{m}+6\) isomorphism classes of elliptic curves over \(\mathbb{Z}_{p^{m}}\).
Using an analogous approach, one can derive the number of isomorphism classes for the other congruence classes of \(p\), and this concludes the proof.
As the immediate consequence, we obtain the following result on the multiplicative form of the number of isomorphism classes given in _Theorem 9_.
**Theorem 10**.: _For the odd composite integer \(n=p_{1}p_{2}\ldots p_{l}\), \(C_{R}(\mathbb{Z}_{n})=\prod\limits_{i=1}^{l}C_{R}(\mathbb{Z}_{p_{i}})\)._
Proof.: For the number of isomorphism classes of reduced Weierstrass elliptic curves over the finite rings \(\mathbb{Z}_{p_{i}}\) \((1\leq i\leq l)\), we have,
\[C_{R}(\mathbb{Z}_{p_{i}})=\left\{\begin{array}{lcl}2p_{i}+6&when&p_{i}\equiv 1\mod 12\\ 2p_{i}+2&when&p_{i}\equiv 5\mod 12\\ 2p_{i}+4&when&p_{i}\equiv 7\mod 12\\ 2p_{i}&when&p_{i}\equiv 11\mod 12\end{array}\right.\]
Considering all the possible automorphism classes over the class representatives, we obtain,
\[C_{R}(\mathbb{Z}_{n})=\sum\limits_{\mathbb{E}_{k}}C_{R}^{(k)}(\mathbb{Z}_{n}) \tag{15}\]
where \(C_{R}^{(k)}(\mathbb{Z}_{n})\) represents the number of isomorphism classes of each of the class leaders \(\mathbb{E}_{k}\) in reduced form over \(\mathbb{Z}_{n}\).
Now _Equation 14_ leads to,
\[C_{R}^{(k)}(\mathbb{Z}_{n})=\frac{N_{R}^{(k)}(\mathbb{Z}_{n}).|Aut(\mathbb{E} _{k})|}{\Phi(n)} \tag{16}\]
For the odd composite integer \(n=p_{1}p_{2}\ldots p_{l}\), we have the decomposition form \(E(\mathbb{Z}_{n})\cong E(\mathbb{Z}_{p_{1}})\oplus E(\mathbb{Z}_{p_{2}}) \oplus\ldots\oplus E(\mathbb{Z}_{p_{l}})\)[38]. This leads to the following results,
1. \(|Aut(\mathbb{E}_{k})|\) over \(\mathbb{Z}_{n}=\prod\limits_{i=1}^{l}|Aut(\mathbb{E}_{k_{i}})|\) over \(\mathbb{Z}_{p_{i}}\) ([43])
2. \(N_{R}^{(k)}(\mathbb{Z}_{n})=\prod\limits_{i=1}^{l}N_{R}^{(k_{i})}(\mathbb{Z}_ {p_{i}})\)
Applying the above results to _Equation 16_, we get,
\[C_{R}^{(k)}(\mathbb{Z}_{n}) =\frac{\prod\limits_{i=1}^{l}N_{R}^{(k_{i})}(\mathbb{Z}_{p_{i}}). \prod\limits_{i=1}^{l}|Aut(\mathbb{E}_{k_{i}})|}{\prod\limits_{i=1}^{l}\Phi(p_ {i})}\] \[=\prod\limits_{i=1}^{l}\frac{N_{R}^{(k_{i})}(\mathbb{Z}_{p_{i}}).| Aut(\mathbb{E}_{(k_{i})})|}{\Phi(p_{i})}\] \[=\prod\limits_{i=1}^{l}C_{R}^{(k_{i})}(\mathbb{Z}_{p_{i}})\]
Therefore, from _Equation 15_, we have,
\[\begin{split} C_{R}(\mathbb{Z}_{n})&=\sum\limits_{ \mathbb{E}_{k}}C_{R}^{(k)}(\mathbb{Z}_{n})\\ &=\sum\limits_{\mathbb{E}_{k}}\prod\limits_{i=1}^{l}C_{R}^{(k_{i} )}(\mathbb{Z}_{p_{i}})\end{split} \tag{17}\]
Finally, considering the decomposition of each \(\mathbb{E}_{k}\) over \(\mathbb{Z}_{p_{i}}\) and the nature of the isomorphism, we obtain the desired result.
\[\begin{split} C_{R}(\mathbb{Z}_{n})&=\prod\limits_ {i=1}^{l}\sum\limits_{\mathbb{E}_{k_{i}}}C_{R}^{(k_{i})}(\mathbb{Z}_{p_{i}})\\ &=\prod\limits_{i=1}^{l}C_{R}(\mathbb{Z}_{p_{i}}).\end{split} \tag{18}\]
As a direct outcome, we probe into extending the result for the odd integer \(n=p_{1}^{e_{1}}p_{2}^{e_{2}}\ldots p_{l}^{e_{l}}\), \(\forall e_{i}\geq 1\), by ensuring the isomorphism between \(E(\mathbb{Z}_{n})\) and \(E(\mathbb{Z}_{p_{i}^{e_{i}}})\) in the decomposed form.
However, while the sufficient conditions required to establish the isomorphism between elliptic curves over \(\mathbb{Z}_{p^{m}}\) can be attained by a slight modification of the mapping given in [38], the point addition over \(\mathbb{Z}_{p^{m}}\) is not at hand according to the existing literature. For illustration, we present the following example.
**Example 2**.: _Consider the points \(P(1,1)\) and \(Q(21,4)\) on the elliptic curve \(y^{2}=x^{3}+24x+1\) over \(\mathbb{Z}_{25}\). Adding these two points over \(\mathbb{Z}_{25}\) requires the slope \(m=3\cdot 20^{-1}\), which does not exist over \(\mathbb{Z}_{25}\) since \(20\) is not a unit._
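The failure in _Example 2_ is reproduced by the following sketch (ours), which verifies that both points lie on the curve and that the slope denominator \(20\) shares the factor \(5\) with the modulus \(25\):

```cpp
#include <cstdio>

long long gcd(long long a, long long b) { return b ? gcd(b, a % b) : a; }

int main() {
    const long long n = 25;
    long long px = 1, py = 1, qx = 21, qy = 4;
    // both residuals must vanish mod 25 for the points to be on the curve
    std::printf("P residual: %lld\n", (px * px * px + 24 * px + 1 - py * py) % n);
    std::printf("Q residual: %lld\n", (qx * qx * qx + 24 * qx + 1 - qy * qy) % n);
    long long num = (qy - py) % n, den = (qx - px) % n;   // chord slope = 3 / 20
    std::printf("slope = %lld / %lld,  gcd(%lld, %lld) = %lld (no inverse)\n",
                num, den, den, n, gcd(den, n));
    return 0;
}
```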
Therefore, we state the following conjecture obtained from the computational data [32].
**Conjecture 6**.: _For the odd composite integer \(n=p_{1}^{e_{1}}p_{2}^{e_{2}}\ldots p_{l}^{e_{l}}\), the expression of \(C_{R}(\mathbb{Z}_{n})\) given in Theorem 9 will be multiplicative, i.e., \(C_{R}(\mathbb{Z}_{n})=\prod\limits_{i=1}^{l}C_{R}(\mathbb{Z}_{p_{i}^{e_{i}}})\)._
### _Results on generalized Weierstrass elliptic curves over \(\mathbb{Z}_{n}\)_
The computational data demonstrate that the number of nonsingular generalized Weierstrass elliptic curves over \(\mathbb{Z}_{n}\) is \(\Phi(n^{5})\) [32]. A similar formula can be found in [40, A238533], which provides the number of solutions to \(\gcd(x^{2}+y^{2}+z^{2}+w^{2}+t^{2},n)=1\).
In this case, the subsequent set of equations should be solved; hence, we present the following conjecture.
\[\begin{split}\Delta&=-b_{2}^{2}b_{8}-8b_{4}^{3}-27b_{6} ^{2}+9b_{2}b_{4}b_{6}\\ b_{2}&=a_{1}^{2}+4a_{2}\\ b_{4}&=2a_{4}+a_{1}a_{3}\\ b_{6}&=a_{3}^{2}+4a_{6}\\ b_{8}&=a_{1}^{2}a_{6}+4a_{2}a_{6}-a_{1}a_{3}a_{4}+a_{2} a_{3}^{2}-a_{4}^{2}\end{split}\]
**Conjecture 7**.: _The number of non-singular generalized Weierstrass elliptic curves over \(\mathbb{Z}_{n}\), \(N_{G}(\mathbb{Z}_{n})=\Phi(n^{5})\)._
Corresponding computational validation is given in _Table VIII_.
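As a small self-contained check of _Conjecture 7_ (ours, independent of _Table VIII_), the sketch below enumerates all \(5^{5}\) coefficient tuples over \(\mathbb{Z}_{5}\), evaluates the general discriminant from the \(b_{2},b_{4},b_{6},b_{8}\) quantities, and counts the tuples for which \(\Delta\) is a unit; the expected count is \(\Phi(5^{5})=2500\):

```cpp
#include <cstdio>

int main() {
    const long long n = 5;     // a prime, so Delta is a unit iff Delta != 0 mod n
    long long count = 0;
    for (long long a1 = 0; a1 < n; ++a1)
    for (long long a2 = 0; a2 < n; ++a2)
    for (long long a3 = 0; a3 < n; ++a3)
    for (long long a4 = 0; a4 < n; ++a4)
    for (long long a6 = 0; a6 < n; ++a6) {
        long long b2 = (a1 * a1 + 4 * a2) % n;
        long long b4 = (2 * a4 + a1 * a3) % n;
        long long b6 = (a3 * a3 + 4 * a6) % n;
        long long b8 = (a1 * a1 % n * a6 + 4 * a2 * a6 % n
                        - a1 * a3 % n * a4 + a2 * a3 % n * a3 - a4 * a4) % n;
        long long d  = (-b2 * b2 % n * b8 - 8 * b4 * b4 % n * b4
                        - 27 * b6 * b6 + 9 * b2 * b4 % n * b6) % n;
        if ((d + n) % n != 0) ++count;
    }
    std::printf("N_G(Z_5) = %lld, Phi(5^5) = %d\n", count, 2500);
    return 0;
}
```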
**Theorem 11**.: _The number of unique generalized Weierstrass elliptic curves isomorphic to a given curve \(E/\mathbb{Z}_{n}\) is \(\frac{\Phi(n^{4})}{|Aut(E)|}\)._
Proof.: Considering the transformation function \(\tau:(x,y)\rightarrow(u^{2}x+r,u^{3}y+u^{2}sx+t)\), \(u\in\mathbb{Z}_{n}^{*}\), and \(r,s,t\in\mathbb{Z}_{n}\), the total number of elliptic curves isomorphic to a given curve \(E/\mathbb{Z}_{n}\) is \(n^{3}\Phi(n)=\Phi(n^{4})\) (since the numbers of possibilities for \(u,r,s\), and \(t\) are \(\Phi(n),n,n\), and \(n\), respectively, in \(\mathbb{Z}_{n}\)). We obtain the desired result using an argument similar to that given in _Theorem 2_.
**Conjecture 8**.: _The number of isomorphism classes of generalized Weierstrass elliptic curves over \(\mathbb{Z}_{n}\), \(C_{G}(\mathbb{Z}_{n})\) is unknown._
The conjecture can be split into the following cases:
1. \(n=p\) where \(p\) is prime
2. \(n=p^{m}\) where gcd(\(p^{m}\),6) = 1
3. \(n=2^{m}\)
4. \(n=3^{m}\)
5. \(n\) is composite integer (\(n=p_{1}p_{2}\ldots p_{k}\))
6. \(n\) is composite integer (\(n=p_{1}^{e_{1}}p_{2}^{e_{2}}\ldots p_{k}^{e_{k}}\))
The number of isomorphism classes for elliptic curves will remain the same as it does not depend on the generalized or reduced form of the curve. Considering the same, for the _Cases_\(1,2,\) and \(5\), \(C_{G}(\mathbb{Z}_{n})\) is given in _Theorem 3_, _Theorem 9_, and _Theorem 10_ respectively. All the remaining cases are open problems.
The successive section presents a rigorous analysis of the elliptic curves over \(\mathbb{Z}_{n}\) based on the data achieved through computational algorithms. The computational results are given extensively to classify the curves over integer modulo rings.
## V Computational Results
This section presents a simple brute-force algorithm, implemented in C++ (developed in Visual Studio Code), for classifying the elliptic curves over the ring \(\mathbb{Z}_{n}\). The crucial factor that contributes prominently here is the isomorphism between two elliptic curves under the appropriate transformation map. Using this idea, we can fix the _Class Leader_, the primary nonsingular elliptic curve that is isomorphic to the other candidates in its class through the respective transformation map. Continuing the process over all the nonsingular curves, we obtain the complete classification of elliptic curves over the finite ring \(\mathbb{Z}_{n}\).
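A minimal sketch of this procedure is given below (ours; the full implementation and data are in [32]). It classifies the reduced curves over \(\mathbb{Z}_{35}\) by sweeping \(u\) through the units and merging each curve into the orbit of the first unvisited representative; by _Theorem 7_ and _Table VII_, the expected output is \(840\) nonsingular curves partitioned into \(216\) classes:

```cpp
#include <cstdio>
#include <set>
#include <utility>
#include <vector>

using Curve = std::pair<long long, long long>;   // reduced curve y^2 = x^3 + ax + b

int main() {
    const long long n = 35;                      // 5 * 7, gcd(n, 6) = 1
    auto unit = [&](long long x) {               // gcd(x, n) == 1 ?
        long long a = x, b = n;
        while (b) { long long t = a % b; a = b; b = t; }
        return a == 1;
    };
    std::set<Curve> nonsingular;
    for (long long a = 0; a < n; ++a)
        for (long long b = 0; b < n; ++b)
            // -16 is a unit mod 35, so Delta is a unit iff 4a^3 + 27b^2 is
            if (unit((4 * a * a % n * a + 27 * b * b) % n))
                nonsingular.insert({a, b});
    std::set<Curve> seen;
    std::vector<Curve> leaders;                  // one Class Leader per class
    for (Curve c : nonsingular) {
        if (seen.count(c)) continue;
        leaders.push_back(c);
        for (long long u = 1; u < n; ++u) {
            if (!unit(u)) continue;
            long long u2 = u * u % n, u4 = u2 * u2 % n, u6 = u4 * u2 % n;
            seen.insert({u4 * c.first % n, u6 * c.second % n});
        }
    }
    std::printf("n=%lld  N_R=%zu  classes=%zu\n", n, nonsingular.size(), leaders.size());
    return 0;
}
```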
The computational results are principally divided into generalized and reduced Weierstrass curve categories. This follows the subsequent results.
### _Classification of reduced Weierstrass elliptic curves over \(\mathbb{Z}_{n}\), where \(6\nmid char(\mathbb{Z}_{n})\)_
For the reduced Weierstrass elliptic curves over the finite rings \(\mathbb{Z}_{n}\), where \(6\nmid n\), the computational data are summarized in _Table VI_ and _Table VII_.
Specifically, _Table VI_ presents the values of \(N_{R}(\mathbb{Z}_{n})\), reflecting the results of _Theorem 1_ and _Theorem 6_ in compact form.
For the computational data on the number of classes of reduced Weierstrass curves \(C_{R}(\mathbb{Z}_{n})\), one can look into _Table VII_. The computational data also validate _Theorem 9_ and coincide with _Theorem 3_.
### _Classification of generalized Weierstrass elliptic curves over \(\mathbb{Z}_{n}\)_
We summarize the computational data related to the generalized Weierstrass elliptic curve in _Table VIII_ and _Table IX_.
In particular, _Table VIII_ articulates \(N_{G}(\mathbb{Z}_{n})\), coinciding with _Theorem 1_ and _Theorem 7_ in the compact form over the finite fields and finite rings respectively.
In _Table IX_, we present the computational results on the number of classes of generalized Weierstrass curves \(C_{G}(\mathbb{Z}_{n})\), which correspond to _Theorem 5_ and _Theorem 8_.
To provide an extensive outlook on the computational data, we classify the nonsingular generalized Weierstrass elliptic curves with their respective _Class Leaders_ and appropriate transformation maps over the ring \(\mathbb{Z}_{5}\) in _Table X_. There are \(12\) isomorphism classes with respective _Class Leaders_, each listed with the number of curves isomorphic to the corresponding _Class Leader_ under a particular coordinate-based transformation function. Each class has its \(|Aut(E)|\), which shows that, on applying the transformations to each curve, exactly \(|Aut(E)|\) transformations result in the same curve.
For the complete classification data in detail over finite rings \(\mathbb{Z}_{n}\), readers are referred to [32].
## VI Open Problems
Understanding the fundamental nature of the proposed categorization, we highlight some problems that will contribute principally to the comprehensive study of the elliptic curves over finite integer modulo rings.
\begin{table}
\begin{tabular}{|c|c|} \hline \(\mathbb{Z}_{n}\) & \(N_{R}(\mathbb{Z}_{n})\) \\ \hline
[MISSING_PAGE_POST]
\end{tabular}
\end{table}
Table VI: Computational results on the number of nonsingular reduced Weierstrass elliptic curves \(N_{R}(\mathbb{Z}_{n})\) over \(\mathbb{Z}_{n}\), where \(6\nmid n\).
\begin{table}
\begin{tabular}{|c|c|} \hline \(\mathbb{Z}_{n}\) & \(C_{R}(\mathbb{Z}_{n})\) \\ \hline
7 & 18 \\ \hline
11 & 22 \\ \hline
13 & 32 \\ \hline
17 & 36 \\ \hline
23 & 46 \\ \hline
29 & 60 \\ \hline
31 & 66 \\ \hline
35 & 216 \\ \hline
37 & 80 \\ \hline
41 & 84 \\ \hline
43 & 90 \\ \hline
47 & 94 \\ \hline
49 & 102 \\ \hline
53 & 108 \\ \hline
55 & 264 \\ \hline
59 & 118 \\ \hline \end{tabular}
\end{table}
Table VII: Computational data on the number of isomorphism classes for reduced Weierstrass elliptic curves \(C_{R}(\mathbb{Z}_{n})\) over \(\mathbb{Z}_{n}\), with the condition that \(6\nmid n\).
* _Conjecture 1._ The number of non-singular generalized Weierstrass elliptic curves over \(\mathbb{F}_{q}\) is \(q^{5}-q^{4}\).
* _Conjecture 5._ For \(gcd(n,6)=1\), the number of nonsingular reduced Weierstrass elliptic curves over \(\mathbb{Z}_{n}\), \(N_{R}(\mathbb{Z}_{n})=\Phi(n^{2})\).
* _Conjecture 7._ The number of non-singular generalized Weierstrass elliptic curves over \(\mathbb{Z}_{n}\), \(N_{G}(\mathbb{Z}_{n})=\Phi(n^{5})\).
* _Conjecture 8._ The number of isomorphism classes of generalized Weierstrass elliptic curves over \(\mathbb{Z}_{n}\), \(C_{G}(\mathbb{Z}_{n})\) is unknown. Necessarily, _Conjecture 6_ is also covered in this problem.
## VII Conclusion
The present work comprises the fundamental classification of Weierstrass elliptic curves over the finite ring \(\mathbb{Z}_{n}\). Both the generalized and reduced curves are considered in the study. The exact formulas for counting the number of nonsingular elliptic curves, the number of isomorphic curves to a given curve, and the number of isomorphism classes over a particular finite ring are presented with convincing justifications. To facilitate the purpose, extensive computational data [32] are also included in this work to validate the results. In the future, the central focus will be the extension of the Hasse-Weil-like bound on the number of points of an elliptic curve over \(\mathbb{Z}_{n}\).
|
2303.00985 | Ground test results of the electromagnetic interference for the x-ray
microcalorimeter onboard XRISM | Electromagnetic interference (EMI) for low-temperature detectors is a serious
concern in many missions. We investigate the EMI caused by the spacecraft
components to the x-ray microcalorimeter of the Resolve instrument onboard the
X-Ray Imaging and Spectroscopy Mission (XRISM), which is currently under
development by an international collaboration and is planned to be launched in
2023. We focus on the EMI from (a) the low-frequency magnetic field generated
by the magnetic torquers (MTQ) used for the spacecraft attitude control and (b)
the radio-frequency (RF) electromagnetic field generated by the S and X band
antennas used for communication between the spacecraft and the ground stations.
We executed a series of ground tests both at the instrument and spacecraft
levels using the flight-model hardware in 2021-2022 in a JAXA facility in
Tsukuba. We also conducted electromagnetic simulations partially using the
Fugaku high-performance computing facility. The MTQs were found to couple with
the microcalorimeter, which we speculate through pick-ups of low-frequency
magnetic field and further capacitive coupling. There is no evidence that the
resultant energy resolution degradation is beyond the current allocation of
noise budget. The RF communication system was found to leave no significant
effect. We present the result of the tests and simulation in this article. | Miki Kurihara, Masahiro Tsujimoto, Megan E. Eckart, Caroline A. Kilbourne, Frederick T. Matsuda, Brian McLaughlin, Shugo Oguri, Frederick S. Porter, Yoh Takei, Yoichi Kochibe | 2023-03-02T05:33:38Z | http://arxiv.org/abs/2303.00985v1 | Ground test results of the electromagnetic interference for the x-ray microcalorimeter onboard XRISM
###### Abstract
Electromagnetic interference (EMI) for low-temperature detectors is a serious concern in many missions. We investigate the EMI caused by the spacecraft components to the x-ray microcalorimeter of the _Resolve_ instrument onboard the X-Ray Imaging and Spectroscopy Mission (XRISM), which is currently under development by an international collaboration and is planned to be launched in 2023. We focus on the EMI from (a) the low-frequency magnetic field generated by the magnetic torquers (MTQ) used for the spacecraft attitude control and (b) the radio-frequency (RF) electromagnetic field generated by the S and X band antennas used for communication between the spacecraft and the ground stations. We executed a series of ground tests both at the instrument and spacecraft levels using the flight-model hardware in 2021-2022 in a JAXA facility in Tsukuba. We also conducted electromagnetic simulations partially using the Fugaku high-performance computing facility. The MTQs were found to couple with the microcalorimeter, which we speculate through pick-ups of low-frequency magnetic field and further capacitive coupling. There is no evidence that the resultant energy resolution degradation is beyond the current allocation of noise budget. The RF communication system was found to leave no significant effect. We present the result of the tests and simulation in this article.
low-temperature detector, x-ray microcalorimeter, electromagnetic interference, XRISM, high-performance computing
*Miki Kurihara, [email protected]
## 1 Introduction
Electromagnetic interference (EMI) is a growing concern in modern astronomical instruments with increasing demands for high sensitivity and low noise. One of the areas of serious concern is the low-temperature detectors in space-borne or air-borne platforms. A noise level of \(\mathcal{O}(10^{-18}\ \mathrm{W}/\sqrt{\mathrm{Hz}})\) is required in a densely packed test bed consuming \(\mathcal{O}(10^{3}\ \mathrm{W})\). Examples can be found in Planck[1] high-frequency instrument[2] and SPIDER[3] for cosmic microwave background observations and ASTRO-H[4] soft x-ray spectrometer (SXS)[5, 6] for x-ray observations. More examples will follow in future missions.
_Resolve[7]_ x-ray microcalorimeter for the X-Ray Imaging and Spectroscopy Mission (XRISM)[8] was designed to be almost the same as the SXS[5, 6] for ASTRO-H[4] to recover its excellent science programs yet to be achieved due to the unexpected early loss of the mission by the malfunction of the spacecraft attitude control in March 2016. The SXS suffered from the EMI from the bus system that degraded the detector performance[9], which was only recognized after the integration
of the instrument into the spacecraft. Still, the SXS demonstrated its excellent performance beyond the requirement in the orbit [10, 11].
Based on the lessons of the SXS, we started with an EMI verification program for _Resolve_ and conducted a series of tests in 2021-2022 at JAXA's Tsukuba space center both at the instrument and spacecraft levels. We recently finished the major part of these tests in August 2022. The purpose of this article is to report the results of the EMI ground tests and simulations so that it will be a reference in interpreting the in-orbit data with _Resolve_ and in designing future instruments susceptible to EMI. Amongst various EMI effects, we focus on (i) the radiative EMI by the low-frequency magnetic field and (ii) the high-frequency electromagnetic field. The former was observed in the SXS [9], while the latter remains unverified in the SXS. The conductive EMI was also tested, which is described in a separate article [7].
The article is structured as follows. In SS 2, we give an overview of the _Resolve_ instrument (SS 2.1) and the spacecraft (SS 2.2). The description is focused on the victims of the EMI, which are the microcalorimeter and anti-coincidence detectors and their signal chain (SS 2.1) and on the perpetrators of the EMI, which are the spacecraft attitude control and communication systems (SS 2.2). In SS 3, we present the low-frequency magnetic field EMI results of the simulations (SS 3.1), instrument-level test (SS 3.2), spacecraft-level test (SS 3.3), and discuss the coupling mechanism (SS 3.4). In SS 4, we present the high-frequency electromagnetic field EMI result in the same structure: simulation (SS 4.1), instrument-level test (SS 4.2), spacecraft-level test (SS 4.3), and discussion about the outcome (SS 4.4). The article is summarized in SS 5.
## 2 _Resolve_ onboard XRISM
### Resolve[7]
_Resolve[7]_ is one of the two science instruments onboard XRISM [8], which aims to achieve an energy resolution of 7 eV (FWHM) at 5.9 keV non-dispersively over a wide range of energy (0.3-12 keV). _Resolve_ hosts an array of 6\(\times\)6 x-ray microcalorimeter pixels thermally anchored to the 50 mK heat bath with a thermal time constant of 3.5 ms [12]. In each pixel, a HgTe absorber absorbs incoming x-ray photons and the resultant temperature rise is measured by a Si thermistor with a temperature-dependent impedance of \(\sim\)30 M\(\Omega\). Each microcalorimeter pixel is biased in series with a 140 M\(\Omega\) load resistor, and the change in the voltage across the thermistor is the signal. Below the microcalorimeter array, an anti-coincidence detector [12] (anti-co) is placed for identifying particle events. The anti-co is a Si ionization detector biased in series with a 2.5 M\(\Omega\) load resistor; the voltage across the load resistor is the signal, which is zero in quiescence. The bias voltage for the microcalorimeter pixels, but not the anti-co, is divided down by a factor of 121 between the connector at the dewar main shell and the top of load resistors [13]. The nine pixels within each quadrant of the array are connected to the same bias line. Each of these high impedance signal is converted into low impedance by a junction field effect transistor (JFET) source follower. The JFETs must be operated at 130 K, and, thus, are thermally isolated from the cold stage. Because of the commonalities and differences of the microcalorimeter and anti-co detectors, observation of the differential response is useful for evaluating potential coupling mechanisms of the noise.
The JFET signals are passed to the x-ray amplifier box (XBOX) [5], which AC-couples, amplifies, and applies an anti-aliasing filter before digitizing them at 12.5 kHz sampling. The digitized signal is relayed to other room-temperature electronics called the pulse shape processor (PSP) [14], which is responsible for x-ray event detection and reconstruction as well as collecting the detector
noise data. We use the spectra made from noise records of an 8k sample length (0.65536 s) for the frequency-domain data and dumps of continuous 50k samples (4.096 s) synchronous among all microcalorimeter and the anti-co channels for the time-domain data to evaluate the detector responses to EMI.
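As an illustration of how the power of a single interference line is extracted from such records, the following self-contained C++ sketch (synthetic data and a simplified estimator of our own, not the PSP flight software) evaluates the amplitude of a 127 Hz line from one 8k-sample record at the 12.5 kHz sampling rate:

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

int main() {
    const double pi = std::acos(-1.0);
    const double fs = 12500.0, f0 = 127.0;       // sampling rate, MTQ PWM frequency
    const int N = 8192;                          // one 8k noise record (0.65536 s)
    std::vector<double> v(N);
    for (int i = 0; i < N; ++i) {
        double t = i / fs;
        v[i] = 5e-3 * std::sin(2 * pi * f0 * t)  // injected 127 Hz pickup line
             + 1e-3 * std::cos(12345.6 * i);     // noise-like background
    }
    // single-bin discrete Fourier sum at f0 (a Goertzel-like estimate)
    double re = 0, im = 0;
    for (int i = 0; i < N; ++i) {
        re += v[i] * std::cos(2 * pi * f0 * i / fs);
        im -= v[i] * std::sin(2 * pi * f0 * i / fs);
    }
    double amp = 2 * std::hypot(re, im) / N;     // recovered line amplitude
    std::printf("estimated 127 Hz amplitude: %.3e (injected 5.000e-03)\n", amp);
    return 0;
}
```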
The detectors and JFETs are housed inside the dewar [15, 16]. A tank with superfluid He provides a stable thermal anchor of \(\sim\)1.2 K. From there, two adiabatic demagnetization refrigerators (ADRs) work in series to cool the detector stage to 50 mK [17], controlled by room-temperature electronics called the ADR controller (ADRC) [5]. Between the detector stage and the dewar main shell are multiple isolated thermal shields made of Al of a few mm thickness. The dewar interior is cooled actively by five cryocoolers [18] and passively by the He vapor cooling, and its exterior is passively cooled by radiative cooling toward deep space. In case of the depletion of superfluid He, a third ADR works to cool the He tank for an extended lifetime [19, 20].
The dewar is an Al vacuum vessel, leak-tight on the ground under air, thus constituting a Faraday cage against the external EMI environment. For x-ray observations in orbit, the dewar needs to be open along the x-ray light path. An apparatus called the gate valve (GV) is installed at the top of the dewar, which is kept closed on the ground and during launch. The GV will be permanently opened using a non-explosive actuator during the commissioning phase after the spacecraft out-gassing settles. The GV door has a Be window of \(\sim\)270 \(\mu\)m thickness [21] to transmit x-rays above \(\sim\)2 keV.
### Spacecraft
XRISM is planned to be launched in 2023 from the JAXA's Tanegashima space center into a near-Earth orbit with an altitude of 575 km and an inclination of 31 degrees. The spacecraft is in the final integration testing, as of writing, since April 2022 at the JAXA's Tsukuba space center. The structure of the spacecraft inherits the design of ASTRO-H (Figure 1). It has a weight of \(2.3\times 10^{3}\) kg and an envelope size of 7.9, 9.2, and 3.1 m respectively for the height (\(z\)), length (\(x\)), and width (\(y\)). The main body is composed of eight side panels (SP1-8) of 990\(\times\)3100 mm\({}^{2}\) in size, whose inside has the top, middle, and lower plates perpendicular to the SPs standing on the bottom structure. For _Resolve_, the x-ray mirror is placed on the top panel, the dewar is upright on the bottom structure, and room-temperature electronics are inside the body on the SPs. The solar array panels are stored in SP2 and SP4 at launch and are deployed in parallel with SP3, which are oriented toward the Sun. A part of SP7 is a lattice window exposed to deep space to cool the dewar surface radiatively.
For the attitude control of the spacecraft, the reaction wheels (RWs) and the magnetic torquers (MTQs) are used as main actuators. The RWs are composed of four units with the base momentum of a 3000 rpm rotation, which are used to rebalance the angular momentum within the spacecraft for pointed observations and maneuvering. The MTQs dump the accumulated angular momentum against the Earth's magnetic field. Three MTQs are installed inside SP3 for the \(x\) axis and SP5 for the \(y\) and \(z\) axes. Each MTQ is a solenoid of a 920 mm length and a 34 mm radius, which generates the maximum magnetic moment of 900 A m\({}^{2}\) for a \(\pm\)35 V bipolar DC drive. The field strength is controlled by changing the duty ratio of the DC with pulse width modulation (PWM) at 127 Hz frequency. This frequency is close to the inverse of the thermal time constant of the detector and can cause significant degradation when coupled. This was indeed observed in the SXS [9], but its coupling path (magnetic or conductive) has not been clarified.
For communication with the ground, the spacecraft has four S-band and two X-band antennas. A half of them (two S-band and one X-band antennas) are located outside of SP7, while the other half are outside of SP3. The S-band is used both for uplink and downlink at 2 GHz, while the X-band is used only for downlink at 8 GHz. All of them use a cross-dipole antenna with a reflector to increase the forward-to-backward ratio. Another use of radio frequency (RF) is the GPS receivers outside of SP3 and 7. The electronics for the RF modulation/demodulation, filtering, and amplifying are installed inside of the SPs. These RF equipments, in particular those for downlink with strong emission local to the spacecraft, are the sources of high-frequency electromagnetic field. Because the _Resolve_ dewar constitutes a Faraday cage with the GV closed on the ground and early in the orbit, the RF signals would not affect the detector inside the dewar. This was indeed the case for the SXS, which ended its life before the GV was opened in the orbit. However, when the GV will be opened for _Resolve_, the cage is broken with a \(\sim 35\) mm diameter hole left open and we may expect RF interference with a frequency higher than its cut-off at \(\sim\)2 GHz. A particular concern is the RF emission from the X-band or S-band antenna outside of SP7, which can diffract into the spacecraft main body through the opening in SP7 and reach the detector through the opened GV. The modulation in the carrier frequency used for communication may load energy to the detector that varies in time within its bandpass. This could not be verified for the SXS and thus must be investigated for _Resolve_.
## 3 Magnetic EMI
### Simulation
We start with the simulation of the magnetic field generated by the three MTQs. The MTQ has a large inductance (6.7 H) with a cut-off frequency of 0.68 Hz, which is much smaller than its PWM drive frequency. The magnetic field is approximated as DC, thus we calculated the static solution of Maxwell's equations. The AC component can be calculated by scaling the DC simulation results in the post-processing. We used the Maxwell software1 provided by ANSYS based on the finite element method (FEM) solver. A model was made by simplifying the spacecraft main body and the _Resolve_ dewar (figure 2 a). A personal computer is sufficient for this simulation.
Figure 1: Illustration of the spacecraft and the _Resolve_ instrument. (Photo (c) is from Ishisaki et al.[7])
The results are shown in Figure 3 for the three MTQs separately; each generates a magnetic field of a different strength, \(\mathcal{O}\)(10 \(\mu\)T), and orientation around the dewar. A conceivable part to pick up the magnetic field is the harness from the room-temperature electronics (XBOX and ADRC) that goes into the cold stages inside the dewar. The cross-section of the dewar including their feed-through is chosen for visualizing the calculated field.
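As a crude order-of-magnitude cross-check of this field scale (our estimate; the FEM model additionally accounts for the finite solenoid length and the surrounding conducting structure), one can treat an MTQ as a point magnetic dipole with the maximum moment of 900 A m\({}^{2}\) and evaluate the free-space field at distances comparable to the MTQ-dewar separation:

```cpp
#include <cmath>
#include <cstdio>

int main() {
    const double mu0_over_4pi = 1e-7;   // T m / A
    const double m = 900.0;             // maximum MTQ magnetic moment, A m^2
    for (double r : {1.135, 1.5, 2.0, 2.4}) {
        double b_axial = 2 * mu0_over_4pi * m / std::pow(r, 3);  // on the dipole axis
        double b_equat = mu0_over_4pi * m / std::pow(r, 3);      // equatorial plane
        std::printf("r=%.3f m  B_axial=%6.1f uT  B_equatorial=%6.1f uT\n",
                    r, b_axial * 1e6, b_equat * 1e6);
    }
    return 0;
}
```

The resulting tens of \(\mu\)T at 1-2 m are consistent, within a factor of a few, with the \(\mathcal{O}\)(10 \(\mu\)T) scale of Figure 3.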
### Instrument-level test
We performed the magnetic EMI test during the instrument-level test using the flight-model hardware of _Resolve_ on September 14-15, 2021. The test was designed to measure the detector response against the magnetic field injection. Despite the nearly DC behavior of the magnetic field, its AC component is more important in the microcalorimeter bandpass. We need to simulate the PWM shape precisely in the time domain. We thus used the engineering model (EM) unit of MTQ left from ASTRO-H, which is the same design as the ones used for XRISM, and drove it in the same way as the flight MTQ driver does.
Figure 4 shows the setup of the instrument-level test for the magnetic EMI. The EM MTQ was placed at a flight position of the MTQ-\(y\) relative to the dewar (Figure 4 a). The PWM shape was made with a function generator, which was amplified by a bipolar power supply (Figure 4 c). The magnetic field was measured using magnetometers sensitive to the AC field up to 500 Hz
Figure 2: Simulation models of (a) magnetic and (b) RF EMI seen from the SP7 in a slightly upward direction. The emission sources (a; three MTQs and b; two RF antennas) and the evaluation plane of the simulation (a; circular cross section perpendicular to the \(z\) axis including the feed-through and b; cross section perpendicular to the \(x\) axis including the detector center) are shown in red dashed-and-dotted curve or line.
for the three axes (Figure 4 a). Unlike the spacecraft-level test, we have several flexibilities to investigate the coupling nature and possible mitigation: (1) we can change the relative distance between the MTQ and the dewar arbitrarily to examine the contribution of the magnetic coupling, and (2) we can test wrapping the harness between the room-temperature electronics and the dewar with a magnetic shield material.
We had three configurations for the distance between the dewar center and EM MTQ: the flight value (1135 mm) and 1.56 and 2.12 times of it. In each configuration, the MTQ was driven in various duty ratios (20, 30, \(-\)70, and 80%). Here, \(-\)70% duty denotes 70% duty with the negative polarity (\(-\)35 V). The microcalorimeter bias setting was also changed from nominal (1.6 V) to zero or a high (5.0 V) value. A part of this test was repeated after wrapping the harness with the cobalt-based magnetic shield (2705M) provided by Metglas(r), Inc2(Figure 4 b).
Fig 4: Setup for the magnetic EMI instrument-level test: (a) SP5 and (b) SP8 side of the dewar (Figure 3), and (c) drive electronics.
Fig 3: Results of the magnetic simulation. The magnetic field strength \(|\mathbf{B}|\) is given in the unit of \(\mu\)T on the plane perpendicular to the \(z\) axis at the height of the feed-through of the dewar harness leading to the ADRC/XBOX (Figure 2 a). The position of the detector and the feed-through is shown with the cross and the circle, respectively.
Figure 5 (a) shows the detector response in the frequency domain using the 8k noise spectra with and without the MTQ with different detector biases. A strong signature of the MTQ PWM frequency (127 Hz) and its harmonics was observed on both calorimeter and anti-co channels when the MTQ was operated. For the anti-co, the magnitude of the interference was independent of bias, whereas for the microcalorimeter channels, the interference was stronger at zero bias than at nominal bias. Figure 5 (b) shows the detector response in the time domain folded by the PWM frequency using the bipolar power supply monitor reading for the input voltage and the microcalorimeter output for two selected pixels as well as the anti-co's. The time between the input and output data sets is shifted to match the PWM edges to the microcalorimeter peaks and valleys, as their relative time was not calibrated. Figure 5 (c) shows the strength of the microcalorimeter noise power at 127 Hz for all pixels in the array. The distribution is characterized by an enhanced response in the pixels with numbers that are a multiple of 9 (0, 9, 18, and 27) and an elevated level in the upper half of the array relative to the lower half. These remarkable features were consistently observed throughout the test. The measurement with the magnetic shield wrapping the harness did not show a significant difference beyond the systematics.
### Spacecraft-level test
We performed the magnetic EMI test during the spacecraft-level test on June 9, 2022. We operated the three units of the MTQs separately for the \(\pm\)90 and \(\pm\)45% duty ratio and collected the detector noise spectra (a) to confirm the result obtained in the instrument-level test (SS 3.2) and (b) to characterize the transfer function from each unit of the MTQ to the microcalorimeter. We also operated the three MTQs with the PWM duty ratio of (\(x\), \(y\), \(z\))=(\(\pm\)30, \(\mp\)30, \(\pm\)30)% for an overnight integration with x-rays and compared the result with another overnight result without the MTQ to assess the degradation of the energy resolution by the presence of the strong 127 Hz and its harmonic lines in the noise spectra (Figure 5). The duty ratio was chosen to be the same among the three axes so that the time-domain peaks (Figure 5) match for the worst case; the opposite polarity for \(y\) is to negate the opposite polarity of the MTQ-\(y\) driver by design.
Figure 6 shows the microcalorimeter noise power at 127\(n\) Hz (\(n=\)1, 2, 3) against the MTQ operation for each unit. The response was the strongest in the order of \(y\), \(z\), and \(x\). No significant difference was found between the opposite polarities. Figure 6 shows the noise power at 127 Hz as a function of the PWM duty ratio both in the instrument- and the spacecraft-level tests.
The MTQ duty ratio keeps changing in the orbit. We simulated its behavior, along with the RWs, during the spacecraft thermal-vacuum test on August 28-29, 2022 to assess their impact in practice. The energy resolution of the microcalorimeter averaged over a four-hour period with the MTQ and RW operation and another period without the MTQ and RW operation was compared using calibration x-ray sources. No significant difference was found, and the risk of science impact is low. In case the coupling increases for some reason in the future, we have a backup option ready to notch out the 127 and 254 Hz lines in the onboard signal processing.
### Discussion
Based on the results obtained in the instrument-level (SS 3.2) and spacecraft-level (SS 3.3) tests and the simulation (SS 3.1), we speculate how the MTQ couples with the microcalorimeter detector. We note that the features observed in the _Resolve_ ground tests were very similar to those identified during SXS spacecraft-level tests, including the enhanced interference in specific pixels, the MTQ
Fig 5: Results of the magnetic EMI instrument-level test: (a) Frequency-domain data of pixel 9 with varying detector bias (blue for nominal, violet for high, and orange for none) with a 30% PWM duty of the MTQ. The PWM frequency (127 Hz) and its second harmonics are shown with dashed lines. Data of nominal bias without MTQ operation (green) is shown for comparison. The 150 and 156 Hz lines are, respectively, the third harmonic of the commercial AC and a cryocooler drive [22, 23]. (b) Time-domain data of the drive voltage (blue) and response of the microcalorimeter for pixel 9 (orange), 27 (green), and the anti-co (red) during the 20% (solid) and 80% (dashed) duty drive. The anti-co signal is intrinsically opposite in polarity and shifted slightly in phase from that of the microcalorimeter. (c) Pixel dependence of the noise power at 127 Hz when the MTQ was operated with a 30% PWM duty using the 8k noise spectra. Pixel 12 yielded no noise spectra by being interrupted by constant x-ray illumination by the \({}^{55}\)Fe calibration source.
axis- and polarity-dependence of the detector response, the nature of the scaling with MTQ duty cycle, and the level of energy resolution degradation. The consistency between _Resolve_ and SXS test results lends support to the identification of the likely MTQ coupling mechanism outlined below.
First, the coupling is likely via the magnetic field. This is illustrated by the distance dependence between the MTQ and the dewar in the instrument-level test (Figure 7). The dependence of the 127 Hz power in the detector noise spectra decreases as the distance increases. Because other coupling mechanisms, such as conductive coupling from the MTQ driver to the _Resolve_ room-temperature electronics or low-frequency RF coupling from the MTQ driver, are not expected to exhibit such dependence, we argue that the magnetic coupling is dominant. Indeed, the observed distance dependence is quite similar to the simulated dependence of the field strength \(|\mathbf{B}|\) at the dewar center (Figure 7). This explains why we observed the strong axis dependence of the MTQs (\(y\), \(z\), and \(x\) in the decreasing order) in the spacecraft-level test (Figure 6). It is mostly attributable to the distance between each MTQ unit and the dewar (Figure 7).
The magnetic coupling is also supported by the MTQ polarity dependence of the response. The magnitude of the response is the same for the two polarities (Figure 6), and the sign of the response in the time domain is opposite for opposite polarities (Figure 5). Such behavior excludes the possibility that the response is a function of the input power, as would be expected for conductive or RF coupling.
We speculate that a particular part of the instrument is sensitive to a particular direction of the magnetic field. We injected a local magnetic field using a portable solenoid driven with a 127 Hz sine wave and found that the harness between the room-temperature electronics (ADRC, XBOX) and the dewar, and its ends, are particularly sensitive. There might be other susceptible places, in particular inside the dewar, which were not accessible with the portable solenoid.
Figure 6: Results of the magnetic EMI spacecraft-level test: (a) Line power of pixel 9 at \(127n\) Hz (\(n=\)1, 2, 3) for the three MTQ units operated at \(\pm\)45% duty ratio. The underlying continuum was subtracted. The error bars are estimated from the continuum levels of the neighboring frequencies. A \(3\sigma\) upper limit is given in case of no detection. (b) Line power at 127 Hz for different duty ratios in the instrument- and spacecraft-level tests.
If the nearly DC magnetic field is the path for the coupling, why do we observe an AC response in the microcalorimeter? This can be studied using the time-domain data in Figure 5. The spacing between the rising and falling edges of the input voltage \(V_{\rm input}(t)\) coincides with that between the peak and the valley of the microcalorimeter output in all tested duty cycles. The magnetic field is smoothed in time by the large inductance, with \(B(t)\propto\int V_{\rm input}(t)dt\), but the induced voltage is \(V_{\rm ind}(t)\propto dB(t)/dt\propto V_{\rm input}(t)\). If this further couples capacitively, the noise voltage would be \(V_{\rm noise}(t)\propto dV_{\rm ind}(t)/dt\propto dV_{\rm input}(t)/dt\). This explains the edge-peak relation in the time-domain data. The microcalorimeter response is smaller for the 90% duty than for the 45% duty in the spacecraft-level test (Figure 6). This is probably because the rising and falling edges are close in time for the 90% duty, so their opposite-sign contributions partially cancel when added.
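This integrate-then-differentiate chain is easy to reproduce numerically. The following is a minimal sketch on a synthetic PWM waveform (the sampling rate, amplitudes, and smoothing window are assumptions, not the flight drive or readout parameters); it shows the edge spikes and the partial cancellation at high duty once the response is band-limited.

```python
import numpy as np

def pwm(t, f=127.0, duty=0.45):
    """Unipolar PWM waveform: 1 during the first `duty` fraction of each period."""
    return ((t * f) % 1.0 < duty).astype(float)

fs = 100_000.0                        # assumed sampling rate [Hz]
t = np.arange(0.0, 0.05, 1.0 / fs)    # 50 ms of synthetic data

for duty in (0.45, 0.90):
    v_in = pwm(t, duty=duty)
    b = np.cumsum(v_in) / fs                 # B(t) ~ integral of V_input (inductive smoothing)
    v_ind = np.gradient(b, 1.0 / fs)         # V_ind ~ dB/dt ~ V_input
    v_noise = np.gradient(v_ind, 1.0 / fs)   # capacitive pickup ~ dV_ind/dt: spikes at edges
    # Mimic a band-limited readout with a ~1.5 ms boxcar; at 90% duty the
    # opposite-sign edge spikes are close in time and partially cancel.
    win = int(0.0015 * fs)
    smooth = np.convolve(v_noise, np.ones(win) / win, mode="same")
    print(f"duty {duty:.2f}: peak band-limited response {np.abs(smooth).max():.0f} (arb.)")
```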
The likely place of the capacitive coupling can be investigated using the pixel dependence in Figure 5. The pixels with numbers that are a multiple of 9 (pixels 0, 9, 18, and 27) are particularly susceptible to the magnetic EMI. This is related to the readout wire layout from the microcalorimeter [13]. Figure 1 (d) shows the close-up view, in which two bundles of wires (18 pairs of signal and return, one pair per pixel) come out of the detector array. One bundle reads the upper half of the array and the other the lower half. This part, with a high impedance before the JFET impedance conversion, is particularly sensitive to external radiative noise and hence a likely site of the capacitive coupling. The signal and return wires are aligned alternately, and all signal wires but the outermost ones (pixels 0, 9, 18, 27) are sandwiched between grounded return wires. This could explain the peculiar pixel dependence of the magnetic coupling.
We have ample evidence that the magnetic interference does not act through heating but is purely electrical. The first indication is that the pickup is seen on the anti-co (Figure 5), which is a non-thermal device. Furthermore, the signal in the time domain is bipolar (Figure 5). Finally, the detector bias dependence (Figure 5) is not consistent with heating: with no bias, the microcalorimeter does not work as a thermal detector, yet the strong MTQ noise lines were still observed, illustrating the electrical nature of the input. This is in contrast to the thermal nature of the input found in the low-frequency noise caused by the cryocooler micro-vibration interference of the _Resolve_ instrument [22, 23]. In fact, the dependence of the magnetic coupling on bias for a particular pixel appears to scale with the expected temperature- and frequency-dependent impedance of the thermistor.

Figure 7: Comparison of the measurement and simulation: (a) Distance dependence of the 127 Hz power (pixels 9 and 27) and the simulated magnetic field strength \(|\mathbf{B}|\) at the dewar center in the instrument-level test (§ 3.2) with the EM MTQ driven at a 30% duty. (b) Axis dependence of the 127 Hz power (pixels 9 and 27) and the simulated magnetic field strength \(|\mathbf{B}|\) at the dewar center in the spacecraft-level test (§ 3.3) with the flight MTQs driven at a 45% duty.
## 4 RF EMI
### Simulation
For RF EMI, we also start with the simulation, which, unlike the low-frequency magnetic EMI (§ 3.1), requires massive computational resources. RF simulation is typically performed with a mesh size of \(\sim\)1/20 of the wavelength (i.e., 2 mm for the X-band) with a model detailed to that scale. This is far smaller than the spacecraft size. Therefore, in spacecraft RF simulations, simplified models with hybrid solvers are often used. In this study, however, we use a detailed CAD model with a single solver to simulate the entire spacecraft.
For the solver, we use the finite-difference time-domain (FDTD) method [24], which solves the discretized Maxwell equations on the Yee lattice [25]. We adopt the Poynting for Microwave software3 by Fujitsu. For the computer, we use Fugaku [26], a high-performance computing facility at RIKEN, which is also manufactured by Fujitsu. The most detailed CAD model of the entire ASTRO-H spacecraft was used, with some modifications in accordance with the design changes for XRISM. All materials are replaced with a perfect electric conductor. A perfectly matched layer was set as the boundary condition of the simulation box (Figure 2 b).
Footnote 3: See [https://www.fujitsu.com/jp/solutions/business-technology/tc/sol/poynting/](https://www.fujitsu.com/jp/solutions/business-technology/tc/sol/poynting/) for detail.
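For readers unfamiliar with FDTD, the core of the method is a leapfrog update of the electric and magnetic fields on a staggered (Yee) grid. Below is a minimal one-dimensional vacuum toy in normalized units, with simple reflecting boundaries instead of a perfectly matched layer; it is only an illustration of the scheme, not of the Poynting solver setup.

```python
import numpy as np

# Minimal 1D FDTD (Yee) sketch in vacuum, normalized units; toy illustration only.
nx, nt = 400, 600
ez = np.zeros(nx)        # E_z at integer grid points
hy = np.zeros(nx - 1)    # H_y at half-integer grid points
S = 0.5                  # Courant number c*dt/dx (must be <= 1 for 1D stability)

for n in range(nt):
    hy += S * (ez[1:] - ez[:-1])                     # H update (leapfrog half-step)
    ez[1:-1] += S * (hy[1:] - hy[:-1])               # E update
    ez[nx // 2] += np.exp(-((n - 60) / 20.0) ** 2)   # soft Gaussian source
    # ez[0] and ez[-1] stay 0: simple reflecting (PEC) boundaries.

print("peak |E| after propagation:", float(np.abs(ez).max()))
```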
We injected the maximum operational power used for transmission (31.6/36.3 dBm, respectively, for the S/X-band antenna outside of SP7; for the S-band, the one closer to the dewar of the two antennas). The simulation was run for 25/23 \(\mu\)s in time with a total of \(1.9\times 10^{11}\) mesh cells in space, using 1024 nodes of Fugaku for 3 hours. A snapshot at 11.1/10.2 ns is shown in Figure 8. The calculated field strength at the interface above the GV is \(1.5\times 10^{-4}\)/\(5.6\times 10^{-3}\) V m\({}^{-1}\), which corresponds to \(-106\)/\(-63\) dBm, respectively, for the S/X-band.
Figure 8: Results of the RF simulation for the (a) S-band and (b) X-band antenna on SP7. The field strength \(|\mathbf{E}|\) is given in units of V m\({}^{-1}\) on the plane perpendicular to the \(x\)-axis that includes the detector (Figure 2 b). The interface is marked with the magenta cross. The horizontal structure extending from the antennas is an artifact of the simulation. ParaView [27] is used for visualization.
### Instrument-level test
We performed the RF EMI test during the instrument-level test campaign using the flight-model hardware on February 24 and 28, 2022. The test was designed to measure the detector response against RF injection in the S and X-bands from above the dewar with the GV open. Figure 9 shows the experimental setup. To avoid various risks, we did not move the entire instrument to an EMC test room in a different building; the test was thus performed in the clean room used for the other tests. The special apparatuses described below are not mechanically compatible with the spacecraft structure, so the instrument level is the highest level of integration at which this test can be performed.
In the setup, we needed to meet two requirements: to keep the dewar vacuum and to comply with the government radio regulations. For the former, we used a small vacuum chamber to cover the GV and kept the dewar leak-tight even when the GV was opened (Figure 9 b). The GV can be opened and closed repeatedly using a handle on the ground. The top part of the chamber was replaced with an RF-transmissive window made of high-density polyethylene. For the latter, a radio-anechoic chamber made of Al with a radio-absorber interior was placed on top of the chamber in the air (Figure 9 a). The S- or X-band antenna was placed inside (Figure 9 d) to emit power, provided by a signal generator, toward the microcalorimeter. The S-band antenna was made and characterized in-house, and the X-band antenna was borrowed from the OMOTENASHI project [28], which uses a nearby frequency. A dipole antenna was placed 3 m away from the radio-anechoic chamber to monitor the RF leakage using a spectrum analyzer. As the RF injection path is only through the opened GV, the small radio-anechoic chamber suffices for the test purpose.
The injection level was increased from \(-120\) dBm to 0 dBm in 20 dB increments for the S and X-bands in both the GV-closed and GV-open configurations. When the monitor was about to exceed the legal limit, the strongest injection was replaced with \(-10\) dBm. The carrier frequency was amplitude-modulated at 73.5 Hz so that the modulated power would be visible in the detector bandpass. At each injection, we obtained the 8k noise spectra of the microcalorimeter and measured the power at 73.5 Hz. Figure 10 shows the 8k noise spectra of some selected pixels for the strongest injection in each configuration. No significant excess noise at the modulation frequency was observed in any of the measurements. The upper limit was 15-20 nV/\(\sqrt{\rm Hz}\), which is negligible in terms of detector performance degradation. No interference was detected on the anti-co signal either.
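The line-power measurement at the modulation frequency can be emulated on synthetic data as follows; the sampling rate, noise floor, and injected line amplitude below are made-up values, not the _Resolve_ readout parameters.

```python
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(0)
fs, f_mod = 12_288.0, 73.5                 # assumed sampling rate [Hz] and AM frequency
t = np.arange(0, 60, 1 / fs)               # one minute of synthetic data

# White noise floor plus a small line at the modulation frequency (both assumed).
x = 1e-8 * rng.standard_normal(t.size) + 5e-9 * np.sin(2 * np.pi * f_mod * t)

f, psd = welch(x, fs=fs, nperseg=1 << 16)  # one-sided PSD in V^2/Hz
asd = np.sqrt(psd)                         # amplitude spectral density [V/sqrt(Hz)]
k = np.argmin(np.abs(f - f_mod))
print(f"ASD at {f[k]:.2f} Hz: {asd[k] * 1e9:.1f} nV/sqrt(Hz)")
```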
Figure 9: Setup for the RF EMI instrument-level test. (Left) photo of the setup. (Right) configuration and close-up views of some major apparatuses.
### Spacecraft-level test
We cannot open the GV in the spacecraft configuration on the ground, so no end-to-end assessment is possible while the spacecraft RF system is operating. We therefore placed the X-band antenna that was used as a transmitter in the instrument-level test (§ 4.2) at a location close to the dewar entrance (Figure 11) and used it as a receiver to monitor the field level inside the spacecraft during the spacecraft-level tests. When the RF systems were operated in various modes for the air-link communication testing in June 2022, the maximum measured level was about \(-80\) dBm, which validates the input level assumed in the instrument-level test (§ 4.2) based on the simulation (§ 4.1).
### Discussion
By combining the simulation and the tests, we conclude that the microcalorimeter in _Resolve_ is immune to RF input through the open hole at the top of the dewar when the GV is opened. The RF input of 1 mW, which corresponds to the maximum injection power in the instrument-level test (§ 4.2), is larger by many orders of magnitude than the x-ray input from the same direction during observations, which is \(\mathcal{O}(1~{}\mathrm{fW})\) for a milli-Crab source. The microcalorimeter is protected against input in electromagnetic forms of longer wavelengths than x-rays by the multiple layers of thin filters made of aluminized polyimide in the x-ray aperture [29]. For RF input, the 50-100 nm thick Al in the filters reflects most of the incoming power at its surface owing to the impedance mismatch with the vacuum, allowing only a small fraction (less than \(-50\) dB) to penetrate. We speculate that multiple layers of such filters work effectively to reject RF noise input.
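The quoted penetration of less than \(-50\) dB is consistent with a textbook estimate for a conducting film much thinner than the skin depth: modeling the film as a shunt sheet impedance \(R_{s}=1/(\sigma t)\) on a free-space transmission line gives a power transmission of roughly \((2R_{s}/(2R_{s}+\eta_{0}))^{2}\). A minimal sketch of this estimate, assuming bulk Al conductivity and a single bare film (the polyimide substrate is ignored):

```python
import math

# Power transmission of a plane wave through a thin aluminum film,
# modeled as a shunt sheet impedance on a free-space transmission line.
ETA0 = 376.7       # impedance of free space [ohm]
SIGMA_AL = 3.5e7   # assumed bulk conductivity of Al [S/m]

for t_nm in (50, 100):
    rs = 1.0 / (SIGMA_AL * t_nm * 1e-9)        # sheet resistance [ohm per square]
    trans = (2 * rs / (2 * rs + ETA0)) ** 2    # power transmission coefficient
    print(f"t = {t_nm:3d} nm: Rs = {rs:.2f} ohm/sq, T = {10 * math.log10(trans):.1f} dB")
```

For a 50 nm film this yields roughly \(-50\) dB, consistent with the number quoted above; thicker films and multiple stacked filters suppress the input further.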
Figure 10: Results of the RF EMI instrument-level test. 8k noise spectra of pixels 0, 9, 18, and 27 against the strongest RF injection. The dashed line shows 73.5 Hz.
## 5 Summary
We presented the results of the ground testing and simulation of the EMI to the x-ray microcalorimeter in the _Resolve_ instrument onboard XRISM. We discussed the magnetic EMI caused by the MTQs in the spacecraft attitude control system and the RF EMI caused by the spacecraft communication system. For the magnetic EMI, we observed a strong response in the microcalorimeter and anti-co time- and frequency-domain data. We speculated on the coupling mechanisms based on the test and simulation results. There is no evidence that the resultant degradation exceeds the current noise budget allocation. For the RF EMI, we injected RF signals up to 0/\(-10\) dBm for the S/X-band but did not observe any response in the microcalorimeter or anti-co data in the instrument-level test. Because no end-to-end assessment of the RF EMI is possible on the ground, we conducted a full RF simulation using a detailed spacecraft CAD model and a single solver on Fugaku to compensate for this limitation. We found that the expected levels of the field diffracted from the spacecraft antennas are much smaller (\(-106\)/\(-63\) dBm for the S/X-band) than the level to which _Resolve_ was tested to be immune.
### Acknowledgments
This work is made possible with significant contributions of all the XRISM _Resolve_ team members and the SHI and NEC engineers, which we greatly appreciate. Kenji Fukunabe, Yoshiaki Mitsutake, Atsushi Tomiki, Yutaro Sekimoto, and Hayato Takakura at ISAS helped us prepare for the instrument-level tests. This work used computational resources of the supercomputer Fugaku provided by RIKEN. This paper is derived from the SPIE proceeding [30].
|
2310.09127 | On Generalization Bounds for Projective Clustering | Given a set of points, clustering consists of finding a partition of a point
set into $k$ clusters such that the center to which a point is assigned is as
close as possible. Most commonly, centers are points themselves, which leads to
the famous $k$-median and $k$-means objectives. One may also choose centers to
be $j$ dimensional subspaces, which gives rise to subspace clustering. In this
paper, we consider learning bounds for these problems. That is, given a set of
$n$ samples $P$ drawn independently from some unknown, but fixed distribution
$\mathcal{D}$, how quickly does a solution computed on $P$ converge to the
optimal clustering of $\mathcal{D}$? We give several near optimal results. In
particular,
For center-based objectives, we show a convergence rate of
$\tilde{O}\left(\sqrt{{k}/{n}}\right)$. This matches the known optimal bounds
of [Fefferman, Mitter, and Narayanan, Journal of the American Mathematical Society 2016]
and [Bartlett, Linder, and Lugosi, IEEE Trans. Inf. Theory 1998] for $k$-means
and extends it to other important objectives such as $k$-median.
For subspace clustering with $j$-dimensional subspaces, we show a convergence
rate of $\tilde{O}\left(\sqrt{\frac{kj^2}{n}}\right)$. These are the first
provable bounds for most of these problems. For the specific case of projective
clustering, which generalizes $k$-means, we show a convergence rate of
$\Omega\left(\sqrt{\frac{kj}{n}}\right)$ is necessary, thereby proving that the
bounds from [Fefferman, Mitter, and Narayanan, Journal of the American Mathematical
Society 2016] are essentially optimal. | Maria Sofia Bucarelli, Matilde Fjeldsø Larsen, Chris Schwiegelshohn, Mads Bech Toftrup | 2023-10-13T14:15:54Z | http://arxiv.org/abs/2310.09127v1 | # On Generalization Bounds for Projective Clustering
###### Abstract
Given a set of points, clustering consists of finding a partition of a point set into \(k\) clusters such that the center to which a point is assigned is as close as possible. Most commonly, centers are points themselves, which leads to the famous \(k\)-median and \(k\)-means objectives. One may also choose centers to be \(j\) dimensional subspaces, which gives rise to subspace clustering. In this paper, we consider learning bounds for these problems. That is, given a set of \(n\) samples \(P\) drawn independently from some unknown, but fixed distribution \(\mathcal{D}\), how quickly does a solution computed on \(P\) converge to the optimal clustering of \(\mathcal{D}\)? We give several near optimal results. In particular,
1. For center-based objectives, we show a convergence rate of \(\tilde{O}\left(\sqrt{k/n}\right)\). This matches the known optimal bounds of [Fefferman, Mitter, and Narayanan, Journal of the American Mathematical Society 2016] and [Bartlett, Linder, and Lugosi, IEEE Trans. Inf. Theory 1998] for \(k\)-means and extends it to other important objectives such as \(k\)-median.
2. For subspace clustering with \(j\)-dimensional subspaces, we show a convergence rate of \(\tilde{O}\left(\sqrt{kj^{2}/n}\right)\). These are the first provable bounds for most of these problems. For the specific case of projective clustering, which generalizes \(k\)-means, we show that a convergence rate of \(\Omega\left(\sqrt{kj/n}\right)\) is necessary, thereby proving that the bounds from [Fefferman, Mitter, and Narayanan, Journal of the American Mathematical Society 2016] are essentially optimal.
## 1 Introduction
Among the central questions in machine learning is, given a sample of \(n\) points \(P\) drawn from some unknown but fixed distribution \(\mathcal{D}\), how well does a classifier trained on \(P\) generalize to \(\mathcal{D}\)? Probably the most popular way to formalize this question is, given a loss function \(L\) and optimal solutions \(\mathcal{S}_{P}\) and \(\mathcal{S}_{\mathcal{D}}\) for sample \(P\) and distribution \(\mathcal{D}\), respectively, to ask how the empirical excess risk \(L(\mathcal{D},\mathcal{S}_{P})-L(\mathcal{D},\mathcal{S}_{\mathcal{D}})\) decreases as a function of \(n\). This paper focuses on loss functions associated with the clustering problem. Popular examples include \((k,z)\) clustering, which asks for a set of \(k\) centers \(\mathcal{S}\subset\mathbb{R}^{d}\) minimizing the cost \(\sum_{p\in P}\min_{s\in\mathcal{S}}\|p-s\|_{2}^{z}\) and, more generally, \((k,j,z)\) subspace clustering, which asks for a set of \(k\) subspaces \(\mathcal{U}:=\{U_{1},U_{2},\ldots U_{k}\}\) minimizing \(\sum_{p\in P}\min_{U_{i}\in\mathcal{U}}\|(I-U_{i}U_{i}^{T})p\|_{2}^{z}\). Special cases include \((k,1)\) clustering, known as \(k\)-median, \((k,2)\) clustering, known as \(k\)-means, and \((k,j,2)\) clustering, known as projective clustering. Generally, there seems to be an interest in varying \(z\), as letting \(z\) tend towards \(1\) tends to result in outlier-robust clusterings. The problem is less widely explored for \(z>2\), although in particular for subspace
approximation there is some recent work [27; 34; 81; 79]. Higher powers place more emphasis on outliers. For example, the centralized moments of order three and four correspond to skewness and kurtosis, respectively, and are extensively employed in statistics; see Cohen-Addad et al. [31] for previous work on clustering with these measures. Fitting a mixture model with respect to skewness minimizes asymmetry around the target center. Studying the problems for \(z\to\infty\) is also very well motivated, as \((1,\infty)\) clustering is equivalent to the minimum enclosing ball problem. Unfortunately, one often requires additional assumptions, as the minimum enclosing ball problem suffers from the curse of dimensionality [2] and is very prone to outliers [25; 38].
Despite a huge interest and a substantial amount of research, optimal risk bounds \(\tilde{O}\left(\sqrt{k/n}\right)\)1 have so far been established only for the \(k\)-means problem, see the seminal paper by Fefferman et al. [39] for the upper bound and Bartlett et al. [10] for nearly matching lower bounds. For general \((k,z)\)-clustering problems, the best known results prove a risk bound of \(O\left(\sqrt{k/n}\right)\) [10]. For \((k,j,2)\) clustering, the best known bounds of \(\tilde{O}\left(\sqrt{kj/n}\right)\) are due to Fefferman et al. [39]. Thus, the following question naturally arises:
Footnote 1: \(\tilde{O}\) hides logarithmic terms, i.e. we consider \(O\left(\sqrt{k/n}\cdot\text{polylog}(k,n)\right)=\tilde{O}\left(\sqrt{k/n}\right)\).
Is it possible to obtain optimal generalization bounds for all \((k,j,z)\)-clustering objectives?
We answer this question in the affirmative whenever \(j\) and \(z\) are constant, which seems to be the most relevant case in practice [78]. Specifically, we show
* The excess risk bound for \((k,z)\)-clustering when given \(n\) independent samples from an unknown fixed distribution \(\mathcal{D}\) is bounded by \(\tilde{O}\left(\sqrt{k/n}\right)\), matching the lower bound of [10].
* The excess risk bound for \((k,j,z)\)-clustering when given \(n\) independent samples from an unknown fixed distribution \(\mathcal{D}\) is bounded by \(\tilde{O}\left(\sqrt{kj^{2}/n}\right)\).
* There exists a distribution such that the excess risk for the \((k,j,2)\)-clustering problem is at least \(\Omega\left(\sqrt{kj/n}\right)\), matching the upper bound of Fefferman et al. [39] up to polylog factors.
### Related work
The most basic question one could ask is whether the empirical estimation performed on \(P\) is consistent, i.e. whether, as \(n\to\infty\), the excess risk tends to \(0\). This was shown in a series of works by Pollard [65; 67], see also Abaya and Wise [1]. Subsequent work then analyzed the convergence rate of the risk. The first works in this direction proved convergence rates of the order \(\tilde{O}(1/\sqrt{n})\) without giving dependencies on other parameters [22; 66]. Linder et al. [56] gave an upper bound of \(O(d^{3/2}\sqrt{k/n})\). Linder [55] improved the upper bound to \(O(d\sqrt{k/n})\). Bartlett et al. [8] showed an upper bound of \(O(\sqrt{kd/n})\) and gave a lower bound of \(\Omega(\sqrt{k^{1-4/d}/n})\). Motivated by applications of clustering for high dimensional kernel spaces [7; 18; 20; 21; 37; 39; 57; 59; 68; 80; 82; 83], research subsequently turned its efforts towards minimizing the dependency on the dimension. Biau et al. [14] presented an upper bound of \(O(k/\sqrt{n})\), see also the work by Clémençon [26]. Fefferman et al. [39] gave a matching upper bound of the order \(O(\sqrt{k/n})\), which was later recovered using techniques from Foster and Rakhlin [45] and Liu [58]. Further improvements require additional assumptions on the distribution \(\mathcal{D}\), see Antos et al. [5], Levrard [53], Li and Liu [54]. For subspace clustering, results have only been published for the case \(z=2\) [39; 50; 73], for which the state of the art provides a \(\tilde{O}\left(\sqrt{kj/n}\right)\) risk bound due to Fefferman et al. [39]. A highly related line of research originated with the study of coresets for compression. For Euclidean \((k,z)\) clustering, coresets with space bounds of \(\tilde{O}\left(2^{O(z)}k\varepsilon^{-2}\right)\) have been established [30; 32], which roughly corresponds to an error rate of \(\tilde{O}\left(2^{O(z)}\sqrt{k/n}\right)\) as a function of the size of the compression. For the specific case of \(k\)-median and \(k\)-means, coresets with space bounds of \(\tilde{O}\left(k^{(2z+2)/(z+2)}\varepsilon^{-2}\right)\) are known [33], which corresponds to an
error rate of \(\tilde{O}\left(\sqrt{k^{(2z+2)/(z+2)}/n}\right)\). Both results are optimal for certain ranges of \(\varepsilon\) and \(k\) [47], and while these bounds are worse than what we hope to achieve for generalization, many of the techniques, such as terminal embeddings, are relevant to both fields. For \((k,j,z)\) clustering, coresets are only known to exist under certain assumptions, where the provable size is \(\tilde{O}\left(\exp(k,j,\varepsilon^{-1})\right)\) [40; 44].
## 2 Preliminaries
We use \(\|x\|_{p}:=\sqrt[p]{\sum|x_{i}|^{p}}\) to denote the \(\ell_{p}\) norm of a vector \(x\). For \(p\to\infty\), we define the limiting norm \(\|x\|_{\infty}=\max_{i}|x_{i}|\). Further, we refer to the \(d\)-dimensional Euclidean unit ball by \(B_{2}^{d}\), i.e. \(x\in B_{2}^{d}\) is a vector in \(\mathbb{R}^{d}\) with \(\|x\|_{2}:=\sqrt{\sum_{i=1}^{d}x_{i}^{2}}\leq 1\). Let \(U\) be a \(d\times j\) orthogonal matrix, i.e., a matrix with columns that are pairwise orthogonal and have unit Euclidean norm. We say that \(UU^{T}\) is the projection matrix associated with \(U\). Let \(z\) be a positive integer. Given any set \(\mathcal{S}\) of \(k\) points in \(B_{2}^{d}\), we denote the \((k,z)\)-clustering cost for a point set \(P\) with respect to solution \(\mathcal{S}\) as
\[\text{cost}(P,\mathcal{S}):=\sum_{p\in P}\min_{s\in\mathcal{S}}\|p-s\|_{2}^{z}.\]
Special cases include \(k\)-means (\(z=2\)) and \(k\)-median (\(z=1\)). Similarly, given a collection \(\mathcal{U}\) of \(k\) orthogonal matrices of rank at most \(j\), we denote the \((k,j,z)\)-clustering cost of a point set \(P\) as
\[\text{cost}(P,\mathcal{U}):=\sum_{p\in P}\min_{U\in\mathcal{U}}\|(I-UU^{T})p\| _{2}^{z}.\]
The specific case \((k,j,2)\) is often known as projective clustering in the literature. The cost vector \(v^{\mathcal{S},P}\in\mathbb{R}^{|P|}\), respectively \(v^{\mathcal{U},P}\in\mathbb{R}^{|P|}\), has entries \(v_{p}^{\mathcal{S}}=\min_{s\in\mathcal{S}}\|p-s\|_{2}^{z}\), respectively \(v_{p}^{\mathcal{U}}=\min_{U\in\mathcal{U}}\|(I-UU^{T})p\|_{2}^{z}\), for \(p\in P\). We will omit \(P\) from \(v^{\mathcal{S},P}\) and \(v^{\mathcal{U},P}\) if \(P\) is clear from context. The overall cost is \(\|v^{\mathcal{S}}\|_{1}=\sum_{p\in P}\min_{s\in\mathcal{S}}\|p-s\|_{2}^{z}\) and \(\|v^{\mathcal{U}}\|_{1}=\sum_{p\in P}\min_{U\in\mathcal{U}}\|(I-UU^{T})p\|_{2}^{z}\). The set of all cost vectors is denoted by \(V\).
Let \(\mathcal{D}\) be an unknown but fixed distribution on \(B_{2}^{d}\) with probability density function \(\mathbb{P}\). For any solution \(\mathcal{S}\), respectively \(\mathcal{U}\), we define \(\text{cost}(\mathcal{D},\mathcal{S}):=\int_{p\in B_{2}^{d}}\min_{s\in\mathcal{S }}\|p-s\|^{z}\cdot\mathbb{P}[p]dp\) and \(OPT:=\min_{\mathcal{S}}\text{cost}(\mathcal{D},\mathcal{S})\) and respectively \(\text{cost}(\mathcal{D},\mathcal{U}):=\int_{p\in B_{2}^{d}}\min_{U\in\mathcal{U }}\|(I-UU^{T})p\|^{z}\cdot\mathbb{P}[p]dp\) and \(OPT:=\min_{\mathcal{U}}\text{cost}(\mathcal{D},\mathcal{U})\). Let \(P\) be a set of \(n\) points sampled independently from \(\mathcal{D}\). We denote the cost of the empirical risk minimizer on \(P\) by \(OPT_{P}:=\frac{1}{n}\min_{\mathcal{S}}\|v^{\mathcal{S}}\|_{1}\), and respectively, \(OPT_{P}:=\frac{1}{n}\min_{\mathcal{U}}\|v^{\mathcal{U}}\|_{1}\). The excess risk of \(P\) with respect to a set of cost vectors is denoted by
\[\mathcal{E}_{|P|}(V):=\mathbb{E}_{P}[OPT_{P}]-OPT.\]
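For concreteness, both objectives can be evaluated directly from their definitions; the following is a small illustrative sketch (the function and variable names are ours, not from the paper):

```python
import numpy as np

def cost_kz(P, S, z):
    """(k,z)-clustering cost: sum_p min_{s in S} ||p - s||_2^z.
    P: (n, d) points, S: (k, d) centers."""
    dists = np.linalg.norm(P[:, None, :] - S[None, :, :], axis=2)  # (n, k)
    return np.sum(dists.min(axis=1) ** z)

def cost_kjz(P, Us, z):
    """(k,j,z)-subspace clustering cost: sum_p min_U ||(I - U U^T) p||_2^z.
    P: (n, d) points, Us: list of (d, j) matrices with orthonormal columns."""
    residuals = np.stack(
        [np.linalg.norm(P - (P @ U) @ U.T, axis=1) for U in Us], axis=1
    )  # (n, k)
    return np.sum(residuals.min(axis=1) ** z)

# Tiny usage example with random data clipped into the unit ball.
rng = np.random.default_rng(0)
P = rng.standard_normal((100, 5))
P /= np.maximum(1.0, np.linalg.norm(P, axis=1))[:, None]
S = P[:3]                                          # 3 arbitrary centers
U, _ = np.linalg.qr(rng.standard_normal((5, 2)))   # one 2-dimensional subspace
print(cost_kz(P, S, z=2), cost_kjz(P, [U], z=2))
```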
Finally, we use the notion of a net. Let \((V,dist)\) be a metric space. \(\mathcal{N}(V,dist,\varepsilon)\) is an \(\varepsilon\)-net of the set of vectors \(V\) if for all \(v\in V\) there exists \(v^{\prime}\in\mathcal{N}(V,dist,\varepsilon)\) such that \(dist(v,v^{\prime})\leq\varepsilon\). We will particularly focus on nets for cost vectors induced by \((k,z)\)-clustering and \((k,j,z)\)-clustering, defined as follows; prior work has proposed similar nets for coresets and sublinear algorithms for \((k,z)\) clustering [31].
**Definition 2.1** (Clustering Nets).: A set \(\mathcal{N}_{\varepsilon}\) of \(|P|\)-dimensional vectors is an \(\varepsilon\)-clustering net if for every cost vector \(v\) obtained from a solution \(\mathcal{S}\) or \(\mathcal{U}\), there exists a vector \(v^{\prime}\in\mathcal{N}_{\varepsilon}\) with \(\|v^{\prime}-v\|_{\infty}\leq\varepsilon\).
A slightly weaker condition than the one required by these nets, namely \(\|v^{\prime}-v\|_{2}\leq\varepsilon\sqrt{n}\), would also be sufficient. Nevertheless, we are not able to show better bounds under the relaxed condition, and having a point-wise guarantee may be of independent interest.
Another net frequently used in literature and indeed used here are nets of the Euclidean unit ball.
**Lemma 2.2** (Lemma 5.2 of [77]).: \(|\mathcal{N}(B_{2}^{d},\|.\|_{2},\varepsilon)|\leq(1+2/\varepsilon)^{d}\)_._
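For completeness, this bound follows from the standard volume (packing) argument, sketched below:

```latex
% Packing argument sketch for Lemma 2.2.
Let $N \subseteq B_2^d$ be a maximal $\varepsilon$-separated set; by maximality,
$N$ is an $\varepsilon$-net. The balls of radius $\varepsilon/2$ centered at the
points of $N$ are pairwise disjoint and contained in $(1+\varepsilon/2)B_2^d$.
Comparing volumes,
\[
  |N| \cdot \left(\frac{\varepsilon}{2}\right)^d \le \left(1+\frac{\varepsilon}{2}\right)^d
  \qquad\Longrightarrow\qquad
  |N| \le \left(\frac{1+\varepsilon/2}{\varepsilon/2}\right)^d = \left(1+\frac{2}{\varepsilon}\right)^d .
\]
```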
Finally, we will frequently use the following triangle inequality extended to powers.
**Lemma 2.3** (Triangle Inequality for Powers (Lemma A.1 of [61])).: _Let \(a,b,c\) be an arbitrary set of points in a metric space with distance function \(d\) and let \(z\) be a positive integer. Then for any \(\varepsilon>0\)_
\[d(a,b)^{z}\leq(1+\varepsilon)^{z-1}d(a,c)^{z}+\left(\frac{1+\varepsilon}{ \varepsilon}\right)^{z-1}d(b,c)^{z}\]
\[|d(a,b)^{z}-d(a,c)^{z}|\leq\varepsilon\cdot d(a,c)^{z}+\left(\frac{2z+ \varepsilon}{\varepsilon}\right)^{z-1}d(b,c)^{z}.\]
## 3 Outline and technical contribution
In this section, we endeavour to present a complete and accessible overview of the key ideas behind the theorems. Let \(P\) be a set of \(n\) points sampled independently from some unknown but fixed distribution \(\mathcal{D}\). To show that the excessive risk with respect to clustering objectives is in \(\tilde{O}(f(n))\) for some function \(f\), it is sufficient to show two things. First, that for the optimal solution \(\mathcal{U}_{\text{OPT}}\), the clustering cost estimated using \(P\) is close to the true cost. Second, any solution that is more expensive than \(\mathcal{U}_{\text{OPT}}\) does not become too cheap when evaluated on \(P\). Both conditions are satisfied if for any solution \(\mathcal{U}\)
\[\left|\frac{1}{n}\text{cost}(P,\mathcal{U})-\text{cost}(\mathcal{D},\mathcal{ U})\right|\in O(f(n)).\]
Showing \(\mathbb{E}_{P}\left|\frac{1}{n}\text{cost}(P,\mathcal{U}_{\text{OPT}})- \text{cost}(\mathcal{D},\mathcal{U}_{\text{OPT}})\right|\in O(\sqrt{1/n})\) is typically a straightforward application of concentration bounds such as Chernoff's bound. In fact, these concentration bounds show something even stronger. Given \(t\) solutions \(\mathcal{U}_{1},\ldots\mathcal{U}_{t}\), we have
\[\mathbb{E}_{P}\sup_{\mathcal{U}_{i}}\left|\frac{1}{n}\text{cost}(P,\mathcal{U} _{i})-\text{cost}(\mathcal{D},\mathcal{U}_{i})\right|\in O\left(\sqrt{\frac{ \log t}{n}}\right). \tag{1}\]
What remains is to bound the number of solutions \(t\).
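One standard route to Equation 1: each \(\frac{1}{n}\text{cost}(P,\mathcal{U}_{i})\) is an average of \(n\) i.i.d. terms bounded by \(2^{z}\) (all points lie in the unit ball), so Hoeffding's inequality and a union bound give the claim; a sketch, with \(z\)-dependent constants suppressed:

```latex
% Hoeffding + union bound sketch for Equation 1 (z-dependent constants suppressed).
% Each per-point cost lies in [0, 2^z], so for each fixed solution U_i,
\[
  \Pr\left[\left|\tfrac{1}{n}\operatorname{cost}(P,\mathcal{U}_i)
  - \operatorname{cost}(\mathcal{D},\mathcal{U}_i)\right| \ge s\right]
  \le 2\exp\!\left(-\frac{2 n s^2}{4^{z}}\right).
\]
% A union bound over the t solutions with s = Theta(2^z sqrt(log t / n)) makes the
% failure probability small, and integrating the tail gives
% E_P sup_i | ... | in O(sqrt(log t / n)).
```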
Clustering nets and dimension reduction for center-based clustering: Unfortunately, the total number of expensive clusterings in Euclidean space is infinite, making a straightforward application of Equation 1 useless. Nets as per Definition 2.1 are therefore typically used to reduce the infinite number of solutions to a finite number. Specifically, one has to show that by preserving the costs of all solutions in the net, the cost of any other solution is also preserved. Using basic techniques from high dimensional computational geometry, it is readily possible to prove that an \(\varepsilon\)-net for \((k,j,z)\) clustering of size \(\exp(k\cdot j\cdot d\cdot\log\varepsilon^{-1})\) exists, where \(d\) is the dimension of the ambient space. Plugging this into Equation 1 and setting \(\varepsilon^{-1}=n\) then yields a generalization bound of the order \(O\left(\sqrt{kjd\log n/n}\right)\).
Unfortunately, this leads to a dependency on \(d\), which is suboptimal. To improve the upper bounds, we take inspiration from coreset research. For \((k,z)\)-clustering, a number of works have investigated dimension reduction techniques known as terminal embeddings, see [11, 46]. Given a set of points \(P\in\mathbb{R}^{d}\), a terminal embedding \(f:\mathbb{R}^{d}\rightarrow\mathbb{R}^{m}\) guarantees \(\|p-q\|_{2}=(1\pm\varepsilon)\cdot\|f(p)-f(q)\|_{2}\) for any \(p\in P\) and \(q\in\mathbb{R}^{d}\). Terminal embeddings are very closely related to the Johnson-Lindenstrauss lemma, see [16, 28, 61] for applications to clustering, but are more powerful in a key regard: only one of the two points is required to be in \(P\). The guarantee extended to arbitrary \(q\in\mathbb{R}^{d}\) allows us to capture all possible solutions. There are also even simpler proofs for \(k\)-means that avoid this machinery entirely, see [39, 45, 58]. Unfortunately, these arguments are heavily reliant on properties of inner products and are difficult to extend to other values of \(z\). The terminal embedding technique may be readily adapted to \((k,z)\)-clustering, though some care must be taken in the analysis to avoid the worse dependencies on the sample size necessitated by the coreset guarantee, as described in the following.
Improving the union bound via chaining: To illustrate the chaining technique, consider the simple application of the union bound for a terminal embedding with target dimension \(m=\Theta(\varepsilon^{-2}\log n)\), see the main result of Narayanan and Nelson [63]. Replacing the dependency on \(d\) with appropriately chosen parameters and plugging in the resulting net \(N_{\varepsilon}\) of size \(\exp(k\varepsilon^{-2}\log n\log\varepsilon^{-1})\) yields a generalization bound of \(O\left(\sqrt{k\log^{2}n/n}\right)\) for \((k,z)\) clustering. We improve on this using a
chaining analysis, see [30, 32] for its application to coresets for \((k,z)\) clustering and [39] for \((k,j,2)\) clustering. Specifically, we use a nested sequence of nets \(N_{1/2},N_{1/4},N_{1/8},\ldots,N_{2^{-2\log n}}\). Note that for every solution \(\mathcal{S}\), we may now write \(\text{cost}(p,\mathcal{S})\) for any \(p\in P\) as a telescoping sum
\[\text{cost}(p,\mathcal{S})=\sum_{h=0}^{\infty}\text{cost}(p,\mathcal{S}_{2^{- (h+1)}})-\text{cost}(p,\mathcal{S}_{2^{-h}})\]
with \(\mathcal{S}_{2^{-h}}\in N_{2^{-h}}\) and \(\text{cost}(p,\mathcal{S}_{1})\) set to \(0\). We use this as follows. Suppose for some solution \(\mathcal{S}\), we have solutions \(\mathcal{S}_{2^{-h}}\in N_{2^{-h}}\) and \(\mathcal{S}_{2^{-(h+1)}}\in N_{2^{-(h+1)}}\). Then, rather than applying the union bound to a single small set of solutions, we apply it to every pair of solutions appearing in consecutive levels of the telescoping sum. Using arguments similar to Equation 1, we then obtain
\[\mathbb{E}_{P}\sup_{\begin{subarray}{c}\mathcal{S}_{2^{-h}} \times\mathcal{S}_{2^{-(h+1)}}\\ \in N_{h}\times N_{h+1}\end{subarray}}\left|\frac{1}{n}\text{cost}(P,\mathcal{ S}_{2^{-h}})-\frac{1}{n}\text{cost}(P,\mathcal{S}_{2^{-(h+1)}})\right|\] \[=2^{-h}\cdot\tilde{O}\left(\sqrt{\frac{\log(|N_{h}|\cdot|N_{h+1} |)}{n}}\right)=2^{-h}\cdot\tilde{O}\left(\sqrt{\frac{k\cdot 2^{2h}\cdot\text{ polylog}(k/2^{h})}{n}}\right)\in\tilde{O}\left( \sqrt{\frac{k}{n}}\right)\]
This is the desired risk bound for \((k,z)\) clustering. To complete the argument rigorously, we merely combine the decomposition of \(\text{cost}(P,\mathcal{S})\) into the telescoping sum with the learning rate just derived; summing the per-level bounds over the \(O(\log n)\) levels of the chain only contributes another logarithmic factor, which is absorbed by the \(\tilde{O}\) notation. Indeed, this already provides a simple way of obtaining a bound on the risk of the order \(\tilde{O}\left(\sqrt{k/n}\right)\), which turns out to be optimal. In summary, to apply the chaining technique successfully, the following two properties are sufficient: (i) the dependency on \(\varepsilon\) in the net size can be at most \(\exp(\tilde{O}(\varepsilon^{-2}))\), as the increase in net size is then met with a corresponding decrease between successive estimates along the chain, and (ii) the nets have to preserve the cost up to an additive \(\varepsilon\) for _every_ sample point \(p\). The second property is captured by Definition 2.1. Both properties impose restrictions on the dimension reductions that can be successfully integrated into the chaining analysis.
Dimension reduction for projective clustering: It turns out that extending this analysis to \((k,j,z)\) clustering poses a major obstacle. While the chaining method itself uses no particular properties of \((k,z)\) clustering, the terminal embeddings needed to obtain nets cannot be applied to subspaces. Indeed, terminal embeddings, by the very nature of their guarantee, cannot be linear2, and hence a linear structure such as a subspace will not be preserved. At this stage, there are a number of initially promising candidates for alternative dimension reduction methods. For example, the classic Johnson-Lindenstrauss lemma can be realized via a random embedding matrix and, moreover, preserves subspaces, see for example [70, 23, 28]. Unfortunately, as remarked by [46], there is an inherent difficulty in applying Johnson-Lindenstrauss type embeddings even for \((k,z)\) clustering coresets, and the same arguments also apply to generalization bounds.
Footnote 2: Consider an embedding matrix \(S\in\mathbb{R}^{d\times m}\). Clearly, there exists some vector \(x\in\mathbb{R}^{d}\) that is in the kernel of \(S\) whenever \(m<d\), hence for any vector \(p\), \(\|p-(x+p)\|_{2}\) cannot be preserved.
An alternative dimension reduction method based on principal component analysis was initially proposed by [44] for \((k,j,2)\), see also [28] and most notably [75] for a different variant that applies to arbitrary \((k,j,z)\) objectives. For \((k,j,2)\) clustering, it states that a dimension reduction onto the first \(O(D/\varepsilon)\) principal components preserves the projective cost of all subspaces of dimension \(D\). Since \((k,j,2)\) clustering is a special case of a \(k\cdot j\) dimensional projection, it implies that \(O(kj/\varepsilon)\) dimensions are sufficient. Given that these dimension reductions are based on PCA-type methods, they are linear and therefore seem promising initially. Unfortunately, this technique has serious drawbacks. It does not satisfy the requirements of Definition 2.1, as it only preserves the cost on aggregate rather than per individual point, and thus cannot be combined with the chaining technique3. Without the chaining technique, the best bound one can hope for is of the order \(\tilde{O}\left(\sqrt[3]{k^{2}j^{2}/n}\right)\), which falls short of what we are aiming for.
Footnote 3: PCA as well as the other potential alternative dimension reduction techniques also do not satisfy the relaxed definition that would be sufficient for the analysis to go through.
Another important technique used to quantify optimal solutions of \((k,j,z)\) clustering was initially proposed by [74], subsequently explored by [43; 35], and has frequently seen use in the coreset literature [40; 46]. Succinctly, it states that a \((1+\varepsilon)\)-approximate solution to the \((1,j,z)\) clustering problem of a point set \(P\) is contained in a subspace spanned by \(\tilde{O}(j^{2}/\varepsilon)\) input points of \(P\). While this result improves over PCA for large values of \(k\), applying it only yields a learning rate of the order \(O\big{(}\sqrt[3]{k^{3}/n}\big{)}\). It turns out that this technique has the exact same limitations as PCA, namely that costs are not preserved per point, and thus it only offers a different tradeoff in parameters.
Our new insight: Given the state of the art, designing a dimension reduction technique that would enable the application of the chaining technique might seem hopeless, and indeed, we were not able to prove such a result. The key insight that allows us to bypass these bottlenecks is to find a dimension reduction that applies not to all solutions \(\mathcal{U}\), but only to a certain subset of them. Indeed, we show that for any point set \(P\) contained in the unit ball and any subspace \(\mathcal{U}\) of dimension \(j\), there exists a subspace \(S\) spanned by \(O(j/\varepsilon^{2})\) points of \(P\) such that for every point \(p\): \(|\text{cost}(p,\mathcal{U})-\text{cost}(p_{S},\mathcal{U}_{S})|\leq\varepsilon\). This is similar to the guarantee provided by [74], but stronger in that it (i) applies to arbitrary subspaces, which is required for the chaining analysis, and (ii) applies to each point of \(P\) individually, rather than to the entire point set \(P\) on aggregate. We then augment the chaining analysis by applying a union bound over all \(\binom{|P|}{j/\varepsilon^{2}}\) possible dimension reductions, thereby capturing all solutions \(\mathcal{U}\). We are unaware of any previously successful attempts at integrating multiple dimension reductions within a chaining analysis and believe that the technique may be of independent interest.
## 4 Useful results from learning theory
Our goal is to bound the rate at which the empirical risk decreases for clustering problems. For a fixed set of \(n\) points \(P\) and a set of functions \(F:P\to\mathbb{R}\), we define the Rademacher complexity (\(Rad_{n}\)) and the Gaussian complexity (\(G_{n}\)) with respect to \(F\), respectively, as
\[Rad_{n}(F)=\frac{1}{n}\cdot\mathbb{E}_{r}\sup_{f\in F}\sum_{p\in P}f(p)\cdot r _{p}\qquad G_{n}(F)=\frac{1}{n}\cdot\mathbb{E}_{g}\sup_{f\in F}\sum_{p\in P}f (p)\cdot g_{p}\]
where \(r_{p}\) are independent random variables following the Rademacher distribution, whereas \(g_{p}\) are independent Gaussian random variables. In our case, we can think of \(f\) as being associated with a solution \(\mathcal{S}\) (respectively a solution \(\mathcal{U}\)) and \(f(p)=\text{cost}(p,\mathcal{S})=\min_{s\in\mathcal{S}}\|p-s\|_{2}^{z}\) (respectively \(f(p)=\text{cost}(p,\mathcal{U})=\min_{U\in\mathcal{U}}\|(I-UU^{T})p\|_{2}^{z}\)). Since we associate every \(f\) with a cost vector \(v^{\mathcal{S}}\), we will use \(Rad_{n}(F)\) and \(Rad_{n}(V)\) as well as \(G_{n}(F)\) and \(G_{n}(V)\) interchangeably. The following theorem is due to Bartlett and Mendelson [9].
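Empirically, the Rademacher complexity restricted to a finite pool of candidate solutions can be estimated by Monte Carlo. The sketch below is a heuristic illustration over a random pool (it only lower-bounds the supremum over all solutions); all names and parameters are ours:

```python
import numpy as np

def rademacher_estimate(P, solutions, cost_fn, trials=200, seed=0):
    """Monte-Carlo estimate of (1/n) E_r sup_f sum_p f(p) r_p over a finite pool."""
    rng = np.random.default_rng(seed)
    n = len(P)
    # Precompute the cost vector v^S for every candidate solution in the pool.
    V = np.stack([cost_fn(P, S) for S in solutions])   # (num_solutions, n)
    sups = []
    for _ in range(trials):
        r = rng.choice([-1.0, 1.0], size=n)            # Rademacher signs
        sups.append(np.max(V @ r))                     # sup over the pool
    return float(np.mean(sups)) / n

# Example with the k-means (z = 2) cost vector over random center sets.
def kmeans_cost_vector(P, S):
    d = np.linalg.norm(P[:, None, :] - S[None, :, :], axis=2)
    return d.min(axis=1) ** 2

rng = np.random.default_rng(1)
P = rng.standard_normal((500, 3))
P /= np.maximum(1.0, np.linalg.norm(P, axis=1))[:, None]
pool = [rng.uniform(-1, 1, size=(5, 3)) for _ in range(50)]
print(rademacher_estimate(P, pool, kmeans_cost_vector))
```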
**Theorem 4.1** (Simplified variant of Theorem 8 of Bartlett and Mendelson [9]).: _Consider a loss function \(L:A\to[0,1]\). Let \(F\) be a class of functions mapping from \(X\) to \(A\) and let \((X_{i})_{i=1}^{n}\) be independent samples from \(\mathcal{D}\). Then, for any integer \(n\) and any \(\delta>0\), with probability at least \(1-\delta\) over samples of length \(n\), denoting by \(\hat{\mathbb{E}}_{n}\) the empirical risk, every \(f\in F\) satisfies_
\[\mathbb{E}L(f(X))\leq\hat{\mathbb{E}}_{n}L(f(X))+Rad_{n}(F)+\sqrt{\frac{8\ln 2 /\delta}{n}}.\]
Thus, in order to bound the excess risk, Theorem 4.1 shows that it is sufficient to bound the Rademacher complexity. It is well known (see, for example, B.3 of Rudra and Wootters [69]) that \(Rad_{n}(V)\leq\sqrt{2\pi}G_{n}(V)\). Thus we can alternatively bound the Gaussian complexity, which is sometimes more convenient. Note that if \(V\) is the set of all cost vectors, clustering nets are merely \(\mathcal{N}(V,\|.\|_{\infty},\varepsilon)\). Using these nets, we can bound the Rademacher and Gaussian complexity; indeed, the following lemma holds. Proofs of similar statements are commonly found in works on controlling Gaussian processes, see [52; 76]. For the convenience of the reader, we give a proof in the appendix.
**Lemma 4.2**.: _Let \(\mathcal{D}\) be a distribution over \(B_{2}^{d}\) and let \(P\) be a set of \(n\) points sampled from \(\mathcal{D}\). Suppose that for a set of \(n\)-dimensional vectors \(V\), there are absolute constants \(C,\gamma>0\) such that \(\log|\mathcal{N}(V,\|.\|_{\infty},\varepsilon)|\in O(\varepsilon^{-2}\log^{\gamma}(n\varepsilon^{-1})C)\). Then_
\[G_{n}(V)\in O\left(\sqrt{\frac{C\log^{\gamma+2}n}{n}}\right).\]
The specific types of nets used in our study and the size bounds for those nets will be the key to obtaining the desired upper bounds and will be detailed in the next section.
## 5 Generalization bounds for center-based clustering and subspace clustering
We start by giving our generalization bounds for center based clustering and subspace clustering problems. For subspace clustering problems, we first state the result for general \((k,j,z)\) clustering. An improvement for the special case \(z=2\) will be given later.
**Theorem 5.1**.: _Let \(\mathcal{D}\) be a distribution over \(B_{2}^{d}\) and let \(P\) be a set of \(n\) points sampled from \(\mathcal{D}\). For any set of \(k\) points \(\mathcal{S}\subset B_{2}^{d}\), we denote by \(v^{\mathcal{S}}\) the \(n\)-dimensional cost vector of \(P\) in solution \(\mathcal{S}\) with respect to the \((k,z)\)-clustering objective. Moreover we denote by \(v^{\mathcal{U}}\) the \(n\)-dimensional cost vector of \(P\) in solution \(\mathcal{U}\) with respect to the \((k,j,z)\)-clustering objective. Let \(V_{z}\) be the union of all cost vectors of \(P\) for the center-based clustering and \(V_{j,z}\) the union of all cost vectors for subspace clustering. Then with probability at least \(1-\delta\)_
\[\mathcal{E}_{n}(V_{z})\in O\left(\sqrt{\frac{k\cdot\log^{4}n}{n}}+\sqrt{\frac{\log 1/\delta}{n}}\right) \tag{2}\] \[\mathcal{E}_{n}(V_{j,z})\in O\left(\sqrt{\frac{k\cdot j^{2}\cdot\log(jn)\cdot\log^{3}n}{n}}+\sqrt{\frac{\log 1/\delta}{n}}\right). \tag{3}\]
Following Theorem 4.1, it is sufficient to bound the Rademacher complexity in order to bound the excess risk. The Rademacher complexity is, up to lower-order terms, equal to the Gaussian complexity, which, following Lemma 4.2, may be bounded by obtaining small nets with respect to the \(\|.\|_{\infty}\) norm. We believe that the bounds on the sizes of these nets may be of independent interest, and we state them in the following lemma.
**Lemma 5.2**.: _Let \(\mathcal{D}\) be a distribution over \(B_{2}^{d}\), let \(P\) be a set of \(n\) points sampled from \(\mathcal{D}\), and let \(V_{z}\) and \(V_{j,z}\) be defined as in Theorem 5.1. Then_
\[|\mathcal{N}(V_{z},\|.\|_{\infty},\varepsilon)|\leq\exp(O(1)z^{3} \cdot k\cdot\varepsilon^{-2}\log n\cdot(\log(z)+\log(\varepsilon^{-1}))) \tag{4}\] \[|\mathcal{N}(V_{j,z},\|.\|_{\infty},\varepsilon)|\leq\exp(O(1)(3z )^{z+2}\cdot k\cdot j\cdot\varepsilon^{-2}(\log n+j\log(j\varepsilon^{-1})) \log\varepsilon^{-1}). \tag{5}\]
Combining Lemma 5.2 with Lemma 4.2 now yields the immediate bound on the Rademacher and Gaussian complexity.
We will first give the bound for center-based clustering and subsequently turn our attention to the more involved subspace-based construction.
### Center-Based Clustering
Following the discussion from Section 3, we use terminal embeddings to prove the part of Lemma 5.2 pertaining to \((k,z)\) clustering.
**Lemma 5.3**.: _Let \(P\subset B_{2}^{d}\) be a set of points. Let \(V\) be the set of all cost vectors of \(P\) for \((k,z)\)-clustering. Then there exists an \(\varepsilon\)-clustering net of size_
\[|\mathcal{N}(V,\|.\|_{\infty},\varepsilon)|\leq\exp(O(1)\cdot z\cdot k\cdot d \cdot\log(z\varepsilon^{-1})).\]
Proof.: We start by proving the bound for \(k=1\). Suppose we are given a net \(\mathcal{N}(B_{2}^{d},\|.\|_{2},\delta)\), for a \(\delta\) to be determined later. Consider a candidate solution \(\{s\}\) with cost vector \(v^{\{s\}}\in V\). Let \(s^{\prime}\) be a point in \(\mathcal{N}(B_{2}^{d},\|.\|_{2},\delta)\) such that \(\|s-s^{\prime}\|\leq\delta\); if \(s^{\prime}\) is not unique, any one will be sufficient. Let \(v^{\{s^{\prime}\}}\) be the cost vector of \(\{s^{\prime}\}\). The number of distinct solutions \(\{s^{\prime}\}\) is \(|\mathcal{N}(B_{2}^{d},\|.\|_{2},\delta)|=\exp(O(1)\cdot d\cdot\log\delta^{-1})\) due to Lemma 2.2.
What is left to show is that all solutions constructed in this way satisfy the guarantee of \(\mathcal{N}(V,\|.\|_{\infty},\varepsilon)\) for an appropriately chosen \(\delta\). Due to Lemma 2.3, we have for any \(p\in P\) and any non-negative integer \(z\)
\[|\|p-s\|^{z}-\|p-s^{\prime}\|^{z}| \leq \alpha\cdot\|p-s\|^{z}+\left(\frac{2z+\alpha}{\alpha}\right)^{z-1}\|s-s^{\prime}\|^{z}\] \[\leq \alpha\cdot\|p-s\|^{z}+(3z)^{z}\left(\frac{\delta}{\alpha}\right)^{z-1}\cdot\delta\]
We set \(\alpha=\frac{1}{2\cdot 2^{z}}\varepsilon\) and \(\delta=\alpha\cdot\frac{1}{2(3z)^{z}}\varepsilon=\frac{1}{4(6z)^{z}}\varepsilon^{2}\). Then the term above is upper bounded by at most \(\varepsilon\), as \(\|p-s\|\leq 2\). Since \(|\|p-s\|^{z}-\|p-s^{\prime}\|^{z}|\leq\varepsilon\) for all \(s\in B_{2}^{d}\) implies \(|\min_{s\in\mathcal{S}}\|p-s\|^{z}-\min_{s^{\prime}\in\mathcal{S}^{\prime}}\|p-s^{\prime}\|^{z}|\leq\varepsilon\), we have proven our desired approximation.
To conclude, observe that by our choice of \(\delta\), the overall net \(N\) has size at most \(\exp(O(1)\cdot z\cdot d\cdot\log(z\varepsilon^{-1}))\).
To extend this proof to \(k\)-centers, observe that any solution consisting of \(k\) centers can be obtained by selecting \(k\) points from \(B_{2}^{d}\), rather than one. This raises the net size of the single cluster case by a power of \(k\).
We now show that Lemma 5.3 combined with terminal embeddings yields the desired net.
**Lemma** (Equation 4 in Lemma 5.2).: _Let \(\mathcal{D}\) be a distribution over \(B_{2}^{d}\), let \(P\) be a set of \(n\) points sampled from \(\mathcal{D}\), and let \(V\) be defined as in Theorem 5.1. Then_
\[|\mathcal{N}(V,\|.\|_{\infty},\varepsilon)|\leq\exp(O(1)z^{3}\cdot k\cdot \varepsilon^{-2}\log n\cdot(\log(z)+\log(\varepsilon^{-1}))).\]
Proof.: Let \(f:\mathbb{R}^{d}\to\mathbb{R}^{m}\) be a terminal embedding, that is, \(f\) is such that \(m\in O(z^{2}\cdot\varepsilon^{-2}\log|P|)\)4 and for all \(p\in P\) and \(q\in\mathbb{R}^{d}\)
Footnote 4: The dependency on \(z\) is easily derived via a straightforward application of Lemma 2.3.
\[\|p-q\|^{z}=(1\pm\varepsilon)\|f(p)-f(q)\|^{z},\]
as given by [63]. Therefore, for any candidate solution \(\mathcal{S}\), we also have
\[\text{cost}(p,\mathcal{S})=(1\pm 2\varepsilon)\text{cost}(f(p),f(\mathcal{S})).\]
In other words, the cost vectors induced by solutions in the image of \(f\) approximate the true cost vectors up to an additive \(O(\varepsilon)\) error. Hence an \(\varepsilon\)-net for the cost vectors induced by solutions in the image of \(f\) is also an \(O(\varepsilon)\)-net for the full set of cost vectors. We may thus apply Lemma 5.3 to all cost vectors induced by solutions in the image of \(f\), with \(m\in O(z^{2}\varepsilon^{-2}\log n)\) in place of \(d\). After rescaling \(\varepsilon\) by constant factors, the overall net size is therefore \(\exp(O(1)z^{3}\cdot k\cdot\varepsilon^{-2}\log n\cdot(\log(z)+\log(\varepsilon^{-1})))\).
### Subspace Clustering
Unfortunately, the terminal embedding technique is not applicable for obtaining nets for subspace clustering, as clarified in Section 3. Thus, we use an entirely different approach. We show the existence of a collection of dimension-reducing maps with subspace-preserving properties. Fortunately, the number of dimension-reducing maps is small. Our desired net sizes then follow by enumerating over all of these dimension-reducing maps; for the candidate solutions covered by each such map, we can find an efficient net. First, we introduce a notion that is slightly different from, but closely related to, \((1,j,z)\)-clustering nets.
**Definition 5.4** (Projective Nets).: Let \(P\subset B_{2}^{d}\) be a set of points, and let \(z\) be a positive integer. For a \(d\times j\) matrix \(S\) with columns that have at most unit norm and any point \(p\in P\), define the projective cost as \(\text{cost}_{proj}(p,S)=\|S^{T}p\|_{2}\). Let \(V\) be the set of all projective cost vectors induced by such matrix \(S\). We call a \(\mathcal{N}(V,\|.\|_{\infty},\varepsilon)\) a \((\varepsilon,j)\)-projective net of \(P\).
On a high level, the proof largely relies on the following decomposition. Let \(U\) be a candidate subspace and let \(\Pi\) be a projection matrix used to approximate \(\|(I-UU^{T})p\|_{2}^{z}\). We have
\[\|(I-UU^{T})p\|^{2}\!=\!\underbrace{\|\Pi p\|^{2}}_{(1)}-\underbrace{\|U^{T}\Pi p\|^{2}}_{(2)}\!+\!\underbrace{\|(I-\Pi)p\|^{2}}_{(3)}\!-\!\underbrace{\|UU^{T}(I-\Pi)p\|^{2}}_{(4)}\!-\!\underbrace{2p^{T}\Pi UU^{T}(I-\Pi)p}_{(5)} \tag{6}\]
Here, we wish to select \(\Pi\) such that \(\|U^{T}(I-\Pi)p\|_{2}\) is small for all \(p\in P\). Note that this implies that the terms \(2p^{T}\Pi UU^{T}(I-\Pi)p\) and \(\|UU^{T}(I-\Pi)p\|^{2}\) are small. For term (2), we merely have to show that projective nets exist. If the number of candidate \(\Pi\) is small, we can further construct good nets for the terms (1) and (3). We start by giving a bound for the projective nets. Our first lemma, Lemma 5.5, shows that if the points lie in a sufficiently low-dimensional space, such a net can be obtained by constructing a net \(\mathcal{N}(B_{2}^{d},\|.\|_{2},\varepsilon^{\prime})\) for a sufficiently small \(\varepsilon^{\prime}\).
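As a quick sanity check, the decomposition (with the minus sign on term (5), matching the derivation later in this section) can be verified numerically; a small sketch:

```python
import numpy as np

# Numeric check of the decomposition in Equation 6 (minus sign on term (5)).
rng = np.random.default_rng(0)
d, j, r = 8, 2, 3
U, _ = np.linalg.qr(rng.standard_normal((d, j)))   # orthonormal d x j
Q, _ = np.linalg.qr(rng.standard_normal((d, r)))
Pi = Q @ Q.T                                       # rank-r orthogonal projection
p = rng.standard_normal(d)

lhs = np.linalg.norm(p - U @ (U.T @ p)) ** 2
rhs = (np.linalg.norm(Pi @ p) ** 2
       - np.linalg.norm(U.T @ Pi @ p) ** 2
       + np.linalg.norm((np.eye(d) - Pi) @ p) ** 2
       - np.linalg.norm(U @ U.T @ (np.eye(d) - Pi) @ p) ** 2
       - 2 * p @ Pi @ U @ U.T @ (np.eye(d) - Pi) @ p)
print(abs(lhs - rhs))   # ~1e-15, i.e. the identity holds up to rounding
```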
**Lemma 5.5**.: _Let \(P\subset B_{2}^{d}\) be a set of points and let \(z\) be a positive integer. Then there exists an \((\varepsilon,j)\)-projective net of size \(|\mathcal{N}(V,\|.\|_{\infty},\varepsilon)|\leq\exp(O(1)\cdot d\cdot j\cdot \log(j\varepsilon^{-1}))\)._
Proof.: Write \(S=(s_{1},\ldots,s_{j})\) column-wise and let \(\mathcal{N}:=\mathcal{N}(B_{2}^{d},\|.\|_{2},\varepsilon/\sqrt{j})\) be a net of the unit ball given by Lemma 2.2. For every column \(s_{i}\), pick \(s^{\prime}_{i}\in\mathcal{N}\) with \(\|s_{i}-s^{\prime}_{i}\|_{2}\leq\varepsilon/\sqrt{j}\), and let \(S^{\prime}=(s^{\prime}_{1},\ldots,s^{\prime}_{j})\). For any \(p\in B_{2}^{d}\), the reverse triangle inequality and the Cauchy-Schwarz inequality yield

\[\left|\|S^{T}p\|_{2}-\|{S^{\prime}}^{T}p\|_{2}\right|\leq\|(S-S^{\prime})^{T}p\|_{2}=\sqrt{\sum_{i=1}^{j}\left((s_{i}-s^{\prime}_{i})^{T}p\right)^{2}}\leq\sqrt{j}\cdot\frac{\varepsilon}{\sqrt{j}}=\varepsilon.\]

The number of matrices \(S^{\prime}\) constructed in this way is at most \(|\mathcal{N}|^{j}\leq\exp(O(1)\cdot d\cdot j\cdot\log(j\varepsilon^{-1}))\) due to Lemma 2.2, which proves the claim.
To reduce the dependency on the dimension, we now use the following lemma. Essentially, it shows that in order to retain the properties of \(U\), we can find a projection matrix \(\Pi\) of rank at most \(O(j\varepsilon^{-2})\).
**Lemma 5.6**.: _Let \(P\subseteq B_{2}^{d}\). For any orthogonal matrix \(U\in\mathbb{R}^{j\times d}\), there exists \(M\subseteq P\), with \(|M|\in O(j\cdot\varepsilon^{-2})\), such that \(\forall p\in P,\|U^{T}(I-\Pi_{M})p\|\leq\varepsilon\cdot\|(I-\Pi_{M})p\|\)._
Proof.: Initially, let \(M=\emptyset\). We add points to \(M\) in rounds and denote by \(M_{t}\) the set after \(t\) rounds. Furthermore, let \(\Pi_{t}\) be the projection matrix onto the subspace spanned by \(M_{t}\) at round \(t\). If there is a \(p\in P\) in round \(t\) such that
\[\|U^{T}(I-\Pi_{t})p\|>\varepsilon\|(I-\Pi_{t})p\| \tag{8}\]
then we let \(M_{t+1}=M_{t}\cup\{p\}\). Our goal is to show that after \(T\in O(j\varepsilon^{-2})\) many rounds, we have \(\|U^{T}(I-\Pi_{T})p\|\leq\varepsilon\cdot\|(I-\Pi_{T})p\|\). We show this by proving inductively
\[\|U^{T}\Pi_{t}\|_{F}^{2}\geq\varepsilon^{2}\cdot t.\]
For the base case \(t=0\), this is trivially true. Thus suppose we add a point \(p\) in iteration \(t+1\). Reformulating Equation 8, we have \(\frac{\|U^{T}(I-\Pi_{t})p\|}{\|(I-\Pi_{t})p\|}>\varepsilon\). By the Pythagorean theorem, we therefore have
\[\|U^{T}\Pi_{t+1}\|_{F}^{2}=\|U^{T}\Pi_{t}\|_{F}^{2}+\frac{\|U^{T}(I-\Pi_{t})p\| ^{2}}{\|(I-\Pi_{t})p\|^{2}}\geq\varepsilon^{2}\cdot t+\varepsilon^{2}\geq \varepsilon^{2}\cdot(t+1).\]
Now since \(\Pi_{t}\) is a projection and since \(U\) has \(j\) orthonormal columns, \(j\geq\|U^{T}\|_{F}^{2}\geq\|U^{T}\Pi_{t}\|_{F}^{2}\). If \(T\geq\varepsilon^{-2}j\), we obtain \(\|U^{T}\Pi_{T}\|_{F}^{2}\geq j\). This implies that \(U\) is contained in the space spanned by \(M_{T}\). Consequently, \(U\) must also be orthogonal to the orthogonal complement of the span of \(M_{T}\), that is, \(U^{T}(I-\Pi_{T})=0\). Therefore, after at most \(\varepsilon^{-2}j\) rounds, we have \(\|U^{T}(I-\Pi_{T})p\|\leq\varepsilon\cdot\|(I-\Pi_{T})p\|\).
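The constructive argument of Lemma 5.6 translates directly into a greedy procedure. The following is a straightforward, unoptimized transcription (all names are ours):

```python
import numpy as np

def select_anchor_points(P, U, eps):
    """Greedily pick M subset of P so that ||U^T (I - Pi_M) p|| <= eps ||(I - Pi_M) p||
    for all p, following the round-based construction of Lemma 5.6.
    P: (n, d) points, U: (d, j) matrix with orthonormal columns."""
    M = []                                   # indices of selected points
    B = np.zeros((P.shape[1], 0))            # orthonormal basis of span(M)
    while True:
        R = P - (P @ B) @ B.T                # residuals (I - Pi_M) p
        lhs = np.linalg.norm(R @ U, axis=1)  # ||U^T (I - Pi_M) p||
        rhs = eps * np.linalg.norm(R, axis=1)
        bad = np.flatnonzero(lhs > rhs + 1e-12)
        if bad.size == 0:
            return M
        i = int(bad[0])
        M.append(i)
        v = R[i] / np.linalg.norm(R[i])      # extend the basis by the new direction
        B = np.column_stack([B, v])

rng = np.random.default_rng(0)
P = rng.standard_normal((200, 10))
P /= np.maximum(1.0, np.linalg.norm(P, axis=1))[:, None]
U, _ = np.linalg.qr(rng.standard_normal((10, 3)))
M = select_anchor_points(P, U, eps=0.5)
print(len(M), "points selected (Lemma 5.6 guarantees O(j / eps^2))")
```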
We now use this lemma as follows. We can efficiently enumerate over all candidate \(\Pi\), as Lemma 5.6 guarantees that we only have to consider \(\binom{n}{O(j\varepsilon^{-2})}\leq\exp(O(j\cdot\varepsilon^{-2}\log n))\) many different sets \(M\) inducing projection matrices. This immediately gives us \(0\)-nets for the terms (1) and (3). For each \(\Pi\), we then apply Lemma 5.5, which gives us a net for term (2). Finally, by the choice of \(\Pi\), we can show that terms (4) and (5) are negligible. We first give two basic inequalities before giving the proof of the net size.
**Lemma 5.7**.: _Let \(a,b\) be numbers in \([0,2]\) and let \(\varepsilon>0\). Suppose \(a^{2}=b^{2}\pm\varepsilon\cdot b\). Then_
\[|a-b|\leq\varepsilon.\]
_Moreover, for any non-negative integer \(z\), we have_
\[|a^{z}-b^{z}|\leq 2\cdot(3z)^{z}\cdot\varepsilon.\]
Proof.: For the first part of the lemma, we observe
\[|a^{2}-b^{2}|=|a-b|\cdot(a+b)\leq\varepsilon\cdot b\]
which implies
\[|a-b|\leq\varepsilon.\]
For the second part, Lemma 2.3 implies
\[|a^{z}-b^{z}|\leq\varepsilon\cdot\max(a,b)^{z}+\left(\frac{2z+\varepsilon}{ \varepsilon}\right)^{z-1}\cdot|a-b|^{z}\leq\varepsilon\cdot 2^{z}+\left( \frac{3z+\varepsilon}{\varepsilon}\right)^{z-1}\cdot\varepsilon^{z}\leq 2(3z )^{z}\varepsilon.\qed\]
This lemma now immediately implies the following corollary by rescaling \(\varepsilon\).
**Corollary 5.8**.: _Let \(a,b\) be numbers in \([0,2]\) and let \(\varepsilon>0\). Suppose \(a^{2}=b^{2}\pm\frac{1}{4\cdot(3z)^{z}}\max(\varepsilon\cdot b,\varepsilon^{2})\). Then for any non-negative integer \(z\), we have_
\[|a^{z}-b^{z}|\leq\varepsilon.\]
**Lemma** (Equation 5 in Lemma 5.2).: _Let \(\mathcal{D}\) be a distribution over \(B_{2}^{d}\), let \(P\) be a set of \(n\) points sampled from \(\mathcal{D}\), and let \(V_{j,z}\) be defined as in Theorem 5.1. Then_

\[|\mathcal{N}(V_{j,z},\|.\|_{\infty},\varepsilon)|\leq\exp(O(1)(3z)^{z+2}\cdot k\cdot j\cdot\varepsilon^{-2}(\log n+j\log(j\varepsilon^{-1}))\log\varepsilon^{-1}).\]
Proof.: Let \(\alpha,\beta>0\) be sufficiently small parameters depending on \(\varepsilon\) that will be determined later. We first describe a construction of nets for a single subspace of rank at most \(j\), before composing them for \(k\) subspaces.
We start by describing the construction of the nets. For every subset \(M\subseteq P\), with \(|M|\in O(j\alpha^{-2})\), we let \(\Pi_{M}\) denote an orthogonal projection matrix of the span of \(M\). Note that this implies \(\mathbf{rank}(\Pi_{M})=O(j\alpha^{-2})\). Further, let \(N(\Pi_{M}):=\mathcal{N}(B_{2}^{\mathbf{rank}(\Pi_{M})},\|.\|_{2},\beta)\) be a \((\beta,j)\)-projective net of the point set \(\cup_{p\in M}\{\Pi_{M}p\}\) of size at most \(\exp(O(1)\cdot\mathbf{rank}(\Pi_{M})\cdot\log(j\beta^{-1}))\) given by Lemma 5.5. Finally, let \(N:=\cup_{M}N(\Pi_{M})\).
We consider an arbitrary orthogonal matrix \(U\in\mathbb{R}^{j\times d}\). Denote by \(M_{U}\) the subset of points and by \(\Pi_{U}\) the projection matrix given by Lemma 5.6, using \(\alpha\) as the precision variable. We claim that for every \(U\), there exists a \(U^{\prime}\in N\) such that for all \(p\in P\)
\[\left|\left(\|\Pi_{U}p\|_{2}^{2}-\|{U^{\prime}}^{T}\Pi_{U}p\|_{2}^{2}+\|(I-\Pi _{U})p\|_{2}^{2}\right)^{z/2}-\|(I-UU^{T})p\|^{z}\right|\in O(\alpha+\beta).\]
In other words, by enumerating over all \((\beta,j)\)-projective nets, we obtain an \(O(\alpha+\beta)\)-subspace clustering net for \((1,j,z)\)-clustering. The desired error of \(\varepsilon\) then follows by choosing \(\alpha\) and \(\beta\) accordingly. For \(U\), we construct \(U^{\prime}\) as follows. Let \(D=\sqrt{\Pi_{U}}\), i.e. \(DD^{T}=\Pi_{U}\). Further, let \(V=U^{T}D\); notice that \(V\) has at most \(j\) rows, each of at most unit norm. Hence, there exists a \(U^{\prime}\in N\) such that
\[\left|\|U^{T}\Pi_{U}p\|_{2}-\|{U^{\prime}}^{T}\Pi_{U}p\|_{2}\right|\leq\beta\]

for all \(p\in P\), since \(N\) is composed of \((\beta,j)\)-projective nets.
We then obtain
\[\|\Pi_{U}p\|_{2}^{2}-\|{U^{\prime}}^{T}\Pi_{U}p\|_{2}^{2}+\|(I- \Pi_{U})p\|_{2}^{2}\] \[= \|\Pi_{U}p\|_{2}^{2}-\|U^{T}\Pi_{U}p\|_{2}^{2}\pm\beta+\|(I-\Pi_ {U})p\|_{2}^{2}\] \[= \|\Pi_{U}p\|_{2}^{2}-\|UU^{T}\Pi_{U}p\|_{2}^{2}\pm\beta+\|(I-\Pi _{U})p\|_{2}^{2}\] \[= \|(I-UU^{T})\Pi_{U}p\|_{2}^{2}+\|(I-\Pi_{U})p\|_{2}^{2}\pm\beta\] \[(Eq.6) = \|(I-UU^{T})p\|_{2}^{2}\pm\beta-\|U^{T}(I-\Pi_{U})p\|^{2}-2p^{T} \Pi_{U}UU^{T}(I-\Pi_{U})^{T}p\] \[(Lem.5.6) = \|(I-UU^{T})p\|_{2}^{2}\pm\alpha^{2}\cdot\|(I-UU^{T})p\|^{2}\pm 2 \alpha\cdot\|(I-UU^{T})p\|\pm\beta\]
Setting \(\alpha^{2}=\beta=\frac{1}{64(3z)^{z}}\varepsilon^{2}\), we then have due to Corollary 5.8
\[\left|\left|\|\Pi_{U}p\|_{2}^{2}-\|{U^{\prime}}^{T}\Pi_{U}p\|_{2}^{2}+\|(I-\Pi_{U})p\|_{2}^{2}\right|^{z/2}-\|(I-UU^{T})p\|^{z}\right|\leq\varepsilon. \tag{9}\]
To extend this from a single \(j\)-dimensional subspace to a solution \(\mathcal{U}\) given by the union of \(k\) \(j\)-dimensional subspaces, we define cost vectors \(v^{\mathcal{U}^{\prime}}\) obtained from \(\mathcal{N}=\otimes_{i=1}^{k}N\) as follows. For each \(U\in\mathcal{U}\) let \(U^{\prime}\) be constructed as above and let \(\mathcal{U}^{\prime}\) be the union of the thus constructed \(U^{\prime}\). Then, with a slight abuse of notation, letting \(\Pi_{U^{\prime}}\) correspond to the subspace used to obtain \(U^{\prime}\), we define
\[v_{p}^{\mathcal{U}^{\prime}}:=\min_{U^{\prime}\in\mathcal{U}^{\prime}}\left| \|(I-\Pi_{U^{\prime}})p\|^{2}+\|\Pi_{U^{\prime}}p\|^{2}-\|U^{\prime}\Pi_{U^{ \prime}}p\|^{2}\right|\right|^{z/2}.\]
Let \(U\) be the subspace to which \(p\) is assigned in \(\mathcal{U}\), let \(U^{\prime}\) be the center in \(\mathcal{U}^{\prime}\) used to approximate \(U\), let \({U^{*}}^{\prime}=\operatorname*{argmin}_{U^{\prime}\in\mathcal{U}^{\prime}}\left|\|(I-\Pi_{U^{\prime}})p\|^{2}+\|\Pi_{U^{\prime}}p\|^{2}-\|U^{\prime}\Pi_{U^{\prime}}p\|^{2}\right|^{z/2}\), and let \(U^{*}\in\mathcal{U}\) be the center approximated by \({U^{*}}^{\prime}\). Then applying Equation 9, we have
\[\|(I-UU^{T})p\|^{z}\] \[\leq \|(I-U^{*}{U^{*}}^{T})p\|^{z}\] \[\leq \left|\|(I-\Pi_{U^{*}})p\|^{2}+\|\Pi_{U^{*}}p\|^{2}-\|U^{*}\Pi_{U^{*}}p\|^{2}\right|^{z/2}+\varepsilon\] \[\leq \left|\|(I-\Pi_{U^{\prime}})p\|^{2}+\|\Pi_{U^{\prime}}p\|^{2}-\|U^{\prime}\Pi_{U^{\prime}}p\|^{2}\right|^{z/2}+\varepsilon\]
Thus, the cost vectors obtained from \(\mathcal{N}\) are a \((k,j,z)\)-clustering net, i.e.
\[\left|v_{p}^{\mathcal{U}^{\prime}}-v_{p}^{\mathcal{U}}\right|=\left|\min_{U^{\prime}\in\mathcal{U}^{\prime}}\left|\|(I-\Pi_{U^{\prime}})p\|^{2}+\|\Pi_{U^{\prime}}p\|^{2}-\|U^{\prime}\Pi_{U^{\prime}}p\|^{2}\right|^{z/2}-\min_{U\in\mathcal{U}}\|(I-UU^{T})p\|^{z}\right|\leq\varepsilon.\]
What remains is to bound the size of the clustering net. Here we first observe that the size of the clustering net is equal to \(|\mathcal{N}|=|N|^{k}\). For \(|N|\), we have \(\binom{|P|}{O(j\alpha^{-2}\log\alpha^{-1})}\leq n^{O(j\alpha^{-2}\log\alpha^{-1})}\) many choices of \(N(\Pi)\). In turn, the size of each \(N(\Pi)\) is bounded by \((\beta/j)^{-O(j^{2}\alpha^{-2})}\) due to Lemma 5.5. Thus the overall size of \(\mathcal{N}\) is

\[\exp\left(k\cdot j\cdot O(\alpha^{-2}\log\alpha^{-1}(\log n+j\log(j/\beta)))\right)\] \[=\exp(O(1)(3z)^{z+2}\cdot k\cdot j\cdot\varepsilon^{-2}(\log n+j\log(j\varepsilon^{-1}))\log\varepsilon^{-1})\]
as desired.
### Tight generalization bounds for projective clustering
For the specific case of \((k,j,2)\) clustering, also known as projective clustering, we obtain an even better dependency on \(j\). A similar bound can likely also be derived using the seminal work of [39], though the dependencies on \(\log n\) and \(\log 1/\delta\) are slightly weaker. The proof uses the main result by [45], itself heavily inspired by [39], and arguments related to bounding the Rademacher complexity of linear function classes. Crucially, it avoids the issue of obtaining an explicit dimension reduction entirely, but the approach cannot be extended to general \((k,j,z)\) clustering.
**Theorem 5.9**.: _Let \(\mathcal{D}\) be a distribution over \(B_{2}^{d}\) and let \(P\) be a set of \(n\) points sampled from \(\mathcal{D}\). For any set \(\mathcal{U}\) of \(k\) orthogonal matrices of rank at most \(j\), we denote by \(v^{\mathcal{U}}\) the \(n\)-dimensional cost vector of \(P\) in solution \(\mathcal{U}\) with respect to the \((k,j,2)\)-clustering objective, i.e. \(v_{p}^{\mathcal{U}}=\min_{U\in\mathcal{U}}\|(I-UU^{T})p\|^{2}\). Let \(V_{j,2}\) be the union of all cost vectors of \(P\). Then with probability at least \(1-\delta\) for any \(\gamma>0\)_
\[\mathcal{E}_{n}(V_{j,2})\in O\left(\sqrt{\frac{kj}{n}\cdot\log^{3+\gamma}\left( \frac{n}{j}\right)}+\sqrt{\frac{\log 1/\delta}{n}}\right).\]
Proof.: The proof of the theorem is a straightforward application of Theorem 4.1 together with the following lemma.
**Lemma 5.10**.: _Let \(\mathcal{D}\) be a distribution over \(B_{2}^{d}\), let \(P\) be a set of \(n\) points sampled from \(\mathcal{D}\), and let \(V_{j,2}\) be defined as in Theorem 5.9. Then for any \(\gamma>0\)_
\[Rad_{n}(V_{j,2})\in O\left(\sqrt{\frac{kj}{n}\log^{3+\gamma}\left(\frac{n}{j} \right)}\right).\]
Proof.: We use the following result due to Foster and Rakhlin [45].
**Theorem 5.11** (\(\ell_{\infty}\) contraction inequality (Theorem 1 by [45])).: _Let \(F\subseteq X\rightarrow\mathbb{R}^{k}\), and let \(\phi:\mathbb{R}^{k}\rightarrow\mathbb{R}\) be \(L\)-Lipschitz with respect to the \(\ell_{\infty}\) norm, i.e. \(\|\phi(X)-\phi(X^{\prime})\|_{\infty}\leq L\cdot\|X-X^{\prime}\|_{\infty}\) for all \(X,X^{\prime}\in\mathbb{R}^{k}\). For any \(\gamma>0\), there exists a constant \(C>0\) such that if \(|\phi_{t}(f(x))|\vee\|f(x)\|_{\infty}\leq\beta\), then_
\[Rad_{n}(\phi\circ F)\leq C\cdot L\sqrt{k}\cdot\max_{i}Rad_{n}(F|_{i})\cdot\log^{3/2+\gamma}\left(\frac{\beta n}{\max_{i}Rad_{n}(F|_{i})}\right).\]
We use this theorem as follows. Our functions are associated with candidate solutions \(\mathcal{U}\), that is \(\phi(f)=\min_{U\in\mathcal{U}}\|(I-UU^{T})p\|_{2}^{2}\). In other words, \(f\) maps a point \(p\) to the \(k\)-dimensional vector, where \(f_{i}(p)=\|(I-U_{i}U_{i}^{T})p\|_{2}^{2}\) and \(\phi\) selects the minimum value among all \(\|(I-U_{i}U_{i}^{T})p\|_{2}^{2}\).
Thus, we require three more steps. First, we have to bound the Lipschitz constant of the minimum operator. Second, we have to give a bound on \(\beta\). Third and last, we have to give a bound on the Rademacher complexity
\[Rad_{n}(V)=\frac{1}{n}\cdot\mathbb{E}_{r}\sup_{U}\sum_{p\in P}\|(I-UU^{T})p\|_ {2}^{2}r_{p}. \tag{10}\]
The Lipschitz constant of the minimum operator with respect to the \(\ell_{\infty}\) norm can be readily shown to be \(1\) as for any two vectors \(x,y\) with \(\min_{i}y_{i}=y_{j}\)
\[\min_{i}x_{i}-\min_{i}y_{i}=\min_{i}x_{i}-y_{j}\leq x_{j}-y_{j}\leq|x_{j}-y_{j }|\leq\|x-y\|_{\infty}.\]
Since \(U\) is an orthogonal matrix and \(p\in B_{2}^{d}\), we have \(\|(I-UU^{T})p\|_{2}^{2}\leq 1\) and thus \(\beta\) is bounded by \(1\).
Thus, we only require a bound on Equation 10. For this, we use a result by [50]. Since the result is embedded in the proof of another result, we restate it here for the convenience of the reader.
**Lemma 5.12** (Compare the proof of Theorem 3 of [50]).: _Let \(P\) be a set of \(n\) points in \(B_{2}^{d}\) and let \(\mathcal{U}\) be the set of all orthogonal matrices of rank at most \(j\). For every \(U\in\mathcal{U}\), define \(f_{U}(p)=\|(I-UU^{T})p\|_{2}^{2}\) and let \(F\) be the set of all such functions \(f_{U}\). Then_
\[Rad_{n}(F):=\frac{1}{n}\cdot\mathbb{E}_{r}\sup_{U\in\mathcal{U}}\sum_{p\in P} \|(I-UU^{T})p\|_{2}^{2}\cdot r_{p}\in O\left(\sqrt{\frac{j}{n}}\right).\]
Proof.: We have
\[n\cdot Rad_{n}(F)=\mathbb{E}_{r}\sup_{U}\sum_{p\in P}\|(I-UU^{T})p\|_{2}^{2}r_{p}=\mathbb{E}_{r}\sum_{p\in P}\|p\|^{2}r_{p}+\mathbb{E}_{r}\sup_{U}\sum_{p\in P}\|U^{T}p\|_{2}^{2}r_{p}.\]
We observe that the term \(\mathbb{E}_{r}\sum_{p\in P}\|p\|^{2}r_{p}\) is \(0.\) Thus, we focus on the second term. We have
\[\mathbb{E}_{r}\sup_{U}\sum_{p\in P}\|U^{T}p\|_{2}^{2}\cdot r_{p} = \mathbb{E}_{r}\sup_{U}\sum_{p\in P}p^{T}UU^{T}p\cdot r_{p}= \mathbb{E}_{r}\sup_{U}\sum_{p\in P}trace(p^{T}UU^{T}p)\cdot r_{p}\] \[= \mathbb{E}_{r}\sup_{U}\sum_{p\in P}trace(UU^{T}pp^{T})\cdot r_{p}\] \[= \mathbb{E}_{r}\sup_{U}trace\left(UU^{T}\sum_{p\in P}\left(r_{p} \cdot pp^{T}\right)\right)\] \[\leq \mathbb{E}_{r}\sup_{U}\|U\|_{F}\left\|\sum_{p\in P}r_{p}\cdot pp^ {T}\right\|_{F}.\]
We have \(\|U\|_{F}\leq\sqrt{j}\), so we focus on \(\left\|\sum_{p\in P}r_{p}\cdot pp^{T}\right\|_{F}\). Here, we have
\[\left\|\sum_{p\in P}r_{p}\cdot pp^{T}\right\|_{F}^{2} = trace\left(\left(\sum_{p\in P}r_{p}\cdot pp^{T}\right)\left(\sum_ {p\in P}r_{p}\cdot pp^{T}\right)\right)\] \[= \sum_{p\in P}\sum_{q\in P}r_{p}\cdot r_{q}\cdot trace\left(pp^{T} qq^{T}\right)=\sum_{p\in P}\sum_{q\in P}r_{p}\cdot r_{q}\cdot(p^{T}q)^{2}.\]
This implies
\[n\cdot Rad_{n}(F) = \mathbb{E}_{r}\sup_{U}\sum_{p\in P}\|U^{T}p\|_{2}^{2}r_{p}\leq\mathbb{E}_{r}\sup_{U}\|U\|_{F}\left\|\sum_{p\in P}r_{p}\cdot pp^{T}\right\|_{F}\] \[\leq \sqrt{j}\cdot\mathbb{E}_{r}\sqrt{\sum_{p\in P}\sum_{q\in P}r_{p}\cdot r_{q}\cdot(p^{T}q)^{2}}\] \[\text{(Jensen's inequality)} \leq \sqrt{j}\cdot\sqrt{\mathbb{E}_{r}\sum_{p\in P}\sum_{q\in P}r_{p}\cdot r_{q}\cdot(p^{T}q)^{2}}\] \[= \sqrt{j}\cdot\sqrt{\sum_{p\in P}(p^{T}p)^{2}}\leq\sqrt{j}\cdot\sqrt{\sum_{p\in P}1}=\sqrt{nj}.\]
Solving the above for \(Rad_{n}(F)\) concludes the proof.
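For \(z=2\), the supremum appearing in this proof can be evaluated exactly: by the trace rewriting above, \(\sup_{U}\sum_{p}r_{p}\|U^{T}p\|_{2}^{2}\) is the sum of the \(j\) largest non-negative eigenvalues of \(\sum_{p}r_{p}pp^{T}\). The following sketch (our own naming) uses this to estimate the Rademacher complexity by Monte Carlo; it covers only the supremum term, since the \(\|p\|^{2}\) term vanishes in expectation.

```python
import numpy as np

def rademacher_estimate(P, j, trials=200, seed=0):
    """Monte Carlo estimate of Rad_n for F = {p -> ||U^T p||^2 : rank(U) <= j}.

    For each sign vector r, sup_U sum_p r_p ||U^T p||^2 equals the sum of the
    j largest non-negative eigenvalues of A = sum_p r_p p p^T.
    """
    rng = np.random.default_rng(seed)
    n = P.shape[0]
    vals = []
    for _ in range(trials):
        r = rng.choice([-1.0, 1.0], size=n)
        A = (P * r[:, None]).T @ P            # A = sum_p r_p p p^T
        eig = np.linalg.eigvalsh(A)           # eigenvalues in ascending order
        vals.append(np.clip(eig[-j:], 0.0, None).sum())
    return float(np.mean(vals)) / n

# Points on the unit sphere in d = 20; the estimate scales like sqrt(j/n).
P = np.random.default_rng(1).normal(size=(2000, 20))
P /= np.linalg.norm(P, axis=1, keepdims=True)
print(rademacher_estimate(P, j=2))
```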
We can now conclude the proof. Combining the bounds on \(L\) and \(\beta\) with Lemma 5.12 and Theorem 5.11, we have
\[Rad_{n}(V_{j,2})\in O\left(\sqrt{k}\cdot\sqrt{\frac{j}{n}}\cdot\log^{3+\gamma} \left(n\right)\right)\]
as desired.
Finally, we also show that the bounds from Theorem 5.9 and [39] are optimal up to polylogarithmic factors. The rough idea is to define a distribution \(\mathcal{D}\) supported on the nodes of a \(2kj\)-dimensional simplex with some points having more probability mass and some points having smaller mass. Using the tightness of Chernoff bounds, we may then show that the probability of fitting a subspace clustering to a good fraction of the lower mass points is always sufficiently large.
**Theorem 5.13**.: _There exists a distribution \(\mathcal{D}\) supported on \(B_{2}^{d}\) such that \(\mathcal{E}_{n}(V_{j,2})\in\Omega\left(\sqrt{(kj)/n}\right)\)._
Proof.: We first describe the hard instance distribution \(\mathcal{D}\). We assume that we are given \(d=2kj\) dimensions. Let \(e_{i}\) be the standard unit vector along dimension \(i\) with \(i\in\{1,\ldots d\}\). Let \(p,\varepsilon\in[0,1]\) be parameters, where \(\varepsilon\) is sufficiently small. We set the densities for a point \(q\) as follows.
\[\mathbb{P}[q]=\begin{cases}p&\text{if }q=e_{i},i\in\{1,\ldots,k\cdot j\}\\ p-\varepsilon\cdot p&\text{if }q=e_{i},i\in\{kj+1,\ldots,d\}\\ 0&\text{otherwise}\end{cases} \tag{11}\]
We choose \(p\) such that the integral over the densities is \(1\), i.e. \(kj\cdot p+kj\cdot(p-\varepsilon p)=1\). It is straightforward to verify that for \(\varepsilon\) sufficiently small, \(p\in(\frac{1}{2kj},\frac{1}{kj})\). We denote the points \(\{e_{1},\ldots e_{kj}\}\) by \(G\) for "good" and the points \(\{e_{kj+1},\ldots e_{d}\}\) by \(B\) for "bad".
We now characterize the properties of the optimal solution as well as suboptimal solutions.
**Lemma 5.14**.: _Let \(\mathcal{D}\) be the distribution described above in Equation (11). Then for any optimal solution \(\mathcal{U}=\{U_{1},\ldots U_{k}\}\), we have \(e_{i}\in U_{t}\) for \(i\in\{1,\ldots,kj\}\) and some \(t\) and \(\text{OPT}=kj\cdot p\cdot(1-\varepsilon)\)._
Proof.: We transform the instance into a \(d\times d\) diagonal matrix \(D\) where \(D_{i,i}=\sqrt{\mathbb{P}[e_{i}]}\). So \(D\) is a \(d\times d\) diagonal matrix with diagonal entries equal to \(\sqrt{p}\) for the first \(k\cdot j\) elements and \(\sqrt{p-\varepsilon\cdot p}\) for elements from \(k\cdot j+1\) to \(d\). Now consider any partition of the points into clusters \(C_{t}\) with the corresponding subspace \(U_{t}\) (for \(t\in\{1,\ldots,k\}\)). The optimal subspace \(U_{t}\) is spanned by the right singular vectors of the submatrix of \(D\) corresponding to the points in \(C_{t}\), which by the construction of \(D\) are the \(j\) points with the largest weight. This means that each cluster can remove at most the \(j\) largest remaining diagonal contributions from the cost, so \(k\) clusters can remove at most \(kj\) of them. This implies that the cost of the clustering is lower bounded by \(\sum_{i=1}^{d}D_{i,i}^{2}-\sum_{i=1}^{kj}D_{i,i}^{2}=\sum_{i=kj+1}^{d}D_{i,i}^{2}\). Conversely, the solution \(\mathcal{U}\) has exactly this cost, which implies that it must be optimal.
Using Lemma 5.14, we now have to, given \(n\) independent samples from \(\mathcal{D}\), control the probability that the sample \(P\) will (falsely) put a higher weight on some of the points in \(B\) than on the points in \(G\). Let \(B_{ex}\) denote the set of misclassified points in \(B\) and let \(P_{\text{OPT}}\) denote the optimum computed on the sample \(P\). We have
\[\mathbb{E}[\text{cost}(\mathcal{D},P_{\text{OPT}})]=kj\cdot p\cdot(1- \varepsilon)+p\cdot\varepsilon\cdot|B_{ex}|.\]
and hence an expected excess risk bound of
\[\mathbb{E}[\text{cost}(\mathcal{D},P_{\text{OPT}})]-\text{OPT}=p\cdot \varepsilon\cdot\mathbb{E}[B_{ex}].\]
By linearity of expectation, we have \(\mathbb{E}[|B_{ex}|]=kj\cdot\mathbb{P}[e_{kj+1}\in B_{ex}]\). Thus, \(\mathbb{E}[\text{cost}(\mathcal{D},P_{\text{OPT}})]-\text{OPT}\in\Theta(1)\cdot\varepsilon\cdot\mathbb{P}[e_{kj+1}\in B_{ex}]\). Define \(G_{low}\) to be the set of points from \(G\) that have an empirical density of at most \(p\). Let \(\widehat{e_{kj+1}}\) denote the empirical density of \(e_{kj+1}\). We now claim that
\[\mathbb{P}[e_{kj+1}\in B_{ex}] \geq \mathbb{P}[\widehat{e_{kj+1}}>p\wedge e_{kj+1}\in B_{ex}]\] \[= \mathbb{P}[e_{kj+1}\in B_{ex}|\widehat{e_{kj+1}}>p]\cdot\mathbb{P}[\widehat{e_{kj+1}}>p]\geq 1/2\cdot\mathbb{P}[\widehat{e_{kj+1}}>p]\]
The first inequality follows because we are considering a subset of the possible events; the second inequality follows because the number of points with an empirically estimated density greater than \(p\) is negatively correlated with the empirical density \(\widehat{e_{kj+1}}\) of the point \(e_{kj+1}\). Specifically, conditioned on \(\widehat{e_{kj+1}}>p\), the mean and median density of any point \(e_{i}\in G\) is at most \(\frac{1}{n}\cdot p(n-p\cdot n)=p\cdot(1-p)<p\). Thus, the (marginal) mean and median density of any other point is below \(p\), and therefore the probability that \(e_{kj+1}\) will be in \(B_{ex}\) is at least \(1/2\).
Thus, what remains to be shown is a bound on \(\mathbb{P}[\widehat{e_{kj+1}}>p]\). Here, we use the tightness of the Chernoff bound (see Lemma 4 of [48]).
**Lemma 5.15** (Tightness of the Chernoff Bound).: _Let \(X\) be the average of \(n\) independent, \(0/1\) random variables. For any \(\varepsilon\in(0,1/2]\) and \(\mu\in(0,1/2]\), assuming \(\varepsilon^{2}\mu n\geq 3\) if each random variable is 1 with probability at least \(\mu\), then_
\[\mathbb{P}[X>(1+\varepsilon)\mu]>\exp(-9\varepsilon^{2}\mu n).\]
Thus, sampling \(n\) elements, we have
\[\mathbb{P}\left[\widehat{e_{kj+1}}>p\right] =\mathbb{P}\left[\widehat{e_{kj+1}}>\left(1+\frac{\varepsilon}{1-\varepsilon}\right)\cdot(1-\varepsilon)\cdot p\right]\] \[>\exp\left(-9\frac{\varepsilon^{2}}{(1-\varepsilon)^{2}}(1-\varepsilon)pn\right)\in\Omega(1)\exp\left(-\frac{\varepsilon^{2}}{kj}n\right).\]
If we require \(\mathbb{E}[\text{cost}(\mathcal{D},P_{\text{OPT}})]-\text{OPT}=\varepsilon\cdot c\) for a sufficiently small absolute constant \(c\), we also require \(\mathbb{P}\left[\widehat{e_{kj+1}}>p\right]=c^{\prime}\) and hence \(\sqrt{\frac{kj}{n}}\leq\varepsilon\cdot c^{\prime\prime}\) for sufficiently small absolute constants \(c^{\prime}\) and \(c^{\prime\prime}\). Letting \(\varepsilon\to 0\) then shows that the excess risk can asymptotically decrease no faster than \(\Omega\left(\sqrt{\frac{kj}{n}}\right)\).
## 6 Experiments
Theoretical guarantees are often notoriously conservative compared to what is seen in practice. In this section, we present empirical findings detailing whether the risk bounds from the previous sections are also the risk bounds one can expect when dealing with real datasets. Indeed, for the related question of computing coresets, experimental work by [72] seems to indicate that the worst-case bounds by [47] are not what one has to expect in practice for center-based clustering. Generally, two properties can determine the risk decrease. First, the clusters may be well separated [4, 29]. Indeed, making assumptions to this end, there is also some theoretical evidence that a rate of \(O(k/n)\) is possible [5, 53]. The other, somewhat related explanation is that if the ground truth consists of \(k^{\prime}<k\) clusters [13, 64], the dependency on \(k\) will point more towards the smaller, true number of clusters. We run the experiments both for center-based clustering and for subspace clustering. While the focus of the paper is arguably more on subspace clustering, the experiments are important in both cases. Although both problems are hard to optimize exactly, center-based clustering is significantly more tractable and thus may lend better insight into practical learning rates. For example, we have an abundance of approximation algorithms for \((k,z)\) clustering [6, 62], whereas even in the case of \((k,1,z)\) clustering in two dimensions [49] it is not possible to find any finite approximation in polynomial time.
In the main body, we focus on \((k,1,z)\) clustering, as there already exists a phase transition in terms of computational complexity between the normal \(k\)-median and \(k\)-means problems and the \((k,1,1)\) and \((k,1,2)\) clustering objectives, while \(j=1\) still admits more positive results than other subspace clustering problems [3, 41, 42].
**Datasets.** We use four publicly available real-world datasets: Mushroom [71], Skin-Nonskin [12], MNIST [51], and Covtype [15]. Below we show the results on the Covtype dataset, and the remaining experiments are deferred to the appendix. Each dataset was normalized by the diameter, ensuring that all points lie in \(B_{2}^{d}\).
Mushroom comprises 112 categorical features describing the appearance of mushrooms, with class labels corresponding to poisonous or edible. MNIST contains 28x28-pixel images of handwritten digits. Skin_Nonskin consists of RGB values given as 3 numerical features used to predict whether a pixel is skin or not. Lastly, Covtype consists of a mix of categorical and numerical features used to predict seven different cover types of forests. In the main body, we focus on Covtype because of its large number of points.
**Problem parameters and algorithms.** For both center-based clustering as well as subspace clustering, we focus on the powers \(z\in\{1,2,3,4\}\). \(z=2\) is arguably the most popular and also the most tractable variant. \(z=1\) is the objective with the least susceptibility to outliers. Finally, we consider the case \(z=3\), due to it minimizing asymmetry, and \(z=4\) as a tractable alternative to the coverage objective \(z\rightarrow\infty\). The excess risk is evaluated for \(k\in\{10,20,30,50\}\) for both center-based and subspace clustering. Expectation maximization (EM) type algorithms are used for both center-based and subspace clustering, though this is a severe computational challenge for \((1,j,z)\) clustering if \(z\neq 2\), see [24, 36]. Given a solution \(\mathcal{S}\), we first assign every point to its closest center and subsequently recompute the center.
**Center-based clustering.** For each experiment, we use an expectation maximization (EM) type algorithm. Given a solution \(\mathcal{S}\), we first assign every point to its closest center and subsequently, we recompute the center. For the case \(z=2\), we do this analytically, and in this case the EM algorithm is more commonly known as Lloyd's method [60]. For the cases \(z\in\{1,3,4\}\), the new center is obtained via gradient descent. The initial centers are chosen via \(D^{z}\) sampling, i.e. sampling centers
\begin{table}
\begin{tabular}{l l l l} \hline \hline Dataset & Points & Dim & Labels \\ \hline Mushrooms & 8,124 & 112 & 2 \\ MNIST & 60,000 & 784 & 10 \\ Skin\_Nonskin & 245,057 & 3 & 2 \\ Covtype & 581,012 & 54 & 7 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Datasets used for the experiments
proportionate to the \(z\)th power of the distance between a point and its closest center (for \(z=2\) this is the \(k\)-means++ algorithm by [6]).
We wrote all of the code using Python 3 and utilized the PyTorch library for the implementations using gradient descent. Specifically, we employed the AdamW optimizer to recompute the centers, with the learning rate set to \(0.01\). All experiments were conducted on a machine equipped with a single NVIDIA RTX 2080 GPU.
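As an illustration, the following is a compact numpy sketch of this procedure: \(D^{z}\) seeding followed by EM, with a single plain gradient step per center in place of the AdamW updates used in the paper; the function names are our own.

```python
import numpy as np

def dz_seeding(P, k, z, rng):
    """D^z seeding: each next center is sampled proportional to dist^z
    (for z = 2 this is exactly k-means++)."""
    centers = [P[rng.integers(len(P))]]
    for _ in range(k - 1):
        d = np.min([np.linalg.norm(P - c, axis=1) for c in centers], axis=0)
        prob = d ** z
        centers.append(P[rng.choice(len(P), p=prob / prob.sum())])
    return np.array(centers, dtype=float)

def em_kz(P, k, z=2, iters=50, lr=0.05, seed=0):
    """EM for (k, z)-clustering: exact mean update for z = 2, otherwise a
    plain gradient step on each center (the paper uses AdamW instead)."""
    rng = np.random.default_rng(seed)
    C = dz_seeding(P, k, z, rng)
    for _ in range(iters):
        # E step: assign every point to its closest center
        assign = np.argmin(np.linalg.norm(P[:, None] - C[None], axis=2), axis=1)
        # M step: recompute each center
        for i in range(k):
            Q = P[assign == i]
            if len(Q) == 0:
                continue
            if z == 2:
                C[i] = Q.mean(axis=0)
            else:
                diff = C[i] - Q                        # grad of sum ||q - c||^z
                dist = np.linalg.norm(diff, axis=1, keepdims=True) + 1e-12
                C[i] -= lr * (z * dist ** (z - 2) * diff).sum(axis=0)
    return C, assign
```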
**Subspace clustering.** For subspace clustering, we consider \(j\in\{1,2,5\}\) to demonstrate the effects of the subspace dimension on the convergence rate, taking computational expense into consideration. Since there are no known tractable algorithms for these problems with guarantees, we initialize a solution \(\mathcal{U}=\{U_{1},\ldots,U_{k}\}\) by sampling \(k\) orthogonal matrices of rank \(j\), where the subspace for each matrix is determined via the volume sampling technique [35]. Subsequently, we run the EM algorithm. As before, the expectation step consists of finding the closest subspace for every point. For \(z=2\), the maximization step consists of finding the \(j\) principal component vectors of the data matrix induced by each cluster, as shown in the sketch below. For the other values of \(z\), it is NP-hard to even approximate the maximization step [24], so we use gradient descent to find a local optimum. Since Skin_nonskin only has 3 features, we only evaluate the excess risk for \(j\in\{1,2\}\). Due to a large computational dependency on the dimension, we do not evaluate subspaces on the MNIST dataset.
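For \(z=2\), the maximization step has a closed form, sketched below: the optimal rank-\(j\) subspace for a cluster is spanned by the top-\(j\) right singular vectors of its (uncentered) data matrix; helper names are our own.

```python
import numpy as np

def best_subspace(Q, j):
    """Maximization step for (k, j, 2)-clustering on one cluster: the optimal
    rank-j subspace is spanned by the top-j right singular vectors of the
    (uncentered) cluster matrix Q."""
    _, _, Vt = np.linalg.svd(Q, full_matrices=False)
    return Vt[:j]                      # rows form an orthonormal basis U

def residual_cost(Q, U, z=2):
    """Cluster cost sum_q ||(I - U^T U) q||^z for U with orthonormal rows."""
    R = Q - (Q @ U.T) @ U
    return np.sum(np.linalg.norm(R, axis=1) ** z)
```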
**Experimental setup and results.** To estimate the optimal cost \(OPT\) for the two objective functions, we run the corresponding algorithms mentioned above ten times on the entire dataset \(P\) and use the minimal objective value as an estimate for \(OPT\). We obtain a sample \(S_{i}\) of size \(n\) by sampling uniformly at random and estimate the optimal cost for that sample, \(OPT_{i}\). We repeat this 5 times. The empirical excess risk is calculated as \(\mathcal{E}_{n}=\frac{1}{|P|}\sum_{i=1}^{5}\frac{\text{cost}(P,OPT_{i})}{5}-OPT\). The excess risk for center-based clustering is evaluated on exponentially spaced subset sizes \(n\in\{2^{6},2^{7},\ldots,2^{12}\}\). We fit a curve of the form \(c\cdot\frac{k^{q_{1}}}{n^{q_{2}}}\), where \(c,q_{1},q_{2}\) are the optimizable parameters. Let \(y_{i}\) be the excess risk in run \(i\), let \(k_{i}\) and \(n_{i}\) be the values of \(k\) and \(n\) in run \(i\), and let \(r\) be the total number of times the
Figure 1: Excess risk for line clustering on Covtype. Shaded areas show max-min intervals.
excess risk was evaluated for each combination of algorithm and dataset. We use gradient descent on the following loss to optimize the parameters: \(LSE=\sum_{i=1}^{r}\left(y_{i}-c\cdot\frac{k_{i}^{q_{1}}}{n_{i}^{q_{2}}}\right)^{2}.\)
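A sketch of this fit is shown below. For simplicity, it solves the equivalent least-squares problem in log space instead of running gradient descent on \(LSE\); this is our own simplification and requires \(y_{i}>0\).

```python
import numpy as np

def fit_power_law(y, k, n):
    """Fit y ~ c * k^q1 / n^q2 by least squares in log space (requires y > 0)."""
    X = np.column_stack([np.ones(len(y)), np.log(k), -np.log(n)])
    coef, *_ = np.linalg.lstsq(X, np.log(y), rcond=None)
    return np.exp(coef[0]), coef[1], coef[2]          # c, q1, q2

# Synthetic excess-risk values following sqrt(k/n); the fit recovers q1 ~ q2 ~ 0.5.
rng = np.random.default_rng(0)
k = rng.choice([10.0, 20.0, 30.0, 50.0], size=40)
n = rng.choice(2.0 ** np.arange(6, 13), size=40)
y = np.sqrt(k / n) * rng.lognormal(0.0, 0.05, size=40)
print(fit_power_law(y, k, n))
```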
The results in Figure 1 show that the excess risk for subspace clustering decreases quicker for higher values of \(z\), and we see a similar pattern for center-based clustering. The appendix contains more plots on the empirical evaluations of center-based clustering. The best-fit lines shown in Tables 2 and 3 in the appendix indicate that the empirical excess risk values decrease slightly quicker than predicted by theory. The expected values are \(q_{1}=q_{2}=0.5\), and we observe \(q_{1},q_{2}\) around \(0.44,0.52\), respectively. For \(k\), this indicates a slightly favorable dependency in practice. For \(q_{2}\), we consider the difference to the theoretical bound of \(0.5\) negligible. The choice of \(z\) does not seem to have a significant impact on either finding. For subspace clustering, the dependency on \(k\) is a bit more pronounced and increases slightly towards the theoretical guarantees. Contrary to hopes that margin or stability conditions might occur on practical datasets, the results indicate that the theoretical guarantees of the learning rate are near-optimal even in practice. Moreover, the rates were not particularly affected by either the choice of \(z\) or by the dimension \(j\) when analyzing subspace clustering.
## 7 Conclusion and open problems
In this paper, we presented several new generalization bounds for clustering objectives such as \(k\)-median and subspace clustering. When the centers are points or constant dimensional subspaces, our upper bounds are optimal up to logarithmic terms. For projective clustering, we give a lower bound showing that the results obtained by [39] are nearly optimal. A key novel technique was using an ensemble of dimension reduction methods with very strong guarantees.
An immediate open question is to which degree ensembles of dimension reductions can improve learning rates over a single dimension reduction. Is it possible to find natural problems where there is a separation between the embeddability and the learnability of a class of problems, or, given the ensemble, is it always possible to find a single dimension reduction with the guarantees of the ensemble? Another open question is motivated by the recent treatment of clustering through the lens of computational social choice [19]. Using current techniques from coresets [17] and learning theory [45], it seems difficult to improve over the learning rate of \(O\left(\sqrt{k^{2}/n}\right)\) for the fair clustering problem specifically. Is it possible to match the bounds for unconstrained clustering?
## 8 Disclosure of Funding Acknowledgements
Maria Sofia Bucarelli was partially supported by projects FAIR (PE0000013) and SERICS (PE00000014) under the MUR National Recovery and Resilience Plan funded by the European Union - NextGenerationEU. Supported also by the ERC Advanced Grant 788893 AMDROMA, EC H2020RIA project "SoBigData++" (871042), PNRR MUR project IR0000013-SoBigData.it.
Chris Schwiegelshohn was supported by the Independent Research Fund Denmark (DFF) under a Sapere Aude Research Leader grant No 1051-00106B.
|
2302.09336 | Pulse in collapse: a game dynamics experiment | The collapse process is a constitutional sub-process in the full finding Nash
equilibrium process. We conducted laboratory game experiments with human
subjects to study this process. We observed significant pulse signals in the
collapse process. The observations from the data support the completeness and
the consistency of the game dynamics paradigm. | Wang Yijia, Wang Zhijian | 2023-02-18T14:04:53Z | http://arxiv.org/abs/2302.09336v1 | # Pulse in collapse: a game dynamics experiment
## Abstract
The collapse process is a constituent sub-process of the full Nash-equilibrium-finding process. We conducted laboratory game experiments with human subjects to study this process. We observed significant pulse signals in the collapse process. The observations from the data support the completeness and the consistency of the game dynamics paradigm.
###### Contents
* 1 Introduction
* 1.1 Collapse
* 1.2 Research questions
* 1.3 Outline
* 2 Game and the prediction
* 2.1 Game design
* 2.2 Classical game theory predictions
* 2.3 Game dynamics theory prediction
* 2.4 Game experiment
* 3 Results
* 3.1 Distribution
* 3.2 Eigencycle
* 3.3 Collapse
* 3.4 Model comparison
* 4 Discussion and conclusions
* 4.1 Implications
* 4.2 Related works
* 4.3 Conclusion
* A Abbreviations
* A.2 Distribution
* A.3 Cycle
* A.3.1 Verify the statics
* A.3.2 Verification of the dynamics theory
* A.4 Collapse
* A.4.1 Measurement definition
* A.4.2 Data
* A.5 The game dynamics paradigm and its workflow
## 1 Introduction
Game equilibrium finding is a process [18, 25, 8, 13]. This process is also referred to as evolution, an adjustment process, deviation from equilibrium, or convergence in game theory and experiments. It is a non-linear, spontaneous evolution process driven by game participants' strategic interactions.
In the study of game theory, terms such as adjustment dynamics [18], learning theory [12], population game dynamics, evolutionary game theory [10, 26], and market dynamics [23] are related to this process. These terms are included in the game dynamics paradigm.
As a scientific paradigm that moves beyond equilibrium theory (classical game theory, or game statics theory), the game dynamics paradigm is expected to be able to describe the regularity of the game dynamic process in terms of completeness and consistency, as well as reality and accuracy [25, 8, 13].
Instead of using statics theory to study the predictable equilibrium (stable statistical relationships among strategies), game dynamics theory studies the predictable motion (temporary deviations from equilibrium). For example, in the rock-paper-scissors game [33, 6, 39, 38], the persistent cycle is a stylized motion pattern of _predictable temporary deviation from equilibrium_ (PTDE). Such a pattern can be exploited [11]. In real life, most high-frequency trading strategies are not fraudulent, but instead exploit the PTDE [36].
Following this paradigm, we investigate an unignorable sub-process of the full Nash-equilibrium-finding process. In this section, we introduce this sub-process, namely the collapse process; we then introduce the research questions and offer a summary of the contents of this paper.
### Collapse
A real game system can involve a large number of strategies, most of which would be dominated **during** the process of finding Nash equilibrium [3, 22]. As illustrated in Figure 0(a), the full equilibrium finding process can be classified into three stages:
1. In the initial stage, having no information about the game, agents randomly choose a strategy vector from the strategy space in search of optimal payoffs. During this period, the game's social evolution trajectory is highly stochastic, smoothly distributed throughout the full game space.
2. Later, in the period of infancy, each agent may start to identify their own unprofitable strategies by learning from strategy interactions. As a result, the dimensions of the game strategy space may collapse. In theory, 'a dominated strategy can dominate a domination before being dominated', meaning that a phenomenon known as a pulse could exist. A conceptual rendering of a pulse in the collapse process is shown in Figure 0(b).
3. Finally, when there is no dominant strategy, the evolution trajectory will converge to a fixed pattern (a persistent cycle or a fixed equilibrium point) of the game. When rendered as an image, the pattern resembles a condensed structure in astrophysics.
The collapse appears in the second stage. The term 'collapse', introduced to game dynamics here, is borrowed from the concept of 'gravitational collapse' in astrophysics [35], which is a fundamental mechanism and a legitimate constituent part of the formation of structures in the universe. This study benefits from the physical pictures in astrophysics.
### Research questions
We investigate the collapse process by asking the following two questions:
Figure 1: Conceptual figure. (a) The three periods in the full Nash-equilibrium-finding process, moving from random to collapse to fixed pattern. This study focuses on the collapse process. (b) An illustration of a pulse in a collapse. (c) The workflow of this report: The Game, **T**heory (**S**tatic and **D**ynamic) vs. **E**xperiment by **O**bservations (1-distribution; 2-cycle; 3-collapse). For the abbreviations, see Table A1.
* In controlled laboratory game experiments with human subjects, can significant new observations be obtained during a collapse process?
* Can the existing game theory paradigm consistently and completely capture the collapse process?
Answering these questions is not trivial, for the following reasons:
* From a scientific perspective, without evidence from the collapse process, our knowledge of the full equilibrium-finding process is obviously incomplete.
* Within a scientific paradigm, without conducting experiments on collapses, there is no basis for the reality of the theory [16, 15, 21].
These issues relate to reality and accuracy, as well as to the completeness and consistency of the game dynamics paradigm, and are therefore not trivial.
### Outline
To answer our research questions, following the experimental game theory paradigm [23, 4, 6, 39, 29, 2], we carried out the workflow shown in Figure 0(c):
* We designed a parameterised three-card poker game for use in a laboratory study of the statics theory and the dynamics theory (see section 2.1).
* We derived the predictions for three variables (distribution, cycle, and pulse) by the statics theory and the dynamics theory (see sections 2.2 and 2.3).
* We conducted laboratory experiments comprising 3 treatments with 72 human subjects, fixed pairing, and 1000 repeated games (see section 2.4).
* We outlined the experimental observations and their theoretical verification for the three variables (distribution, cycle, and pulse in sections 3.1, 3.2 and 3.3, respectively). Additionally, we report the results of comparing the models (see section 3.4).
In the discussion section, we offer a summary, discuss the related literature, address the implications and applications, and introduce questions for future research.
## 2 Game and the prediction
This section explains the design of the game and the experiment, as well as the predictions made using the statics theory and the dynamics theory. A summary is shown in Table 1.
### Game design
The original game is a simplified 3-card poker game. The game was introduced by Binmore, by replacing Von Neumann's numerical cards with a deck containing only the cards 1, 2, and 3 (page 93 in [3]). This game is a dynamic game with incomplete information. We generalised this game to study the dynamics process.
The extensive form of the generalised \((m,n)\) game is shown in Figure 2. When \((m,n)\) = (2,1), the game is exactly the same as the original game (page 93 [3]). Following [3], the normal form could be represented as the \(8\times 8\) bi-matrix shown in Figure 1(b). That is, each player (X or Y) has eight strategies. This study mainly makes use of the bi-matrix form.
The parameters are assigned as \((m,n)\) = (2, 1), (3, 2) and (4, 2), which are referred to as treatment A, B, and C, respectively. Details on the game design and the motivations are provided in SI - Game.
### Classical game theory predictions
Method: The concepts of static equilibrium and the iterated elimination of dominated strategies (IEDS) are useful when describing the full Nash-equilibrium-finding process, according to the classical game theory paradigm. For details of the method, see SI - Statics.
* For the three treatments, to solve the Nash equilibrium of the three bi-matrix zero-sum two-person games, we can use the quadratic programming method [20]; for a zero-sum game, the standard linear program sketched after this list recovers an equivalent equilibrium strategy.
* For the three treatments, the order of elimination of dominated strategies, as well as the surviving strategies, can be obtained by the IEDS method.
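A minimal sketch of this computation, assuming SciPy is available: for a zero-sum game, the row player's maximin strategy — which is a Nash equilibrium strategy — solves the textbook linear program below. This is the standard LP, not necessarily the exact quadratic-programming formulation of [20].

```python
import numpy as np
from scipy.optimize import linprog

def zero_sum_equilibrium(A):
    """Equilibrium mixed strategy of the row player X in a zero-sum game.

    A: X's payoff matrix, shape (m, n). Maximize v subject to (x^T A)_j >= v
    for every column j, sum(x) = 1, x >= 0. Returns (x, game value v).
    """
    m, n = A.shape
    c = np.zeros(m + 1); c[-1] = -1.0                 # variables [x, v]; minimize -v
    A_ub = np.hstack([-A.T, np.ones((n, 1))])         # v - (x^T A)_j <= 0
    b_ub = np.zeros(n)
    A_eq = np.hstack([np.ones((1, m)), np.zeros((1, 1))])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * m + [(None, None)])
    return res.x[:m], res.x[-1]
```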
Predictions: The predictions and their verification methods are as follows:
1. Distribution. The equilibrium distribution results are shown in figure 1(a). Following the methods of conventional experimental research [4, 28, 6], the predictions can be verified using the data, denoted as \(\rho^{E}\) and shown in Table A2.
2. Cycle. IEDS provides the surviving strategy for a game, which is shown in Figure 3 and in the row titled 'Surviving Strategy' in Table 1. According to best-response analysis, the cycle's existence and relative strength in each treatment (A, B, and C) can be predicted as (Yes, No, Yes), referring to Figure 3. These points can be verified using the experimental cycle (\(\vartheta^{E}\)) in section 3.2.
3. Collapse. IEDS provides the elimination order of a game. The results are shown in the row 'IEDS round' in Table 1. Mathematical analysis of the best responses shows that, among the dominated strategies, the strategy that is eliminated last could provide the pulse signal during the collapse. These points can be verified by the pulse (\(\psi^{E}\)) and the crossover points (\(\chi^{E}\)) using the experimental observations in section 3.3.
Figure 2: The parameterized (m,n) game. (a) The extensive form [3]. (b,c,d) The normal form of the game (the matrix elements are multiplied by 6).
In addition, these three predictions can be compared with those from dynamics theory. The results are reported in section 3.4.
### Game dynamics theory prediction
The game dynamics paradigm provides a set of concepts and methods for the game evolution process [10, 8, 30, 17, 33]; this study adheres to this paradigm.
Method: We utilize the logit dynamics equation system [26]. At the same time, referring to [6, 39, 17, 10], this study makes use of the noise parameter (\(\lambda\) = 50) and the adjustment time step (\(\triangle t\) = 0.02). Then, with random initial conditions, we generate time series of 1000 rounds, as long as the 1000-round experiment conducted in this research. Using the generated time series, we calculate the predictions for the observations (distribution, cycle, and collapse) simultaneously.
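A minimal sketch of such a simulation, assuming a simple Euler discretisation: the state relaxes toward the logit (quantal) response at rate \(\triangle t\). The paper's exact equation system follows [26], so the precise functional form here is our own assumption.

```python
import numpy as np

def logit_dynamics(A, lam=50.0, dt=0.02, rounds=1000, seed=0):
    """Euler-discretised logit dynamics for a zero-sum bimatrix game (sketch).

    A: X's payoff matrix (m, n); Y's payoffs are -A. Returns the trajectories
    of the two mixed strategies over `rounds` steps from random initial points.
    """
    rng = np.random.default_rng(seed)
    m, n = A.shape
    x, y = rng.dirichlet(np.ones(m)), rng.dirichlet(np.ones(n))
    xs, ys = [x], [y]
    for _ in range(rounds):
        ux, uy = A @ y, -A.T @ x                            # expected payoffs
        lx = np.exp(lam * (ux - ux.max())); lx /= lx.sum()  # logit responses
        ly = np.exp(lam * (uy - uy.max())); ly /= ly.sum()
        x = x + dt * (lx - x)                               # relaxation toward
        y = y + dt * (ly - y)                               # the logit response
        xs.append(x); ys.append(y)
    return np.array(xs), np.array(ys)
```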
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline \hline \multirow{2}{*}{ID} & Treatment ID & A & B & C \\ \cline{2-6} & Game-\((m,n)\) & (2,1) & \multicolumn{2}{c|}{(3,2)} & \multicolumn{2}{c|}{(4,2)} \\ \cline{2-6} & Player ID & X & Y & X & Y & X & Y \\ \hline \multirow{4}{*}{Equilibrium Theory} & IEDS Round 1 & 1,3,5,7 & \(\alpha^{*}\) & 3 & \(\alpha\) & 3 & \(\alpha\) \\ & IEDS Round 2 & 4,8 & - & 2,4,6,7,8 & - & 4,6,7,8 & - \\ & IEDS Round 3 & & & - & 2 & \\ & IEDS Round 4 & & & 5 & - & \\ \cline{2-6} & Survive strategy & 2,6 & 2,4 & 1 & 4 & 1,2,5 & 2,4 \\ \hline \multirow{4}{*}{Experimentment Protocol} & Session number & 12 & 12 & & 12 \\ \cline{2-6} & Player in session & 2 & 2 & & 2 \\ \cline{2-6} & Repeated round & 1000 & 1000 & & 1000 \\ \cline{1-1} \cline{2-6} & Matching & Fix paired & Fix paired & Fix paired \\ \hline \multicolumn{6}{l}{\({}^{*}\alpha\)=1,3,5,6,7,8 means the strategies are eliminated simultaneously.} \\ \end{tabular}
\end{table}
Table 1: Equilibrium theory predictions and experiment protocol.
Figure 3: The surviving strategy of the (A, B, C) game. The expected fixed patterns of the (A, B, C) game are a fixed orbital cycle, pure Nash equilibrium (a fixed point or signal star), and a mixed pattern, respectively.
It is worth noting that, based on the experience of previous studies [6, 39], we set the two parameters \((\lambda,\triangle t)\) simply by scanning the two-dimensional parameter space, aiming for \(\rho^{T}_{\mathbf{Dyn}(\lambda,\triangle t)}\) to be approximately close to \(\rho^{T}_{\mathbf{QP}}\). We do not fit the two parameters to the experimental data.
Predictions: Following these methods, the results are predictions that can be experimentally verified. These results are as follows:
* Distribution. The distribution of the treatments is shown in Figure 3(a) and Table A2 as \(\rho^{T}_{\text{Dyn}}\), which is verifiable by the experiment.
* Cycle. The eigencycle spectra of the treatments are shown in Figure 3(b) as \(\vartheta^{T}\). For further details about eigencycle spectra see Appendix A.3. The refined predictions are that, for treatments (A,B,C), there exist (strong, no, weak) cycles, respectively.
* Collapse. The theoretical accumulated curves (\(\varrho^{T}\)) are shown in Figure 3(c)-3(h). The numerical results for the crossover points are shown in Table A8. The statistically significant theoretical pulse signals are shown in Table A6. These predictions can be applied as criteria to evaluate the consistency between the theory and the experiment. The crossover points (\(\chi_{-}\)) with crossover time \(\tau>100\) are labelled with red arrows. According to the numerical results shown in Table A8, there are (4,0,2) \(\chi_{-}\) red arrows in treatments (A,B,C).
### Game experiment
The experiment was performed at Zhejiang University between August and September 2018. A total of 72 undergraduate students from Zhejiang University volunteered to serve as the human subjects of this experiment.
The main parameters and protocol are shown in the 'Experiment' rows in Table 1. There were three treatments, and each was assigned 24 human subjects. The 24 subjects were divided into 12 sessions of 2 participants. The two players played 1000 repeated rounds of the game in this fixed pairing. The reasoning behind the protocol design is explained in the Appendix (Experiment: Section 3.3.1, Design consideration).
In a session, a player can see her/his own side's strategy IDs, \((X_{1},X_{2},...,X_{8})\) for player \(X\) or \((Y_{1},Y_{2},...,Y_{8})\) for player \(Y\). However, a player can only observe her/his own realised payoffs and the strategies used by her/his opponent. We did not provide the subjects with the payoff matrix, and no discussion was allowed.
The experimental sessions lasted for an average of two hours. The payment of a subject, including CNY 20 for attending the session, as well as a payment based on the player's rank, averaged CNY 120. The rank fee was determined by the rank of a participant's total earning score in the 1000-round game, based on a comparison of players in the same role (player X or player Y) in the same experimental session. For details on the design, procedures and data, see SI--Experiment.
## 3 Results
In this section, we report the results of the experiment and the theoretical verification of the observations (distribution, cycle, and pulse), as well as the results of the model comparison.
### Distribution
In order to illustrate the completeness and consistency, we report the distribution in brief. For the method and details relating to the data, see Appendix--A.3.
Measurement. The strategy distribution and its convergence are reported. The convergence is measured according to the Euclidean distance \(\delta\) of the experimental distribution \(\rho^{E}\) with respect to two theoretical predictions: (1) the static (Nash) equilibrium \(\rho_{\mathrm{S}}^{T}\), and (2) the time series produced by the dynamics model \(\rho_{\mathrm{D}}^{T}\).
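A minimal sketch of this measurement, assuming strategies are coded 0-7 and the time blocks follow Table A2; the function name is our own.

```python
import numpy as np

def block_distances(choices, rho_S, rho_D, blocks=((0, 200), (800, 1000))):
    """Euclidean distance delta of the empirical distribution to the static
    prediction rho_S and the dynamics prediction rho_D, per time block.

    choices: (rounds,) integer strategy ids in {0, ..., 7}.
    """
    out = []
    for t0, t1 in blocks:
        counts = np.bincount(choices[t0:t1], minlength=8)
        rho_E = counts / counts.sum()
        out.append((np.linalg.norm(rho_E - rho_S),
                    np.linalg.norm(rho_E - rho_D)))
    return out
```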
Results. For the experimental distribution (\(\rho^{E}\)) and its evolution, which is determined by the distribution values at various time intervals, as shown in Table A2, the main results are as follows:
1. As expected, for all (A,B,C) treatments, over time, the observed distribution (\(\rho^{E}\)) moves closer to the static equilibrium (\(\rho_{\mathrm{S}}^{T}\)) and to the dynamics prediction (\(\rho_{\mathrm{D}}^{T}\)). This is because the Euclidean distances decrease over time (\(\triangle\delta_{\mathrm{S}}<0\) and \(\triangle\delta_{\mathrm{D}}<0\)) for all of the treatments, as shown in Table A3.
2. In all (A,B,C) treatments, the dynamics prediction \(\rho_{\mathrm{D}}^{T}\) performs better than the statics prediction \(\rho_{\mathrm{S}}^{T}\). This finding is supported by comparing the Euclidean distance during the last rounds, \(\triangle\delta<0\), for all of the treatments, as shown in Table A3.
To summarize, for the distributions, the experimental observations are consistent with the theoretical predictions and with the existing literature [4, 23, 6, 5, 28, 17]. In other words, the full process shown in Figure 0(a) can be observed in our 1000 rounds of data.
### Eigencycle
In order to illustrate the completeness and consistency, we report the cycle in brief.
Measurement. Referring to [34, 41, 40], we used the eigencycle spectrum \(\vartheta^{E}\) to present the cycles in the experiments. We measured the eigencycle value of the 120 subspaces of the game state space for each treatment, and then normalised \(\vartheta^{E}_{A},\vartheta^{E}_{B},\vartheta^{E}_{C}\) for treatments (A,B,C) to attain the maximum \(|\vartheta^{E}|=1\). For
more details on the measurements and the methods used for data presentation, see SI - Cycle.
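As an illustration only: a common cycle measure in this literature is the signed area accumulated by the trajectory in a two-strategy subspace. The sketch below computes that quantity; the exact eigencycle definition follows [40, 41] and may differ, e.g., in normalisation.

```python
import numpy as np

def accumulated_area(xs, ys):
    """Accumulated signed area swept by a trajectory in one two-strategy
    subspace; a positive (negative) value indicates a counter-clockwise
    (clockwise) cycle in that subspace.

    xs, ys: time series of the two coordinates, e.g. frequencies of X2 and Y2.
    """
    dx, dy = np.diff(xs), np.diff(ys)
    return 0.5 * np.sum(xs[:-1] * dy - ys[:-1] * dx)   # shoelace increments
```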
Results. The experimental eigencycle spectrum \(\vartheta^{E}\) is shown in Figure 4(b). The main results are as follows:
1. Observations on the (A,B,C) treatments, \(\vartheta^{E}_{A},\vartheta^{E}_{B},\vartheta^{E}_{C}\) are consistent with the predictions made based on dynamics theory, \(\vartheta^{T}_{A},\vartheta^{T}_{B},\vartheta^{T}_{C}\), respectively, as shown in Figure 3(b) and Figure 4(b). This is because the linear regression between \(\vartheta^{E}\) and \(\vartheta^{T}\) has \(p<0.000\) for each treatment.
2. Our observations were also consistent with the predictions made based on the statics theory. According to the eigencycle spectra for the three treatments, the cycle \((X_{2},X_{6})\times(Y_{2},Y_{4})\) in treatment A is the strongest. This is consistent with the result of the best-response analysis, as shown in Figure 3. These findings are supported by the numerical analysis, as shown in Appendix A.3.1.
In summary, the cycles observed in the experiment are consistent with the theoretical predictions. The consistency of the results is in keeping with the existing literature on cycles [34, 6, 31, 40, 41].
### Collapse
The first research question requires us to identify the pulse in the collapse. To evaluate the consistency is to validate the dynamics paradigm during the collapse. Here, we offer the main results obtained regarding the collapse.
Measurement. For the collapse, there are three relevant measurements - the accumulated curve (\(\varrho\)), the pulse signal (\(\psi\)) and the crossover points (\(\chi\)). For the definitions of the measurements, see Appendix--A.4.1.
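A minimal sketch of the first and third measurements, under the assumption that \(\varrho\) is the cumulative usage count of each strategy and that a crossover is an order change between two such curves; the formal definitions are in Appendix A.4.1.

```python
import numpy as np

def accumulated_curves(choices, n_strategies=8):
    """Accumulated usage curve for every strategy of one player."""
    T = len(choices)
    onehot = np.zeros((T, n_strategies))
    onehot[np.arange(T), choices] = 1.0
    return onehot.cumsum(axis=0)                 # shape (T, n_strategies)

def last_crossover(curve_i, curve_j):
    """Crossover time tau: the last round at which the accumulated curves of
    strategies i and j change order; returns None if they never cross."""
    sign = np.sign(curve_i - curve_j)
    flips = np.where(np.diff(sign) != 0)[0]
    return int(flips[-1]) + 1 if len(flips) else None
```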
Results and explanation. The main experimental results are as follows:
1. **Accumulated curves (\(\varrho^{E}\)):** The experimental accumulated curves \(\varrho^{E}_{\text{j-i}}\), where \(i\in\) (A,B,C) indicates the treatment and \(j\in\) (X,Y) the player, cover six conditions. These are shown in Figure 4(c) - 4(h), and can be individually compared to the dynamics predictions \(\varrho^{T}_{\text{j-i}}\), as shown in Figure 3(c) - 3(h). The results of the visual comparison support the consistency between the dynamics theory and the experimental results.
2. **Crossover point (\(\chi^{E}\)):** Following the dynamics prediction \(\chi^{T}\) in Table A8, in the accumulated curve (\(\varrho^{E}\)), the \(\chi^{E}_{-}\) are labelled using red arrows between \(\mathbf{D}^{-}/\mathbf{D}^{+}\), while the \(\chi^{E}_{+}\) are labelled using blue arrows between \(\mathbf{D}^{+}/\mathbf{D}^{+}\). These are shown in figure 4(c)-4(h). The consistency between theory and experiment is visible.
Figure 4: Theoretical results of treatments A, B, and C. (a) Distributions \(\rho^{T}\) from the two models; (b) eigencycle spectrum \(\vartheta^{T}\); (c-h) accumulated curve (\(\varrho^{T}\)) of the player (X or Y). The crossover points (\(\chi\)) are labelled with arrows referring to the data in Table A8.
Figure 5: Experiment results of treatments A, B, and C. (a) Time-averaged distribution for the 1-200 period and the 800-1000 period; (b) eigencycle spectrum \(\vartheta^{E}\); (c-h) accumulated curve (\(\varrho^{E}\)) of the (X,Y) player over time. The crossover points (\(\chi\)) are labelled with arrows referring to the data in Table A7.
* The numerical results in Table A7 and Table A8 support the consistency between the theory and the experiment. For the dynamics predictions shown in Table A8, if \(\tau^{T}\geq 89\) of \(\psi([Y_{2},Y_{4}],\tau)\) is taken as the benchmark, there are nine predicted samples. The experiment showed that, matching the nine that were theoretically predicted, all eight of the \(\chi^{E}\)-labelled arrows in the experiment have the largest \(\tau\) values for the related role (X,Y) and treatment (A,B,C).
* The crossover point is a consequence of the pulse in the collapse. Referring to the relation between the pulse and the crossover time shown in section A.4.1, the observed crossover points are used to support the existence of pulses in [EO3.3.6] and [EO3.3.7].
3. **Pulse signals (\(\psi^{E}\)):** A pulse is a distinct observation made during the collapse. For the definition of a pulse, see Equation A13 in Appendix A.4.1. By examining the data, we observed significant pulses. The main results regarding the pulses are as follows:
* Existence of the pulse:
* There are a total of nine significant pulses (\(p<0.05\)), as shown in Table A5.
* Among these nine pulses, three pulses are of the strongest statistical significance (\(p<0.010\)), as shown in Table 2.
* The pattern of the pulses' existence is consistent with the theoretical pattern:
* The number of pulses in the treatments does not deviate from the expectation of dynamics theory (see Table A6). More pulses are observed in treatment A than in B, and in B than in C.
* All three of the pulses with the strongest significance (\(p<0.010\)) are included in the category of the most expected pulses, according to dynamics theory (for details, see Table A6).
* All four pulses with the strongest significance (\(p\leq 0.011\)) have the maximum surplus \(\psi\) value among the nine pulse signals shown in Table A5.
* The pattern of the pulses' existence is consistent with the pattern of the crossover points.
* Of the nine significant pulses (\(p<0.05\)), six are from treatment A and three are from C. More pulses are observed in treatment A than in C, as shown in Table A5. This is consistent with the \(\chi^{E}\), in that more significant crossover points are observed in treatment A than in C, according to the \(\chi^{E}_{-}\) with the red arrow in Table A7.
* As predicted in Table A8, there are (4,0,2) red arrows in treatment (A,B,C), respectively. This is supported by the number of pulses and their significance in the experiment, as shown in Table A5.
* On the other hand, there is no significant pulse signal (\(p<0.05\)) from the Y player in treatment A, B, or C. This is consistent with the dynamics predictions for pulses, as shown in Table A6.
According to the pulse measurements, all of the significant results for pulses are consistent with the predictions made based on dynamics. No statistically significant evidence against the dynamics model was obtained.
The main experimental result regarding the collapse is that significant pulse signals are observed, which is supported by the observed crossover points. In addition to the existing literature [33, 6, 5, 37, 17, 29, 2], the consistency between the theory and the experiment in the collapse process is now backed by new empirical evidence.
### Model comparison
Comparing the performances of the two theories under consideration is a key component of experimental research. Our main results are as follows:
* In terms of distribution, the dynamics prediction (\(\rho_{D}^{T}\)) outperformed the statics prediction (\(\rho_{S}^{T}\)). This is supported by the data (\(\rho^{E}\)) in section 3.1.
* For the cycle, the dynamics prediction outperformed the statics prediction. This is supported by comparing the results shown in EO2.1 and EO2.2 in section 3.2. Further analysis of the cycle spectra can provide additional evidence to enhance the results; see section A.3.
* For the collapse, dynamics theory can quantitatively predict the pulse (\(\psi\)) as well as the crossover point (\(\chi^{E}\)). The predictions made using IEDS, a component of statics theory (TS3 in section 2.2), offer only a limited explanation of the pulse signals and the crossover points in terms of quantity. As a result, the dynamics model outperforms the statics model.
In summary, game dynamics theory performs better than static equilibrium theory, according to our data. This finding is consistent with the existing literature [6, 5, 17, 37, 33, 29, 2]. For a summary of this section, see the first paragraph of the Discussion.
\begin{table}
\begin{tabular}{c c c c c c c} \hline Treat- & Domin- & Domin- & Time & paired- & Surplus & Sample \\ ments & acted & action & block & ttest (\(p\)) & & size \\ & \(s_{j}\) & \(s_{i}\) & \([t_{0},t_{1}]\) & & \(\psi\) & **N** \\ \hline A & \(X_{8}\) & \(X_{2}\) & 11-20 & 0.007 & 17 & 120 \\ A & \(X_{8}\) & \(X_{6}\) & 11-20 & 0.002 & 19 & 120 \\ A & \(X_{4}\) & \(X_{6}\) & 81-90 & 0.007 & 15 & 120 \\ \hline \end{tabular}
\end{table}
Table 2: All of the observed pulses \(\psi^{E}\) were highly significant (\(p<0.010\)). The symbols are explained in Equation A13 in Appendix A.4.1
## 4 Discussion and conclusions
The main results of this study are the answers to the two key research questions posed above.
1. Regarding the first question, we observed that the pulse signal, which is a distinct observation of the collapse predicted using the dynamic model, is significant.
2. Regarding the second question, in the full process of finding equilibrium, we showed that all of the significant observations were those most clearly predicted by the dynamic model. The completeness and the consistency of the dynamics paradigm are supported by our data.
We posit that these results will assist the development of the game dynamics paradigm.
### Implications
To explain the implications of our results, we can refer to the scientific paradigm of game dynamics theory [8, 9, 25, 18, 10, 26]. This paradigm has a self-reinforcing closed feedback workflow loop (for a brief introduction, see section A.5), which enhances its distinct set of concepts (e.g., natural selection, the evolutionarily stable strategy, and, potentially, the cycle [25, 10, 17, 6, 38, 37, 33, 29, 2]).
Since 1990 [8, 25], it has been expected that this paradigm would establish its own narrative of the game dynamics process, bringing about a paradigm shift with regard to the classical statics (Nash equilibrium) theory. However, the paradigm shift has not come to fruition. Those who are skeptical about using dynamics research to solve social and economic problems mainly focus on the empirical observations [18, 25, 4].
Since 2010, the situation has been improved by empirical observations. The cycle is one example of this [7, 6, 40, 37, 39, 38, 32, 29, 2], and it may become a constituent element in the paradigm's distinct set of concepts. Because a cycle generally exists in mixed-strategy Nash equilibrium behaviours, it is a consequence of the engagement of strategy interactions; moreover, it demonstrates game evolution and is an exploitable PTDE.
We posit that the pulse in the collapse can be developed in the same way as the cycle, for the following reasons:
1. The pulse fills the experimental observation gap concerning the collapse, which is a legitimate constituent part of the full equilibrium-finding process (as shown in Figure 1).
2. The study of the pulse in the collapse process contributes to the completeness and consistency of the game dynamics paradigm.
### Related works
**On the elimination of a dominated strategy.** This study is not unique in noting the behaviour of dominated strategies. For example:
* Theoretically speaking, 'a dominated strategy can dominate a domination before being dominated'; this phenomenon, namely a pulse, is evident (e.g., in Gintis' book [14], Figure 5.3 in section 5.8, the red mark during the infancy stage is the pulse signal). In the full process of finding the Nash equilibrium, whether dominated strategies are eliminated or survive has been extensively studied [16, 15, 21], but research on the pulse phenomenon is rare.
* In experimental terms, research into dominated strategies is not uncommon [24, 30, 27]. However, these reports mainly focus on whether the dominated strategies are eliminated or attain equilibrium; they do not address what can be observed during the collapse.
What distinguishes this study from the existing literature is that it is the first to provide a quantitative observation, in terms of the pulse, of the collapse process. By examining the collapse process, which has not been investigated before, the consistency and completeness of the game dynamics paradigm are demonstrated.
**Further research.** Further research on the collapse and the pulse is needed. Although the existence of the pulse in the collapse has been explored, and the pulse experiments are consistent with the theory, the pulse is far from being considered a legitimate constituent part of the paradigm.
The question of how to select a dynamics model and its parameters when facing a real-life system remains open. In this study, following [1, 6, 39], we utilised logit dynamics and fixed its parameter coarsely, by requiring its distribution to be close to the equilibrium distribution from QR. Although we can capture all of the various observations in the various treatments without changing the parameters, admittedly, this work cannot answer the open question.
The collapse stage in games not only moves beyond the stationary concept of static equilibrium [28, 37], but also departs from the concept of non-equilibrium stationary states (e.g., the cycle) [19, 39, 37]. By developing a better understanding of the collapse, and of how to obtain the hierarchy of condensed structures or the various dynamic equilibrium structures from a primordial soup or an existing state, we can expect solutions to appear in the coming decades.
**Potential applications.** Exploiting the PTDE can be profitable, e.g., in arbitrage by high-frequency trading (HFT) [36]. The arbitrage of HFT comes from predictable non-equilibrium in the short term, rather than equilibrium in the long term. The collapse is part of a process that is distant from equilibrium. It could therefore provide a helpful framework for understanding sudden shocks in a social system. During such periods, many unconventional strategy behaviours might appear to dominate. The findings related to the pulse support the statement that 'a dominated strategy can dominate a domination before being dominated'. This means that non-equilibrium behaviours can be profitable in the short term.
This study has real-life applications in fields including artificial intelligence [22] and the social systems addressed in [9, 25]. The collapse is a legitimate constituent part of the process of game evolution. The Nash equilibrium is not necessarily the best strategy during the infancy period when playing a complex game, e.g., Stratego [22].
Regarding general decision making in real-life social systems, philosophically, linear thinking may be harmful. In periods of turmoil, the power of myopia, or the power of the best response, must be properly acknowledged.
### Conclusion
Based on a dynamic game with incomplete information, this investigation illustrates the full equilibrium-finding process shown in Figure 1, from random play, to collapse, to a fixed pattern (cycling and equilibrium). As a PTDE, the collapse constitutes fertile ground for the science of game dynamics, especially as it relates to the system of human social interactions.
## Appendix A
### Abbreviations
The abbreviations and mathematical symbols shown in Table A1 are used in the main text, this appendix, and the supplementary information.
### Distribution
This section reports the results on the distribution and its evolution, including the following elements:
* Theoretical and experimental numerical results for the time-averaged distribution are shown in Table A2.
* Numerical results relating to the Euclidean distance, illustrating the evolution of the distribution, are shown in Table A3.
For more details of the measurements and data, see SI - Distribution.
**Explanation of support [EO1] and [EO2].** In Table A3, the numerical Euclidean distance and its evolution are reported as follows:
1. Rows 1, 2, 4, and 5 give the Euclidean distance for the given time intervals.
2. Rows 3 and 6 give the differences in the Euclidean distance \[\triangle\delta_{\mathbf{QP}}=\delta_{\mathbf{QP}[\mathbf{801,1000}]}-\delta_{\mathbf{QP}[\mathbf{1,200}]}<0\] (A1) \[\triangle\delta_{\mathbf{Dyn}}=\delta_{\mathbf{Dyn}[\mathbf{801,1000}]}-\delta_{\mathbf{Dyn}[\mathbf{1,200}]}<0\] (A2) This means that \(\rho^{E}\) moves closer to the predictions over the 1000-round game in all of the treatments (A, B, C).
3. Row 7 gives the difference between the two models: \[\triangle\delta=\delta_{\mathbf{Dyn}[\mathbf{801,1000}]}-\delta_{\mathbf{QP}[\mathbf{801,1000}]}<0\] (A3) This means that \(\rho^{E}_{[801,1000]}\) is closer to the expectation of the dynamics model than to the Nash equilibrium given by the QP algorithm in all of the treatments (A, B, C); a minimal numerical sketch of this measurement is given below.
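The following minimal Python sketch illustrates this measurement; the distribution vectors are invented for illustration and do not reproduce the values in Table A3.

```python
import numpy as np

def euclid_dist(p, q):
    """Euclidean distance delta between two strategy-proportion vectors."""
    return float(np.linalg.norm(np.asarray(p, float) - np.asarray(q, float)))

# Invented numbers for illustration (not the experimental values of Table A3):
pred = np.array([0.10, 0.25, 0.30, 0.35])         # a model prediction
rho_1_200 = np.array([0.22, 0.28, 0.24, 0.26])    # rho^E averaged over rounds 1-200
rho_801_1000 = np.array([0.12, 0.26, 0.29, 0.33]) # rho^E averaged over rounds 801-1000

# The analogue of Delta delta in Eqs. A1-A2: a negative value means convergence.
print(euclid_dist(rho_801_1000, pred) - euclid_dist(rho_1_200, pred))
```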
\begin{table}
\begin{tabular}{l l}
\hline
PTDE & Predictable temporary deviation from equilibrium \\
**N** & Sample size for statistical analysis \\
Dyn\((\lambda,\Delta)\) & Logit dynamics with noise \(\lambda\) and time step \(\Delta\), a model in game dynamics theory \\
IEDS & Iterated elimination of dominated strategies, a method to find the Nash equilibrium \\
QP & Quadratic programming, an algorithm for the Nash equilibrium distribution \\
**D\({}^{+}\)** & Domination strategy, i.e., a surviving strategy \\
**D\({}^{-}\)** & Dominated strategy, i.e., an eliminated strategy \\
EO & Experimental observation \\
TS & Predictions from static equilibrium theory \\
TD & Predictions from dynamics theory \\
ET & Experimental data, used to compare the models \\
Obs. & Observations \\
SI & Supplementary information \\
\hline
\(s_{i}\) & The \(i\)-th optional strategy for a player \\
\(\rho\) & The proportion vector of the strategies used \\
\(\rho(s_{i})\) & The \(i\)-th component of \(\rho\) \\
\(\rho(s_{i},t)\) & The \(\rho(s_{i})\) at time (round) \(t\) \\
\(\rho_{([t_{1},t_{2}])}\) & The time average of \(\rho\) over \([t_{1},t_{2}]\) \\
\(\rho_{*}^{+}([s_{i},s_{j}])\) & A surplus sample set \\
\(\delta\) & The Euclidean distance between two distribution vectors, used to evaluate their difference \\
\hline
\(o\) & A fixed point, rest point, or zero-velocity point; an equilibrium \\
\(\lambda\) & Eigenvalue \\
\(\xi\) & Eigenvector \\
\(\eta_{i}\) & The \(i\)-th component of a given eigenvector \(\xi\) \\
\(\vartheta\) & The eigencycle set in the 2D subspace set of a game space, describing the strength of cyclic motion \\
\(\vartheta(m,n)\) & The \(\vartheta\) in the 2D subspace \((\eta_{m},\eta_{n})\) \\
\(\vartheta(x)\) & Eigencycle spectrum, where \(x\) is a natural number \\
\hline
\(\varrho\) & Time-accumulated value curve of \(\rho(t)\) \\
\(\varrho(s_{i})\) & The \(\varrho\) of the strategy \(s_{i}\) \\
\(\psi\) & A pulse signal \\
\(\psi([s_{i},s_{j}],[t_{1},t_{2}])\) & A pulse signal of strategy \(s_{j}\) over \(s_{i}\) during the time interval \([t_{1},t_{2}]\) \\
\(\chi\) & A crossover point, characterized by two values, \(t\) and \(\varrho\) \\
\(\chi([s_{i},s_{j}],\tau)\) & A crossover point of \(\varrho(s_{i})\) and \(\varrho(s_{j})\) at time \(\tau\), where for \(t=\tau-0^{+}\), \(s_{j}>s_{i}\) \\
\(\chi_{+}(s_{i},s_{j})\) & \(\chi\) if \(s_{i}\in\textbf{D}^{+}\cap s_{j}\in\textbf{D}^{+}\) \\
\(\chi_{-}(s_{i},s_{j})\) & \(\chi\) if \(s_{i}\in\textbf{D}^{+}\cap s_{j}\in\textbf{D}^{-}\) \\
\hline
\end{tabular}
\end{table}
Table A1: Abbreviations and mathematical symbols.
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline & \multicolumn{3}{c|}{Treatment} \\ \cline{2-4} & A & B & C \\ \hline \(\delta_{\bf QP[1,200]}\) & 0.4298 & 0.5551 & 0.5745 \\ \hline \(\delta_{\bf QP[801,1000]}\) & 0.3766 & 0.4047 & 0.4436 \\ \hline \(\triangle\delta_{\bf QP}\) & \(-\)0.0532 & \(-\)0.1504 & \(-\)0.1309 \\ \hline \(\delta_{\bf Dyn[1-200]}\) & 0.0781 & 0.2088 & 0.1442 \\ \hline \(\delta_{\bf Dyn[801-1000]}\) & 0.0059 & 0.0097 & 0.0072 \\ \hline \(\triangle\delta_{\bf Dyn}\) & \(-\)0.0722 & \(-\)0.1991 & \(-\)0.1370 \\ \hline \(\triangle\delta\) & \(-\)0.3707 & \(-\)0.3950 & \(-\)0.4364 \\ \hline \end{tabular}
\end{table}
Table A3: The Euclidean distance and its evolution. For details, see 'Explanation of support [EO1] and [EO2]'.
### Cycle
As the main focus of this study is the collapse, we report the cycle only briefly. The definition of the eigencycle spectrum, as well as the details of the numerical results of the eigencycle spectrum, are shown in SI - Cycle. This section includes the following contents:
1. Results on the maximum eigencycles in the eigencycle spectrum, in the experiment and in theory, respectively, are shown in Table A4.
2. The verification of the theories by experiment.
#### a.3.1 Verification of the statics theory
In section 2.2 of the main text [14], we state that the existence of the cycle in treatments (A, B, C) can be predicted as (Yes, No, Yes), respectively. This prediction is based on the IEDS approach of the static equilibrium theory.
From Table A4, we obtain the following results.
* In treatment A, the closed loop is \[2\to 10\to 6\to 12\to 2\] (A4)
* In treatment C, the closed loops are \[2\to 10\to 5\to 12\to 2\] (A5) \[2\to 10\to 1\to 12\to 2\] (A6)
Thus, the IEDS predictions for the cycle are supported. We observed a weak cycle in treatment B, which is not regarded as evidence against IEDS because of its weakness; see equation A7.
#### a.3.2 Verification of the dynamics theory
The predictions given in the main text section 2.3 [13] are verified here. In treatments A and C, the predictions match the experimental data:
* In the cycle loops, there exist cycles for equation A4 in treatment A, and for equations A5 and A6 in treatment C;
* For the strength of the cycle loops, referring to Table A4 and comparing the average strength of \(\vartheta^{E}_{\text{Treatment}}\) in the loops, the result is \[\vartheta^{E}_{\text{A}}>\vartheta^{E}_{\text{C}}>\vartheta^{E}_{\text{B}}\,.\] (A7)
For treatment B, the dynamics theory predicts little cycling. Considering equation A7, we do not regard the loop of \(\vartheta^{E}_{B}\) as significant evidence against the prediction. As a result, the consistency between the dynamics theory and the experiment is supported.
\begin{tabular}{|c|c|c c|c|} \hline Treatment & \(x\) & \(m\) & \(n\) & \(\vartheta^{E}\) \\ \hline A & 25 & 2 & 12 & \(-1.000\) \\ A & 23 & 2 & 10 & 0.830 \\ A & 71 & 6 & 12 & 0.700 \\ A & 69 & 6 & 10 & \(-0.660\) \\ A & 101 & 10 & 12 & \(-0.189\) \\ A & 59 & 5 & 10 & \(-0.110\) \\ A & 9 & 1 & 10 & \(-0.104\) \\ A & 11 & 1 & 12 & 0.104 \\ A & 61 & 5 & 12 & 0.081 \\ A & 22 & 2 & 9 & 0.078 \\ A & 55 & 5 & 6 & 0.070 \\ A & 93 & 9 & 10 & \(-0.067\) \\ \hline B & 11 & 1 & 12 & \(-0.269\) \\ B & 101 & 10 & 12 & \(-0.148\) \\ B & 59 & 5 & 10 & \(-0.143\) \\ B & 23 & 2 & 10 & 0.126 \\ B & 111 & 12 & 13 & \(-0.098\) \\ B & 80 & 7 & 12 & 0.081 \\ B & 29 & 2 & 16 & \(-0.081\) \\ B & 102 & 10 & 13 & 0.075 \\ B & 13 & 1 & 14 & 0.074 \\ B & 65 & 5 & 16 & 0.067 \\ B & 12 & 1 & 13 & 0.063 \\ B & 71 & 6 & 12 & 0.061 \\ \hline C & 23 & 2 & 10 & 0.272 \\ C & 9 & 1 & 10 & \(-0.257\) \\ C & 25 & 2 & 12 & \(-0.153\) \\ C & 101 & 10 & 12 & \(-0.094\) \\ C & 11 & 1 & 12 & 0.086 \\ C & 93 & 9 & 10 & \(-0.077\) \\ C & 26 & 2 & 13 & \(-0.066\) \\ C & 12 & 1 & 13 & 0.066 \\ C & 61 & 5 & 12 & 0.056 \\ C & 111 & 12 & 13 & \(-0.051\) \\ C & 59 & 5 & 10 & \(-0.045\) \\ C & 104 & 10 & 15 & 0.044 \\ \hline \end{tabular}
Table A4: Experimental cycle \(\vartheta^{E}\) and the theoretical cycle \(\vartheta^{T}\). The top 12 maximum strengths in the eigencycles of the spectrum from treatments (A,B,C). \(x\) is the \(x\)-axis value in the eigencycle spectrum, and \((m,n)\) are the strategy IDs in the unified dimension of the game, which are defined in SI--Cycle.
### Collapse
This section includes the measurement definitions for the collapse, as well as the experimental data. For more details on this subsection, see SI - Collapse.
#### a.4.1 Measurement definition
**Conceptual example.** Figure A1 offers an example of the pulse signal, the accumulated curve, and the crossover points. It is derived from the theoretical results for the strategy evolution over time of the X population in treatment A.
#### Accumulated curve (\(\varrho\))
We denote the accumulated use of strategy \(s_{k}\) over the time interval \([t_{0},t_{1}]\) as \(\varrho(s_{k},t_{0},t_{1})\), which is
\[\varrho(s_{k},t_{0},t_{1})=\int_{t_{0}}^{t_{1}}\rho(s_{k},t)dt,\] (A8)
where \(\rho(s_{k},t)\) is the proportion of \(s_{k}\) observed at time \(t\). In this study, we use the discrete-time version and set \(dt=1\). Without additional specification, we set \(t_{0}=0\), and the discrete version of Equation A8 can be rewritten as
\[\varrho(s_{k},t)=\sum_{t^{\prime}=0}^{t}\rho(s_{k},t^{\prime}),\] (A9)
which is the **accumulated curve** in this study.
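A minimal Python sketch of Equation A9, assuming the proportion series \(\rho(s_{k},t)\) is stored as a plain array (the series below is invented):

```python
import numpy as np

def accumulated_curve(rho_k):
    """Discrete accumulated curve of Eq. A9: cumulative sum of rho(s_k, t), dt = 1."""
    return np.cumsum(np.asarray(rho_k, float))

# An invented proportion series of one strategy over 10 rounds.
rho_sk = np.array([0.30, 0.20, 0.25, 0.10, 0.15, 0.10, 0.05, 0.05, 0.00, 0.05])
print(accumulated_curve(rho_sk))
```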
**Pulse signal.** A pulse signal, or pulse, can be defined following these steps:
1. **A sample of surplus** in time series, denoted as \(\rho^{+}\big{(}[s_{i},s_{j}],t\big{)}\), is the proportion difference between two strategies \([s_{i},s_{j}]\) at \(t\), \[\rho^{+}\big{(}[s_{i},s_{j}],t\big{)}=\rho(s_{j},t)-\rho(s_{i},t).\] (A10)
2. **A surplus sample set**\(\rho_{*}^{+}\) refers to the surplus samples from a given successive time interval \([t_{0},t_{1}]\) (time block), \[\rho_{*}^{+}\big{(}[s_{i},s_{j}],[t_{0},t_{1}]\big{)}=\{\rho^{+}\big{(}[s_{i},s_{j}],t_{k}\big{)},\ \forall t_{k}\in[t_{0}\!+\!1,t_{0}\!+\!2,...,t_{1}]\}.\] (A11)
3. Sample size **N**. For the statistical analysis, the sample set size is \[\textbf{N}=(t_{1}-t_{0})\times\text{number of experimental sessions}\] (A12) For example, assume a time block of 10 rounds, from the 81st round to the 90th round of a 1000-round repeated game session; the set then has 10 elements. We have 12 sessions for a given treatment, so the sample size **N** is 120.
4. **Pulse (\(\psi\))** is the total surplus of \(s_{j}\) over \(s_{i}\) in a time block \([t_{0},t_{1}]\): \[\psi\big{(}[s_{i},s_{j}],[t_{0},t_{1}]\big{)}=\sum_{t_{k}=t_{0}+1}^{t_{1}}\rho ^{+}\big{(}[s_{i},s_{j}],t_{k}\big{)};\] (A13) For example, in the top line of Table A5, \(\psi=17\). The explanation is as follows: * \(X_{8}\in\textbf{D}^{-}\) is a dominated strategy, and \(X_{2}\in\textbf{D}^{+}\) is a domination strategy. * There are 120 samples, drawn from the X population of treatment A in the \(t\in[11,20]\) rounds of the \(\text{ses}\in[1,12]\) sessions. * The surplus observed is the sum over the sample set shown in equation A11. That is \[\sum_{\text{ses}=1}^{12}\sum_{t=11}^{20}\big{(}\rho(X_{8},t,\text{ses})-\rho( X_{2},t,\text{ses})\big{)}=17\] (A14)
5. **The statistical significance of a pulse \(\psi\)** is reported via the **ttest** over the surplus sample set \(\rho_{*}^{+}\) shown in Equation A11 (see the sketch after this list). For the sample size **N** used in the **ttest**, see the example following Equation A12.
6. **Strong significance.** When \(p<0.010\), the statistical result is reported as 'strongly significant'. When \(0.010\leq p<0.050\), the result is reported as 'significant' but not as 'strongly significant'.
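A minimal Python sketch of the pulse measurement, with randomly generated proportion series standing in for the experimental data; the one-sample t-test of the surplus samples against zero is equivalent to the paired ttest quoted in the tables:

```python
import numpy as np
from scipy import stats

# Randomly generated stand-ins for the proportion series of two strategies,
# shaped (sessions, rounds) = (12, 1000); real data would replace these.
rng = np.random.default_rng(0)
rho_j = rng.uniform(0.0, 0.3, size=(12, 1000))  # dominated strategy s_j
rho_i = rng.uniform(0.0, 0.3, size=(12, 1000))  # domination strategy s_i

# Surplus sample set (Eq. A11) for the block t in (10, 20], pooled over the
# 12 sessions, giving N = 120 samples as in Eq. A12.
surplus = (rho_j[:, 10:20] - rho_i[:, 10:20]).ravel()

# Pulse psi (Eq. A13) and its significance.
psi = surplus.sum()
t_stat, p_value = stats.ttest_1samp(surplus, 0.0)
print(psi, p_value)
```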
**Crossover point \(\chi(s_{i},s_{j},\tau)\).** The definition of the crossover point depends on the accumulated curve \(\varrho\) shown in Equation A9. Assuming that
\[\varrho(s_{i},t)-\varrho(s_{j},t)=0\] (A15)
the crossover point is defined as \(\chi(s_{i},s_{j},\tau)\), in which
* \(\tau\) is the solution of Equation A15 for \(t\);
* \(\chi(s_{i},s_{j},\tau):=\varrho(s_{i},\tau)=\varrho(s_{j},\tau)\).
There are two classes of crossover points:
* When the paired comparison is between two domination strategies (\(\mathbf{D}^{+}\)): for example, the blue arrows in Figures 5c-5h mark the crossover points \(\chi_{+}\).
* When the paired comparison is between a dominated strategy (\(\mathbf{D}^{-}\)) and a domination strategy (\(\mathbf{D}^{+}\)): for example, the red arrows in Figures 5c-5h mark the crossover points \(\chi_{-}\).
**The relation between the pulse and the crossover time.** Based on the results reported in [EO3.6] in section 3.3, we hypothesize that, if a crossover \(\chi_{-}([s_{i},s_{j}],\tau)\) (with \(s_{j}\in\mathbf{D}^{-}\) and \(s_{i}\in\mathbf{D}^{+}\)) has a larger \(\tau\), then \(s_{j}\) is more strongly expected to provide the pulse signal.
Assuming that \(\rho(s_{i})\) and \(\rho(s_{j})\) are equal at \(t=0\), if a long-run dominated strategy (\(s_{j}\in\mathbf{D}^{-}\)) shows a pulse signal relative to a long-run domination strategy (\(s_{i}\in\mathbf{D}^{+}\)), that is,
\[\rho(s_{j},t)>\rho(s_{i},t)\qquad\forall t\in[0,T],\] (A16)
then, as a mathematical consequence, there must be a crossover point of its accumulated curve \(\varrho(s_{j},\tau)\) relative to \(\varrho(s_{i},\tau)\) with \(\tau>T\). This is one explanation for the hypothesis.
For example, for the experimental pulses \(\psi^{E}\) (Table 2), all of the most strongly significant pulses belong to the largest \(\tau\) in \(\chi^{E}\) (Table A7). The result reported in [EO3.6] reflects this mathematical property.
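A minimal Python sketch of the crossover-point search on discrete-time accumulated curves; the two proportion series are invented so that \(s_{j}\) pulses early and is later overtaken:

```python
import numpy as np

def crossover_time(rho_i, rho_j):
    """First round tau at which the accumulated curves (Eq. A9) of s_i and s_j
    cross: the first sign change of varrho(s_i, t) - varrho(s_j, t)."""
    diff = np.cumsum(rho_i) - np.cumsum(rho_j)
    flips = np.where(np.sign(diff[:-1]) * np.sign(diff[1:]) < 0)[0]
    return int(flips[0] + 1) if flips.size else None

# Invented series: s_j surges early (a pulse) and then collapses, so its
# accumulated curve is overtaken by that of s_i at some later round tau > T.
t = np.arange(300)
rho_j = 0.40 * np.exp(-t / 30.0)          # dominated: early surge, then decay
rho_i = 0.15 * (1.0 - np.exp(-t / 30.0))  # domination: slowly takes over
print(crossover_time(rho_i, rho_j))
```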
#### a.4.2 Data
Here, we present the data that support the results shown in section 3.3 of the main text.
1. The experimental pulse signals observed are statistically significant (\(p<0.05\)): see Table A5.
2. The theoretical pulse signals that are statistically significant; the top samples, ordered by the time-block surplus values of treatments (A, B, C), are listed in Table A6. Note: the time series from the dynamics model are smooth, meaning that more significant pulse signals can be obtained from this model than from the highly stochastic human-subject experimental time series.
3. Experimental crossover points: see Table A7.
4. Theoretical crossover points: see Table A8.
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline Treat- & Domin- & Domin- & time & paired- & Surplus & Sample \\ \multicolumn{1}{c}{ments} & \multicolumn{1}{c}{ated} & \multicolumn{1}{c}{ation} & \multicolumn{1}{c}{block} & \multicolumn{1}{c}{ttest}(\(p\)) & \multicolumn{1}{c}{size} \\ & \(s_{j}\) & \(s_{i}\) & \([t_{0},t_{1}]\) & & \(\psi^{T}\) & **N** \\ \hline A & \(X_{8}\) & \(X_{6}\) & 21-31 & 0.000 & 35.04 & 120 \\ A & \(X_{8}\) & \(X_{2}\) & 21-31 & 0.000 & 34.75 & 120 \\ A & \(X_{8}\) & \(X_{6}\) & 31-41 & 0.000 & 29.97 & 120 \\ A & \(X_{8}\) & \(X_{2}\) & 11-21 & 0.000 & 29.20 & 120 \\ A & \(X_{8}\) & \(X_{6}\) & 11-21 & 0.000 & 29.08 & 120 \\ A & \(X_{8}\) & \(X_{6}\) & 41-51 & 0.000 & 24.47 & 120 \\ A & \(X_{4}\) & \(X_{6}\) & 41-51 & 0.000 & 23.87 & 120 \\ A & \(X_{8}\) & \(X_{2}\) & 31-41 & 0.000 & 23.84 & 120 \\ A & \(X_{4}\) & \(X_{6}\) & 51-61 & 0.000 & 22.01 & 120 \\ A & \(X_{4}\) & \(X_{6}\) & 31-41 & 0.000 & 20.47 & 120 \\ A & \(X_{8}\) & \(X_{6}\) & 51-61 & 0.000 & 19.98 & 120 \\ A & \(X_{4}\) & \(X_{6}\) & 61-71 & 0.000 & 18.84 & 120 \\ \hline B & \(X_{7}\) & \(X_{1}\) & 21-31 & 0.000 & 18.05 & 120 \\ \hline C & \(X_{8}\) & \(X_{5}\) & 11-21 & 0.000 & 21.64 & 120 \\ C & \(X_{8}\) & \(X_{1}\) & 11-21 & 0.000 & 21.64 & 120 \\ C & \(X_{8}\) & \(X_{2}\) & 11-21 & 0.000 & 18.75 & 120 \\ C & \(X_{8}\) & \(X_{5}\) & 21-31 & 0.000 & 18.39 & 120 \\ C & \(X_{8}\) & \(X_{1}\) & 21-31 & 0.000 & 18.39 & 120 \\ \hline \hline \end{tabular}
\end{table}
Table A6: Theoretical pulses (\(\psi^{T}\)). The most strongly expected top 18 samples (twice the number of significant \(\psi^{E}\)), ordered by the \(\psi^{T}\) values over all the treatments (A, B, C). Note: the time series from the dynamics model are smooth, meaning that more significant pulse signals can be obtained from this model than from the highly stochastic experimental time series.
### The game dynamics paradigm and its workflow
Similarly to the paradigms found in the natural sciences, the game dynamics paradigm is also **a distinct set of concepts**, including theories, research methods, postulates, and standards for what constitutes a legitimate contribution to the field. The reality and accuracy, as well as the completeness and consistency, of this distinct set of concepts must be proven. Key to developing a paradigm is a closed-loop workflow, which includes the following steps:
1. Describe a game (G) and its playing protocol (P);
2. Take a proper dynamics equation system T (e.g., replicator dynamics or logit dynamics) and specify the parameters of T by (G,P). Then, derive the theoretical observation O\({}^{T}\)(G,P) by solving T(G,P), e.g., for the rest point or the eigensystem, or by time series analysis, among other techniques (a minimal numerical sketch of this step is given after the list);
\begin{table}
\begin{tabular}{c|c|c|c|c|c|c} \hline \hline Treat- & Domin- & Domin- & Crossover & \multicolumn{3}{c}{Arrow} \\ \cline{3-8} ment & \multicolumn{1}{c|}{ation} & \multicolumn{1}{c|}{ated} & \multicolumn{1}{c|}{Time (\(\tau\))} & Color & ID & Fig ID \\ \hline \hline \(A\) & \(X_{2}\) & \(X_{4}\) & 62 & - & - & Fig 4c \\ \(A\) & \(X_{2}\) & \(X_{8}\) & 86 & - & - & \\ \(A\) & \(X_{6}\) & \(X_{5}\) & 65 & - & - & \\ \(A\) & \(X_{6}\) & \(X_{7}\) & 116 & red & 4 & \\ \(A\) & \(X_{6}\) & \(X_{1}\) & 117 & red & 3 & \\ \(A\) & \(X_{6}\) & \(X_{4}\) & 177 & red & 2 & \\ \(A\) & \(X_{6}\) & \(X_{8}\) & 209 & red & 1 & \\ \hline \(A\) & \(Y_{2}\) & \(Y_{1}\) & 71 & - & - & Fig 4f \\ \(A\) & \(Y_{2}\) & \(Y_{5}\) & 74 & - & - & \\ \(A\) & \(Y_{2}\) & \(Y_{4}\) & 277 & blue & 1 & \\ \hline \hline \(B\) & \(X_{1}\) & \(X_{3}\) & 42 & - & - & \\ \(B\) & \(X_{1}\) & \(X_{6}\) & 56 & - & - & \\ \(B\) & \(X_{1}\) & \(X_{8}\) & 69 & - & - & \\ \(B\) & \(X_{1}\) & \(X_{7}\) & 72 & - & - & \\ \(B\) & \(X_{1}\) & \(X_{5}\) & 75 & - & - & \\ \(B\) & \(X_{1}\) & \(X_{2}\) & 84 & - & - & \\ \hline \hline \(C\) & \(X_{1}\) & \(X_{7}\) & 52 & - & - & Fig 4e \\ \(C\) & \(X_{1}\) & \(X_{6}\) & 66 & - & - & \\ \(C\) & \(X_{1}\) & \(X_{4}\) & 77 & - & - & \\ \(C\) & \(X_{1}\) & \(X_{8}\) & 82 & - & - & \\ \(C\) & \(X_{1}\) & \(X_{2}\) & 126 & blue & 2 & \\ \(C\) & \(X_{5}\) & \(X_{7}\) & 61 & - & - & \\ \(C\) & \(X_{5}\) & \(X_{6}\) & 82 & - & - & \\ \(C\) & \(X_{5}\) & \(X_{4}\) & 112 & red & 3 & \\ \(C\) & \(X_{5}\) & \(X_{8}\) & 144 & red & 1 & \\ \hline \(C\) & \(Y_{2}\) & \(Y_{4}\) & 89 & blue & 1 & Fig 4h \\ \hline \hline \end{tabular}
\end{table}
Table A8: Theoretical crossover point \(\chi^{T}\) at \(\tau^{T}>40\). The arrow columns relate to the arrows (\(\tau>89\)) in Figure 4c-4h. For the definition of \(\tau\), see Equation A15.
3. Conduct experiments or collect empirical data to measure the observation O\({}^{E}\)(G,P);
4. Evaluate O\({}^{T}\)(G,P) against the empirical observation O\({}^{E}\)(G,P);
5. Iterate Steps 1 to 4, feeding back to Step 1 in a loop, by trial and error, to find the best fit between the theory and the empirical system.
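As a minimal numerical sketch of Steps 1-2, the following Python code runs discrete-time logit dynamics Dyn(\(\lambda,\Delta\)) on a hypothetical \(3\times 3\) payoff matrix; the matrix and parameters are illustrative and are not those of treatments A-C:

```python
import numpy as np

def logit_dynamics(payoff, rho0, lam=0.5, dt=0.1, steps=2000):
    """Discrete-time logit dynamics Dyn(lambda, Delta) for a symmetric matrix
    game: rho relaxes toward the logit (soft best-response) distribution of
    the current expected payoffs."""
    rho = np.asarray(rho0, float).copy()
    for _ in range(steps):
        u = payoff @ rho                 # expected payoff of each pure strategy
        soft = np.exp(u / lam)
        soft /= soft.sum()               # logit choice distribution
        rho += dt * (soft - rho)         # move toward the soft best response
    return rho

# Hypothetical 3-strategy cyclic payoff matrix (rock-paper-scissors-like);
# not the payoff matrices of treatments A-C.
A = np.array([[ 0.0,  1.0, -1.0],
              [-1.0,  0.0,  1.0],
              [ 1.0, -1.0,  0.0]])
print(logit_dynamics(A, rho0=[0.8, 0.1, 0.1]))  # theoretical observation O^T
```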
This iterated closed-loop workflow is widely applied in various scientific fields and can constantly improve the establishment of a paradigm. In addition to the reality and the accuracy of the paradigm, the completeness and the consistency between
\[O^{T}(P,G)\ \ \text{vs}\ \ O^{E}(P,G)\]
endorse it. Social science follows this workflow, as does natural science [10, 4, 23]. |
2308.04649 | Enhancing Optimization Performance: A Novel Hybridization of Gaussian
Crunching Search and Powell's Method for Derivative-Free Optimization | This research paper presents a novel approach to enhance optimization
performance through the hybridization of Gaussian Crunching Search (GCS) and
Powell's Method for derivative-free optimization. While GCS has shown promise
in overcoming challenges faced by traditional derivative-free optimization
methods [1], it may not always excel in finding the local minimum. On the other
hand, some traditional methods may have better performance in this regard.
However, GCS demonstrates its strength in escaping the trap of local minima and
approaching the global minima. Through experimentation, we discovered that by
combining GCS with certain traditional derivative-free optimization methods, we
can significantly boost performance while retaining the respective advantages
of each method. This hybrid approach opens up new possibilities for optimizing
complex systems and finding optimal solutions in a range of applications. | Benny Wong | 2023-08-09T01:27:04Z | http://arxiv.org/abs/2308.04649v1 | Enhancing Optimization Performance: A Novel Hybridization of Gaussian Crunching Search and Powell's Method for Derivative-Free Optimization
###### Abstract:
This research paper presents a novel approach to enhance optimization performance through the hybridization of Gaussian Crunching Search (GCS) and Powell's Method for derivative-free optimization. While GCS has shown promise in overcoming challenges faced by traditional derivative-free optimization methods [1], it may not always excel in finding the local minimum. On the other hand, some traditional methods may have better performance in this regard. However, GCS demonstrates its strength in escaping the trap of local minima and approaching the global minima. Through experimentation, we discovered that by combining GCS with certain traditional derivative-free optimization methods, we can significantly boost performance while retaining the respective advantages of each method. This hybrid approach opens up new possibilities for optimizing complex systems and finding optimal solutions in a range of applications.
## Introduction
Optimization is a fundamental task in various fields, aiming to find the best possible solution for a given problem. Traditional optimization methods often rely on derivative-based approaches, which require the availability of analytical derivatives. However, in many real-world scenarios, obtaining these derivatives may be challenging or even impossible. As a result, derivative-free optimization methods have gained significant attention due to their ability to tackle such scenarios.
Among derivative-free optimization methods, Gaussian Crunching Search (GCS) has emerged as a promising approach. GCS utilizes a probabilistic model to guide the search process, making it well-suited for problems with complex and non-linear landscapes. It has shown effectiveness in overcoming some of the difficulties faced by traditional derivative-free optimization methods.
However, while GCS has demonstrated its strengths in escaping the trap of local minima and approaching the global minima, it may not always excel in finding the local minimum. On the other hand, certain traditional derivative-free methods can perform better in this regard, which motivates the hybridization studied here.
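A minimal Python sketch of such a hybrid is shown below; the global step is a generic shrinking Gaussian-perturbation search standing in for GCS (the exact GCS update rule of [1] is not reproduced here), while the local step uses SciPy's Powell implementation:

```python
import numpy as np
from scipy.optimize import minimize

def gcs_powell(f, x0, outer_iters=20, n_samples=50, sigma0=2.0, seed=0):
    """Hybrid sketch: a shrinking Gaussian-perturbation global step (standing
    in for GCS) alternated with derivative-free Powell local refinement."""
    rng = np.random.default_rng(seed)
    best_x = np.asarray(x0, float)
    best_f = f(best_x)
    sigma = sigma0
    for _ in range(outer_iters):
        # Global step: Gaussian samples around the incumbent; the spread is
        # "crunched" (reduced) as the search proceeds.
        cand = best_x + rng.normal(0.0, sigma, size=(n_samples, best_x.size))
        vals = np.apply_along_axis(f, 1, cand)
        if vals.min() < best_f:
            best_x, best_f = cand[vals.argmin()].copy(), float(vals.min())
        # Local step: Powell refinement of the incumbent.
        res = minimize(f, best_x, method='Powell')
        if res.fun < best_f:
            best_x, best_f = res.x, res.fun
        sigma *= 0.8
    return best_x, best_f

# Demonstration on the multimodal Rastrigin function.
def rastrigin(x):
    return 10.0 * x.size + float(np.sum(x**2 - 10.0 * np.cos(2.0 * np.pi * x)))

print(gcs_powell(rastrigin, x0=[3.5, -2.2]))
```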
Figure 1: GCS Stochastic Mutation [1] |
2301.08578 | The formation of hard VHE spectra from GRB afterglow via Two-Zone
Synchrotron Self-Compton Emission | Electron Compton scattering of target photons into the gamma-ray energy band
(inverse Compton scattering --IC--) is commonly expected to dominate the very
high energy spectra in gamma-ray bursts especially during the afterglow phase.
For sufficiently large center-of-mass energies in these collisions, the effect
of the electron recoil starts reducing the scattering cross section (the
Klein-Nishina regime). The IC spectra generated in the Klein-Nishina regime is
softer and has a smaller flux level compared to the synchrotron spectra
produced by the same electrons. The detection of afterglow emission from nearby
GRB 190829A in the very high energy (VHE) domain with H.E.S.S. has revealed an
unexpected feature: the slope of the VHE spectrum matches well the slope of the
X-ray spectra, despite expectations that for the IC production process, the
impact of the Klein-Nishina effect should be strong. The multi-wavelength
spectral energy distribution appears to be inconsistent with predictions of
one-zone synchrotron-self-Compton models. We study the possible impact of
two-zone configuration on the properties of IC emission when the magnetic field
strength differs considerably between the two zones. Synchrotron photons from
the strong magnetic field zone provide the dominant target for cooling of the
electrons in the weak magnetic field zone, which results in a formation of hard
electron distribution and consequently of a hard IC emission. We show that the
two-zone model can provide a good description of the X-ray XRT and VHE H.E.S.S.
data. | Dmitry Khangulyan, Andrew M. Taylor, Felix Aharonian | 2023-01-20T13:47:43Z | http://arxiv.org/abs/2301.08578v1 | # The formation of hard VHE spectra from GRB afterglow via Two-Zone Synchrotron Self-Compton Emission
###### Abstract
Electron Compton scattering of target photons into the gamma-ray energy band (inverse Compton scattering -IC-) is commonly expected to dominate the very high energy spectra in gamma-ray bursts especially during the afterglow phase. For sufficiently large center-of-mass energies in these collisions, the effect of the electron recoil starts reducing the scattering cross section (the Klein-Nishina regime). The IC spectra generated in the Klein-Nishina regime is softer and has a smaller flux level compared to the synchrotron spectra produced by the same electrons. The detection of afterglow emission from nearby GRB190829A in the very high energy (VHE) domain with H.E.S.S. has revealed an unexpected feature: the slope of the VHE spectrum matches well the slope of the X-ray spectra, despite expectations that for the IC production process, the impact of the Klein-Nishina effect should be strong. The multi-wavelength spectral energy distribution appears to be inconsistent with predictions of one-zone synchrotron-self-Compton models. We study the possible impact of two-zone configuration on the properties of IC emission when the magnetic field strength differs considerably between the two zones. Synchrotron photons from the strong magnetic field zone provide the dominant target for cooling of the electrons in the weak magnetic field zone, which results in a formation of hard electron distribution and consequently of a hard IC emission. We show that the two-zone model can provide a good description of the X-ray XRT and VHE H.E.S.S. data.
Non-thermal radiation sources(1119) -- Gamma-ray transient sources(1853) -- Gamma-ray bursts(629) -- Gamma-ray astronomy(628) -- Particle astrophysics(96) -- X-ray sources(1822)
## 1 Introduction
The very high energy (VHE; \(>100\) GeV) emission detected from gamma-ray burst (GRB) afterglows with H.E.S.S. and MAGIC (Abdalla et al., 2019; MAGIC Collaboration et al., 2019a,b; H. E. S. S. Collaboration et al., 2021) is considered by many to have an inverse Compton (IC) origin (see, e.g., Zhang, 2019). The emission component produced by relativistic protons is expected to have a significantly lower flux, due to the very low radiative efficiency of hadronic interactions (see, e.g., Abdalla et al., 2019). If the VHE emission is produced by relativistic electrons, then, because of the so-called synchrotron burn-off limit (Guilbert et al., 1983), the synchrotron component is expected to reach the VHE regime only if the bulk Lorentz factor is very high, \(\Gamma\geq 10^{3}\). Such high bulk Lorentz factors are excluded during the afterglow phase by energy conservation arguments (e.g., related to the self-similar solution for a relativistic blast wave obtained by Blandford and McKee, 1976), making IC scattering the most feasible radiation mechanism for the VHE GRB emission during the afterglow period. However, the hard intrinsic spectral slope inferred from observations by H.E.S.S. of the GRB190829A afterglow cannot be easily reproduced with standard IC models (see, e.g., H. E. S. S. Collaboration et al., 2021). This leaves one of two possibilities: (i) invoke alternative radiation mechanisms, or (ii) develop a more sophisticated IC scenario to provide a better description of the observational data.
Synchrotron radiation is a very efficient radiative mechanism for electrons during the afterglow phase of GRBs. If the synchrotron component extends into the VHE domain, it can reproduce the flux level and spectral slope revealed with H.E.S.S. from the GRB190829A afterglow (H. E. S. S. Collaboration et al., 2021). While the conservation of energy, used to constrain the bulk Lorentz factor, is a robust argument, the burn-off energy limit can be avoided in certain non-standard scenarios. For example, if the strength of the accelerating electric field, \(\mathcal{E}\), exceeds the strength of the magnetic field, \(B\) (in a plasma, such configurations require non-ideal magnetohydrodynamics), then synchrotron emission can extend beyond the burn-off limit by a factor of \(\mathcal{E}/B\). Alternatively, in highly turbulent magnetic fields, magneto-bremsstrahlung radiation can extend beyond the burn-off limit (Kelner et al., 2013). If the correlation length of the magnetic field is large compared to the photon formation length, \(m_{e}c^{2}/e\bar{B}\) (here \(m_{e}\) and \(e\) are the electron mass and charge, respectively; \(c\) and \(\bar{B}\) are the speed of light and the averaged magnetic field), then the radiation is generated in the synchrotron regime, resulting in the burn-off limit for the maximum synchrotron energy (for a detailed consideration, see, e.g., Kelner et al., 2013; Derishev and Aharonian, 2019). However, if the correlation length is short compared to the photon formation length, then the electrons instead emit in the jitter regime, and the emission peaks at higher energy compared to the synchrotron case, alleviating the burn-off constraint (Kelner et al., 2013). Finally, the electron synchrotron spectrum can extend beyond the burn-off limit in two-zone systems, where the physical conditions at the acceleration site and in the radiation production region differ substantially (Kumar et al., 2012; Khangulyan et al., 2021). In conclusion, there are several ways of extending the energy spectrum of magneto-bremsstrahlung to high or even very high energies. However, the feasibility of these scenarios depends on many factors and requires extreme assumptions.
In contrast, IC scattering is a natural and very effective channel of VHE gamma-ray production. Although the recent observations of VHE gamma rays during the GRB afterglows challenge the simple one-zone IC model, more sophisticated scenarios cannot be excluded. In this paper, we study the spectral properties of gamma rays in the two-zone IC model in which the production region of the target (synchrotron) photons and the IC gamma-ray emitter are separated. One can propose several possible realizations for such a two-zone setup. For example, one may expect quite different conditions at the forward and reverse shocks, which propagate through the circumburst medium (CBM) and the jet, respectively. If the emission from the reverse shock appears to be important at certain frequencies then a two-zone description for GRB afterglow emission should be considered (see, e.g., Dichiara et al., 2022; Salafia et al., 2022). Alternatively, the shock region itself can be quite complex potentially providing quite different physical conditions for particle acceleration and radiation. Indeed, simulations suggest that downstream shock material, the dominant emission site during the afterglow phase, is expected to be highly inhomogeneous, an aspect usually neglected in GRB afterglow emission modelling. Below we consider the impact of a strongly inhomogeneous magnetic field on the properties of IC emission. We show that under reasonable assumptions, even a two-zone synchrotron self-Compton (SSC) scenario can provide a considerably improved description of the broadband spectra reported from GRB190829A.
## 2 Standard one-zone SSC scenario
The standard GRB afterglow emission framework postulates that this emission is generated via the synchrotron and IC channels, with synchrotron radiation providing the dominant target for IC scattering - the so-called SSC scenario. The analysis of the spectral energy distribution (SED) in SSC models is straightforward if the IC emission is generated in the Thomson regime (see, e.g., Sari and Esin, 2001), as in this case the energy loss rate, \(\dot{E}\), has a simple form, \(\dot{E}\propto E^{2}\) (here \(E\) is the electron energy). In this regime, a power-law injection of non-thermal electrons, \(q\propto E^{-\alpha}\) (here \(\alpha\) is the injection index; for conventional acceleration mechanisms one typically assumes \(\alpha\approx 2\)), leads to the formation of a broken-power-law distribution of radiating electrons. The synchrotron and IC (Thomson) components generated by these electrons also reflect this broken-power-law shape, with the IC component dominating at higher energies. The resulting broadband SED is double-humped, with the relative emissivity of the synchrotron and IC components being determined by phenomenological parameters (typically, by the radiation efficiency, i.e., by the fraction of energy radiated away). The photon index of the synchrotron spectrum produced by electrons with energies above the cooling break is \(\gamma_{\rm s}=(\alpha+2)/2\), provided that \(\alpha>1\). In the single-zone SSC scenario, the corresponding IC spectrum has the same photon index, if generated in the Thomson regime.
Typically, during the afterglow phase, the (synchrotron) X-ray spectrum is observed to be hard, with a photon index \(\sim 2\). Thus, the photons detected in the X-ray band provide a non-negligible target for IC scattering. In the plasma co-moving frame, the energy of the electron, \(E\), generating the VHE emission detected at energy\({}^{1}\) \(\varepsilon^{\prime}_{\rm vhe}\) satisfies the condition \(E>\varepsilon^{\prime}_{\rm vhe}/\Gamma\). If electrons of this energy up-scatter photons from a component detected by the observer at energy \(\varepsilon^{\prime}_{\rm x}\), then the typical product of the target photon and electron energies, which determines the scattering regime, is
Footnote 1: Note that we prime the quantities in the progenitor frame, and we neglect the cosmological redshift effect.
\[\frac{E\varepsilon_{\rm x}}{m_{e}^{2}c^{4}}>\frac{\varepsilon^{\prime}_{\rm vhe }\varepsilon^{\prime}_{\rm x}}{m_{e}^{2}c^{4}\Gamma^{2}}\approx 4\bigg{(} \frac{\Gamma}{10}\bigg{)}^{-2}\left(\frac{\varepsilon^{\prime}_{\rm vhe}}{0.1 \;{\rm TeV}}\right)\left(\frac{\varepsilon^{\prime}_{\rm x}}{1\;{\rm keV}} \right)\,. \tag{1}\]
Here \(m_{e}\) and \(c\) are the electron mass and the speed of light, respectively. Unless the bulk Lorentz factor is high, \(\Gamma\geq 10^{2}\), the electrons that produce the VHE emission up-scatter a considerable part of the target photons in the Klein-Nishina regime. The study of the VHE properties of GRB afterglows should therefore be conducted with models that account for the change of the IC cross-section in the relativistic regime.
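The estimate of Eq. (1) can be reproduced with a few lines of Python; the function name and unit conventions below are our own:

```python
MEC2_EV = 0.511e6  # electron rest energy, eV

def kn_product(eps_vhe_ev, eps_x_ev, gamma_bulk):
    """Lower bound on E*eps_x/(m_e c^2)^2 from Eq. (1), for observed VHE and
    X-ray photon energies (in eV) and bulk Lorentz factor gamma_bulk."""
    return eps_vhe_ev * eps_x_ev / (MEC2_EV**2 * gamma_bulk**2)

# Reproduce the estimate of Eq. (1): 0.1 TeV photons on 1 keV targets, Gamma = 10.
print(kn_product(0.1e12, 1e3, 10))  # ~4 => scattering deep in the Klein-Nishina regime
```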
The influence of the Klein-Nishina regime on the SED is two-fold, as one must account for the change of both the emission rate and the energy loss rate (see, e.g., Derishev et al., 2003; Nakar et al., 2009). In the fast cooling regime, the particle spectrum, \({\rm d}N=n\,{\rm d}E\), is determined by the injection spectrum, \(q\), and by the cooling time \(\tau=E/|\dot{E}|\):
\[n(E)=\frac{\tau(E)}{E}\int\limits_{E}^{\infty}{\rm d}\tilde{E}\ q\Big{(} \tilde{E}\Big{)}\,. \tag{2}\]
If the injection is a power-law \(q\propto E^{-\alpha}\), then the particle spectrum is
\[n(E)\propto\tau(E)E^{-\alpha}\,. \tag{3}\]
(Note that here we assume that the injection spectrum is sufficiently steep to ensure that the integral is dominated by its low-energy limit.)
If the synchrotron losses dominate over the Compton losses (more specifically if the energy density of the magnetic field is larger than the energy density of the target photons) then \(\tau(E)\propto E^{-1}\), and a power-law spectral injection also yields a power-law distribution of particles: \(n(E)\propto E^{-(\alpha+1)}\) (see Fig. 1 for a sketch of the cooled particle spectrum). Subsequently, a power-law synchrotron component is produced with photon index \(\gamma_{\rm s}\).
The IC radiation has the same power-law photon index as long as the scattering takes place in the Thomson regime. In the Klein-Nishina regime, the IC slope should (asymptotically, i.e., ignoring the logarithmic term) approach \(\gamma_{\rm kn}\approx(\alpha+2)\) (provided that the emitting electrons obey a power-law energy distribution with index \(\alpha+1\); Blumenthal & Gould, 1970). Thus, since the slope of the IC component generated in the Thomson regime matches that of the synchrotron radiation, \(\gamma_{\rm s}\), the Klein-Nishina effect causes a spectral softening by \(\Delta\gamma\approx\gamma_{\rm kn}-\gamma_{\rm s}\approx(\alpha+2)/2\). For example, if \(\alpha\approx 2\) then the spectral slope changes from \(\gamma_{\rm s}\approx 2\) to \(\gamma_{\rm kn}\approx 4\), and the spectral softening is \(\Delta\gamma\approx 2\). A schematic of the SED is shown in Fig. 2. One should note that for a broad target photon distribution, the transition to the Klein-Nishina regime is spread over a broad energy range and can have a rather complex character.
The situation changes dramatically when the energy density of the target photons is larger than the energy
Figure 1: A sketch that illustrates the formation of the particle spectrum in the case of dominant synchrotron losses and dominant IC losses. The part of the spectrum formed in the fast cooling regime is shown.
Figure 2: A sketch that illustrates the formation of the SED in the case of dominant synchrotron losses and dominant IC losses.
density of the magnetic field. In this case, the impact of the Klein-Nishina effect on the formation of the electron spectrum becomes a dominant factor. The radiative cooling time \(\tau(E)\) can be approximated by a broken power-law function: for sufficiently low electron energies, the IC interaction proceeds in the Thomson regime, thus \(\tau(E)\propto E^{-1}\). At higher energies, the IC interactions occur in the Klein-Nishina regime, where the energy loss rate is energy-independent, thus \(\tau(E)\propto E\). Finally, at even higher energies, above an energy denoted \(E_{*}\), the synchrotron losses (as their rate increases with particle energy) begin to dominate over the IC energy losses, and the original energy dependence of the cooling time is recovered: \(\tau(E)\propto E^{-1}\). As follows from Eq. (3), for a power-law injection spectrum, the particle spectrum formed in the fast cooling regime should also be a double-broken power law (with the power-law index changing as \(\alpha+1\to\alpha-1\to\alpha+1\); see Fig. 1). The \(E^{-(\alpha+1)}\) part of the spectrum formed under dominant (Thomson regime) IC losses changes to \(\propto E^{-(\alpha-1)}\), formed under dominant IC (Klein-Nishina regime) losses. Finally, above \(E_{*}\), the spectrum softens back to \(E^{-(\alpha+1)}\). We note, however, that the transition to the Klein-Nishina regime proceeds smoothly; therefore, the spectrum does not follow precisely the schematic shape explained above. For example, as can be seen from Fig. 4, the IC cooling time in the transition regime is better approximated as a constant, \(\tau\approx\text{const}\). Therefore, the corresponding transformation of the electron spectrum is better approximated as \(\alpha+1\to\alpha\to\alpha+1\) (note that this power-law index is indicated in the bottom panel of Fig. 4 with a black guide line).
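A minimal Python sketch of Eqs. (2)-(3) with a toy double-broken cooling time; the break energies and normalisations are arbitrary and serve only to reproduce the \(\alpha+1\to\alpha-1\to\alpha+1\) pattern sketched in Fig. 1:

```python
import numpy as np

def cooled_spectrum(E, alpha=2.0, E_kn=1e-3, E_star=1.0):
    """Fast-cooling spectrum n(E) ~ tau(E) * E**(-alpha) (Eq. 3) with a toy
    double-broken cooling time: Thomson IC (tau ~ 1/E) below E_kn,
    Klein-Nishina IC (tau ~ E) between E_kn and E_star, and synchrotron
    (tau ~ 1/E) above E_star. Energies in TeV; normalisations arbitrary."""
    tau = np.where(E < E_kn, E_kn / E,
          np.where(E < E_star, E / E_kn, (E_star / E_kn) * (E_star / E)))
    return tau * E**(-alpha)

E = np.logspace(-5, 1, 300)  # 10 MeV ... 10 TeV (in TeV)
n = cooled_spectrum(E)
# Local power-law index d ln n / d ln E: approaches -(alpha+1), -(alpha-1), -(alpha+1)
idx = np.gradient(np.log(n), np.log(E))
print(idx[20], idx[150], idx[-20])
```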
As for the synchrotron radiation, electrons cooled by IC in the Thomson regime produce a spectrum with photon index \(\gamma_{\text{s}}\); at higher energies, the hardening of the electron spectrum due to the dominant Klein-Nishina energy losses results in a hard synchrotron spectrum, with a photon index in the range between \(\gamma_{\text{s}}\) and \(\gamma_{\text{s,kn}}\approx\alpha/2\) (\(\gamma_{\text{s,kn}}\) is the limiting value achieved under IC cooling in the deep Klein-Nishina regime; see Fig. 2). In the transition region, with an approximately constant IC cooling time, the slope of the synchrotron spectrum is approximately \((\alpha+1)/2\), as indicated by the black guide lines in Figs. 5 and 6. Finally, the emission produced by electrons with energies exceeding \(E_{*}\) has the standard synchrotron slope \(\gamma_{\text{s}}\). As the synchrotron and IC energy loss rates for particles with \(E_{*}\) are equal, the narrow-band luminosities of the synchrotron and IC components produced by particles with \(E_{*}\) are (almost) equal.
The spectral shape of the IC component differs from that of the synchrotron spectrum. The component generated in the Thomson regime has a photon index of \(\gamma_{\rm s}\). At higher energies, the impact of the Klein-Nishina effect on the particle spectrum is partially compensated by the reduction of the cross-section. For example, in the limiting regime, a spectrum \(\propto E^{-(\alpha-1)}\) generates in the Klein-Nishina regime an \(E^{-\alpha}\) IC spectrum. For \(\alpha\approx 2\), a Thomson spectrum with photon index \((\alpha+2)/2\) transitions smoothly into the Klein-Nishina spectrum with photon index \(\alpha\). However, in the region of transition to the Klein-Nishina regime, this asymptotic photon index might be quite a coarse approximation. Moreover, above \(E_{*}\) the synchrotron losses dominate; thus the Klein-Nishina spectrum eventually softens to a photon index of \(\alpha+2\) above \(E_{*}\). Note that in the Klein-Nishina regime almost all of the electron energy is transferred to the up-scattered photon, so the photon energy in the co-moving frame is (almost) equal to the incident electron energy, \(\varepsilon_{\rm ic}\approx E_{*}\).
Observations of GRB190829A with H.E.S.S. revealed that the VHE component, corrected for extragalactic background light (EBL) attenuation, is best described as a single power-law spectrum extending up to \(3\,\text{TeV}\) with a hard photon index of \(\gamma_{\text{vhe}}=2.07\pm 0.09\) (H. E. S. S. Collaboration et al., 2021). Strikingly, this slope matches well the slope of the X-ray spectrum measured with _Swift_-XRT (e.g., \(\gamma_{\text{xrt}}=2.03\pm 0.06\) during the first night; H. E. S. S. Collaboration et al., 2021). Also, the _Swift_-XRT and H.E.S.S. observations revealed that the fluxes in the X-ray and VHE bands appeared to be similar (potentially a natural feature of pair-loading feedback; see Derishev & Piran, 2016, 2019, for detail).
In the VHE band, the influence of the Klein-Nishina effect should be noticeable. However, this spectral effect was not observed in the H.E.S.S. measurements. In the framework of the simple one-zone analysis introduced above, the match in slope and flux level implies that the cooling of the TeV-emitting electrons proceeds in the Klein-Nishina regime, and that the X-ray synchrotron emission is produced by particles with energy exceeding \(E_{*}\). As the hard VHE spectrum extends up to \(3\,\text{TeV}\), \(E_{*}>0.3(\Gamma/10)^{-1}\,\text{TeV}\). The synchrotron emission produced by the high-energy electrons is detected by the observer at
\[\varepsilon^{\prime}_{\text{syn}}>60\bigg{(}\frac{\Gamma}{10}\bigg{)}^{-1} \frac{B}{1\,\text{G}}\,\text{keV}\,. \tag{4}\]
This estimate shows that a very low magnetic field, at the \(\sim\) mG level, is required by the VHE measurements. Such a low magnetic field, however, is incompatible with the required radiation efficiency of the production region, given that the adiabatic cooling time is \(\tau_{\text{ad}}\sim t^{\prime}_{\text{tr}}\Gamma\), where \(t^{\prime}_{\text{tr}}\) is the time since the GRB trigger (as measured by a distant observer at rest in the progenitor reference frame). The broad-band SED obtained with _Swift_-XRT and H.E.S.S.
therefore cannot be reproduced in the framework of the standard one-zone SSC scenario (see also Huang et al., 2022). To resolve the spectral issue in the SSC scenario one needs to either: (1) assume that there is an important low-energy target photon field, probably of external origin; or (2) consider a two-zone scenario.
The former scenario requires the presence of an external target that provides a target of an energy density comparable to that of the magnetic field in the plasma co-moving frame:
\[w_{\rm ext}\sim 4\times 10^{-2}\bigg{(}\frac{B}{1\,{\rm G}}\bigg{)}^{2}\,{\rm erg \,cm^{-3}}\,. \tag{5}\]
If the photons are isotropic in the progenitor frame, then we obtain \(w^{\prime}_{\rm ext}\sim 4\times 10^{-4}(10B/(\Gamma\,{\rm G}))^{2}\,{\rm erg \,cm^{-3}}\). The VHE emission detected from GRB190829A lasted for almost \(\Delta t=50\) h (H. E. S. S. Collaboration et al., 2021), and the forward shock covered a distance of \(\Delta R^{\prime}\sim\Gamma^{2}\Delta tc\sim 10^{17}(\Gamma/10)^{2}\,{\rm cm}\). The luminosity of the photon field should therefore be
\[L^{\prime}_{\rm ext}\sim 4\pi\Delta R^{\prime 2}w^{\prime}_{\rm ext}c\sim 10^{ 42}\bigg{(}\frac{B}{1\,{\rm G}}\bigg{)}^{2}\,{\rm erg\,s^{-1}}\,. \tag{6}\]
If the magnetic field is weak, \(B\ll 1\,{\rm G}\), then an external photon field of reasonable luminosity can provide a sufficiently dense target (see, e.g., Zhang et al., 2021); however, external IC scenarios with an equivalent Gauss-strength magnetic field cannot be realized.
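The order-of-magnitude chain of Eqs. (5)-(6) can be reproduced with the short Python sketch below; the function name and default values are ours:

```python
import numpy as np

C = 3e10  # speed of light, cm/s

def external_target_estimates(B_gauss, gamma_bulk=10.0, dt_hours=50.0):
    """Order-of-magnitude chain behind Eqs. (5)-(6): comoving external photon
    energy density rivalling the magnetic field, its progenitor-frame value,
    the forward-shock distance, and the implied external luminosity (cgs)."""
    w_ext = B_gauss**2 / (8 * np.pi)            # Eq. (5): ~4e-2 (B/1 G)^2 erg/cm^3
    w_ext_prog = w_ext / gamma_bulk**2          # de-boosted to the progenitor frame
    dR = gamma_bulk**2 * dt_hours * 3600 * C    # Delta R' ~ Gamma^2 * Delta t * c
    L_ext = 4 * np.pi * dR**2 * w_ext_prog * C  # Eq. (6)
    return w_ext, w_ext_prog, dR, L_ext

print(external_target_estimates(1.0))  # ~4e-2, ~4e-4, ~5e17 cm, ~1e42-1e43 erg/s
```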
## 3 Two-Zone SSC Emission Scenario
### Physical justification
We consider an emission region consisting of two zones: the first zone with a strong magnetic field, \(B_{1}\), and the second zone with a weak magnetic field, \(B_{2}\), with \(B_{1}\gg B_{2}\). Should particles easily mix between the two zones, one would not expect a significant difference between the energy distributions of the particles in these zones. Here, however, we assume that the particle exchange between the zones is inefficient, and thus two distinct particle distributions, \(n_{1}\) and \(n_{2}\), are formed in the two zones.
The target photons, however, travel freely between the two zones. The specific realization of the scenario, in particular the shapes and relative location of the zones, determines the actual distribution of target photons in the zones. Let us qualitatively consider several possible realizations of the two-zone scenario: (i) two distinct regions with typical sizes of \(r_{1}\) and \(r_{2}\) separated by a distance \(r_{0}\); (ii) two converging shells of radius \(r_{1}\) and \(r_{0}\); (iii) \(N\) compact regions (of typical size \(r_{1}\)) with strong magnetic field embedded within a larger zone of size \(r_{0}\). These three possibilities are shown in Fig. 3. Although less apparent, scenario (iii) is two-zone in the sense that the physical conditions and processes are the same in the compact regions, and differ substantially from those in the larger zone.
The synchrotron luminosities of the two zones are \(L_{1}\) and \(L_{2}\), respectively. In scenario (iii), we define \(L_{1}\) as the total luminosity of the \(N\) regions of enhanced B-field. We consider the situation \(L_{1}\gg L_{2}\). Thus, when considering the processes in the first zone, we can ignore the photons supplied by the second zone. The energy density of the
Figure 3: Examples of three different geometries that allow the scenario realization. Scenario (i): two distinct regions with typical sizes of \(r_{1}\) and \(r_{2}\) separated by a distance \(r_{0}\); scenario (ii): two converging shells of radius \(r_{1}\) and \(r_{0}\); scenario (iii): a large number of compact regions (of typical size \(r_{1}\)) with strong magnetic field embedded within a larger zone of size \(r_{0}\).
locally generated photons in the first zone is
\[w_{1\to 1}\sim\frac{L_{1}}{r_{1}^{2}Nc}\,, \tag{7}\]
where \(N=1\) for scenarios (i) and (ii). Equation (7) ignores a numerical factor, which depends on the geometry of the production region and the distribution of the emitting particles. For example, in the case of a spherical homogeneous production region, the volume-averaged energy density of target photons is given by Eq. 7 with a factor \(9/(16\pi)\) (for detail, see Atoyan & Aharonian, 1996). We note that such factors do not affect our conclusions; we therefore safely ignore them.
In the second zone one needs to account for the contribution of locally generated photons:
\[w_{2\to 2}^{(i)}\sim\frac{L_{2}}{r_{2}^{2}c}\quad\mbox{and}\quad w_{2\to 2}^{(ii) /(iii)}\sim\frac{L_{2}}{r_{0}^{2}c} \tag{8}\]
and the photons supplied from the first zone, \(w_{1\to 2}\). For each of the above-defined geometries one obtains
\[w_{1\to 2}\sim\frac{L_{1}}{r_{0}^{2}c}\,. \tag{9}\]
The suggested scenario is realized if the photon field produced in the first zone (while locally subdominant there) provides the dominant target for particle cooling in the second zone:
\[w_{1\to 1}\ll\frac{B_{1}^{2}}{8\pi}\quad\mbox{and}\quad w_{1\to 2}\gg \frac{B_{2}^{2}}{8\pi}\,. \tag{10}\]
The photon field in the second zone is diluted compared to the first zone, \(w_{1\to 1}>w_{1\to 2}\); thus the scenario requires that \(B_{1}\gg B_{2}\). The difference between the magnetic field strengths determines the dilution of the photon field, \(\kappa=w_{1\to 2}/w_{1\to 1}\), that allows the scenario to be realized (i.e., the conditions given by Eq. 10).
The feasible ratio of the magnetic fields should be determined by physical arguments unique to each specific realization of the scenario. From a general point of view, however, it is obvious that if the photon field is significantly diluted in the second zone, \(\kappa\ll 1\), the required difference between the magnetic field strengths becomes larger, making the realization of the scenario less feasible (although not excluding it). For example, a strong dilution might be expected in scenario (i) provided that \(r_{0}\gg r_{1}\). In contrast, in scenario (ii) the dilution of the photon field in the second zone is small, a factor of \(\sim 2\), provided that the two shells are of comparable radius, \(r_{1}\approx r_{0}\). Similarly, in scenario (iii) one obtains
\[\frac{w_{1\to 2}}{w_{1\to 1}}\sim\frac{r_{1}^{2}N}{r_{0}^{2}}=\frac{fr_{0}}{r_{ 1}}\,, \tag{11}\]
where \(f\) is the filling factor. If the above ratio is not small (i.e., \(w_{1\to 2}/w_{1\to 1}\gtrsim 1\)), then the photon field is nearly homogeneous over the entire production region, i.e., \(\kappa\approx 1\).
For the sake of simplicity, we consider a single common photon target present in both zones. In the first place, this seems a perfectly suitable choice for scenarios (ii) and (iii) if \(r_{1}\approx r_{0}\) and \(fr_{0}/r_{1}\gtrsim 1\), respectively. Even if these conditions are not fulfilled, the model calculations should correctly reproduce the part of the SED formed in the fast cooling regime (provided that IC losses dominate over synchrotron cooling in the second zone: \(w_{1\to 2}\gg B_{2}^{2}/(8\pi)\)).
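The conditions of Eq. (10) can be checked numerically for an assumed geometry; the function below, and all of the input numbers, are purely illustrative:

```python
import numpy as np

C = 3e10  # speed of light, cm/s

def two_zone_conditions(L1, B1, B2, r1, r0, N=1):
    """Check Eq. (10) for the geometries of Fig. 3: zone-1 photons must be
    locally subdominant (synchrotron-dominated cooling in zone 1) while
    dominating the cooling in zone 2; all quantities in cgs units."""
    w_11 = L1 / (r1**2 * N * C)  # Eq. (7): photon energy density inside zone 1
    w_12 = L1 / (r0**2 * C)      # Eq. (9): zone-1 photons as seen in zone 2
    ok = (w_11 < B1**2 / (8 * np.pi)) and (w_12 > B2**2 / (8 * np.pi))
    return w_11, w_12, ok

# Illustrative numbers only: L1 = 1e40 erg/s, B1 = 1 G, B2 = 10 mG, and
# N = 1e4 blobs of size r1 = 1e14 cm in a region of size r0 = 1e16 cm.
print(two_zone_conditions(1e40, 1.0, 1e-2, 1e14, 1e16, N=10000))
```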
Although the scenario can also be realized in geometry (i), if the magnetic field in the second zone is sufficiently weak to remain subdominant compared to the significantly diluted photon field provided by the first zone, scenarios (ii) and (iii) seem less demanding. In particular, these geometries can be formed during the afterglow phase of GRBs. The shells assumed in scenario (ii) may correspond to the reverse and forward shocks. Also, an onion-like structure may form in the inner part of the forward-shock downstream region, where the competing processes of magnetic field amplification and decay may lead to the formation of a layer with an enhanced magnetic field. If the magnetic field amplification in the downstream proceeds in a highly non-homogeneous manner, then instead of a shell-like structure one should expect a large number of magnetized blobs in the production region, i.e., scenario (iii). Although scenarios (ii) and (iii) are characterized by quite similar geometries, the angular distribution of the target photons in the second zone may be quite different in these two cases. While in scenario (iii) the target photons are nearly isotropic, scenario (ii) features a substantial anisotropy of the target photons in the second zone (as depicted in Fig. 3). As the emitting particles are isotropized in the plasma frame, this photon anisotropy should not have any impact on the cooling process. However, one may need to account for the anisotropic IC cross-section (see, e.g., Aharonian & Atoyan, 1981) for an accurate computation of the IC spectra. For example, if the emission directed toward the observer is predominantly produced by scattering target photons at small scattering angles, then the IC spectra appear harder compared to the spectra computed with the angle-averaged IC cross-section (see, e.g., Khangulyan et al., 2008).
Because of the Doppler boosting effect, the observer can detect the emission coming from a patch of the shell with a typical size of \(R^{\prime}/\Gamma\), where \(R^{\prime}\sim t^{\prime}_{\rm tr}\Gamma^{2}c\). Thus, one obtains the patch size as \(t^{\prime}_{\rm tr}\Gamma c\gg 10^{15}\,{\rm cm}\) (provided that \(t^{\prime}_{\rm tr}>1\,{\rm h}\) for the afterglow period). The realization of scenario (iii) requires that the size of the blobs is small, \(r_{1}\ll t_{\rm tr}^{\prime}\Gamma c\). Verification of this condition from first principles may require detailed plasma simulations, which are beyond the scope of this study. Since in GRB afterglows the GeV emission seems to belong to the same spectral component as the synchrotron emission, we may speculate that the acceleration in the blobs is limited by the synchrotron cooling and that the acceleration process is efficient, \(\eta_{\rm acc}\sim 1\) (here \(\eta_{\rm acc}\) is the acceleration efficiency). Thus, the size of the blobs should be sufficiently large to confine particles with energy
\[E\approx 60\left(\frac{B}{1\,{\rm G}}\right)^{-\nicefrac{1}{2}}\,{\rm TeV}\,.\]
In what follows, \(\varepsilon\) denotes the target (synchrotron) photon energy. For the synchrotron integral kernel, \(K_{\rm syn,\varepsilon}\), we use a simple analytic approximation for the pitch-angle-averaged synchrotron spectrum (for details see Aharonian et al., 2010). Finally, we compute the energy distribution of the target photons as
\[\frac{{\rm d}N_{\rm syn}}{{\rm d}\varepsilon\,{\rm d}V}\approx\frac{1}{R^{2}c} \bigg{(}\frac{{\rm d}N_{\rm syn,1}}{{\rm d}\varepsilon\,{\rm d}t}+\frac{{\rm d }N_{\rm syn,2}}{{\rm d}\varepsilon\,{\rm d}t}\bigg{)}\,. \tag{24}\]
Here \(R\) is the size of the production region.
The rate of IC scattering is determined by the angle-averaged scattering cross-section (for details see Jones, 1968):
\[\frac{{\rm d}\nu_{\rm ic}}{{\rm d}\varepsilon_{\gamma}\,{\rm d} \varepsilon}=\frac{8\pi cr_{0}^{2}}{bE}\frac{{\rm d}N_{\rm syn}}{{\rm d}V\,{ \rm d}\varepsilon}\times\] \[\bigg{[}1+\frac{z^{2}}{2(1-z)}+\frac{z}{b(1-z)}-\frac{2z^{2}}{b^ {2}(1-z)^{2}}-\] \[\frac{z^{3}}{2b(1-z)^{2}}-\frac{2z}{b(1-z)}\,\ln\frac{b(1-z)}{z} \bigg{]}. \tag{25}\]
Here \(r_{0}=e^{2}/m_{e}c^{2}\) is the electron classical radius; the Klein-Nishina parameter is given by \(b=4\varepsilon E/(m_{e}^{2}c^{4})\); and \(z\) is the ratio of the up-scattered photon to electron energy, \(z=\varepsilon_{\gamma}/E\). The IC energy loss rate depends on the energy distribution of target photons as
\[\dot{E}_{\rm ic}\approx\int\limits_{0}^{\infty}{\rm d}\varepsilon\int\limits_ {\varepsilon_{\rm min,\,\gamma}}^{\varepsilon_{\rm max,\,\gamma}}{\rm d} \varepsilon_{\gamma}\,(\varepsilon-\varepsilon_{\gamma})\frac{{\rm d}\nu_{\rm ic }}{{\rm d}\varepsilon_{\gamma}\,{\rm d}\varepsilon}\,, \tag{26}\]
where the maximum/minimum energy of up-scattered gamma-ray, \(\varepsilon_{\rm max/min,\gamma}\), is determined by kinematic constraints. If electrons up-scatter low-energy target photons (i.e., the Klein-Nishina parameter is small, \(b\ll 1\)), then the IC energy loss rate depends only on the energy density of the target photons, \(w_{\rm ph}\):
\[\dot{E}_{\rm T,\,i}=-\frac{32\pi}{9}\frac{e^{4}E^{2}}{m_{e}^{4}c^{7}}w_{\rm ph }\,, \tag{27}\]
analogous to the corresponding angle averaged energy loss rate in a magnetic field given in Eq. (21).
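Equation (27) is the standard Thomson-regime loss rate written out in fundamental constants; using the classical electron radius \(r_{0}\) defined below Eq. (25), one can check that

\[\dot{E}_{\rm T}=-\frac{4}{3}\,\sigma_{\rm T}\,c\,\gamma^{2}\,w_{\rm ph}\,,\qquad\sigma_{\rm T}=\frac{8\pi}{3}r_{0}^{2}\,,\quad\gamma=\frac{E}{m_{e}c^{2}}\,,\]

reduces exactly to the coefficient \(32\pi e^{4}/(9m_{e}^{4}c^{7})\) appearing in Eq. (27).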
### Model calculations
For the model calculations, magnetic field values of \(B_{1}=1\,{\rm G}\) and \(B_{2}=10^{-3}\,{\rm G}\) are assumed. The injection power is set to \(\sim 10^{39}\,{\rm erg}\,{\rm s}^{-1}\), and for the size of the production region we consider a value close to \(10^{16}\,{\rm cm}\). If one considers this size in the context of a GRB afterglow, one should compare it to the forward shock radius, which depends on the time passed since the trigger, \(t^{\prime}_{\rm tr}\):
\[R\sim\Gamma^{2}t^{\prime}_{\rm tr}c\sim 3\times 10^{16}\bigg{(}\frac{\Gamma}{10} \bigg{)}^{2}\frac{t^{\prime}_{\rm tr}}{3\,{\rm h}}\,{\rm cm}\,. \tag{28}\]
The typical energy density of the target photons in the production region is
\[w_{\rm ph}\sim 4\times 10^{-5}\kappa_{1}\eta_{\rm rad}\bigg{(}\frac{R}{3\times 1 0^{16}\,{\rm cm}}\bigg{)}^{-2}\,{\rm erg}\,{\rm cm}^{-3}\,, \tag{29}\]
where \(\eta_{\rm rad}\) is the radiation efficiency in zone 1 (in what follows we ignore this factor, setting \(\eta_{\rm rad}=1\), for the sake of simplicity). This energy density corresponds to an equivalent magnetic field strength of
\[B_{\rm eq}\sim 3\times 10^{-2}\kappa_{1}^{\nicefrac{1}{2}}\bigg{(}\frac{R}{3\times 10^{16}\,{\rm cm}}\bigg{)}^{-1}\,{\rm G}\,. \tag{30}\]
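As a quick numerical sanity check of Eqs. (28)-(30), the short sketch below (plain Python; illustrative values only, with \(\kappa_{1}=\eta_{\rm rad}=1\)) evaluates the shock radius and the equivalent field strength \(B_{\rm eq}=\sqrt{8\pi w_{\rm ph}}\):

```python
import math

C = 3.0e10                           # speed of light [cm/s]

# Eq. (28): forward shock radius for Gamma = 10 and t'_tr = 3 h
gamma, t_tr = 10.0, 3.0 * 3600.0
R = gamma**2 * t_tr * C
print(f"R    ~ {R:.2e} cm")          # ~3e16 cm

# Eq. (29): target photon energy density (kappa_1 = eta_rad = 1)
w_ph = 4.0e-5 * (R / 3.0e16) ** -2   # [erg/cm^3]

# Eq. (30): equivalent magnetic field strength
B_eq = math.sqrt(8.0 * math.pi * w_ph)
print(f"B_eq ~ {B_eq:.1e} G")        # ~3e-2 G
```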
This photon field is the dominant target in zone 2, whereas it is negligible in zone 1. The corresponding cooling time scales are shown in Fig. 4 (top panel). Whilst at high energies (approaching \(1\,{\rm PeV}\)) the Klein-Nishina cooling time approaches its asymptotic energy dependence, \(\tau_{\rm kn}\propto E\), for the parameter set considered the particles cool in the transition regime with \(\tau\approx{\rm const}\). Thus the spectrum formed is not as hard as expected from our earlier qualitative analysis.
The effect of the onset of Klein-Nishina cooling on the electron spectrum is shown in Fig. 4 (bottom panel), where the energy distributions of electrons in the two zones are shown. For the calculations here we adopted the following parameters: linear size \(R=10^{16}\,{\rm cm}\); total power of acceleration of non-thermal particles \(L_{0}=10^{39}\,{\rm erg}\,{\rm s}^{-1}\), which is distributed between the zones with \(\kappa_{1}=0.90\) and \(\kappa_{2}=0.10\); and injection index \(\alpha=2.2\) (the "main case"). Finally, the acceleration efficiency was set to \(\eta_{\rm acc}=10^{2}\), for which the cutoff energy in zone 1 is determined to be:
\[E_{\rm cut,1}\approx 6\left(\frac{\eta_{\rm acc}}{10^{2}}\right)^{-\nicefrac{1}{2}}\bigg{(}\frac{B_{1}}{1\,{\rm G}}\bigg{)}^{-\nicefrac{1}{2}}\,{\rm TeV}\,. \tag{31}\]
For this acceleration efficiency the cutoff energy in zone 2 is at \(\approx 200\,{\rm TeV}\), which is close to the energy at which the synchrotron losses dominate over the IC losses, \(E_{\rm s}\approx 20\,{\rm TeV}\), thus the influence of the high energy cutoff becomes prominent at energies just above the Klein-Nishina hardening energy scale.
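Assuming the same synchrotron-limited scaling as in Eq. (31) applies in the second zone, the quoted value follows by substituting \(B_{2}=10^{-3}\,{\rm G}\):

\[E_{\rm cut,2}\approx 6\left(\frac{\eta_{\rm acc}}{10^{2}}\right)^{-\nicefrac{1}{2}}\bigg{(}\frac{B_{2}}{1\,{\rm G}}\bigg{)}^{-\nicefrac{1}{2}}\,{\rm TeV}\approx 6\times(10^{-3})^{-\nicefrac{1}{2}}\,{\rm TeV}\approx 190\,{\rm TeV}\,.\]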
The energy dependence of the electron distribution is directly reflected in the synchrotron spectrum from zone 2. As can be seen from Fig. 5, this component is subdominant to the luminous synchrotron component from zone 1. The photon index of the hardest part of the spectrum is \((\alpha+1)/2\approx 1.5\), which is considerably softer than the limiting photon index of \(\gamma_{\rm s,kn}(=\alpha/2)\). This is caused by the smooth broad transition to the Klein-Nishina regime. While the broad transition from the Thomson to Klein-Nishina regimes causes the electron
distribution to be not as hard as naively expected, the IC component appears to be somewhat harder than in the limiting case. As can be seen in Fig. 5, a power-law component extends from a few \(\,\mathrm{GeV}\) to beyond \(10\,\mathrm{TeV}\) with a photon index of \(\approx(\alpha+1)/2\). Note that for our calculations we set \(\alpha=2.2\), and the production region bulk Lorentz factor was assumed to be \(\Gamma=10\).
To illustrate the influence of the model parameters, we performed calculations for a range of different parameter sets. The results of these calculations are shown in Fig. 6. For the "case A" we adopted a different value for the injection index: \(\alpha=2\) instead of \(\alpha=2.2\) used in the "main case". For the "case B" we adopted a different value for the acceleration efficiency: \(\eta_{\mathrm{acc}}=10^{4}\) instead of \(\eta_{\mathrm{acc}}=10^{2}\) used in the "main case". The adopted model parameter values are summarized in Table 2.
Low-energy target photons can play an important role in the formation of a hard VHE spectrum in the case of conventional one-zone SSC models. To demonstrate the relatively small influence of low-energy target photons in the framework of the two-zone approach considered here, in the top panel of Fig. 6 we also plot the
Figure 4: Top panel: Synchrotron, IC cooling time together with the acceleration time. Bottom panel: Electron distribution in two zones. Black guide lines indicate power-law approximations.
Figure 5: Spectral energy distribution of synchrotron and IC emission from two zones. Black guide lines show the power-law approximations.
Figure 6: Spectral energy distribution of synchrotron and IC emission from two zones. Black guide lines show the power-law approximations. Top panel: Case A; Bottom panel: Case B. Thin lines in the top panel correspond to a case when the electron distribution in the first zone features a cooling break at \(E\approx 10\,\mathrm{GeV}\).
SED obtained under the same conditions assuming that the particle spectrum in the first zone features a cooling break at \(E\approx 10\,\mathrm{GeV}\). The corresponding spectra are shown with thin lines. Under this assumption, the synchrotron spectrum from the first zone features the cooling break, as expected. The IC spectrum is more strongly suppressed: one sees here the impact of both the cooling break and the reduced target photon density. The reduction of the IC loss rate leads to a considerable enhancement of the synchrotron emission from the second zone (note that this component still remains subdominant). The IC spectrum from the second zone shows, however, only minor changes, noticeable only close to the high- and low-energy cutoff regions. This quite weak influence of the target photon spectrum on the spectral properties of the IC component from the second zone is caused by the fact that the IC losses determine the particle spectrum, as we assume that the emission is generated in the fast cooling regime. Therefore, the electron spectrum adjusts to the rate of the dominant losses, and the spectral properties of the IC component are largely determined by the injection spectrum.
## 4 Discussion and Conclusion
The need to study energy losses in an inhomogeneous downstream emission region can be readily appreciated by considering the evolution of the magnetic field from the upstream to downstream regions. Based on the hydrodynamics of the forward shock propagating through the CBM, one can obtain the following estimate for the downstream magnetic field strength:
\[B\sim 3\times 10^{2}\frac{\Gamma}{10}\frac{B_{\mathrm{cbm}}}{10\,\mu\mathrm{G }}\,\mu\mathrm{G}\,. \tag{32}\]
This estimate depends on the typical strength of the CBM magnetic field, \(B_{\mathrm{cbm}}\), and accounts for the transformation of this field to the forward shock rest frame, and for the increase of the field strength at a weakly magnetized relativistic shock due to the shock compression.
The magnetic field given by Eq. (32) appears significantly below the Gauss-level required for the afterglow radiation. Therefore, one needs to assume an efficient magnetic field amplification process, which can increase the energy density of the magnetic field to the level comparable to the plasma energy density in the downstream:
\[w\sim n_{\mathrm{cbm}}m_{p}c^{2}\Gamma^{2}\approx 0.15\Big{(}\frac{n_{\mathrm{cbm}}}{1\,\mathrm{cm}^{-3}}\Big{)}\bigg{(}\frac{\Gamma}{10}\bigg{)}^{2}\,\mathrm{erg}\,\mathrm{cm}^{-3}\,, \tag{33}\]
where \(n_{\mathrm{cbm}}\) is the CBM density. This estimate shows that the magnetic field in the downstream can be amplified up to a strength of
\[B_{\mathrm{eq}}=\sqrt{8\pi w}\sim 2\Big{(}\frac{n_{\mathrm{cbm}}}{1\,\mathrm{cm}^{-3}}\Big{)}^{\nicefrac{1}{2}}\bigg{(}\frac{\Gamma}{10}\bigg{)}\,\mathrm{G}\,. \tag{34}\]
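Both estimates are easy to verify numerically; a minimal sketch (plain Python, illustrative values only), which also makes the \(m_{p}c^{2}\) factor in Eq. (33) explicit:

```python
import math

M_P = 1.673e-24                      # proton mass [g]
C = 3.0e10                           # speed of light [cm/s]

n_cbm, gamma = 1.0, 10.0             # CBM density [cm^-3], bulk Lorentz factor

# Eq. (33): downstream plasma energy density w ~ n m_p c^2 Gamma^2
w = n_cbm * M_P * C**2 * gamma**2
print(f"w    ~ {w:.2f} erg/cm^3")    # ~0.15 erg/cm^3

# Eq. (34): equipartition field strength B_eq = sqrt(8 pi w)
B_eq = math.sqrt(8.0 * math.pi * w)
print(f"B_eq ~ {B_eq:.1f} G")        # ~2 G
```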
Gauss-strength magnetic fields in the afterglow production region are also favored on theoretical grounds by afterglow emission modeling. If the magnetic field is indeed amplified by a factor of \(\sim 10^{3}\), it is natural to further assume that this amplification is inhomogeneous throughout the volume, resulting in a magnetic field configuration with strong spatial fluctuations. For example, magnetic field amplification by turbulent dynamo shows that the magnetic energy is predominantly localized in small blobs (Zhang et al., 2009). Moreover, this may be a general effect: the field amplification predominantly operates on small-scale fields (Kazantsev, 1968).
The highly inhomogeneous structure of the downstream region can have important implications for the properties of the non-thermal emission generated. In particular, such a structure in the production region can significantly alter the synchrotron radiation emission, with clumps of highly amplified magnetic field leading to the synchrotron emission extending significantly beyond the one-zone synchrotron burn-off limit (Khangulyan et al., 2021). This scenario requires that particles are accelerated in a region of weak magnetic field, and subsequently penetrate into a second zone of amplified magnetic field, where they rapidly cool producing VHE synchrotron radiation. The requirement of effective particle exchange between the two zones of strong and weak magnetic field is an important element of this scenario.
It should be noted, however, that efficient particle exchange between the zones is a significant assumption. Processes exist which can hinder particle exchange between the two zones. For example, if the change of the magnetic field strength is relatively smooth, the magnetic adiabatic invariant prevents particles from the zone of weak magnetic field reaching the strong magnetic field zone (see the discussion in Khangulyan et al., 2021). The particle escape from the zone of strong to weak magnetic field is not forbidden by the magnetic adiabatic invariant, but it seems feasible that one can neglect this process. Because of the much higher rate of synchrotron losses in the strong magnetic field zone, the total number of particles in this zone is naturally significantly reduced compared to that in the weak magnetic field zone, particularly for the highest energy particles with energies close to the maximum energy.
On the other hand, synchrotron photons can freely travel between the two zones. The photon exchange between the zones has two major effects: (i) altering
particle energy losses, and (ii) changing the properties of the IC emission. We suggest a simple model that allows one to study these two effects. We find that for feasible model parameters, IC scattering dominates the cooling process in the zone of weak magnetic field. Due to the Klein-Nishina effect, the particle spectrum formed in the fast cooling regime appears to be significantly harder than the spectrum formed in the case when synchrotron losses dominate. While the synchrotron emission from this zone may appear completely sub-luminous with respect to the synchrotron emission generated in the strong magnetic field zone, the IC component from the weak magnetic field zone would be expected to dominate.
The second signature of the hard particle spectrum is expected in the IC component generated by these particles. This spectrum appears to be hard, with a photon index coinciding with the value expected for the synchrotron/Thomson spectra, \((\alpha+2)/2\) (where \(\alpha\) is the injection index). The IC spectrum therefore appears to have the same slope as the dominant synchrotron emission. The relative flux through these two channels is determined by the phenomenological parameters, \(\kappa_{i}\), which determine the ratio of the acceleration powers in the two zones. Our simulations presented in Fig. 7 show that for an acceleration spectrum with a spectral slope of \(\alpha=2.1\), which allows the slope of the X-ray spectra for GRB190829A to be reproduced, the X-ray and VHE flux ratio seen in GRB190829A implies \(\kappa_{1}=0.7\) and \(\kappa_{2}=0.3\) (see column "GRB190829A" in Table 2). This suggests that acceleration processes of comparable power operate in both zones; however, the acceleration in the zone of stronger magnetic field is somewhat more efficient. The obtained hard VHE IC spectrum extends beyond \(10\,\mathrm{TeV}\) for a modest bulk Lorentz factor of \(\Gamma=10\). This implies that a hard multi-TeV IC component can be generated also during the late afterglow phases, when the forward shock transits into the mildly relativistic regime. During the prompt or early afterglow phases, when the bulk Lorentz factor can be significantly larger, \(\Gamma\geq 100\), the intrinsically hard IC component can extend up to the ultra-high-energy domain (\(\geq 100\,\mathrm{TeV}\)). However, we note that the EBL attenuation is severe already in the VHE domain for even the most local GRB redshift values.
|
2310.08239 | Centrosymmetric and reverse matrices in bivariate orthogonal polynomials | We introduce the concept of reflexive moment functional in two variables and
the definition of reflexive orthogonal polynomial system. Also reverse matrices
and their interesting algebraic properties are studied. Reverse matrices and
reflexive polynomial systems are directly connected in the context of bivariate
orthogonal polynomials. Centrosymmetric matrices, reverse matrices and their
connections with reflexive orthogonal polynomial systems are presented.
Finally, several particular cases and examples are analysed. | Cleonice F. Bracciali, Glalco S. Costa, Teresa E. Pérez | 2023-10-12T11:35:03Z | http://arxiv.org/abs/2310.08239v1 | # Centrosymmetric and reverse matrices in bivariate orthogonal polynomials
###### Abstract.
We introduce the concept of reflexive moment functional in two variables and the definition of reflexive orthogonal polynomial system. Also reverse matrices and their interesting algebraic properties are studied. Reverse matrices and reflexive polynomial systems are directly connected in the context of bivariate orthogonal polynomials. Centrosymmetric matrices, reverse matrices and their connections with reflexive orthogonal polynomial systems are presented. Finally, several particular cases and examples are analysed.
Key words and phrases:Bivariate orthogonal polynomials, centrosymmetric matrices, reverse matrices, reflexive weight functions, reflexive orthogonal polynomial systems 2020 Mathematics Subject Classification: Primary: 42C05; 33C50; 15A09; 15B99
## 1. Introduction
When working with the vectorial approach for multivariate orthogonal polynomials, it is necessary to make choices, such as the monomial order, that do not occur in the univariate case (see [11]). In addition to the choice of working with orthogonal, monic orthogonal, or orthonormal polynomial vectors, the choice of a basis can simplify some theoretical objects, such as coefficient matrices of three-term relations, structure relations, or explicit expressions for the polynomial vectors, among others. Within the scope of the study of multivariate polynomials, many difficulties arise naturally with the increase of the dimension in relation to the univariate case. It is therefore important to simplify the treatment of these objects as much as possible. The present work is inserted in this context.
Following [11], we consider the linear space of real polynomials in two variables given by \(\Pi=\operatorname{span}\{x^{h}\,y^{k}:h,k\geqslant 0\}\) and the linear space \(\Pi_{n}=\operatorname{span}\{x^{h}\,y^{k}:h+k\leqslant n\}\) of finite dimension \((n+1)(n+2)/2\). A polynomial of total degree \(n\) in two variables is a linear combination of monomials of total degree less than or equal to \(n\). As usual, a two variable polynomial of degree \(n\), \(p(x,y)\in\Pi_{n}\), is given by
\[p(x,y)=\sum_{i+j\leqslant n}c_{i,j}\,x^{i}\,y^{j},\quad c_{i,j}\in\mathbb{R}.\]
We say that \(p(x,y)\) is monic if it has only one term of highest degree, in the form
\[p(x,y)=x^{n-k}\,y^{k}+\sum_{i+j\leqslant n-1}\,c_{ij}\,x^{i}\,y^{j},\quad c_{ ij}\in\mathbb{R}.\]
The vector representation for bivariate polynomials by using the graded lexicographical order was introduced in [15, 16], developed in [21], and it is the main representation used in the monograph [11]. A polynomial system is a sequence of polynomial vectors \(\{\mathbb{P}_{n}\}_{n\geqslant 0}\) of increasing size \(n+1\), whose entries are linearly independent bivariate polynomials of exact degree \(n\), and orthogonality is taken with respect to a moment functional \(\mathbf{u}\).
For instance, we will show that the monic orthogonal polynomial system (MOPS) associated with \(\mathbf{u}\), denoted by \(\{\mathbb{Q}_{n}\}_{n\geqslant 0}\), where
\[\mathbb{Q}_{n}=(Q_{n,0}^{n}(x,y),Q_{n-1,1}^{n}(x,y),\ldots,Q_{0,n}^{n}(x,y))^{T},\]
satisfies \(Q_{n-k,k}^{n}(x,y)=Q_{k,n-k}^{n}(y,x)\), for \(k=0,1,\ldots,n\). This property, which will be called the _reflexive property_ of the MOPS, is useful since only half of the calculations are needed to build the polynomial vectors \(\mathbb{Q}_{n}\), \(n\geqslant 0\). As a consequence, the coefficient matrices of the three-term relations (1.1) have connections with each other. These connections provide a _reverse property_ among the associated matrices, so that fewer calculations are necessary to construct them.
Moreover, the moment functional has the _reflexive property_ if and only if the matrices involved in the three-term relations of the MOPS satisfy the _reverse property_. In this work we are interested in showing how _reverse matrices_ and _centrosymmetric matrices_ are related to bivariate orthogonal polynomials.
This work is organized as follows. In Section 2 the concepts of reverse matrices and centrosymmetric matrices, as well as their most relevant properties, are presented. Section 3 contains the basic theory of bivariate orthogonal polynomials using the matrix approach, together with the definition and some properties of reflexive polynomial vectors. Reflexive moment functionals and the consequent properties of the associated bivariate orthogonal polynomial systems are introduced in Section 4. Section 5 brings the main focus of this work, the connection of _reverse matrices_, _centrosymmetric matrices_ and _reflexive bivariate orthogonal polynomials_. In Section 6 we present some non-trivial examples to illustrate these connections.
## 2. Reverse matrices and centrosymmetric matrices
Centrosymmetric matrices satisfy several interesting properties and support some algebraic structures. They have been extensively studied and many applications can be found in the literature, for instance see [1, 3, 7, 8, 12, 19, 20].
We start this section defining the reverse property between two matrices of size \((m+1)\times(n+1)\).
**Definition 2.1**.: _Consider two matrices \(X\) and \(Y\) of size \((m+1)\times(n+1)\), denoted by \(X=(x_{ij})_{i,j=0}^{m,n}\) and \(Y=(y_{ij})_{i,j=0}^{m,n}\), respectively. If_
\[x_{ij}=y_{m-i,n-j},\quad for\ i=0,1,\ldots,m,\ j=0,1,\ldots,n,\]
_then \(X\) is called reverse matrix of \(Y\) (conversely, \(Y\) is reverse matrix of \(X\)). We will denote \(X\leftrightharpoons Y\)._
**Example 2.2**.: _Consider \(X=\begin{pmatrix}1&2&3\\ 4&5&6\end{pmatrix}\) and \(Y=\begin{pmatrix}6&5&4\\ 3&2&1\end{pmatrix}\), then \(X\leftrightharpoons Y\)._
One can determinate the matrix \(X^{R}\), the reverse matrix of \(X=(x_{ij})_{i,j=0}^{m,n}\), by applying an operation called _reflection_, i.e.,
\[X^{R}=(x_{m-i,n-j})_{i,j=0}^{m,n}. \tag{2.1}\]
Hence, \(X\leftrightharpoons X^{R}\).
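In code, the reflection (2.1) is simply a reversal of both axes; a minimal NumPy sketch (not part of the original exposition) reproducing Example 2.2:

```python
import numpy as np

def reverse(X: np.ndarray) -> np.ndarray:
    """Reflection (2.1): reverse the order of the rows and of the columns."""
    return X[::-1, ::-1]

X = np.array([[1, 2, 3],
              [4, 5, 6]])
Y = np.array([[6, 5, 4],
              [3, 2, 1]])

assert np.array_equal(reverse(X), Y)   # X and Y are reverse matrices
assert np.array_equal(reverse(Y), X)   # the relation is symmetric
```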
The next result presents the minimum number of permutations between rows and between columns of a matrix \(X\) needed to transform \(X\) into its reverse matrix.
**Lemma 2.3**.: _Let \(X\) be a matrix of size \(n\). Then, there exist \(2\lfloor\frac{n}{2}\rfloor\) permutations between rows and between columns of \(X\) that transform \(X\) into \(X^{R}\)._
Now we present some direct properties of matrices that have the reverse property.
**Proposition 2.4**.: _Let \(X\) and \(Y\) be square matrices such that \(X\leftrightharpoons Y\), then \(\det(X)=\det(Y)\)._
Proof.: Since an even number of permutations of rows or columns of a matrix preserves the value of its determinant, the result follows from Lemma 2.3.
For the next results, we denote by \(X^{ij},\,i=0,1,\ldots,m,\,j=0,1,\ldots,n\), the matrix obtained from a matrix \(X\) of size \((m+1)\times(n+1)\) eliminating the row \(i\) and the column \(j\). The adjugate of the square matrix \(X\), that is, the transposed matrix of cofactors of \(X\), is denoted by \(adjX\), see [13, p. 22].
**Proposition 2.5**.: _If \(X\) and \(Y\) are matrices of size \((m+1)\times(n+1)\) satisfying \(X\leftrightharpoons Y\), then_
_1) \(X^{T}\leftrightharpoons Y^{T}\)._
_2) \(XY^{T}\leftrightharpoons YX^{T}\)._
_3) \(X^{ij}\leftrightharpoons Y^{m-i,n-j}\), for \(i=0,1,\ldots,m,\,j=0,1,\ldots,n\)._
_4) If \(m=n\), then \(adjX\leftrightharpoons adjY\)._
Proof.: 1) Consider \(X=(x_{ij})_{i,j=0}^{m,n}\), \(Y=(y_{ij})_{i,j=0}^{m,n}\), \(X^{T}=(\widetilde{x}_{ij})_{i,j=0}^{n,m}\), and \(Y^{T}=(\widetilde{y}_{ij})_{i,j=0}^{n,m}\) such that \(\widetilde{x}_{ji}=x_{ij}\) and \(\widetilde{y}_{ji}=y_{ij}\) for \(i=0,1,\ldots,m,\,j=0,1,\ldots,n\). Since \(x_{ij}=y_{m-i,n-j}\), for \(j=0,1,\ldots,n\) and \(i=0,1,\ldots,m\), we have
\[\widetilde{x}_{ji}=x_{ij}=y_{m-i,n-j}=\widetilde{y}_{n-j,m-i}.\]
Therefore, \(X^{T}\leftrightharpoons Y^{T}\).
2) Consider the same notation used in item 1). We know that \(XY^{T}\) is a matrix of size \((m+1)\times(m+1)\). Denoting \(XY^{T}=(c_{ij})_{i,j=0}^{m}\) and \(YX^{T}=(\widetilde{c}_{ij})_{i,j=0}^{m}\), since \(X\leftrightharpoons Y\), we see that
\[c_{ij}=\sum_{k=0}^{n}x_{ik}\widetilde{y}_{kj}=\sum_{k=0}^{n}y_{m-i,n-k} \widetilde{x}_{n-k,m-j}=\widetilde{c}_{m-i,m-j}.\]
Hence, \(XY^{T}\leftrightharpoons YX^{T}\).
3) It follows directly.
4) Consider \(X^{*}=(x_{ij}^{*})_{i,j=0}^{n}\) and \(Y^{*}=(y_{ij}^{*})_{i,j=0}^{n}\), the matrices of cofactors of \(X\) and \(Y\), respectively. From item 3) we know that \(X^{ij}\leftrightharpoons Y^{n-i,n-j}\), for \(i,j=0,1,\ldots,n\). The parity of \((i+j)\) is the same as that of \((n-i)+(n-j)\); then, from Proposition 2.4, for \(i,j=0,1,\ldots,n\),
\[x_{ij}^{*}=(-1)^{i+j}\det(X^{ij})=(-1)^{(n-i)+(n-j)}\det(Y^{n-i,n-j})=y_{n-i,n -j}^{*}.\]
Therefore, \(X^{*}\leftrightharpoons Y^{*}\). Thus, from item 1), \(adjX\leftrightharpoons adjY\).
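Items 1), 2) and 4) of Proposition 2.5 are easy to test numerically; a short NumPy sketch (random matrices, so a spot check rather than a proof):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((3, 4))
Y = X[::-1, ::-1]                      # X and Y are reverse matrices

# item 1): the transposes are reverse matrices of each other
assert np.allclose(X.T, Y.T[::-1, ::-1])

# item 2): X Y^T and Y X^T are reverse matrices of each other
assert np.allclose(X @ Y.T, (Y @ X.T)[::-1, ::-1])

# item 4): adjugates of square reverse matrices are reverse matrices
A = rng.random((4, 4))
B = A[::-1, ::-1]
adj = lambda M: np.linalg.det(M) * np.linalg.inv(M)
assert np.allclose(adj(A), adj(B)[::-1, ::-1])
```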
There are many approaches for the centrosymmetric property of a matrix, see [1, 3, 7, 13, 20].
For general matrices of size \((m+1)\times(n+1)\), one approach can be the following.
**Definition 2.6**.: _Consider a matrix \(X=(x_{i,j})_{i,j=0}^{m,n}\) of size \((m+1)\times(n+1)\). The matrix \(X\) is centrosymmetric if it satisfies \(X\leftrightharpoons X\), that is, \(X^{R}=X\) and_
\[x_{i,j}=x_{m-i,n-j},\quad for\ i=0,1,\ldots,m,\ j=0,1,\ldots,n.\]
**Example 2.7**.: _The matrices_
\[A=\begin{pmatrix}1&2&3&4&5\\ 6&7&8&7&6\\ 5&4&3&2&1\end{pmatrix},\quad B=\begin{pmatrix}1&2&3&4\\ 5&6&7&8\\ 8&7&6&5\\ 4&3&2&1\end{pmatrix},\quad and\quad C=\begin{pmatrix}1&2&3\\ 4&5&4\\ 3&2&1\end{pmatrix}\]
_are centrosymmetric matrices._
In an approach for square matrices of size \((n+1)\), the _reversal matrix_ (or _exchange matrix_), denoted by \(J_{n+1}\), is a matrix that has ones along the secondary diagonal and zeros elsewhere. Notice that \(J_{n+1}=J_{n+1}^{-1}\).
A square matrix \(X\) of size \((n+1)\) is a _centrosymmetric matrix_ if \(J_{n+1}X=XJ_{n+1}\). Furthermore, a square matrix \(X\) of size \((n+1)\) is called _skew-centrosymmetric matrix_ if \(J_{n+1}X=-XJ_{n+1}\), see [3, 7, 13].
A vector \(V=(v_{i})_{i=0}^{n}\) of size \((n+1)\) is called a _symmetric vector_ if \(J_{n+1}V=V\), that is, \(v_{i}=v_{n-i}\), for \(i=0,1,\ldots,n\). Moreover, a vector \(V=(v_{i})_{i=0}^{n}\) is called a _skew-symmetric vector_ if \(J_{n+1}V=-V\), that is, \(v_{i}=-v_{n-i}\), \(i=0,1,\ldots,n\) ([7]).
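For square matrices these notions are conveniently tested through the exchange matrix; a small sketch (using the matrix \(C\) of Example 2.7):

```python
import numpy as np

def exchange(n: int) -> np.ndarray:
    """Reversal (exchange) matrix J_n: ones along the secondary diagonal."""
    return np.fliplr(np.eye(n))

C = np.array([[1, 2, 3],
              [4, 5, 4],
              [3, 2, 1]])
J = exchange(3)
assert np.allclose(J @ C, C @ J)       # C is centrosymmetric

v = np.array([1.0, 2.0, 1.0])
assert np.allclose(J @ v, v)           # v is a symmetric vector
```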
The next proposition brings some known direct consequences of the centrosymmetric property of a matrix.
**Proposition 2.8**.:
_1) The sum of centrosymmetric matrices is a centrosymmetric matrix._
_2) The product of centrosymmetric matrices is a centrosymmetric matrix._
_3) The transpose of a centrosymmetric matrix is a centrosymmetric matrix._
_4) If_ \(X\) _is a square centrosymmetric matrix, then_ \(adjX\) _is also centrosymmetric._
_5) If_ \(X\) _is an invertible centrosymmetric matrix, then_ \(X^{-1}\) _is also centrosymmetric._
Proof.: The proof of items 1), 2) and 3) follow directly from the definition of centrosymmetric matrix. The proof of item 4) follows from Proposition 2.5 item 4) making \(Y=X\).
For completeness we include here a simple proof of item 5), using item 4). We denote \(X^{-1}=(\widehat{x}_{ij})_{i,j=0}^{n}\) and \(adjX=(x_{ij}^{A})_{i,j=0}^{n}\). From item 4), \(x_{ij}^{A}=x_{n-i,n-j}^{A}\), for \(i,j=0,1,\ldots,n\). Then,
\[\widehat{x}_{ij}=\frac{1}{\det X}x_{ij}^{A}=\frac{1}{\det X}x_{n-i,n-j}^{A}= \widehat{x}_{n-i,n-j},\]
for \(i,j=0,1,\ldots,n\).
The following result is a straightforward consequence of items 2) and 5) of Proposition 2.8.
**Corollary 2.9**.: _If \(XY\) is a centrosymmetric matrix and \(X\) is an invertible centrosymmetric matrix, then \(Y\) is also centrosymmetric matrix._
Additionally, some basic properties of symmetric centrosymmetric matrices, including properties of their eigenvalues and eigenvectors, were studied in [20]. The next result, a consequence of [20, Th. 11], gives a characteristic of the eigenvectors of symmetric centrosymmetric positive definite matrices.
**Proposition 2.10** ([20]).: _If \(V\) is an eigenvector of a symmetric centrosymmetric positive definite matrix, then \(V\) is either a symmetric vector or a skew-symmetric vector._
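Proposition 2.10 can be illustrated numerically; the sketch below builds a symmetric centrosymmetric positive definite matrix and classifies its eigenvectors (a random-based illustration, not a proof; with distinct eigenvalues the classification is unambiguous):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
A = rng.random((n, n))
S = A @ A.T + n * np.eye(n)            # symmetric positive definite
X = 0.5 * (S + S[::-1, ::-1])          # also centrosymmetric by construction

_, vecs = np.linalg.eigh(X)
for k in range(n):
    v = vecs[:, k]
    is_sym = np.allclose(v, v[::-1])
    is_skew = np.allclose(v, -v[::-1])
    assert is_sym or is_skew           # each eigenvector is (skew-)symmetric
```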
Next result shows that one can construct, in different ways, centrosymmetric matrices from two matrices that have the reverse property.
**Proposition 2.11**.:
_1) Let \(T_{1}\) and \(T_{2}\) be matrices of size \((m+1)\times(n+1)\). Then \(T_{1}\leftrightharpoons T_{2}\) if and only if the matrix of size \(2(m+1)\times 2(n+1)\)_
\[\widehat{T}=\left(\begin{array}{c|c}T_{1}&\mathsf{0}\\ \hline\mathsf{0}&T_{2}\end{array}\right),\]
_where \(\mathsf{0}\) is a zero matrix of size \((m+1)\times(n+1)\), is centrosymmetric. 2) If \(T_{1}\) and \(T_{2}\) are matrices of size \((m+1)\times(n+1)\) such that \(T_{1}\leftrightharpoons T_{2}\), then \(T_{1}+T_{2}\) and \(T_{1}-T_{2}\) are centrosymmetric matrices._
Proof.: 1) Suppose that \(T_{1}\leftrightharpoons T_{2}\), hence, using the reflection operation (2.1), we know that \(T_{1}^{R}=T_{2}\) and \(T_{2}^{R}=T_{1}.\) Applying reflection operator in \(\widehat{T}\) we get
\[\widehat{T}^{R}=\left(\begin{array}{c|c}T_{2}^{R}&\mathsf{0}\\ \hline\mathsf{0}&T_{1}^{R}\end{array}\right)=\left(\begin{array}{c|c}T_{1}& \mathsf{0}\\ \hline\mathsf{0}&T_{2}\end{array}\right)=\widehat{T}.\]
Therefore, \(\widehat{T}\) is a centrosymmetric matrix.
Reciprocally, supposing that \(\widehat{T}^{R}=\widehat{T}\), we see that \(T_{2}^{R}=T_{1}\) and \(T_{1}^{R}=T_{2}\), hence \(T_{1}\leftrightharpoons T_{2}\).
2) The proof follows directly from the fact that \(T_{1}^{R}=T_{2}\) and \(T_{2}^{R}=T_{1}\).
## 3. Bivariate orthogonal polynomials
We start by recalling the basic definitions and main tools about bivariate orthogonal polynomials that we will need in the rest of the paper. We refer mainly to [11].
As we mentioned in the introduction, a _polynomial system (PS)_ is a sequence of polynomial vectors \(\{\mathbb{P}_{n}\}_{n\geqslant 0}\) of increasing size \(n+1\),
\[\mathbb{P}_{n}=(P_{n,0}^{n}(x,y),P_{n-1,1}^{n}(x,y),\ldots,P_{0,n}^{n}(x,y))^ {T},\]
where the bivariate polynomial \(P_{n-k,k}^{n}(x,y)\), \(k=0,1,\ldots,n\), has exactly degree \(n\), and the set of polynomial entries is linearly independent.
The simplest polynomial system is the so-called _canonical basis_\(\{\mathbb{X}_{n}\}_{n\geqslant 0}\), defined as
\[\mathbb{X}_{n}=(x^{n},x^{n-1}\,y,x^{n-2}\,y^{2},\ldots,x\,y^{n-1},y^{n})^{T}.\]
If \(\{\mathbb{P}_{n}\}_{n\geqslant 0}\) is a polynomial system, then there exist real matrices \(G_{k}^{n}\) of size \((n+1)\times(k+1)\) such that every polynomial vector \(\mathbb{P}_{n}\) can be expressed in terms of the canonical basis as
\[\mathbb{P}_{n}=G_{n}\,\mathbb{X}_{n}+G_{n-1}^{n}\,\mathbb{X}_{n-1}+G_{n-2}^{n }\,\mathbb{X}_{n-2}+\cdots+G_{1}^{n}\,\mathbb{X}_{1}+G_{0}^{n}\,\mathbb{X}_{0}. \tag{3.1}\]
In the particular case when \(G_{n}=I_{n+1}\), \(\{\mathbb{P}_{n}\}_{n\geqslant 0}\) is called _monic polynomial system_.
A polynomial system can satisfy a symmetric property on the entries of the polynomial vectors.
**Definition 3.1**.: _Let \(\mathbb{P}_{n}=(P_{n,0}^{n}(x,y),P_{n-1,1}^{n}(x,y),\ldots,P_{0,n}^{n}(x,y))^{T}\) be a polynomial vector satisfying_
\[P_{n-k,k}^{n}(x,y)=P_{k,n-k}^{n}(y,x),\quad k=0,1,\ldots,n,\]
_then \(\mathbb{P}_{n}\) is called a reflexive polynomial vector. A polynomial system such that all the polynomial vectors are reflexive will be called a reflexive polynomial system._
We remark that in the univariate case, a reflexive property of a polynomial is defined according to its coefficients. First consider a complex polynomial in one variable \(r(x)=b_{n}x^{n}+b_{n-1}x^{n-1}+\cdots+b_{0}\), \(b_{i}\in\mathbb{C}\); the polynomial \(r^{*}(x)=x^{n}\overline{r\left(1/\overline{x}\right)}=\overline{b}_{0}x^{n}+\overline{b}_{1}x^{n-1}+\cdots+\overline{b}_{n}\) is called the _reciprocal polynomial_ of \(r(x)\). If \(r^{*}(x)=ur(x)\), with \(|u|=1\), then \(r(x)\) is called a _self-inversive polynomial_, see [18]. This type of polynomial also appears in the theory of orthogonal polynomials on the unit circle.
Consider now a real polynomial \(r(x)=b_{n}x^{n}+b_{n-1}x^{n-1}+\cdots+b_{0}\), \(b_{i}\in\mathbb{R}\), and the polynomial \(x^{n}r\left(1/x\right)=b_{0}x^{n}+b_{1}x^{n-1}+\cdots+b_{n}.\) If \(r(x)=x^{n}r(1/x)\), it means \(b_{i}=b_{n-i}\), for \(i=0,1,\ldots,n\), then \(r(x)\) is called _self-reciprocal polynomial_ or _palindromic polynomial_. In several works, for instance in [4, 5, 14, 17], the behaviour of the zeros of special cases of this type of univariate polynomials was studied.
Next result shows a relation between reflexive polynomial vectors and centrosymmetric matrices. More precisely, it says that the matrix of change of basis between two reflexive polynomial vectors is necessarily centrosymmetric. Reciprocally, the product of centrosymmetric matrix and reflexive polynomial vector preserves the reflexive property of the polynomial vector.
**Proposition 3.2**.: _Let \(\{\mathbb{P}_{n}\}_{n\geqslant 0}\) and \(\{\widetilde{\mathbb{P}}_{n}\}_{n\geqslant 0}\) be two polynomial systems such that \(\mathbb{P}_{n}\) is a reflexive polynomial vector and \(\widetilde{\mathbb{P}}_{n}=T_{n}\mathbb{P}_{n}\), for \(n\in\mathbb{N}\). Then, \(\widetilde{\mathbb{P}}_{n}\) is a reflexive polynomial vector if and only if \(T_{n}\) is a centrosymmetric matrix._
Proof.: Let \(\widetilde{\mathbb{P}}_{n}=T_{n}\mathbb{P}_{n}\) be written as
\[\left(\begin{array}{c}\widetilde{P}_{n,0}^{n}(x,y)\\ \vdots\\ \widetilde{P}_{n-k,k}^{n}(x,y)\\ \vdots\\ \widetilde{P}_{0,n}^{n}(x,y)\end{array}\right)=\begin{pmatrix}t_{00}^{(n)}&t_ {01}^{(n)}&\ldots&t_{0n}^{(n)}\\ \vdots&\vdots&&\vdots\\ t_{k0}^{(n)}&t_{k1}^{(n)}&\ldots&t_{kn}^{(n)}\\ \vdots&\vdots&&\vdots\\ t_{n0}^{(n)}&t_{n1}^{(n)}&\ldots&t_{nn}^{(n)}\end{pmatrix}\left(\begin{array} []{c}P_{n,0}^{n}(x,y)\\ \vdots\\ P_{n-k,k}^{n}(x,y)\\ \vdots\\ P_{0,n}^{n}(x,y)\end{array}\right).\]
Since \(\mathbb{P}_{n}\) is reflexive polynomial vector,
\[\widetilde{P}_{n-k,k}^{n}(x,y)=\sum_{j=0}^{n}t_{kj}^{(n)}P_{n-j,j}^{n}(x,y)= \sum_{j=0}^{n}t_{kj}^{(n)}P_{j,n-j}^{n}(y,x). \tag{3.2}\]
On the other hand, we can write \(\widetilde{P}_{k,n-k}^{n}(y,x)\) as
\[\widetilde{P}_{k,n-k}^{n}(y,x)=\sum_{j=0}^{n}t_{n-k,j}^{(n)}P_{n-j,j}^{n}(y,x). \tag{3.3}\]
Thus, \(\widetilde{P}_{n-k,k}^{n}(x,y)=\widetilde{P}_{k,n-k}^{n}(y,x)\) implies that
\[\sum_{j=0}^{n}t_{kj}^{(n)}P_{j,n-j}^{n}(y,x)=\sum_{j=0}^{n}t_{n-k,j}^{(n)}P_{n -j,j}^{n}(y,x).\]
Making \(l=n-j\) in the second summation, we have
\[\sum_{j=0}^{n}t_{kj}^{(n)}P_{j,n-j}^{n}(y,x)=\sum_{l=0}^{n}t_{n-k,n-l}^{(n)}P_{ l,n-l}^{n}(y,x).\]
Hence,
\[\sum_{j=0}^{n}[t_{k,j}^{(n)}-t_{n-k,n-j}^{(n)}]P_{j,n-j}^{n}(y,x)=0,\]
and since \(\{P_{n-j,j}^{n}\}\) is linearly independent, it follows that \(t_{kj}^{(n)}=t_{n-k,n-j}^{(n)},\,j=0,1,\ldots,n.\) That is, \(T_{n}\) is centrosymmetric matrix.
Conversely, suppose that \(T_{n}\) is a centrosymmetric matrix and \(\mathbb{P}_{n}\) is a reflexive polynomial vector; from (3.2) we know that
\[\widetilde{P}_{n-k,k}^{n}(x,y)=\sum_{j=0}^{n}t_{kj}^{(n)}P_{n-j,j}^{n}(x,y)= \sum_{j=0}^{n}t_{n-k,n-j}^{(n)}P_{j,n-j}^{n}(y,x).\]
Making \(l=n-j\) on the latter summation, from (3.3), we have
\[\widetilde{P}_{n-k,k}^{n}(x,y)=\sum_{j=0}^{n}t_{n-k,l}^{(n)}P_{n-l,l}^{n}(y,x) =\widetilde{P}_{k,n-k}^{n}(y,x).\]
Then, \(\widetilde{\mathbb{P}}_{n}\) is a reflexive polynomial vector.
From now on, we will work with bivariate moment functionals \(\mathbf{u}\) defined as
\[\langle\mathbf{u},f\rangle=\iint\limits_{\Omega}f(x,y)\,W(x,y)\,dx\,dy,\]
where \(W(x,y)\) is a weight function defined on a region \(\Omega\subset\mathbb{R}^{2}\) such that its associated moments satisfy
\[\mu_{n,m}=\langle\mathbf{u},x^{n}\,y^{m}\rangle=\iint\limits_{\Omega}x^{n}\, y^{m}\,W(x,y)\,dx\,dy<+\infty,\]
for \(n,m=0,1,\ldots\). Thus, we are considering the inner product
\[(f,g):=\langle\mathbf{u},f\,g\rangle=\iint\limits_{\Omega}f(x,y)\,g(x,y)\,W(x,y)\,dx\,dy. \tag{3.4}\]
In terms of the inner product (3.4), if \(\{\mathbb{P}_{n}\}_{n\geqslant 0}\) is a polynomial system satisfying
\[(\mathbb{P}_{n},\mathbb{P}_{m}^{T})=\langle\mathbf{u},\mathbb{P}_{n}\,\mathbb{P}_{m}^{T}\rangle=\begin{cases}\mathtt{0},&n\neq m,\\ H_{n},&n=m,\end{cases}\]
where \(H_{n}\) is a non-singular symmetric positive definite matrix of size \(n+1\) and \(\mathtt{0}\) is a zero matrix of adequate size, then \(\{\mathbb{P}_{n}\}_{n\geqslant 0}\) is an _orthogonal polynomial system_ (OPS) associated with the weight function \(W(x,y)\). If \(H_{n}=I_{n+1}\), the identity matrix of order \(n+1\), then \(\{\mathbb{P}_{n}\}_{n\geqslant 0}\) is called an _orthonormal polynomial system_, and such systems are not unique. As we recalled in the introduction, there is a unique _monic orthogonal polynomial system_ associated with the inner product \((\cdot,\cdot)\). In the sequel, we denote by \(\{\mathbb{P}_{n}\}_{n\geqslant 0}\) an orthonormal polynomial system, and by \(\{\mathbb{Q}_{n}\}_{n\geqslant 0}\) the monic orthogonal polynomial system.
Orthogonal polynomial systems satisfy two three-term relations, one for each variable, as in (1.1). For an orthonormal polynomial system, \(\{\mathbb{P}_{n}\}_{n\geqslant 0}\), and for the monic orthogonal polynomial system, \(\{\mathbb{Q}_{n}\}_{n\geqslant 0}\), the three-term relations have, respectively, the following forms
\[x_{i}\mathbb{P}_{n}=A_{n,i}\mathbb{P}_{n+1}+B_{n,i}\mathbb{P}_{n}+A_{n-1,i}^{T }\mathbb{P}_{n-1},\quad i=1,2, \tag{3.5}\]
for \(n\geqslant 0\), \(\mathbb{P}_{-1}=0\) and \(\mathbb{P}_{0}=(\mu_{0,0}^{-1/2})\), and
\[x_{i}\mathbb{Q}_{n}=L_{n,i}\mathbb{Q}_{n+1}+C_{n,i}\mathbb{Q}_{n}+D_{n,i} \mathbb{Q}_{n-1},\quad i=1,2, \tag{3.6}\]
for \(n\geqslant 0\), \(\mathbb{Q}_{-1}=0\) and \(\mathbb{Q}_{0}=(1)\). Here \(x_{1}=x\), \(x_{2}=y\), in order to simplify the notation. The matrices \(A_{n,i}\) of size \((n+1)\times(n+2)\) are full rank matrices, \(B_{n,i}\) and \(C_{n,i}\) are matrices of size \((n+1)\times(n+1)\), \(D_{n,i}\) are full rank matrices of size \((n+1)\times n\), and \(L_{n,i}\) are matrices of size \((n+1)\times(n+2)\) such that \(x_{i}\mathbb{X}_{n}=L_{n,i}\mathbb{X}_{n+1}\), \(i=1,2\), namely
\[L_{n,1}=\left(\begin{array}{cccc|c}1&&&\bigcirc&0\\ &1&&&0\\ &&\ddots&&\vdots\\ \bigcirc&&&1&0\end{array}\right)\quad\text{and}\quad L_{n,2}=\left(\begin{array}{c|cccc}0&1&&&\bigcirc\\ 0&&1&&\\ \vdots&&&\ddots&\\ 0&\bigcirc&&&1\end{array}\right).\]
Given \(\mathbf{u}\) a moment functional, the _matrix of moments_\(M_{n}\), \(n\in\mathbb{N}\), is a matrix of size \((n+1)(n+2)/2\) defined by
\[M_{n}=\begin{pmatrix}\langle\mathbf{u},\mathbb{X}_{0}\mathbb{X}_{0}^{T}\rangle &\langle\mathbf{u},\mathbb{X}_{0}\mathbb{X}_{1}^{T}\rangle&\cdots&\langle \mathbf{u},\mathbb{X}_{0}\mathbb{X}_{n}^{T}\rangle\\ \langle\mathbf{u},\mathbb{X}_{1}\mathbb{X}_{0}^{T}\rangle&\langle\mathbf{u}, \mathbb{X}_{1}\mathbb{X}_{1}^{T}\rangle&\cdots&\langle\mathbf{u},\mathbb{X}_{1} \mathbb{X}_{n}^{T}\rangle\\ \vdots&\vdots&&\vdots\\ \langle\mathbf{u},\mathbb{X}_{n}\mathbb{X}_{0}^{T}\rangle&\langle\mathbf{u}, \mathbb{X}_{n}\mathbb{X}_{1}^{T}\rangle&\cdots&\langle\mathbf{u},\mathbb{X}_{n} \mathbb{X}_{n}^{T}\rangle\end{pmatrix}, \tag{3.7}\]
where \(\langle\mathbf{u},\mathbb{X}_{m}\mathbb{X}_{n}^{T}\rangle\) is a block of size \((m+1)\times(n+1)\) given by
\[\langle\mathbf{u},\mathbb{X}_{m}\mathbb{X}_{n}^{T}\rangle=\begin{pmatrix} \langle\mathbf{u},x^{m+n}\rangle&\langle\mathbf{u},x^{m+n-1}y\rangle&\ldots& \langle\mathbf{u},x^{m}y^{n}\rangle\\ \langle\mathbf{u},x^{m+n-1}y\rangle&\langle\mathbf{u},x^{m+n-2}y^{2}\rangle& \ldots&\langle\mathbf{u},x^{m-1}y^{n+1}\rangle\\ \vdots&\vdots&&\vdots\\ \langle\mathbf{u},x^{n}y^{m}\rangle&\langle\mathbf{u},x^{n-1}y^{m+1}\rangle& \ldots&\langle\mathbf{u},y^{m+n}\rangle\end{pmatrix}. \tag{3.8}\]
See [11] for more details.
Let \(\alpha=(\alpha_{1},\alpha_{2})\) be a bi-index such that \(|\alpha|=\alpha_{1}+\alpha_{2}=n\), that is, \(\alpha=(n-k,k)\), for \(k=0,1,\ldots,n\). Using the notation \(\mathbf{x}^{\alpha}=x^{\alpha_{1}}y^{\alpha_{2}}\), the matrix \(M_{\alpha}(x,y)\) is defined by
\[M_{\alpha}(x,y)=\left(\begin{array}{cccc|c}&&&\langle\mathbf{u},\mathbf{x}^ {\alpha}\mathbb{X}_{0}\rangle\\ &M_{n-1}&&\langle\mathbf{u},\mathbf{x}^{\alpha}\mathbb{X}_{1}\rangle\\ &&&\vdots\\ &&&\langle\mathbf{u},\mathbf{x}^{\alpha}\mathbb{X}_{n-1}\rangle\\ \hline\mathbb{X}_{0}^{T}&\mathbb{X}_{1}^{T}&...&\mathbb{X}_{n-1}^{T}&\mathbf{ x}^{\alpha}\end{array}\right). \tag{3.9}\]
**Proposition 3.3** ([11]).: _Consider the monic polynomials \(Q_{n-k,k}^{n}(x,y)\), for \(k=0,1,\ldots,n\), the entries of \(\mathbb{Q}_{n}=(Q_{n,0}^{n}(x,y),Q_{n-1,1}^{n}(x,y),\ldots,Q_{0,n}^{n}(x,y))^{T},\) given by_
\[Q_{n-k,k}^{n}(x,y)=\frac{\det(M_{(n-k,k)}(x,y))}{\det(M_{n-1})},\quad k=0,1, \ldots,n,\]
_then \(\{\mathbb{Q}_{n}\}_{n\geqslant 0}\) is the MOPS with respect to the moment functional \(\mathbf{u}\)._
## 4. Reflexive Bivariate Orthogonal Polynomials
First we need the definition of _reflexive weight function_.
**Definition 4.1**.: _Consider a region \(\Omega\subseteq\mathbb{R}^{2}\), such that it satisfies \((x,y)\in\Omega\Leftrightarrow(y,x)\in\Omega.\) A weight function \(W(x,y)\) defined in \(\Omega\), satisfying_
\[W(x,y)=W(y,x),\quad(x,y)\in\Omega, \tag{4.1}\]
_is called a reflexive weight function._
As we defined in the introduction, a _reflexive moment functional_ is a functional \(\mathbf{u}\) such that its associated moments satisfy
\[\mu_{m,n}=\mu_{n,m},\quad m,n=0,1,\ldots.\]
In particular, if \(W(x,y)\) is a _reflexive weight function_, the moment functional
\[\langle\mathbf{u},f\rangle=\iint\limits_{\Omega}f(x,y)\,W(x,y)\,dx\,dy,\]
is a _reflexive moment functional_.
Now we can show that a reflexive weight function yields a reflexive MOPS. This is one of the main results involving reflexive weight functions.
**Theorem 4.2**.: _The polynomial vectors of the MOPS \(\{\mathbb{Q}_{n}\}_{n\geqslant 0}\) associated with a reflexive weight function are reflexive, that is,_
\[Q_{n-k,k}^{n}(x,y)=Q_{k,n-k}^{n}(y,x),\quad k=0,1,\ldots,n.\]
Proof.: From Proposition 3.3, we can write, for \(k=0,1,\ldots,n\),
\[Q_{n-k,k}^{n}(x,y)=\frac{\det(M_{(n-k,k)}(x,y))}{\det(M_{n-1}(x,y))}\quad\text {and}\quad Q_{k,n-k}^{n}(y,x)=\frac{\det(M_{(k,n-k)}(y,x))}{\det(M_{n-1}(y,x) )},\]
where, from (3.7) and (3.8), the matrices \(M_{n-1}(x,y)\) and \(M_{n-1}(y,x)\) are
\[M_{n-1}(x,y)=\begin{pmatrix}\mu_{00}&\mu_{10}&\mu_{01}&\cdots&\mu_{n-1,0}& \cdots&\mu_{0,n-1}\\ \mu_{10}&\mu_{20}&\mu_{11}&\cdots&\mu_{n0}&\cdots&\mu_{1,n-1}\\ \mu_{01}&\mu_{11}&\mu_{02}&\cdots&\mu_{n-1,1}&\cdots&\mu_{0n}\\ \vdots&\vdots&\vdots&&\vdots&&\vdots\\ \mu_{n-1,0}&\mu_{n,0}&\mu_{n-1,1}&\cdots&\mu_{2n-2,0}&\cdots&\mu_{n-1,n-1}\\ \vdots&\vdots&\vdots&&\vdots&&\vdots\\ \mu_{0,n-1}&\mu_{1,n-1}&\mu_{0,n}&\cdots&\mu_{n-1,n-1}&\cdots&\mu_{0,2n-2}\\ \end{pmatrix}\]
and
\[M_{n-1}(y,x)=\begin{pmatrix}\mu_{00}&\mu_{01}&\mu_{10}&\cdots&\mu_{0,n-1}& \cdots&\mu_{n-1,0}\\ \mu_{01}&\mu_{02}&\mu_{11}&\cdots&\mu_{0n}&\cdots&\mu_{n-1,1}\\ \mu_{10}&\mu_{11}&\mu_{20}&\cdots&\mu_{1,n-1}&\cdots&\mu_{n,0}\\ \vdots&\vdots&\vdots&&\vdots&&\vdots\\ \mu_{0,n-1}&\mu_{0,n}&\mu_{1,n-1}&\cdots&\mu_{0,2n-2}&\cdots&\mu_{n-1,n-1}\\ \vdots&\vdots&\vdots&&\vdots&&\vdots\\ \mu_{n-1,0}&\mu_{n-1,1}&\mu_{n,0}&\cdots&\mu_{n-1,n-1}&\cdots&\mu_{2n-2,0}\\ \end{pmatrix}.\]
Since the weight function is reflexive, i.e., \(\mu_{m,n}=\mu_{n,m}\), for \(m,n=0,1,\dots\), we observe that \(M_{n-1}(x,y)=M_{n-1}(y,x)\) and their determinants are the same.
Now, from (3.9) the matrices \(M_{(n-k,k)}(x,y)\) and \(M_{(k,n-k)}(y,x)\) are given, respectively, by
\[\left(\begin{array}{cccccccc}\mu_{00}&\mu_{10}&\mu_{01}&\cdots&\mu_{n-1,0}& \cdots&\mu_{0,n-1}&\mu_{n-k,k}\\ \mu_{10}&\mu_{20}&\mu_{11}&\cdots&\mu_{n0}&\cdots&\mu_{1,n-1}&\mu_{n-k+1,k}\\ \mu_{01}&\mu_{11}&\mu_{02}&\cdots&\mu_{n-1,1}&\cdots&\mu_{0n}&\mu_{n-k,k+1}\\ \vdots&\vdots&\vdots&&\vdots&&\vdots&\vdots\\ \mu_{n-1,0}&\mu_{n,0}&\mu_{n-1,1}&\cdots&\mu_{2n-2,0}&\cdots&\mu_{n-1,n-1}&\mu _{2n-k-1,k}\\ \vdots&\vdots&\vdots&&\vdots&&\vdots&\vdots\\ \mu_{0,n-1}&\mu_{1,n-1}&\mu_{0,n}&\cdots&\mu_{n-1,n-1}&\cdots&\mu_{0,2n-2}&\mu _{n-k,k+n-1}\\ 1&x&y&\cdots&x^{n-1}&\cdots&y^{n-1}&x^{n-k}y^{k}\end{array}\right)\]
and
\[\left(\begin{array}{cccccccc}\mu_{00}&\mu_{01}&\mu_{10}&\cdots&\mu_{0,n-1}& \cdots&\mu_{n-1,0}&\mu_{n-k,k}\\ \mu_{01}&\mu_{02}&\mu_{11}&\cdots&\mu_{0n}&\cdots&\mu_{n-1,1}&\mu_{n-k,k+1}\\ \mu_{10}&\mu_{11}&\mu_{20}&\cdots&\mu_{1,n-1}&\cdots&\mu_{n,0}&\mu_{n-k+1,k}\\ \vdots&\vdots&\vdots&&\vdots&&\vdots&\vdots\\ \mu_{0,n-1}&\mu_{0,n}&\mu_{1,n-1}&\cdots&\mu_{0,2n-2}&\cdots&\mu_{n-1,n-1}&\mu _{n-k,k+n-1}\\ \vdots&\vdots&\vdots&&\vdots&&\vdots&\vdots\\ \mu_{n-1,0}&\mu_{n-1,1}&\mu_{n,0}&\cdots&\mu_{n-1,n-1}&\cdots&\mu_{2n-2,0}&\mu _{2n-k-1,k}\\ 1&y&x&\cdots&y^{n-1}&\cdots&x^{n-1}&x^{n-k}y^{k}\end{array}\right).\]
Once more using the fact that \(\mu_{m,n}=\mu_{n,m}\), for \(m,n=0,1,\dots\), we see that the number of permutations of rows and permutations of columns that transforms the matrix \(M_{(n-k,k)}(x,y)\) into the matrix \(M_{(k,n-k)}(y,x)\) is even, hence
\[\det(M_{n-k,k}(x,y))=\det(M_{k,n-k}(y,x)),\quad for\ k=0,1,\ldots,n,\]
and the result holds.
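Theorem 4.2 can be checked symbolically for a concrete reflexive weight. The sketch below (SymPy; the uniform weight \(W\equiv 1\) on \([0,1]^{2}\) is chosen here only for illustration) builds the monic polynomials of degree \(2\) from the determinant formula of Proposition 3.3 and verifies the reflexive property:

```python
import sympy as sp

x, y = sp.symbols('x y')

# moments of W(x,y) = 1 on [0,1]^2 (a reflexive weight): mu_{m,n} = mu_{n,m}
mu = lambda m, n: sp.Rational(1, (m + 1) * (n + 1))

# graded lexicographic monomials up to degree 1: 1, x, y
basis = [sp.Integer(1), x, y]
exps = [(0, 0), (1, 0), (0, 1)]

# moment matrix M_1 as in (3.7)
M1 = sp.Matrix(3, 3, lambda i, j: mu(exps[i][0] + exps[j][0],
                                     exps[i][1] + exps[j][1]))

def Q(a, b):
    """Monic Q_{a,b}^2 via the determinant formula of Proposition 3.3."""
    col = sp.Matrix([mu(a + e[0], b + e[1]) for e in exps])
    bot = sp.Matrix([basis + [x**a * y**b]])
    return sp.expand(M1.row_join(col).col_join(bot).det() / M1.det())

Q20, Q11, Q02 = Q(2, 0), Q(1, 1), Q(0, 2)
swap = {x: y, y: x}
assert sp.simplify(Q20 - Q02.subs(swap, simultaneous=True)) == 0
assert sp.simplify(Q11 - Q11.subs(swap, simultaneous=True)) == 0
```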
When the polynomial vectors of the MOPS \(\{\mathbb{Q}_{n}\}_{n\geqslant 0}\) are reflexive for \(n\geqslant 0\), then \(\{\mathbb{Q}_{n}\}_{n\geqslant 0}\) is called a reflexive MOPS.
The next results give some connections between reflexive MOPS and centrosymmetric matrices.
**Corollary 4.3**.: _Let \(\{\mathbb{Q}_{n}\}_{n\geqslant 0}\) be a MOPS associated with a reflexive weight function satisfying (4.1). Then, for \(n\in\mathbb{N}\), the matrix \(H_{n}\) of size \(n+1\), given by \(H_{n}=\langle{\bf u},{\mathbb{Q}_{n}\mathbb{Q}_{n}^{T}}\rangle\), is centrosymmetric, i.e., \(H_{n}=(h_{ij})_{i,j=0}^{n}\), satisfies_
\[h_{ij}=h_{n-i,n-j},\quad\text{for $i,j=0,1,\dots,n$}.\]
Proof.: Observe that \(h_{ij}=\langle{\bf u},Q_{n-i,i}^{n}(x,y)Q_{n-j,j}^{n}(x,y)\rangle\), for \(i,j=0,1,\dots,n\).
From Theorem 4.2, we can write
\[h_{ij}=\iint\limits_{\Omega}Q_{i,n-i}^{n}(y,x)Q_{j,n-j}^{n}(y,x)W(x,y)dxdy.\]
Making a change of variables \(x\leftrightarrow y\), we have
\[h_{ij}=\iint\limits_{\Omega}Q_{i,n-i}^{n}(x,y)Q_{j,n-j}^{n}(x,y)W(y,x)dxdy.\]
Finally, using the property (4.1), we obtain
\[h_{ij}=\iint\limits_{\Omega}Q_{i,n-i}^{n}(x,y)Q_{j,n-j}^{n}(x,y)W(x,y)dxdy=h_{ n-i,n-j}.\]
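Continuing the SymPy sketch above (same symbols and polynomials \(Q_{2,0}^{2}\), \(Q_{1,1}^{2}\), \(Q_{0,2}^{2}\)), Corollary 4.3 can be confirmed for this example by assembling \(H_{2}\) directly:

```python
# continues the previous sketch: x, y, Q20, Q11, Q02 as defined there
Qs = [Q20, Q11, Q02]
H2 = sp.Matrix(3, 3, lambda i, j: sp.integrate(Qs[i] * Qs[j],
                                               (x, 0, 1), (y, 0, 1)))
assert all(H2[i, j] == H2[2 - i, 2 - j] for i in range(3) for j in range(3))
```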
Consider the MOPS \(\{\mathbb{Q}_{n}\}_{n\geqslant 0}\) associated with a weight function \(W(x,y)\), and let \(\{\mathbb{P}_{n}\}_{n\geqslant 0}\) be another OPS also associated with \(W(x,y)\). Using the explicit expression (3.1) and the orthogonality, the polynomial vectors \(\mathbb{P}_{n}\) and \(\mathbb{Q}_{n}\) are related by \(\mathbb{P}_{n}=G_{n}\mathbb{Q}_{n}\), \(n\geqslant 0\). In particular, as a consequence of Proposition 3.2, we have the following result.
**Corollary 4.4**.: _If \(\{\mathbb{Q}_{n}\}_{n\geqslant 0}\) is the MOPS associated with a reflexive weight function \(W(x,y)\) and \(\{\mathbb{P}_{n}\}_{n\geqslant 0}\) is an OPS associated with \(W(x,y)\) and given by \(\mathbb{P}_{n}=G_{n}\mathbb{Q}_{n}\), then \(\mathbb{P}_{n}\) is reflexive polynomial vector if and only if \(G_{n}\) is a centrosymmetric matrix._
## 5. Main Results
We now present more connections involving reverse matrices, centrosymmetric matrices and reflexive bivariate orthogonal polynomial systems.
The first result shows that the coefficient matrices of the three-term relations for reflexive MOPS given by (3.6) have the reverse property.
**Theorem 5.1**.: _Let \(\{\mathbb{Q}_{n}\}_{n\geqslant 0}\) be a MOPS associated with a reflexive weight function satisfying (4.1) and let (3.6) be their three-term relations. Then, for \(n\in\mathbb{N}\), \(C_{n,1}\leftrightharpoons C_{n,2}\), and \(D_{n,1}\leftrightharpoons D_{n,2}\)._
Proof.: First we denote the matrices \(C_{n,k}=(c_{ij}^{(k)})_{i,j=0}^{n,n}\) and \(D_{n,k}=(d_{ij}^{(k)})_{i,j=0}^{n,n-1}\), for \(k=1,2.\) From the three-term relation (3.6), omitting the variables \((x,y)\), we can write
\[x\left(\begin{array}{c}Q_{n,0}^{n}\\ \vdots\\ Q_{n-k,k}^{n}\\ \vdots\\ Q_{0,n}^{n}\end{array}\right)=\left(\begin{array}{c}Q_{n+1,0}^{n+1}\\ \vdots\\ Q_{n+1-k,k}^{n+1}\\ \vdots\\ Q_{1,n}^{n+1}\end{array}\right)+C_{n,1}\left(\begin{array}{c}Q_{n,0}^{n}\\ \vdots\\ Q_{n-l,l}^{n}\\ \vdots\\ Q_{0,n}^{n}\end{array}\right)+D_{n,1}\left(\begin{array}{c}Q_{n-1,0}^{n-1} \\ \vdots\\ Q_{n-1-s,s}^{n-1}\\ \vdots\\ Q_{0,n-1}^{n-1}\end{array}\right) \tag{5.1}\]
and
\[y\left(\begin{array}{c}Q_{n,0}^{n}\\ \vdots\\ Q_{k,n-k}^{n}\\ \vdots\\ Q_{0,n}^{n}\end{array}\right)=\left(\begin{array}{c}Q_{n,1}^{n+1}\\ \vdots\\ Q_{k,n+1-k}^{n+1}\\ \vdots\\ Q_{0,n+1}^{n+1}\end{array}\right)+C_{n,2}\left(\begin{array}{c}Q_{n,0}^{n}\\ \vdots\\ Q_{n-l,l}^{n}\\ \vdots\\ Q_{0,n}^{n}\end{array}\right)+D_{n,2}\left(\begin{array}{c}Q_{n-1,0}^{n-1}\\ \vdots\\ Q_{n-1-s,s}^{n-1}\\ \vdots\\ Q_{0,n-1}^{n-1}\end{array}\right). \tag{5.2}\]
From (5.1), for \(k=0,1,\ldots,n\), we can write \(xQ_{n-k,k}^{n}(x,y)\) as
\[\begin{split}xQ_{n-k,k}^{n}(x,y)=\,&Q_{n+1-k,k}^{n+1}(x,y)\\ &+c_{k0}^{(1)}Q_{n,0}^{n}(x,y)+\cdots+c_{kl}^{(1)}Q_{n-l,l}^{n}(x,y)+\cdots+c_{kn}^{(1)}Q_{0,n}^{n}(x,y)\\ &+d_{k0}^{(1)}Q_{n-1,0}^{n-1}(x,y)+\cdots+d_{ks}^{(1)}Q_{n-1-s,s}^{n-1}(x,y)+\cdots+d_{k,n-1}^{(1)}Q_{0,n-1}^{n-1}(x,y).\end{split}\tag{5.3}\]
In the same way, working on (5.2), we can write \(yQ_{k,n-k}^{n}(x,y)\), for \(k=0,1,\ldots,n\), as
\[\begin{split}yQ_{k,n-k}^{n}(x,y)=\,&Q_{k,n+1-k}^{n+1}(x,y)\\ &+c_{n-k,0}^{(2)}Q_{n,0}^{n}(x,y)+\cdots+c_{n-k,l}^{(2)}Q_{n-l,l}^{n}(x,y)+\cdots+c_{n-k,n}^{(2)}Q_{0,n}^{n}(x,y)\\ &+d_{n-k,0}^{(2)}Q_{n-1,0}^{n-1}(x,y)+\cdots+d_{n-k,s}^{(2)}Q_{n-1-s,s}^{n-1}(x,y)+\cdots+d_{n-k,n-1}^{(2)}Q_{0,n-1}^{n-1}(x,y).\end{split}\tag{5.4}\]
Making the change of variables \(x\leftrightarrow y\) in (5.4) and using Theorem 4.2, we obtain
\[\begin{split}xQ_{n-k,k}^{n}(x,y)=\,&Q_{n+1-k,k}^{n+1}(x,y)+c_{n-k,0}^{(2)}Q_{0,n}^{n}(x,y)+\cdots+c_{n-k,l}^{(2)}Q_{l,n-l}^{n}(x,y)+\cdots\\ &+c_{n-k,n-l}^{(2)}Q_{n-l,l}^{n}(x,y)+\cdots+c_{n-k,n}^{(2)}Q_{n,0}^{n}(x,y)\\ &+d_{n-k,0}^{(2)}Q_{0,n-1}^{n-1}(x,y)+\cdots+d_{n-k,s}^{(2)}Q_{s,n-1-s}^{n-1}(x,y)+\cdots\\ &+d_{n-k,n-1-s}^{(2)}Q_{n-1-s,s}^{n-1}(x,y)+\cdots+d_{n-k,n-1}^{(2)}Q_{n-1,0}^{n-1}(x,y).\end{split}\tag{5.5}\]
Since \(\{Q_{n-l,l}^{n}\}_{l=0}^{n}\cup\{Q_{n-1-s,s}^{n-1}\}_{s=0}^{n-1}\) is a linearly independent set, comparing (5.3) and (5.5), we obtain
\[c_{ij}^{(1)}=c_{n-i,n-j}^{(2)},\ \ \ i,j=0,1,\ldots,n,\]
and
\[d_{ij}^{(1)}=d_{n-i,n-1-j}^{(2)},\ \ \ i=0,1,\ldots,n,\ \ j=0,1,\ldots,n-1.\]
Hence, \(C_{n,1}\leftrightharpoons C_{n,2}\) and \(D_{n,1}\leftrightharpoons D_{n,2}\).
We remark that, since \(C_{n,1}\leftrightharpoons C_{n,2}\) and \(D_{n,1}\leftrightharpoons D_{n,2}\), it is enough to calculate the coefficient matrices associated with one variable and the other coefficient matrices are directly obtained.
The reciprocal is also true.
**Theorem 5.2**.: _If the matrices \(C_{n,i}\) and \(D_{n,i}\), \(i=1,2\), in (3.6) satisfy \(C_{n,1}\leftrightharpoons C_{n,2}\) and \(D_{n,1}\leftrightharpoons D_{n,2}\), then the associated MOPS \(\{\mathbb{Q}_{n}\}_{n\geqslant 0}\) is reflexive._
Proof.: Let \(C_{n,k}=(c_{ij}^{(k)})_{i,j=0}^{n,n}\), \(D_{n,k}=(d_{ij}^{(k)})_{i,j=0}^{n,n-1}\), \(k=1,2\), and \(\mathbb{Q}_{n}\) be the monic orthogonal polynomial vector. The three-term relations (3.6) for \(n=0\), are
\[x\mathbb{Q}_{0}=L_{0,1}\mathbb{Q}_{1}+C_{0,1}\mathbb{Q}_{0},\]
\[y\mathbb{Q}_{0}=L_{0,2}\mathbb{Q}_{1}+C_{0,2}\mathbb{Q}_{0}.\]
Denoting \(\mathbb{Q}_{0}=(1)\) and \(\mathbb{Q}_{1}=\begin{pmatrix}x+\alpha\\ y+\beta\end{pmatrix}\), it follows that
\[x=(1\ \ \ 0)\begin{pmatrix}x+\alpha\\ y+\beta\end{pmatrix}+c_{00}^{(1)},\]
\[y=(0\ \ \ 1)\begin{pmatrix}x+\alpha\\ y+\beta\end{pmatrix}+c_{00}^{(2)}.\]
It is easy to see that \(\alpha=\beta\), since by hypothesis, \(c_{00}^{(1)}=c_{00}^{(2)}\). Hence, \(Q_{1,0}^{1}(x,y)=Q_{0,1}^{1}(y,x)\), that is, \(\mathbb{Q}_{0}\) and \(\mathbb{Q}_{1}\) are reflexive polynomial vectors.
We now prove by mathematical induction that, if \(\mathbb{Q}_{n}\) and \(\mathbb{Q}_{n-1}\) are reflexive polynomial vectors, then it holds for \(\mathbb{Q}_{n+1}\). Omitting the variables \((x,y)\), the three-term relations (3.6) can be written as
\[L_{n,1}\mathbb{Q}_{n+1}=x\mathbb{Q}_{n}-C_{n,1}\mathbb{Q}_{n}-D_{n,1}\mathbb{Q }_{n-1}, \tag{5.6}\]
\[L_{n,2}\mathbb{Q}_{n+1}=y\mathbb{Q}_{n}-C_{n,2}\mathbb{Q}_{n}-D_{n,2}\mathbb{Q }_{n-1}. \tag{5.7}\]
From (5.6), for \(k=0,1,\ldots,n\), we can write \(Q_{n+1-k,k}^{n+1}(x,y)\) as
\[Q_{n+1-k,k}^{n+1}(x,y)=xQ_{n-k,k}^{n}(x,y)-\sum_{j=0}^{n}c_{kj}^{(1)}Q_{n-j,j} ^{n}(x,y)-\sum_{j=0}^{n-1}d_{kj}^{(1)}Q_{n-1-j,j}^{n-1}(x,y).\]
On the other hand, from (5.7), we can write \(Q_{k,n+1-k}^{n+1}(x,y)\), for \(k=0,1,\ldots,n\), as
\[Q_{k,n+1-k}^{n+1}(x,y)=yQ_{k,n-k}^{n}(x,y)-\sum_{j=0}^{n}c_{n-k,j}^{(2)}Q_{n-j,j}^{n}(x,y)-\sum_{j=0}^{n-1}d_{n-k,j}^{(2)}Q_{n-1-j,j}^{n-1}(x,y),\]
Since \(\mathbb{Q}_{n}\) and \(\mathbb{Q}_{n-1}\) are reflexives, and \(C_{n,1}\leftrightharpoons C_{n,2}\) and \(D_{n,1}\leftrightharpoons D_{n,2}\), we get
\[\begin{split}Q_{k,n+1-k}^{n+1}(y,x)&=xQ_{k,n-k}^{n}(y,x)-\sum_{j=0}^{n}c_{n-k,j}^{(2)}Q_{n-j,j}^{n}(y,x)-\sum_{j=0}^{n-1}d_{n-k,j}^{(2)}Q_{n-1-j,j}^{n-1}(y,x)\\ &=xQ_{n-k,k}^{n}(x,y)-\sum_{j=0}^{n}c_{k,n-j}^{(1)}Q_{j,n-j}^{n}(x,y)-\sum_{j=0}^{n-1}d_{k,n-1-j}^{(1)}Q_{j,n-1-j}^{n-1}(x,y)\\ &=xQ_{n-k,k}^{n}(x,y)-\sum_{j=0}^{n}c_{kj}^{(1)}Q_{n-j,j}^{n}(x,y)-\sum_{j=0}^{n-1}d_{kj}^{(1)}Q_{n-1-j,j}^{n-1}(x,y)\\ &=Q_{n+1-k,k}^{n+1}(x,y).\end{split}\]
Therefore, \(\mathbb{Q}_{n+1}\) is a reflexive polynomial vector.
In the next result we relate reflexive MOPS with centrosymmetric matrices.
**Theorem 5.3**.: _With the same assumptions of the Theorem 5.1, for \(n\in\mathbb{N}\), the following matrices_
\[\widehat{C}_{n}=\left(\begin{array}{c|c}C_{n,1}&\mathbb{0}\\ \hline\mathbb{0}&C_{n,2}\end{array}\right)\quad\text{and}\quad\widehat{D}_{n} =\left(\begin{array}{c|c}D_{n,1}&\mathbb{0}\\ \hline\mathbb{0}&D_{n,2}\end{array}\right)\]
_are centrosymmetric matrices._
Proof.: The proof follows directly from Proposition 2.11 and Theorem 5.1.
We now present the results for orthonormal polynomial systems.
We need the following lemma, about the square root of a symmetric centrosymmetric positive definite matrix. Following [13, p. 440], if \(X\) is a symmetric positive definite matrix, there exists a unique symmetric positive definite matrix, \(X^{\frac{1}{2}}\), such that \(X^{\frac{1}{2}}X^{\frac{1}{2}}=X\). The matrix \(X^{\frac{1}{2}}\) is called the square root matrix of \(X\).
**Lemma 5.4**.: _Let \(X\) be a symmetric centrosymmetric positive definite matrix. Then, the square root matrix \(X^{\frac{1}{2}}\) is symmetric centrosymmetric positive definite._
Proof.: Considering \(X=(x_{ij})_{i,j=0}^{n}\) a symmetric centrosymmetric positive definite of size \(n+1\), there exist an orthogonal matrix \(R=(r_{ij})_{i,j=0}^{n}\) and a diagonal matrix \(D=\operatorname{diag}(\lambda_{0},\lambda_{1},\ldots,\lambda_{n})\), where \(\lambda_{i}>0\), for \(i=0,1,\ldots,n\), are the eigenvalues of \(X\), such that
\[X=R\,D\,R^{T}, \tag{5.8}\]
see [13]. Hence, the square root matrix of \(X\) can be written as
\[X^{\frac{1}{2}}=R\,D^{\frac{1}{2}}\,R^{T}, \tag{5.9}\]
where \(D^{\frac{1}{2}}=\operatorname{diag}(\sqrt{\lambda_{0}},\sqrt{\lambda_{1}}, \ldots,\sqrt{\lambda_{n}})\) and we denote \(X^{\frac{1}{2}}=(\tilde{x}_{ij})_{i,j=0}^{n}\).
First, suppose that \(\lambda_{0}=\lambda_{1}=\cdots=\lambda_{n}=\lambda\); then \(X^{\frac{1}{2}}=\sqrt{\lambda}I_{n+1}\), which is a symmetric centrosymmetric positive definite matrix.
We suppose now that the eigenvalues \(\lambda_{i}\), \(i=0,1,\ldots,n\), are not all equal. From (5.8) we observe that the entries of the matrix \(X\) are linear combinations of \(\{\lambda_{0},\lambda_{1},\ldots,\lambda_{n}\}\), and it is possible to write
\[x_{ij}=\sum_{k=0}^{n}\,r_{ik}\,r_{jk}\,\lambda_{k}=\sum_{k=0}^{n}\,d_{k}^{(i,j)}\,\lambda_{k},\]
where \(d_{k}^{(i,j)}=r_{ik}\,r_{jk}\), for \(k=0,1,\ldots,n\). Similarly, from (5.9), the entries of matrix \(X^{\frac{1}{2}}\) can be written as
\[\tilde{x}_{ij}=\sum_{k=0}^{n}\,r_{ik}\,r_{jk}\,\sqrt{\lambda_{k}}=\sum_{k=0}^{ n}\,d_{k}^{(i,j)}\,\sqrt{\lambda_{k}}.\]
From Proposition 2.10, we know that the eigenvectors of a symmetric centrosymmetric positive definite matrix are either symmetric or skew-symmetric vectors. In the decomposition \(X=RDR^{T}\) the columns of the matrix \(R\) correspond to the eigenvectors of \(X\). Hence the entries of the \(k\)th column of \(R\) satisfy either
\[r_{i,k}=r_{n-i,k},\quad i=0,1,\ldots,n,\]
or
\[r_{i,k}=-r_{n-i,k},\quad i=0,1,\ldots,n.\]
Therefore, for a fixed \(k\), one can conclude the following
\[d_{k}^{(i,j)}=r_{ik}\,r_{jk}=r_{n-i,k}\,r_{n-j,k}=d_{k}^{(n-i,n-j)}.\]
Finally, we observe that, for \(i,j=0,1,\ldots,n\),
\[\tilde{x}_{ij}=\sum_{k=0}^{n}\,d_{k}^{(i,j)}\,\sqrt{\lambda_{k}}=\sum_{k=0}^{ n}\,d_{k}^{(n-i,n-j)}\,\sqrt{\lambda_{k}}=\tilde{x}_{n-i,n-j},\]
therefore, \(X^{\frac{1}{2}}\) is a symmetric centrosymmetric positive definite matrix.
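The construction in this proof is easy to verify numerically. The following is a minimal sketch (Python/NumPy; the helper `exchange` and the way the test matrix is generated are our own choices, not taken from the text) that builds a symmetric centrosymmetric positive definite matrix and checks that its spectral square root inherits all three properties:

```python
import numpy as np

def exchange(n):
    """Exchange matrix J_n: ones on the anti-diagonal."""
    return np.fliplr(np.eye(n))

rng = np.random.default_rng(0)
n = 5
J = exchange(n)

# Build a symmetric centrosymmetric positive definite test matrix:
# C = A A^T is positive semidefinite, C + J C J is centrosymmetric,
# and adding the identity makes the sum positive definite.
A = rng.standard_normal((n, n))
C = A @ A.T
X = C + J @ C @ J + np.eye(n)

# Square root via the spectral decomposition X = R diag(lambda) R^T,
# as in (5.8)-(5.9).
lam, R = np.linalg.eigh(X)
Xhalf = R @ np.diag(np.sqrt(lam)) @ R.T

print(np.allclose(Xhalf @ Xhalf, X))      # square-root property
print(np.allclose(Xhalf, Xhalf.T))        # symmetry
print(np.allclose(J @ Xhalf @ J, Xhalf))  # centrosymmetry (Lemma 5.4)
print(np.all(lam > 0))                    # positive definiteness
```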
The next results relate reflexive orthonormal polynomial systems to centrosymmetric matrices.
**Theorem 5.5**.: _Consider the orthonormal polynomial system \(\{\mathbb{P}_{n}\}_{n\geqslant 0}\) associated with a weight function \(W(x,y)\) satisfying (4.1) and defined by \(\mathbb{P}_{n}=H_{n}^{-1/2}\mathbb{Q}_{n}\), for \(n\geqslant 0\), where \(\{\mathbb{Q}_{n}\}_{n\geqslant 0}\) is the associated MOPS and \(H_{n}=\langle\mathbf{u},\mathbb{Q}_{n}\mathbb{Q}_{n}^{T}\rangle\). Let \(A_{n,i}\)_
and \(B_{n,i}\), \(i=1,2\), be the coefficient matrices of the three-term relations (3.5) for \(\{\mathbb{P}_{n}\}_{n\geqslant 0}\). Then, the matrices_
\[\widehat{A}_{n}=\left(\begin{array}{c|c}A_{n,1}&\mathbf{0}\\ \hline\mathbf{0}&A_{n,2}\end{array}\right)\quad\text{and}\quad\widehat{B}_{n}= \left(\begin{array}{c|c}B_{n,1}&\mathbf{0}\\ \hline\mathbf{0}&B_{n,2}\end{array}\right)\]
_are centrosymmetric._
Proof.: From [11], we know that, for the orthonormal polynomial system \(\{\mathbb{P}_{n}\}_{n\geqslant 0}\), defined by \(\mathbb{P}_{n}=H_{n}^{-1/2}\mathbb{Q}_{n}\), the matrices \(A_{n,i}\), \(B_{n,i}\), \(C_{n,i}\), and \(D_{n,i}\) of the associated three-term relations are related as
\[A_{n,i}=H_{n}^{\frac{1}{2}}D_{n+1,i}^{T}H_{n+1}^{-\frac{1}{2}}\quad\text{and}\quad B_{n,i}=H_{n}^{-\frac{1}{2}}C_{n,i}H_{n}^{\frac{1}{2}},\quad i=1,2,\]
where the matrix \(H_{n}^{\frac{1}{2}}\) is the symmetric centrosymmetric positive definite matrix such that \(H_{n}^{\frac{1}{2}}H_{n}^{\frac{1}{2}}=H_{n}\).
Hence, the matrix \(\widehat{A}_{n}\) can be written as
\[\widehat{A}_{n}=\left(\begin{array}{c|c}H_{n}^{\frac{1}{2}}D_{n+1,1}^{T}H_{ n+1}^{-\frac{1}{2}}&\mathbf{0}\\ \hline\mathbf{0}&H_{n}^{\frac{1}{2}}D_{n+1,2}^{T}H_{n+1}^{-\frac{1}{2}}\end{array} \right),\]
or
\[\widehat{A}_{n}=\left(\begin{array}{c|c}H_{n}^{\frac{1}{2}}&\mathbf{0}\\ \hline\mathbf{0}&H_{n}^{\frac{1}{2}}\end{array}\right)\left(\begin{array}{c|c}D_{n+1,1}^{T}&\mathbf{0}\\ \hline\mathbf{0}&D_{n+1,2}^{T}\end{array}\right)\left(\begin{array}{c|c}H_{n+1}^{-\frac{1}{2}}&\mathbf{0}\\ \hline\mathbf{0}&H_{n+1}^{-\frac{1}{2}}\end{array}\right).\]
From Corollary 4.3 and Lemma 5.4, we know that \(H_{n}^{\frac{1}{2}}\) is a centrosymmetric matrix, so \(H_{n}^{\frac{1}{2}}\leftrightharpoons H_{n}^{\frac{1}{2}}\). Hence, from Proposition 2.11,
\[\left(\begin{array}{c|c}H_{n}^{\frac{1}{2}}&\mathbf{0}\\ \hline\mathbf{0}&H_{n}^{\frac{1}{2}}\end{array}\right)\quad\text{and}\quad \left(\begin{array}{c|c}H_{n+1}^{-\frac{1}{2}}&\mathbf{0}\\ \hline\mathbf{0}&H_{n+1}^{-\frac{1}{2}}\end{array}\right)\]
are centrosymmetric matrices. Furthermore, since \(D_{n+1,1}\leftrightharpoons D_{n+1,2}\), from Theorem 5.3
\[\left(\begin{array}{c|c}D_{n+1,1}^{T}&\mathbf{0}\\ \hline\mathbf{0}&D_{n+1,2}^{T}\end{array}\right)\]
is also a centrosymmetric matrix. Therefore, from item 2) in Proposition 2.8, the matrix \(\widehat{A}_{n}\) is centrosymmetric.
Similarly, since \(B_{n,i}=H_{n}^{-\frac{1}{2}}C_{n,i}H_{n}^{\frac{1}{2}}\), \(i=1,2\), and \(C_{n,1}\leftrightharpoons C_{n,2}\), it follows that \(\widehat{B}_{n}\) is a centrosymmetric matrix.
Finally, we show that the orthonormal polynomial system, \(\{\mathbb{P}_{n}\}_{n\geqslant 0}\), given by \(\mathbb{P}_{n}=H_{n}^{-1/2}\mathbb{Q}_{n}\), associated with a reflexive weight function also has the reflexive property.
**Corollary 5.6**.: _With the same hypotheses of Theorem 5.5, the orthonormal polynomial \(\mathbb{P}_{n}\), defined by \(\mathbb{P}_{n}=H_{n}^{-1/2}\mathbb{Q}_{n}\), for \(n\geqslant 0\), is a reflexive polynomial vector. Furthermore, the associated coefficient matrices of the three-term relations (3.5) satisfy \(A_{n,1}\leftrightharpoons A_{n,2}\) and \(B_{n,1}\leftrightharpoons B_{n,2}\)._
Proof.: From Theorem 4.2, \(\mathbb{Q}_{n}\) is a reflexive polynomial vector. From Corollary 4.3 and Lemma 5.4, the matrix \(H_{n}^{\frac{1}{2}}\) is centrosymmetric. Hence, from Proposition 3.2, it follows that \(\mathbb{P}_{n}=H_{n}^{-1/2}\mathbb{Q}_{n}\) is a reflexive polynomial vector.
The reverse relations \(A_{n,1}\leftrightharpoons A_{n,2}\) and \(B_{n,1}\leftrightharpoons B_{n,2}\) follow from Proposition 2.11.
## 6. Particular Cases
In this section we present several particular cases and examples of reflexive OPS to illustrate the new concepts.
### Tensor product of univariate orthogonal polynomials
The simplest example of a bivariate orthogonal polynomial system is the tensor product of a univariate weight function with itself. Consider \(w(x)\) a weight function defined for \(x\in(a,b)\); then
\[W(x,y)=w(x)w(y)=W(y,x),\quad\text{for}\ (x,y)\in(a,b)\times(a,b),\]
is a reflexive weight function. The associated OPS, \(\{\mathbb{P}_{n}\}_{n\geqslant 0}\), is given by
\[\mathbb{P}_{n}=\left(\begin{array}{c}p_{n}(x)p_{0}(y)\\ p_{n-1}(x)p_{1}(y)\\ \vdots\\ p_{0}(x)p_{n}(y)\end{array}\right),\]
where \(\{p_{n}(x)\}_{n\geqslant 0}\) is the orthogonal polynomial sequence with respect to \(w(x)\). Hence, \(\mathbb{P}_{n}\) is a reflexive orthogonal polynomial vector.
Moreover, consider the three-term recurrence relation for the orthogonal polynomials \(p_{n}(x)\), \(n\geqslant 0\), given by
\[xp_{n}(x)=\lambda_{n}p_{n+1}(x)+\gamma_{n}p_{n}(x)+\upsilon_{n}p_{n-1}(x), \quad n\geqslant 0,\]
with \(p_{-1}(x)=0\) and \(\lambda_{n},\gamma_{n},\upsilon_{n}\in\mathbb{R}\). It is known that the three-term relations for the associated OPS are
\[x_{i}\mathbb{P}_{n}=\Lambda_{n,i}\mathbb{P}_{n+1}+\Gamma_{n,i}\mathbb{P}_{n}+\Upsilon_{n,i}\mathbb{P}_{n-1},\quad i=1,2,\]
where \(x_{1}=x\), \(x_{2}=y\) and the coefficient matrices are given by
\[\Lambda_{n,1}=\left(\begin{array}{ccccc}\lambda_{n}&&&\bigcirc&0\\ &\lambda_{n-1}&&&0\\ &&\ddots&&\vdots\\ \bigcirc&&&\lambda_{0}&0\end{array}\right),\quad\Lambda_{n,2}=\left(\begin{array}{ccccc}0&\lambda_{0}&&&\bigcirc\\ 0&&\lambda_{1}&&\\ \vdots&&&\ddots&\\ 0&\bigcirc&&&\lambda_{n}\end{array}\right),\] \[\Gamma_{n,1}=\left(\begin{array}{cccc}\gamma_{n}&&&\bigcirc\\ &\gamma_{n-1}&&\\ &&\ddots&\\ \bigcirc&&&\gamma_{0}\end{array}\right),\quad\Gamma_{n,2}=\left(\begin{array}{cccc}\gamma_{0}&&&\bigcirc\\ &\gamma_{1}&&\\ &&\ddots&\\ \bigcirc&&&\gamma_{n}\end{array}\right),\] \[\Upsilon_{n,1}=\left(\begin{array}{cccc}\upsilon_{n}&&&\bigcirc\\ &\upsilon_{n-1}&&\\ &&\ddots&\\ \bigcirc&&&\upsilon_{1}\\ 0&\cdots&\cdots&0\end{array}\right),\quad\text{and}\quad\Upsilon_{n,2}=\left(\begin{array}{cccc}0&\cdots&\cdots&0\\ \upsilon_{1}&&&\bigcirc\\ &\upsilon_{2}&&\\ &&\ddots&\\ \bigcirc&&&\upsilon_{n}\end{array}\right).\]
Clearly, \(\Lambda_{n,1}\leftrightharpoons\Lambda_{n,2}\), \(\Gamma_{n,1}\leftrightharpoons\Gamma_{n,2}\), and \(\Upsilon_{n,1}\leftrightharpoons\Upsilon_{n,2}\).
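These reverse relations are easy to verify numerically. Below is a minimal sketch (Python/NumPy; the function names and the choice of monic Hermite coefficients \(\lambda_{k}=1\), \(\gamma_{k}=0\), \(\upsilon_{k}=k/2\) are illustrative assumptions) that builds the coefficient matrices from a univariate recurrence and checks the entrywise-reversed relation \(x_{ij}=y_{p-i,q-j}\), written as \(X=J\,Y\,J\) with exchange matrices of matching sizes:

```python
import numpy as np

J = lambda m: np.fliplr(np.eye(m))   # exchange matrix of size m

def coeff_matrices(n, lam, gam, ups):
    """Coefficient matrices of the three-term relations for the
    tensor-product OPS, built from the 1D recurrence coefficients."""
    L1 = np.zeros((n + 1, n + 2)); L2 = np.zeros((n + 1, n + 2))
    G1 = np.zeros((n + 1, n + 1)); G2 = np.zeros((n + 1, n + 1))
    U1 = np.zeros((n + 1, n));     U2 = np.zeros((n + 1, n))
    for k in range(n + 1):
        L1[k, k] = lam(n - k);  L2[k, k + 1] = lam(k)
        G1[k, k] = gam(n - k);  G2[k, k] = gam(k)
        if k < n: U1[k, k] = ups(n - k)
        if k > 0: U2[k, k - 1] = ups(k)
    return (L1, L2), (G1, G2), (U1, U2)

# monic Hermite recurrence: x p_n = p_{n+1} + (n/2) p_{n-1}
n = 4
(L1, L2), (G1, G2), (U1, U2) = coeff_matrices(
    n, lam=lambda k: 1.0, gam=lambda k: 0.0, ups=lambda k: k / 2.0)

print(np.allclose(L1, J(n + 1) @ L2 @ J(n + 2)))  # Lambda_{n,1} <=> Lambda_{n,2}
print(np.allclose(G1, J(n + 1) @ G2 @ J(n + 1)))  # Gamma_{n,1}  <=> Gamma_{n,2}
print(np.allclose(U1, J(n + 1) @ U2 @ J(n)))      # Upsilon_{n,1} <=> Upsilon_{n,2}
```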
### Bivariate orthogonal polynomials on the simplex
Let us consider the family of the monic bivariate orthogonal polynomials on the simplex \(\Omega=\{(x,y)\in\mathbb{R}^{2}\,|\,x,y\geqslant 0,1-x-y\geqslant 0\}\) associated with the weight function
\[W^{(\alpha,\beta,\gamma)}(x,y)=x^{\alpha}y^{\beta}(1-x-y)^{\gamma},\quad\alpha, \beta,\gamma>-1.\]
We consider \(\alpha=\beta\) and the MOPS associated with the weight function
\[W_{\alpha,\alpha,\gamma}(x,y)=(xy)^{\alpha}(1-x-y)^{\gamma},\quad\alpha, \gamma>-1. \tag{6.1}\]
Observe that \(W_{\alpha,\alpha,\gamma}(x,y)\) defined in \(\Omega\) is a reflexive weight function. Following [11, p.36], let \(\{\mathbb{V}_{n}^{(\alpha,\gamma)}\}_{n\geqslant 0}\) be the monic OPS on the simplex orthogonal with respect to the weight function \(W_{\alpha,\alpha,\gamma}(x,y)\).
From the explicit formula given in [11, p. 36], we can calculate the first vectors of the MOPS, which are
\[\mathbb{V}_{0}^{(\alpha,\gamma)}=(1),\quad\mathbb{V}_{1}^{(\alpha,\gamma)}= \begin{pmatrix}x-\dfrac{2\alpha+1}{4\alpha+2\gamma+3}\\ y-\dfrac{2\alpha+1}{4\alpha+2\gamma+3}\end{pmatrix},\]
and
\[\mathbb{V}_{2}^{(\alpha,\gamma)}=\begin{pmatrix}x^{2}-\dfrac{2(2\alpha+3)}{4 \alpha+2\gamma+7}x+\dfrac{(2\alpha+1)(2\alpha+3)}{(4\alpha+2\gamma+5)(4\alpha+ 2\gamma+7)}\\ xy-\dfrac{2\alpha+1}{4\alpha+2\gamma+7}(x+y)+\dfrac{(2\alpha+1)^{2}}{(4\alpha+2 \gamma+5)(4\alpha+2\gamma+7)}\\ y^{2}-\dfrac{2(2\alpha+3)}{4\alpha+2\gamma+7}y+\dfrac{(2\alpha+1)(2\alpha+3)}{(4 \alpha+2\gamma+5)(4\alpha+2\gamma+7)}\end{pmatrix}.\]
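These polynomial vectors are reflexive, and this claim can be confirmed symbolically. A minimal sketch (Python/SymPy; variable names are ours) checking that entry \(k\) at \((x,y)\) equals entry \(n-k\) at \((y,x)\):

```python
import sympy as sp

x, y, a, g = sp.symbols('x y alpha gamma', positive=True)

V1 = [x - (2*a + 1)/(4*a + 2*g + 3),
      y - (2*a + 1)/(4*a + 2*g + 3)]
V2 = [x**2 - 2*(2*a + 3)/(4*a + 2*g + 7)*x
          + (2*a + 1)*(2*a + 3)/((4*a + 2*g + 5)*(4*a + 2*g + 7)),
      x*y - (2*a + 1)/(4*a + 2*g + 7)*(x + y)
          + (2*a + 1)**2/((4*a + 2*g + 5)*(4*a + 2*g + 7)),
      y**2 - 2*(2*a + 3)/(4*a + 2*g + 7)*y
          + (2*a + 1)*(2*a + 3)/((4*a + 2*g + 5)*(4*a + 2*g + 7))]

def is_reflexive(vec):
    # reflexivity: vec[k](x, y) == vec[n-k](y, x) for all k
    n = len(vec) - 1
    swap = [p.subs({x: y, y: x}, simultaneous=True) for p in vec]
    return all(sp.simplify(vec[k] - swap[n - k]) == 0 for k in range(n + 1))

print(is_reflexive(V1), is_reflexive(V2))  # True True
```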
Now, consider the three-term relation satisfied by \(\{\mathbb{V}_{n}^{(\alpha,\gamma)}\}_{n\geqslant 0}\),
\[x_{i}\mathbb{V}_{n}^{(\alpha,\gamma)}=L_{n,i}\mathbb{V}_{n+1}^{(\alpha,\gamma) }+C_{n,i}^{(\alpha,\gamma)}\mathbb{V}_{n}^{(\alpha,\gamma)}+D_{n,i}^{(\alpha, \gamma)}\mathbb{V}_{n-1}^{(\alpha,\gamma)},\quad i=1,2.\]
From [2], the shape of the coefficient matrices is
\[C_{n,1}^{(\alpha,\gamma)}=\begin{pmatrix}c_{00}^{(1)}&&&&\bigcirc\\ c_{10}^{(1)}&c_{11}^{(1)}&&&\\ &\ddots&\ddots&\\ \bigcirc&&c_{n,n-1}^{(1)}&c_{nn}^{(1)}\end{pmatrix},C_{n,2}^{(\alpha,\gamma)}= \begin{pmatrix}c_{00}^{(2)}&c_{01}^{(2)}&&\bigcirc\\ &c_{11}^{(2)}&\ddots&\\ &&\ddots&c_{n-1,n}^{(2)}\\ \bigcirc&&c_{nn}^{(2)}\end{pmatrix},\]
\[D_{n,1}^{(\alpha,\gamma)}=\begin{pmatrix}d_{00}^{(1)}&&&&\bigcirc\\ d_{10}^{(1)}&d_{11}^{(1)}&&&\\ d_{20}^{(1)}&d_{21}^{(1)}&d_{22}^{(1)}&&\\ &\ddots&\ddots&\ddots&\\ &&\ddots&\ddots&d_{n-1,n-1}^{(1)}\\ \bigcirc&&&d_{n,n-2}^{(1)}&d_{n,n-1}^{(1)}\end{pmatrix}\]
and
\[D_{n,2}^{(\alpha,\gamma)}=\begin{pmatrix}d_{00}^{(2)}&d_{01}^{(2)}&&&&\bigcirc\\ d_{10}^{(2)}&d_{11}^{(2)}&d_{12}^{(2)}&&&\\ &d_{2,1}^{(2)}&d_{22}^{(2)}&\ddots&&\\ &&\ddots&\ddots&d_{n-2,n-1}^{(2)}\\ &&&d_{n-1,n-2}^{(2)}&d_{n-1,n-1}^{(2)}\\ \bigcirc\bigcirc&&d_{n,n-1}^{(2)}\end{pmatrix}.\]
The explicit expressions of the entries are
\[c_{ii}^{(1)} =\frac{(i-n)(\alpha-i+n)}{2\alpha+\gamma+2n+1}+\frac{(-i+n+1)( \alpha-i+n+1)}{2\alpha+\gamma+2n+3}\] \[=\frac{(n-i+1)(\alpha+n-i+1)}{2\alpha+\gamma+2n+3}-\frac{(n-i)( \alpha+n-i)}{2\alpha+\gamma+2n+1}=c_{n-i,n-i}^{(2)},\quad 0\leqslant i\leqslant n,\] \[c_{i+1,i}^{(1)} =-\frac{2(i+1)(\alpha+i+1)}{(2\alpha+\gamma+2n+1)(2\alpha+ \gamma+2n+3)}\] \[=\frac{2(n-i-1-n)(\alpha-n+i+1+n)}{(2\alpha+\gamma+2n+1)(2\alpha+ \gamma+2n+3)}=c_{n-(i+1),n-i}^{(2)},\quad 0\leqslant i\leqslant n-1,\]
and since \(c_{ij}^{(1)}=0\) for \(j>i\) or \(j<i-1\), and \(c_{ij}^{(2)}=0\) for \(j<i\) or \(j>i+1\), it follows that \(C_{n,1}^{(\alpha,\gamma)}\leftrightharpoons C_{n,2}^{(\alpha,\gamma)}\).
Analogously, working with the explicit expressions of the entries of the matrices \(D_{n,i}^{(\alpha,\gamma)}\), \(i=1,2\), given in [2], we get
\[d_{ii}^{(1)}=d_{n-i,n-1-i}^{(2)},\quad 0\leqslant i\leqslant n-1,\] \[d_{i+1,i}^{(1)}=d_{n-i-1,n-1-i}^{(2)},\quad 0\leqslant i\leqslant n-2,\] \[d_{i+2,i}^{(1)}=d_{n-i-2,n-1-i}^{(2)},\quad 0\leqslant i\leqslant n -2,\]
and, since \(d_{ij}^{(1)}=0\) for \(j>i\) or \(j<i-2\), and \(d_{ij}^{(2)}=0\) for \(j<i-1\) or \(j>i+1\), it follows that \(D_{n,1}^{(\alpha,\gamma)}\leftrightharpoons D_{n,2}^{(\alpha,\gamma)}\).
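The index rewritings carried out above for the entries of \(C_{n,i}^{(\alpha,\gamma)}\) can be confirmed symbolically; a minimal sketch (Python/SymPy, symbol names ours) checking that the two displayed expressions for the diagonal and subdiagonal entries agree identically:

```python
import sympy as sp

i, n, a, g = sp.symbols('i n alpha gamma')
s1 = 2*a + g + 2*n + 1   # the two denominators appearing above
s3 = 2*a + g + 2*n + 3

# diagonal entries: both expressions for c_{ii}^{(1)} (= c_{n-i,n-i}^{(2)})
diag_lhs = (i - n)*(a - i + n)/s1 + (-i + n + 1)*(a - i + n + 1)/s3
diag_rhs = (n - i + 1)*(a + n - i + 1)/s3 - (n - i)*(a + n - i)/s1

# subdiagonal entries: both expressions for c_{i+1,i}^{(1)}
sub_lhs = -2*(i + 1)*(a + i + 1)/(s1*s3)
sub_rhs = 2*((n - i - 1) - n)*((a - n + i + 1) + n)/(s1*s3)

print(sp.simplify(diag_lhs - diag_rhs) == 0)  # True
print(sp.simplify(sub_lhs - sub_rhs) == 0)    # True
```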
### Bivariate Freud weight function
In [6] we investigate the bivariate Freud weight function given by
\[W(x,y)=e^{-q(x,y)},\qquad(x,y)\in\mathbb{R}^{2},\]
where
\[q(x,y)=a_{4,0}\,x^{4}+a_{2,2}\,x^{2}\,y^{2}+a_{0,4}\,y^{4}+a_{2,0}\,x^{2}+a_{0,2}\,y^{2}\]
and \(a_{i,j}\) are real parameters. Setting \(a_{4,0}=a_{0,4}=a\), \(a_{2,2}=b\), and \(a_{2,0}=a_{0,2}=c\), the particular case,
\[W(x,y)=e^{-[a(x^{4}+y^{4})+b\,x^{2}\,y^{2}+c(x^{2}+y^{2})]}\]
is a reflexive weight function. Since \(W(x,y)\) is an even function, the MOPS \(\{\mathbb{Q}_{n}\}_{n\geqslant 0}\) satisfy the three-term relations
\[x_{i}\mathbb{Q}_{n}=L_{n,i}\mathbb{Q}_{n+1}+D_{n,i}\mathbb{Q}_{n-1},\quad i=1,2,\]
for \(n\geqslant 0\), \(\mathbb{Q}_{-1}=0\) and \(\mathbb{Q}_{0}=(1)\), where \(D_{n,1}\leftrightharpoons D_{n,2}\).
### Uvarov modification
Let \(W(x,y)\) be a weight function defined on a domain \(\Omega\subset\mathbb{R}^{2}\) such that all moments exist
\[\iint\limits_{\Omega}x^{k}\,y^{l}\,W(x,y)dxdy<+\infty,\]
for \(k,l\geqslant 0\). Define the inner products
\[(f,g)=\iint\limits_{\Omega}f(x,y)g(x,y)W(x,y)dxdy,\]
and
\[(f,g)_{U}=(f,g)+\mathbf{f}(\mathbf{x})\,\Lambda\,\mathbf{g}(\mathbf{x})^{T},\]
where \(\mathbf{x}=((x_{0},y_{0}),(x_{1},y_{1}),\ldots,(x_{n},y_{n}))\), \((x_{i},y_{i})\in\mathbb{R}^{2}\), \(i=0,1,\ldots,n\), are fixed points, \(\Lambda\) is a symmetric positive semi-definite matrix of size \((n+1)\times(n+1)\), \(\mathbf{f}(\mathbf{x})=(f(x_{0},y_{0}),f(x_{1},y_{1}),\ldots,f(x_{n},y_{n}))\), and \(\mathbf{g}(\mathbf{x})=(g(x_{0},y_{0}),g(x_{1},y_{1}),\ldots,g(x_{n},y_{n}))\). This modification of the original inner product is known as the Uvarov modification, see [2, 9].
Now, we show that if \(W(x,y)\) is a reflexive weight function, \(\Lambda\) is a centrosymmetric matrix, and \(\mathbf{x}=((x_{0},y_{0}),(x_{1},y_{1}),\ldots,(x_{n},y_{n}))\) is such that \(x_{i}=y_{n-i}\), \(i=0,1,\ldots,n\), then the inner product \((f,g)_{U}\) is reflexive in the sense that
\[(x^{k},y^{l})_{U}=(x^{l},y^{k})_{U},\quad k,l\geqslant 0.\]
In fact, let \(\Lambda=(\lambda_{ij})_{i,j=0}^{n,n}\), and
\[(x^{l},y^{k})_{U}=\iint\limits_{\Omega}x^{l}y^{k}W(x,y)dxdy+\mathbf{x}^{ \mathbf{l}}\Lambda(\mathbf{y}^{\mathbf{k}})^{T},\]
where \(\mathbf{x}^{\mathbf{l}}=(x_{0}^{l},x_{1}^{l},\ldots,x_{n}^{l})\) and \(\mathbf{y}^{\mathbf{k}}=(y_{0}^{k},y_{1}^{k},\ldots,y_{n}^{k})\). Then,
\[(x^{l},y^{k})_{U} =\iint\limits_{\Omega}x^{l}y^{k}W(x,y)dxdy+\sum_{i=0}^{n}\sum_{j =0}^{n}\lambda_{ij}x_{i}^{l}y_{j}^{k}\] \[=\iint\limits_{\Omega}y^{l}x^{k}W(y,x)dxdy+\sum_{i=0}^{n}\sum_{j =0}^{n}\lambda_{ij}x_{i}^{l}y_{j}^{k}\] \[=\iint\limits_{\Omega}x^{k}y^{l}W(x,y)dxdy+\sum_{i=0}^{n}\sum_{j =0}^{n}\lambda_{ij}x_{i}^{l}y_{j}^{k}\] \[=\iint\limits_{\Omega}x^{k}y^{l}W(x,y)dxdy+\sum_{i=0}^{n}\sum_{j =0}^{n}\lambda_{n-i,n-j}x_{n-j}^{k}y_{n-i}^{l},\]
where we made the change of variables \(x\leftrightarrow y\) in the first integral above and used that \(W\) is reflexive, that \(\Lambda\) is a centrosymmetric matrix, and that \(x_{i}=y_{n-i}\), \(i=0,1,\ldots,n\). Hence, setting \(r=n-i\), \(s=n-j\) and using that \(\Lambda\) is symmetric, it becomes
\[(x^{l},y^{k})_{U}=\iint\limits_{\Omega}x^{k}y^{l}W(x,y)dxdy+\sum_{s=0}^{n} \sum_{r=0}^{n}\lambda_{s,r}x_{s}^{k}y_{r}^{l}=(x^{k},y^{l})_{U}.\]
As a numerical example, we consider again the OPS on the simplex in \(\mathbb{R}^{2}\), with the weight function \(W^{(1,1/2)}(x,y)\) defined in (6.1). Let \(\Lambda=\frac{1}{2}I_{3}\) and \(\mathbf{x}=((1,0),(0,0),(0,1))\); hence, the reflexive Uvarov inner product is given by
\[(f,g)_{U}= \iint\limits_{\Omega}\,f(x,y)\,g(x,y)\,x\,y\,\sqrt{1-x-y}\,dx\,dy\] \[+\frac{1}{2}[f(1,0)g(1,0)+f(0,0)g(0,0)+f(0,1)g(0,1)].\]
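This inner product can also be checked numerically to be reflexive in the sense of \((x^{k},y^{l})_{U}=(x^{l},y^{k})_{U}\); the following is a minimal sketch (Python with SciPy quadrature; the guard against a slightly negative radicand at the simplex boundary and the choice of test exponents are ours):

```python
import numpy as np
from scipy.integrate import dblquad

pts = [(1.0, 0.0), (0.0, 0.0), (0.0, 1.0)]   # note x_i = y_{2-i} holds here

def uvarov(f, g):
    # integral part: weight x*y*sqrt(1-x-y) over the simplex
    integral, _ = dblquad(
        lambda yy, xx: f(xx, yy) * g(xx, yy) * xx * yy
                        * np.sqrt(max(1.0 - xx - yy, 0.0)),
        0.0, 1.0, lambda xx: 0.0, lambda xx: 1.0 - xx)
    # point-mass part: Lambda = (1/2) I_3 at the three fixed points
    points = 0.5 * sum(f(px, py) * g(px, py) for px, py in pts)
    return integral + points

for k, l in [(0, 1), (1, 2), (0, 3)]:
    lhs = uvarov(lambda X, Y: X**k, lambda X, Y: Y**l)
    rhs = uvarov(lambda X, Y: X**l, lambda X, Y: Y**k)
    print((k, l), np.isclose(lhs, rhs))      # True for each pair
```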
The first polynomial vectors of the associated MOPS were calculated in [2], and they are reflexive:
\[\mathbb{Q}_{0}=(1),\quad\mathbb{Q}_{1}=\begin{pmatrix}x-\dfrac{10459}{31361}\\ y-\dfrac{10459}{31361}\end{pmatrix},\]
and
\[\mathbb{Q}_{2}=\begin{pmatrix}x^{2}-\dfrac{320811709991693}{321113175737485}x+ \dfrac{36006461568}{64222635147497}y+\dfrac{51957376}{30832708855}\\ xy-\dfrac{5355008}{7115240505}(x+y)-\dfrac{69058048}{92498126565}\\ y^{2}+\dfrac{36006461568}{64222635147497}x-\dfrac{320811709991693}{321113175737485}y +\dfrac{51957376}{30832708855}\end{pmatrix}.\]
### Christoffel modification
Let \(\lambda(x,y)\) be a real bivariate polynomial given as
\[\lambda(x,y)=a\,(x^{2}+y^{2})+b\,x\,y+c\,(x+y)+d, \tag{6.2}\]
where \(a,b,c,d\in\mathbb{R}\) and \(|a|+|b|>0\). We consider the functional \(\mathbf{v}\) obtained from a polynomial modification of a reflexive functional \(\mathbf{u}\), given by
\[\langle\mathbf{v},p(x,y)\rangle=\langle\lambda(x,y)\mathbf{u},p(x,y)\rangle= \langle\mathbf{u},\lambda(x,y)\,p(x,y)\rangle,\]
for \(p(x,y)\in\Pi\). The Christoffel modification on a multivariate functional, using multiplication by a polynomial of degree \(2\), was studied in [10].
Let \(v_{m,n}\) be the associated moments of the functional \(\mathbf{v}\). Hence,
\[v_{m,n} =\langle\mathbf{v},x^{m}y^{n}\rangle\] \[=\langle\mathbf{u},[a(x^{2}+y^{2})+b\,x\,y+c(x+y)+d]x^{m}y^{n}\rangle\] \[=a\langle\mathbf{u},x^{m+2}y^{n}+x^{m}y^{n+2}\rangle+b\langle \mathbf{u},x^{m+1}y^{n+1}\rangle\] \[\quad+c\langle\mathbf{u},x^{m+1}y^{n}+x^{m}y^{n+1}\rangle+d \langle\mathbf{u},x^{m}y^{n}\rangle\] \[=a\langle\mathbf{u},x^{n}y^{m+2}+x^{n+2}y^{m}\rangle+b\langle \mathbf{u},x^{n+1}y^{m+1}\rangle\] \[\quad+c\langle\mathbf{u},x^{n}y^{m+1}+x^{n+1}y^{m}\rangle+d \langle\mathbf{u},x^{n}y^{m}\rangle\] \[=\langle\mathbf{u},[a(x^{2}+y^{2})+b\,x\,y+c(x+y)+d]x^{n}y^{m} \rangle=v_{n,m},\]
since \(\mathbf{u}\) is reflexive. Therefore, the Christoffel modification of a reflexive moment functional by a polynomial of type (6.2) preserves the reflexive property.
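As a concrete sanity check, one can take for \(\mathbf{u}\) any reflexive functional, for instance integration against the reflexive weight \(e^{-x^{2}-y^{2}}\), and verify \(v_{m,n}=v_{n,m}\) symbolically. A minimal sketch (Python/SymPy; the parameter values in \(\lambda\) are arbitrary):

```python
import sympy as sp

x, y = sp.symbols('x y')
a, b, c, d = 1, 2, 3, 5                        # arbitrary real parameters
lam = a*(x**2 + y**2) + b*x*y + c*(x + y) + d  # a polynomial of type (6.2)

# model reflexive functional: integration against exp(-x^2 - y^2)
def u(p):
    return sp.integrate(p * sp.exp(-x**2 - y**2),
                        (x, -sp.oo, sp.oo), (y, -sp.oo, sp.oo))

def v(m, n):                                   # moments of v = lam * u
    return u(lam * x**m * y**n)

for m, n in [(0, 1), (1, 2), (0, 3)]:
    print((m, n), sp.simplify(v(m, n) - v(n, m)) == 0)  # True each time
```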
Let \(\{\mathbb{Q}_{n}\}_{n\geq 0}\) and \(\{\widetilde{\mathbb{Q}}_{n}\}_{n\geq 0}\) be the MOPS associated with \(\mathbf{u}\) and \(\mathbf{v}\), respectively. We remark that, since \(\mathbf{u}\) and \(\mathbf{v}\) are reflexive moment functionals, \(\{\mathbb{Q}_{n}\}_{n\geq 0}\) and \(\{\widetilde{\mathbb{Q}}_{n}\}_{n\geq 0}\) are reflexive MOPS.
From [10, Th. 4.1], it is known that, for \(n\geq 1\), there exist real matrices \(R_{n}\) and \(S_{n}\) of respective sizes \((n+1)\times n\) and \((n+1)\times(n-1)\), with \(S_{2}\not\equiv 0\), such that
\[\mathbb{Q}_{n}=\widetilde{\mathbb{Q}}_{n}+R_{n}\widetilde{\mathbb{Q}}_{n-1}+S_ {n}\widetilde{\mathbb{Q}}_{n-2},\quad n\geq 1. \tag{6.3}\]
We now verify that the matrices \(R_{n}\) and \(S_{n}\) are centrosymmetric. First we denote \(R_{n}=(r_{ij}^{(n)})_{i,j=0}^{n,n-1}\), \(S_{n}=(s_{ij}^{(n)})_{i,j=0}^{n,n-2}\),
\[\mathbb{Q}_{n}=(Q_{n,0}^{n}(x,y),Q_{n-1,1}^{n}(x,y),\ldots,Q_{0,n}^{n}(x,y))^{T},\]
and
\[\widetilde{\mathbb{Q}}_{n}=(\widetilde{Q}_{n,0}^{n}(x,y),\widetilde{Q}_{n-1, 1}^{n}(x,y),\ldots,\widetilde{Q}_{0,n}^{n}(x,y))^{T}.\]
From relation (6.3) we get
\[Q_{n-k,k}^{n}(x,y)=\widetilde{Q}_{n-k,k}^{n}(x,y)+\sum_{j=0}^{n-1}r_{n-k,j}^{( n)}\widetilde{Q}_{n-1-j,j}^{n-1}(x,y)+\sum_{j=0}^{n-2}s_{n-k,j}^{(n)}\widetilde{Q}_ {n-2-j,j}^{n-2}(x,y), \tag{6.4}\]
for \(k=0,1,\ldots,n\), and
\[Q_{k,n-k}^{n}(y,x)=\widetilde{Q}_{k,n-k}^{n}(y,x)+\sum_{j=0}^{n-1}r_{kj}^{(n) }\widetilde{Q}_{n-1-j,j}^{n-1}(y,x)+\sum_{j=0}^{n-2}s_{kj}^{(n)}\widetilde{Q}_ {n-2-j,j}^{n-2}(y,x). \tag{6.5}\]
Using the fact that \(\{\widetilde{\mathbb{Q}}_{n}\}_{n\geqslant 0}\) is reflexive, the representation (6.5) becomes
\[Q_{k,n-k}^{n}(y,x)=\widetilde{Q}_{n-k,k}^{n}(x,y)+\sum_{j=0}^{n-1}r_{kj}^{(n) }\widetilde{Q}_{j,n-1-j}^{n-1}(x,y)+\sum_{j=0}^{n-2}s_{kj}^{(n)}\widetilde{Q} _{j,n-2-j}^{n-2}(x,y).\]
Making \(l=n-1-j\) in the first summation and \(l=n-2-j\) in the second, we get
\[Q_{k,n-k}^{n}(y,x)=\widetilde{Q}_{n-k,k}^{n}(x,y)+\sum_{l=0}^{n-1}r_{k,n-1-l}^ {(n)}\widetilde{Q}_{n-1-l,l}^{n-1}(x,y)+\sum_{l=0}^{n-2}s_{k,n-2-l}^{(n)} \widetilde{Q}_{n-2-l,l}^{n-2}(x,y) \tag{6.6}\]
for \(k=0,1,\ldots,n\).
Comparing expressions (6.4) and (6.6), and using that \(\mathbb{Q}_{n}\) is a reflexive polynomial vector, we have \(r_{n-k,j}^{(n)}=r_{k,n-1-j}^{(n)}\) and \(s_{n-k,j}^{(n)}=s_{k,n-2-j}^{(n)}\). Finally, setting \(i=n-k\), we obtain
\[r_{ij}^{(n)}=r_{n-i,n-1-j}^{(n)}\quad\text{and}\quad s_{ij}^{(n)}=s_{n-i,n-2-j}^{(n)}.\]
Therefore, \(R_{n}\) and \(S_{n}\), \(n\geqslant 1\), are centrosymmetric matrices.
Conversely, from [10, Th. 4.3], consider \(\{\mathbb{Q}_{n}\}_{n\geqslant 0}\) a MOPS associated with a moment functional \(\mathbf{u}\) and two sequences of matrices \(\{R_{n}\}_{n\geqslant 1}\) and \(\{S_{n}\}_{n\geqslant 2}\) of size \((n+1)\times n\) and \((n+1)\times(n-1)\) respectively, with \(S_{2}\not\equiv\mathtt{0}\). Then, the monic polynomial system, \(\{\widetilde{\mathbb{Q}}_{n}\}_{n\geqslant 0}\), defined by \(\widetilde{\mathbb{Q}}_{0}=\mathbb{Q}_{0}\),
\[\widetilde{\mathbb{Q}}_{1} =\mathbb{Q}_{1}-R_{1}\mathbb{Q}_{0},\] \[\widetilde{\mathbb{Q}}_{n} =\mathbb{Q}_{n}-R_{n}\mathbb{Q}_{n-1}-S_{n}\mathbb{Q}_{n-2},\quad n \geqslant 2,\]
is a MOPS associated with the moment functional \(\mathbf{v}=\lambda(x,y)\mathbf{u}\), where
\[\lambda(x,y)=S_{2}^{T}H_{2}^{-1}\mathbb{Q}_{2}+M_{1}^{T}H_{1}^{-1}\mathbb{Q}_{ 1}+H_{0}^{-1}\mathbb{Q}_{0}\]
and \(H_{n}=\langle\mathbf{u},\mathbb{Q}_{n}\mathbb{Q}_{n}^{T}\rangle\).
Now we consider the particular case when \(\mathbf{u}\) is a reflexive moment functional and \(\{\mathbb{Q}_{n}\}_{n\geqslant 0}\) is a reflexive MOPS. If the matrices \(R_{n}\), \(n\geqslant 1\), and \(S_{n}\), \(n\geqslant 2\), are centrosymmetric matrices, it follows from Proposition 3.2 that the MOPS \(\{\widetilde{\mathbb{Q}}_{n}\}_{n\geqslant 0}\) is also a reflexive MOPS.
Moreover, from Corollary 4.3 and items 3) and 5) of Proposition 2.8, the polynomial \(\lambda(x,y)\) is a reflexive polynomial. Therefore, \(\mathbf{v}=\lambda(x,y)\mathbf{u}\) is a reflexive moment functional.
## Acknowledgements
This work is part of the doctoral thesis of the second author (GSC) at UNESP, Sao Jose do Rio Preto, SP, Brazil, and it began to be developed during a research visit by GSC to the University of Granada, Spain, supported by the grant 88887.716711/2022-00 from CAPES, in the scope of the Program CAPES-PrInt, International Cooperation Project number 88887.310463/2018-00, Brazil.
The first author (CFB) thanks FAPESP, Sao Paulo, Brazil, for support through Grant 2022/09575-5.
The third author (TEP) thanks Grant CEX2020-001105-M funded by MCIN/AEI/ 10.13039/501100011033 and Research Group Goya-384, Spain.
|
2301.09666 | Near zero-field microwave-free magnetometry with ensembles of
nitrogen-vacancy centers in diamond | We study cross-relaxation features near zero magnetic field with ensembles of
nitrogen-vacancy (NV) centers in diamond and examine their properties in
samples with a range (0.9 ppm - 16.0 ppm) of NV concentrations. The observed
NV-NV cross-relaxation features between differently oriented NV centers in high
(greater than 0.9 ppm)-NV-density samples hold promise for a variety of
magnetometry applications where microwave fields (or any bias field) disturb
the system under study. We theoretically determine the values of the bias
magnetic fields corresponding to cross-relaxations between different axes and
experimentally validate them. The behavior of zero-field cross-relaxation
features as a function of temperature is also investigated. | Omkar Dhungel, Till Lenz, Muhib Omar, Joseph Shaji Rebeirro, Minh-Tuan Luu, Ali Tayefeh Younesi, Ronald Ulbricht, Viktor Ivady, Adam Gali, Arne Wickenbrock, Dmitry Budker | 2023-01-23T19:09:22Z | http://arxiv.org/abs/2301.09666v2 | # Zero-field microwave-free magnetometry with ensembles of nitrogen-vacancy centers in diamond
###### Abstract
We study cross-relaxation features near zero magnetic field with ensembles of nitrogen-vacancy (NV) centers in diamond and examine their properties in samples with a range (0.9 ppm - 16.0 ppm) of NV concentrations. The observed NV-NV cross-relaxation features between differently oriented NV centers in high (\(\gtrsim 0.9\) ppm)-NV-density samples hold promise for a variety of magnetometry applications where microwave fields (or any bias field) disturb the system under study. We theoretically determine the values of the bias magnetic fields corresponding to cross-relaxations between different axes and experimentally validate them. The behavior of zero-field cross-relaxation features as a function of temperature is also investigated. The sensitivity of magnetometry based on this method is determined to be in the 10 nT/\(\sqrt{\text{Hz}}\) range.
## I Introduction
Nitrogen-vacancy (NV) centers in diamond are used as magnetic-field [1], electric-field [2], and temperature [3] sensors over a wide range of environmental conditions, e.g., pressure and temperature [4; 5]. In most magnetometry applications, the magnetic field is determined using optically detected magnetic resonance (ODMR) [6]. In this measurement protocol, the diamond is illuminated with green laser light, and a microwave field of tunable frequency is applied to measure the energy separation between the magnetically sensitive spin-1 ground-state levels and thus obtain the magnetic field value. For measuring weak magnetic fields, where Zeeman shifts of sublevels may be nonlinear in the field [1], this method usually requires a bias field to lift the degeneracy between the \(m_{s}=\pm 1\) sublevels. However, the use of microwave and external bias magnetic fields may disrupt the system of interest in some applications. To overcome this limitation, different strategies have emerged. Recently, microwave-free magnetometry [7], as well as vector magnetometry [8], based on the level anti-crossing in the triplet ground state at 102.4 mT have been demonstrated. Zero-field magnetometry was realized for both ensemble and single NV centers by using circularly polarized microwave fields to individually address transitions to the \(m_{s}=+1\) or \(m_{s}=-1\) states [9; 10]. Still, due to the high external magnetic field or the application of microwave fields, these techniques may be problematic for studying systems that are disturbed by external magnetic fields or microwaves. Examples of such systems are high-T\({}_{c}\) superconductors (T\({}_{c}\) stands for the superconducting transition temperature) [11], zero- to ultra-low field NMR [12], biological samples, and various magnetic materials [13].
Recently, ODMR experiments with an additional radiofrequency field were used to study features occurring in fluorescence measurements at zero and low fields [14; 15; 16]. In addition, microwave-free magnetometry at low field was proposed. The technique consists of measuring the positions of the cross-relaxation resonances [17; 18; 19] and allows for vector magnetometry [20; 21]. This new magnetometry technique therefore overcomes the limitations of microwave-based protocols.
In this work, we perform a detailed study of the cross-relaxation features with respect to the sample cut, NV density, and the sample temperature. In addition, we study the cross-relaxations under a transverse field of up to \(\approx 2.0\) mT as a function of the azimuthal angle. We numerically predict and experimentally verify the observed cross-relaxation patterns.
## II NV-center ground state
The fluorescence rate is reduced due to \(T_{1}\) relaxation between bright and dark states. Dipolar coupling between NV centers leads to faster \(T_{1}\) relaxation, and this coupling is enhanced when NV states of different axes are degenerate [18]. In the following, we study the Hamiltonian to determine at which external fields this might be observable and compare the result to the experiment. The NV ground state in the presence of an external magnetic field can be described by the Hamiltonian:
\[H=D(S_{z}^{2}-\frac{1}{3}\vec{S}^{2})+E(S_{x}^{2}-S_{y}^{2})+g_{s}\mu_{B}\vec{B} \cdot\vec{S}\,, \tag{1}\]
where \(D=2.87\) GHz and \(E\) are the axial and transverse zero-field-splitting (ZFS) parameters, respectively, \(\vec{B}\) is the magnetic field, and \(\vec{S}\) is the electronic spin with components \(S_{x}\), \(S_{y}\), and \(S_{z}\). The electron \(g\) factor is \(g_{s}\) = 2.003 [22] and \(\mu_{B}\) is the Bohr magneton. The Larmor frequency of the NV center is \(\Omega=\frac{g_{s}\mu_{B}}{\hbar}\,B\). Here, the \(z\)-axis is chosen along the NV axis. The spin operators \(S_{x}\), \(S_{y}\), \(S_{z}\) can be written as:
\[\begin{pmatrix}0&\frac{1}{\sqrt{2}}&0\\ \frac{1}{\sqrt{2}}&0&\frac{1}{\sqrt{2}}\\ 0&\frac{1}{\sqrt{2}}&0\end{pmatrix},\begin{pmatrix}0&\frac{-i}{\sqrt{2}}&0\\ \frac{i}{\sqrt{2}}&0&\frac{-i}{\sqrt{2}}\\ 0&\frac{i}{\sqrt{2}}&0\end{pmatrix},\begin{pmatrix}1&0&0\\ 0&0&0\\ 0&0&-1\end{pmatrix}\,. \tag{2}\]
A unit vector with an angle of \(\beta\) to the \(z\) axis can be written as:
\[\vec{n}=(\cos\phi\sin\beta,\sin\beta\sin\phi,\cos\beta), \tag{3}\]
where \(\phi\) is the azimuthal angle. Then eq. (1) can be written as:
\[H=\begin{pmatrix}D+\Omega\cos\beta&\frac{e^{-i\phi}\Omega\sin\beta}{\sqrt{2}} &E\\ \frac{e^{i\phi}\Omega\sin\beta}{\sqrt{2}}&0&\frac{e^{-i\phi}\Omega\sin\beta}{ \sqrt{2}}\\ E&\frac{e^{i\phi}\Omega\sin\beta}{\sqrt{2}}&D-\Omega\cos\beta\end{pmatrix}. \tag{4}\]
A schematic of the energy levels obtained by diagonalization of the Hamiltonian (4) is shown in Fig. 1. The ground state \(|g\rangle\) is split by the dipole-dipole interaction described by the parameter \(D\), lifting the degeneracy between the \(m_{s}=\pm 1\) and \(m_{s}=0\) sublevels. Application of a magnetic field \(B_{\rm NV}\) along the NV axis further lifts the \(m_{s}=\pm 1\) degeneracy, raising the \(m_{s}=+1\) level by \(2\gamma_{e}B_{\rm NV}\) above the \(m_{s}=-1\) level.
Consider the following four possible crystallographic axes of the diamond lattice for NV centers: \([111]\), \([\overline{1}\overline{1}1]\), \([\overline{1}1\overline{1}]\), and \([1\overline{1}\overline{1}]\). In the diamond samples used in this work, the NV centers are uniformly distributed over the possible orientations. Since NVs of all orientations have equal ground-state splitting, the transition energies of all orientations cross at \(\vec{B}=0\). When the splittings of orientationally inequivalent NV centers match, a cross-relaxation feature occurs and we observe a decrease in fluorescence. A recently developed model [23] suggests that local energy relaxation occurs when a randomly distributed portion of NV centers rapidly incoherently depolarizes. Through dipolar interactions, these spins can depolarize the entire ensemble at zero field. It was also observed that there is a significant contribution of local electric fields and of the interaction between same and differently oriented NVs to depolarization at zero field for high-density samples [24].
Let us concentrate on the ground \(m_{s}=\pm 1\) states. Only the \(m_{s}=\pm 1\) states are taken into account, since these are the states for which the transition energies to \(m_{s}=0\) cross at zero field. The ground-state splittings for transition energies of different axes differ when a transverse field is present. As a result, there are multiple crossings between the transition energies at various values of \(B_{z}\). The magnitude of the transverse field and the azimuthal angle \(\phi\) affect the positions of the crossings. When the field is scanned along [111] in the presence of a 2 mT transverse field along \(\hat{x}\) (this direction is defined so that one of the carbons associated with the NV center is in the x-z plane), the transition energies behave as shown in Fig. 1 (b) for the ground \(\pm 1\) states for all possible NV axes. There are fifteen transition-energy crossings; however, some of them occur exactly at the same \(B_{z}\), such that only five cross-relaxation features are expected. Note that for the calculations shown in Fig. 1 (for 2 mT of transverse field), we neglected the parameter \(E\). In general, up to six crossing positions can be observed when the [111] axis of the diamond is aligned with the z-axis of the setup.
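The transition-energy crossings discussed above are straightforward to reproduce by diagonalizing the ground-state Hamiltonian for all four orientations. The following is a minimal sketch (Python/NumPy; the gyromagnetic-ratio value, the choice of transverse direction, and all names are our own assumptions, and \(E\) is neglected as in Fig. 1); degenerate transition energies of differently oriented NVs at a given \(B_{z}\) indicate a cross-relaxation feature:

```python
import numpy as np

# Spin-1 operators, eq. (2)
Sx = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]]) / np.sqrt(2)
Sy = np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]]) / np.sqrt(2)
Sz = np.diag([1.0, 0.0, -1.0])
S = np.array([Sx, Sy, Sz])

D = 2870.0      # axial ZFS, MHz
gam = 28.024    # g_s mu_B / h in MHz/mT (assumed standard value)

# the four NV orientations in the cubic crystal frame
axes = np.array([[1, 1, 1], [-1, -1, 1], [-1, 1, -1], [1, -1, -1]], float)
axes /= np.sqrt(3)

n111 = axes[0]                          # scan direction
t = np.array([1, 1, -2]) / np.sqrt(6)   # one choice of transverse direction

def transitions(B):
    """0 -> -1 and 0 -> +1 transition energies (MHz) per orientation."""
    out = []
    for nv in axes:
        Sn = np.einsum('i,ijk->jk', nv, S)          # S projected on the NV axis
        H = D * (Sn @ Sn) + gam * np.einsum('i,ijk->jk', B, S)
        e = np.linalg.eigvalsh(H)                   # ascending; e[0] is 0-like
        out.append((e[1] - e[0], e[2] - e[0]))
    return out

for Bz in [-1.4, -0.7, 0.0, 0.7, 1.4]:              # mT
    tr = transitions(Bz * n111 + 2.0 * t)           # 2 mT transverse field
    print(f"Bz = {Bz:+.1f} mT:", [f"{lo:7.1f}/{hi:7.1f}" for lo, hi in tr])
```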
## III Experimental setup
To study the cross-relaxation features, we use a home-built wide-field fluorescence microscope (Fig. 2), where a
Figure 1: (a) Energy-level diagram of the NV center in the presence of a magnetic field. (b) Energy separation (transition energy) between the ground \(m_{s}=0\) and \(\pm 1\) states for all four crystallographic axes as a function of \(B_{z}\) when a 2 mT transverse field is applied along \(\hat{x}\). Blue stars mark the positions where crossings happen between transition energies for different NV orientations. Fifteen transition-energy crossings between different orientations are expected in total.
laser (Toptica iBeam smart) with an output wavelength of 515 nm is employed. The diamond is mounted on a rotational mount, and the laser beam is reflected via a dichroic mirror. The beam is then focused into the diamond using a microscope objective (Olympus PLN Plan Akromat 10x Microscope Objective, 0.25 NA, 10.6 mm WD). The NV-center fluorescence is collected using the same objective, passing the dichroic mirror and a long-pass filter to remove the green reflection of the laser light from the diamond. The fluorescence is then focused using lenses of various focal lengths and detected with a photodiode (APD120A/M Si avalanche photodetector, 400-1000 nm). Electromagnetic coils are used to apply magnetic fields along three different directions.
## IV Results
### Characterization of zero-field feature
We explored the properties of the cross-relaxation feature at zero field in different diamond samples with varying NV density. During these experiments, neither microwave nor radiofrequency fields were applied. We only observed the cross-relaxation feature at zero field in diamonds with NV density \(\gtrsim 0.9\) ppm. Moreover, the linewidth and contrast of the zero-field feature depend on the NV-center density. In the examples in Fig. 3, both linewidth and contrast increase with NV-center density. The contrast of the feature also depends on the laser intensity. Table 1 summarizes the characteristics of the observed zero-field feature in different samples. Characterization of the zero-field feature was performed with a laser spot diameter of 50 \(\mu\)m. For the bulk NVs, the optimum contrast is observed at different values of laser intensity for each sample. Higher laser intensity is used in the case of NV layers because the signal-to-noise ratio for layer samples is low at reduced power and the contrast increases with laser intensity. The NV density of the Sumi_300 kev sample is estimated based on a comparative fluorescence measurement with respect to the 2170612-13 sample. The linewidth and contrast are slightly position dependent because of the variation of NV density over the sample. The lowest (i.e., most favorable) ratio of linewidth to contrast is observed for the 3.7 ppm-NV-concentration bulk sample ("George").
### NV-NV cross-relaxation in the presence of a transverse field
This section describes calculated and experimentally measured additional cross-relaxation features observed when \(B_{z}\) is along [111] (and, later on, [100]) while applying a 2 mT (1.6 mT for [100]) transverse field.
Figure 4(a) shows a density plot of the simulated contrast based on the numerically evaluated transition
\begin{table}
\begin{tabular}{l c c c c c c c} Diamond Name & Cut & [N] density (ppm) & NV density (ppm) & Laser intensity (mW) & \(\gamma\) (mT) & Optimum C (\%) & \(\gamma\) (mT)/ C (\%) \\ \hline Super diamond & 111 & \(\sim\)3 & 0.9 & 5 & 0.12 & 0.04 & 3.00 \\ George & 100 & \(\sim\)13 & 3.7 & 5.93 & 0.20 & 0.91 & 0.21 \\ 1970608-29 & 111 & - & 3.8 & 2.98 & 0.78 & 2.85 & 0.27 \\ S2 & 111 & \(<\)100 & 16.0 & 20.2 & 1.42 & 4.01 & 0.35 \\ sumi\_300 kev* & 110 & \(\sim\)150 & 14.0 & 37.00 & 1.42 & 1.21 & 1.17 \\ 2170612-13* & 100 & \(\sim\)14 & 4.5 & 37.00 & 0.17 & 0.29 & 0.59 \\ \end{tabular}
\end{table}
Table 1: Characterization of the zero-field feature in different diamond samples, with \(B_{z}\) along the cut surface. Here, \(\gamma\) and C are the linewidth and contrast, respectively. The samples marked with * are thin-layer samples.
Figure 3: Zero-field features recorded on two different diamond samples with the magnetic field along \(\hat{z}\). For sample S2 (16 ppm NV density), the full width at half maximum (FWHM) and the contrast are 1.42(3) mT and 3.95(4) %, respectively. For sample 1970608-29 (3.8 ppm NV density), they are 0.78(4) mT and 1.79(5) %.
Figure 2: Schematic diagram of the experimental setup.
frequency differences as a function of \(B_{z}\) along [111] and the azimuthal angle of the transverse magnetic field (2 mT). Here, we assume that the cross-relaxation features have a Lorentzian line shape. The parameters of the Lorentzian function are adjusted to reflect the measured data. There are multiple crossings between the transition energies for differently oriented NVs at different values of \(B_{z}\). The positions of the crossings and the overlap between the transition energies depend on \(\phi\) when the diamond is rotated around \(\hat{z}\) with a transverse field present. When \(B_{z}\) is along [111] (Fig. 4 (a)), for every multiple of 60\({}^{\circ}\), starting at 30\({}^{\circ}\), transitions for NVs along two of the three non-aligned crystallographic axes overlap one another. These are the orientations where the crossings between the transition energies at the three values of \(B_{z}\) occur. Also, two out of three inequivalent transition energies successively cross each other at \(B_{z}\) = 0 for every multiple of 60\({}^{\circ}\), starting from 0\({}^{\circ}\). For each of these values of \(\phi\), crossings occur at five distinct values of \(B_{z}\); Fig. 1 (b) shows one of those cases, when \(\phi=0^{\circ}\).
Figure 4 (b) shows experimentally measured cross-relaxation patterns between NVs oriented along different axes. For every multiple of 60\({}^{\circ}\), starting from 30\({}^{\circ}\), there is an overlap of three separate cross-relaxation features at -1.4 mT and +1.4 mT and of two separate features at +0.7 mT and -0.7 mT. As a result, cross-relaxation features are observed at only three values of \(B_{z}\) at these angles. For multiples of 60\({}^{\circ}\) starting from 0\({}^{\circ}\), there is an overlap of two separate features at \(B_{z}\) = 0, so that crossings are observed at these angles. For the remaining angles, there are always six cross-relaxation values of \(B_{z}\), and these values depend on the angle \(\phi\). The cross-relaxation features observed when we apply a transverse field have narrower linewidths than the zero-field feature, which can be attributed to the effect of electric field, strain, and additional cross-relaxation channels in the sample. We also note the different widths of the lines in the figure. This is related to the difference in the derivatives of the crossing transition energies as a function of \(B_{z}\). Note also that one can estimate the direction of the magnetic field from the positions of the crossings, which is useful for vector magnetometry applications [21].
Figure 4: Simulation and experimental measurement of the cross-relaxation features in the presence of a transverse field of 2 mT. (a) simulation, (b) measurement of the transition energy difference between \(m_{s}\)= +1 and -1 states for all four NV axes. Here \(B_{z}\) is along [111] and the azimuthal angle of the transverse field is \(\phi\). (c), (d) Varying the transverse field and \(B_{z}\) at an angle of 5\({}^{\circ}\). (e), (f) Transition energy difference between \(m_{s}\)= +1 and -1 states when \(B_{z}\) is along [100] at varying angles when 16 mT of transverse field is applied.
Figure 5: Dominant central cross-relaxation feature observed while applying 1.6 mT of transverse field at an azimuthal angle of \(\approx\) 90\({}^{\circ}\). The FWHM and the contrast for this feature are 0.20(2) mT and 1.85(10)%, respectively.
When \(B_{z}\) is scanned for each value of the transverse field, the splitting of the cross-relaxation features depends linearly on the transverse field, i.e., all of these cross-relaxation features are magnetically sensitive. (Exceptions are crossings occurring at \(B_{z}\!=\!0\) for certain values of the azimuthal angle.) Figure 4(c), (d) shows the simulated and experimentally measured cross-relaxation features for an angle \(\phi\!\approx\!5^{\circ}\), demonstrating the linear dependence on the transverse magnetic field. A limitation of these extra cross-relaxation features (other than the zero-field feature) for magnetometry applications requiring operation near zero field is that application of a 0.5-1.0 mT transverse field is necessary to clearly resolve these features.
Similarly, in the case of \(B_{z}\) along [100] (Fig. 4 (e)), due to the crystal symmetry, transverse fields are equal for two groups of NVs that have pairs of transition energies crossing each other at \(B_{z}\!=\!0\) for every multiple of \(90^{\circ}\), starting at \(0^{\circ}\). Since these two pairs of transition energies intersect at \(B_{z}\!=\!0\), there is just one crossing between the transition energies at \(B_{z}\!=\!0\). Figure 5 shows the dominant single cross-relaxation feature at one such angle, \(\approx\!90^{\circ}\). At this angle, the contrast is larger than that of the zero-field feature of the same sample ("George") without transverse field. Other small features are still observed, likely because of a small misalignment with respect to the transverse field. Additionally, two out of every four transition energies successively overlap at every multiple of \(45^{\circ}\), resulting in crossings at three different values of \(B_{z}\). Figure 4(e) shows the density plot of transition-frequency differences as a function of \(B_{z}\) along [100] and the azimuthal angle of the transverse magnetic field at 1.6 mT. Figure 4 (f) shows the experimentally measured cross-relaxation features. Since the transition energies for each angle cross at \(B_{z}\!=\!0\), a cross-relaxation feature is always visible at zero field. There are five cross-relaxation positions for the remaining angles. In our setup, while rotating the diamond, there is a slight translation. Therefore, if we compare the experimentally measured data in Fig. 4(f) with the calculations in Fig. 4(e), we see a slight displacement of the features which, however, does not obscure the overall good agreement.
The cross-relaxation features in the presence of a transverse field are measured on diamond sample "S2" for [111] and on sample "George" for [100] (see Tab. 1).
The photon shot noise limited sensitivity is approximated as \(\eta\approx\Delta B/C/\sqrt{I_{0}}\), where \(\Delta B=0.20\) mT is the linewidth and \(C=1.85\%\) is the contrast for the narrowest linewidth we observed at \(\phi\,=\!90^{\circ}\) in the presence of a 1.6 mT transverse field for the "George" sample. In the present work, we did not fully optimize the sensitivity as a function of light power and instead estimated the sensitivity for a benchmark value of the photon detection rate (achieved with a few milliwatt of green light power on the diamond) of \(I_{0}\!=6\!\times\!10^{12}\) /s. For these values, the estimated photon shot noise limited sensitivity is around 4.5 nT/\(\sqrt{\rm Hz}\) with a spot diameter on the diamond of \(\approx 50\) \(\mu\)m, which is sufficient to study superconducting vortices and magnetic properties of magnetic materials [11; 25]. We estimate that the sensitivity can be improved by at least an order of magnitude by optimizing the light power and improving the light-collection efficiency [26].
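For reference, the quoted number follows directly from the stated formula; a one-line check (Python, with the values from the text):

```python
# back-of-the-envelope check of the quoted shot-noise-limited sensitivity
dB = 0.20e-3        # linewidth, T
C = 0.0185          # contrast (1.85 %)
I0 = 6e12           # photon detection rate, 1/s
eta = dB / C / I0**0.5
print(f"{eta*1e9:.1f} nT/Hz^0.5")   # about 4.4, matching the quoted ~4.5
```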
### Temperature dependence of zero-field feature
We observed that the sample temperature affects the contrast of the zero-field feature. The temperature dependence was investigated on two samples cut along (111) with the results shown in Fig. 6.
Several effects are of note. The zero-field signal is only observed above \(\approx\!20\) K. The contrast increases with temperature up to 60 K. Above 60 K, the contrast saturates and then declines above \(\approx 250\) K, retaining a sizable value at room temperature (Fig. 6 b). The highest contrast is observed in the range of 180-200 K. The linewidth is roughly constant over the entire temperature range (Fig. 6 c). The high-temperature behavior is likely due to the increase of the longitudinal relaxation rate with temperature.
In order to understand the observed temperature dependence of the contrast of the zero-field feature at cryogenic temperatures, we need to identify the factors affecting the contrast. To do so, we first study the energy-level structure of coupled NV centers, see Fig. 7(a). When hyperfine, strain, and electric-field-induced splittings of the spin states are neglected, the energy levels of a many-NV system fall into branches with a ladder-like structure, where the spacing of the steps is equal to the zero-field splitting (ZFS) parameter \(D\). It is important to note the degeneracy of the branches. The lowest-lying \(|0,0,\ldots,0\rangle\) state is non-degenerate, even for NV centers of different orientations. The higher-lying states are, however, degenerate and include mixed \(|0\rangle\) and \(|\pm 1\rangle\) states, see Fig. 7(a) illustrating this for three NV centers. When all NV centers are completely polarized to \(|0\rangle\), the many-spin system populates the non-degenerate (lowest energy) state. Spin-relaxation effects may connect the lower-lying state with higher-lying states; however, such processes are suppressed by the large value of the ZFS compared to the typical strength of dipole-dipole interactions. A drop of the NV polarization, i.e., population of the higher-lying energy states, enables spin flip-flops among the NV centers. However, the average probability of finding the NV centers in \(|0\rangle\) does not change even in this case. Dipolar spin relaxation alone cannot account for the zero-field feature.
To understand depolarization in a dense NV ensemble, we need to utilize the concept of spin-fluctuators, as explained in Ref. [23]. Spin-fluctuators are NV centers with a short-lived ground-state electron spin, through which a polarized, dipolar-coupled NV bath can dissipate its energy faster than through the conventional spin-lattice relaxation. At non-zero magnetic field, only NV centers and fluctuators along the same axes couple. At zero magnetic field, however, all NV centers couple to each other,
which increases the effective density of the NV centers and the spin-fluctuator bath. In turn, this gives rise to the zero-field feature. The contrast of the zero-field feature thus depends on the density of the NV centers as well as on the ratio of the short-lived spin fluctuators to the "normal" long-lived NV centers. Since the former is sample dependent rather than temperature dependent, we consider the latter as a possible temperature-dependent factor.
As discussed in Ref. [23], spin-fluctuators can be close NV centers of different charge states, where electron (charge) tunneling between the centers can shorten the spin-state lifetime in the ground state of the negatively charged NV center. Therefore, the ratio of fluctuators to normal NV centers and, correspondingly, the contrast of the zero-field feature depend on the NV(0)/NV(\(-\)) ratio, which is temperature, excitation-power, and sample dependent. Considering the applied 515 nm excitation, which is energetic enough to ionize the NV center from the \({}^{1}E\) shelving state (see, for example, Ref. [29]), we attribute the temperature dependence of the contrast to the temperature dependence of the lifetime of the \({}^{1}E\) electronic state of the NV center. The \({}^{1}E\) electronic state couples to the higher-lying \({}^{1}A_{1}\) state through the pseudo Jahn-Teller effect and to the even higher-lying \({}^{1}E^{\prime}\) state through the dynamic Jahn-Teller effect [27]. The first excited vibronic level can be found 16 meV above the lowest vibronic level of the \({}^{1}E\) state, which gives rise to the characteristic temperature dependence, depicted, for example, for the lifetime of \({}^{1}E\) in Fig. 7 (b). The Boltzmann occupation of the excited vibronic levels increases significantly beyond 20 K, which on one hand shortens the lifetime of the \({}^{1}E\) state, see Fig. 7 (b), and on the other hand increases the photo-ionization rate from the \({}^{1}E\) state towards the conduction band. We attribute the onset of the zero-field contrast at around 20 K to the increasing ionization rate due to the thermal occupation of the vibronic excited energy levels of the \({}^{1}E\) state (e.g., such an effect is discussed for silicon carbide divacancy centers in Ref. [30]), whereas the shortening of the \({}^{1}E\) lifetime, i.e., intersystem crossing towards the ground state of NV(\(-\)), will compete with this photo-ionization process at elevated temperatures. Quantitative simulation of the contrast requires additional information on the dynamics of the spin fluctuators, which is the subject of further investigations.
From the study of the temperature dependence and the above discussion, we conclude that the NV(0)/NV(-) ratio governs the temperature dependence of the zero-field-feature contrast. The sensitive dependence of this parameter on excitation power and wavelength gives pathways to engineer the contrast of the zero-field feature without the need for microwave excitation.
### Conclusion
We have investigated the zero-field feature with respect to the NV density and the axis of the sample cut. The linewidth and the contrast of this feature increase with NV density. In the presence of a transverse field, various cross-relaxation features are detected due to the dipolar interaction between differently oriented NVs providing depolarization channels. The number and location of the cross-relaxation features are determined by the diamond cut, the azimuthal angle, and the strength of the transverse field. These cross-relaxation features follow a specific pattern (well reproduced by our theoretical calculations) when the azimuthal angle is changed with respect to the transverse field. These features have narrower linewidth
Figure 6: Temperature dependence of the zero-field feature. a) Zero-field feature at selected temperatures for sample S2; the feature is only observed above \(\approx\) 20 K. b), c) Linewidth and contrast as a function of temperature for samples S2 and 1970608-29. The features are fitted with Lorentzians (95 % confidence intervals) at each temperature to extract the linewidth and contrast.
Figure 7: a) Energy levels of three coupled NV centers and b) temperature dependence of the lifetime of the \({}^{1}E\) NV shelving state [27; 28].
than the zero-field feature. Moreover, higher contrast is observed at certain angles where there is only one decay channel available for depolarization. Finally, the temperature dependence of the zero-field feature was studied. The feature is only observed above \(20\,\mathrm{K}\). The contrast increases up to \(60\,\mathrm{K}\), levels off, and gradually decreases from about \(250\,\mathrm{K}\).
These results will be used in scalar- and vector-magnetometry applications, forming the basis of a practical near-zero-field microwave-free or even all-optical technique. With optimization of excitation-light intensity, fluorescence-collection efficiency, and sensing volume, we expect to achieve sensitivities better than \(100\,\mathrm{pT}/\sqrt{\mathrm{Hz}}\).
### Acknowledgement
We thank Junichi Isoya for helpful discussions and providing a sample. This work was supported by the European Commission's Horizon Europe Framework Program under the Research and Innovation Action MUQUABIS GA no. 101070546, by the German Federal Ministry of Education and Research (BMBF) within the Quantumtechnologien program (Grant No. 13N15064 and Grant No. 13N16455), and by the DAAD/JSPS 2021-23 cooperation grant No. 57569949. A.G. and V.I. acknowledge support from the Ministry of Culture and Innovation and the National Research, Development and Innovation Office within the Quantum Information National Laboratory of Hungary (Grant No. 2022-2.1.1-NL-2022-00004). V.I. acknowledges support from the National Research, Development, and Innovation Office of Hungary (NKFIH) (Grant No. FK 145395) and the Knut and Alice Wallenberg Foundation through WB-SQD2 project (Grant No. 2018.0071). A.G. acknowledges the Hungarian NKFIH grant No. KKP129866 of the National Excellence Program of Quantum-coherent materials project, the EU HE EIC Pathfinder project QuMicro (Grant No. 101046911), and the QuantERA II project MAESTRO.
|
2306.02794 | Long-term normalized difference urban index (NDUI) data time series for
urban studies | Keeping continuous, long-term data to examine changes in urban surroundings
is crucial as cities expand and develop. The DMSP OLS nighttime lights data and
the Landsat NDVI were used to create the Normalized Difference Urbanization
Index (NDUI), which has proven to be an invaluable resource for studying urban
areas. However, DMSP's reach and usefulness are constrained by the fact that
data collecting ended in 2014 while VIIRS has continued to collect the
nighttime lights data since 2012. The unavailability of DMSP translates to a
challenge in performing urban studies using the NDUI. In this work, we address
this difficulty and suggest a novel approach to bringing the NDUI time series
up to date. We first map the VIIRS to DMSP using 2012 as a calibration year and
then construct an updated NDUI time series. ClimateDownscaleSuite is used and
Swin Transformer is selected as the best model for the mapping. The Swin
Transformer model and the sophisticated machine learning capabilities it offers
are used in conjunction with the VIIRS evening lighting data collected after
2012. By using this strategy, not only is the NDUI time series extended, but
the potential of AI in filling in data gaps and boosting urban studies is also
highlighted. | Manmeet Singh, Subhasis Ghosh, Harsh Kamath, Vaisakh SB, Chandana Mitra, Shivam Saxena, Suryachandra Rao, Marshall Shepherd, Dev Niyogi | 2023-06-05T11:40:48Z | http://arxiv.org/abs/2306.02794v1 | # Long-term normalized difference urban index (NDUI) data time series for urban studies
###### Abstract
Keeping continuous, long-term data to examine changes in urban surroundings is crucial as cities expand and develop. The DMSP OLS nighttime lights data and the Landsat NDVI were used to create the Normalized Difference Urbanization Index (NDUI), which has proven to be an invaluable resource for studying urban areas. However, DMSP's reach and usefulness are constrained by the fact that data collection ended in 2014 while VIIRS has continued to collect the nighttime lights data since 2012. The unavailability of DMSP translates to a challenge in performing urban studies using the NDUI. In this work, we address this difficulty and suggest a novel approach to bringing the NDUI time series up to date. We first map the VIIRS to DMSP using 2012 as a calibration year and then construct an updated NDUI time series. ClimateDownscaleSuite is used and Swin Transformer is selected as the best model for the mapping. The Swin Transformer model and the sophisticated machine learning capabilities it offers are used in conjunction with the VIIRS nighttime lights data collected after 2012. By using this strategy, not only is the NDUI time series extended, but the potential of AI in filling in data gaps and boosting urban studies is also highlighted.
## 1 Introduction
The global trend of urbanization continues to rise at an unprecedented rate (World Bank, 2023). It is predicted that around 68 percent of the global population will live in urban contexts by 2050 (United Nations Department of Economic and Social Affairs, 2019). Such rapid and massive land transformation around the world has significant impacts on human health, the environment, and climate change over time (Bai et al., 2017; Jiang et al., 2021; Kahn, 2009; Nair et al., 2023). Remote sensing data plays a crucial role in understanding and managing rapidly evolving urban environments (Netzband et al., n.d.). A long-term global urban
dataset gives a better understanding of global urban dynamics in a changing world experiencing complex human-environment interactions (X. Li & Gong, 2016). At the same time, there is also an increasing need to study urban footprints at sufficient spatial detail and over longer timeframes to understand and mitigate adverse impacts (Shao et al., 2023). Presently, there are some urban datasets available, including the World Settlement Footprint 2015 (Marconcini et al., 2020), available for 2015-2016, the GHSL Settlement Grid (Pesaresi, Martino; Freire, 2016), available for 1975-2014, and the GHSL Built-Up Grid (Pesaresi et al., 2016), available for 1975-2014, that are commonly used for long-term urban studies. However, these datasets are available for a limited period of time only and come with coarse resolutions. This poor spatial resolution makes these datasets highly unsuitable for large-scale analysis. Therefore, there is a strong need for a global dataset that comes with high resolution and offers continued time-series coverage over a long period of time. One of the challenges of maintaining such high-resolution long-term satellite datasets is the discontinuation of satellite missions and technological upgrades of sensors. Remote sensing satellites have a limited operational lifespan, and as newer missions are launched, older satellites are decommissioned or become non-operational. This leads to a disruption in the continuity of data collection, making it difficult to establish consistent time-series datasets for long-term analysis. Additionally, advancements in sensor technology often result in the deployment of satellites with improved capabilities and higher-resolution sensors. While this is beneficial for obtaining more detailed information, it also introduces a change in sensor characteristics, making it challenging to compare data collected by different generations of satellites. Combining DMSP OLS (Li et al., 2017) nighttime lights data with Landsat NDVI to create the 30 m Normalized Difference Urban Index (NDUI) (Zhang et al., 2015) has been an invaluable resource for urban-climate researchers in recent times. However, despite its usefulness, the NDUI is not as longitudinally applicable as it could be because the DMSP data stopped in 2014. The present work introduces a novel method for filling this information gap by bringing the NDUI time series up to date, producing a DMSP-OLS-like dataset beyond 2014 from VIIRS nighttime images and the latest state-of-the-art artificial intelligence / machine learning algorithms. Figure 1 below visualizes comparative scenes from DMSP, VIIRS, and the generated NDUI over Austin, Texas, United States.
Figure 1: A comparison of the DMSP OLS nighttime lights data, VIIRS nighttime lights data, and the NDUI data generated by the proposed method. The datasets are shown over a 3 × 3 degree box over Austin, Texas, United States for the year 2012.
## 2 Datasets
In this study, the DMSP OLS and VIIRS nighttime lights datasets were used. They are described as below:
### DMSP OLS
The Operational Linescan System (OLS) of the Defense Meteorological Satellite Program has been compiling satellite data for a wide range of applications. The DMSP-OLS is a constellation of satellites that image Earth in both the visible and infrared spectrum at a spatial resolution of 30 arc seconds. The OLS is special because it can capture worldwide nighttime low-light imagery (Elvidge et al., 1997). It has the capability to detect visible and near-infrared (VNIR) emission sources at night. It is especially helpful for keeping an eye on man-made light sources like city lights and gas flares. Since the early 1970s, when the DMSP-OLS began providing continuous data, it has been possible to conduct long-term studies of numerous characteristics of Earth's surface. Studies of human settlement patterns, urban expansion, energy use, etc. have benefited greatly from this information (Elvidge et al., 2009).
Despite its popularity, DMSP-OLS information has certain drawbacks too. Since quantitative remote sensing was not the system's initial intent, the collected data is not radiometrically calibrated. In addition, the sensors can saturate in densely populated locations, making it difficult to track shifts there (Elvidge et al., 2014). However, despite these caveats, due to its long-term, constant data collection and worldwide coverage, the DMSP-OLS remained a vital resource for both scientific and policy-oriented applications for a long time. The Google Earth Engine data repository contains this data for the period 1992-2014. It provides cloud-free composites made using all DMSP-OLS sensor data collected by the US Air Force Weather Agency and processed by NOAA's National Geophysical Data Center for each calendar year.
### VIIRS
The Visible Infrared Imaging Radiometer Suite (VIIRS) sensor on the Suomi National Polar-orbiting Partnership (NPP) and NOAA-20 satellites collects the VIIRS Nighttime Lights Data. This information allows observation and measurement of nocturnal light emissions, as well as pictures and measurements of Earth's atmosphere, seas, and land surfaces (Elvidge et al., 2013). The VIIRS sensor has a resolution of 375-750 meters and captures data in 22 spectral bands from the visible to the long-wave infrared. The Day/Night Band (DNB) is a standout feature since it can pick up faint signals during the night. In order to give "visible" pictures when sunlight is unavailable, the DNB employs a mix of moonlight, airglow, zodiacal light, stars, and anthropogenic light sources (Miller et al., 2013). Urbanization, population increase, economic activity, and even power outages and natural calamities can be tracked with VIIRS. For instance, researchers have utilized VIIRS data to observe shifts in nighttime light patterns to monitor the rehabilitation of communities following natural catastrophes (Roman et al., 2019).
Additionally, environmental research has also made use of VIIRS Nighttime Lights Data. Many animals are impacted by artificial lights, and VIIRS NTL data has for years supported studies of their behavior and migration patterns induced by light pollution (Gaston et al., 2012). The National Oceanic and Atmospheric Administration (NOAA) provides the VIIRS Nighttime Lights Data for free on its website. The information is cleaned and made available in several formats, such as CSV and GeoTIFF, to meet the requirements of a wide range of studies. The data is also available in the Google Earth Engine data repository as VIIRS Nighttime Day/Night Band Composites Version 1 from 2012 to date (accessed on 24\({}^{\text{th}}\) May 2023), as shared by the Earth Observation Group, Payne Institute for Public Policy, Colorado School of Mines.
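As an illustration of how the two collections could be pulled for the 2012 overlap year, the snippet below uses the Earth Engine Python API. The asset IDs and band names reflect our understanding of the public Earth Engine catalog and should be verified before use; the bounding box is only an approximation of the region in Figure 1.

```python
import ee

ee.Initialize()  # requires prior `earthengine authenticate`

# Asset IDs and band names below are assumptions drawn from the public catalog.
dmsp_2012 = ee.Image("NOAA/DMSP-OLS/NIGHTTIME_LIGHTS/F182012").select("stable_lights")

viirs_2012 = (
    ee.ImageCollection("NOAA/VIIRS/DNB/MONTHLY_V1/VCMCFG")
    .filterDate("2012-04-01", "2013-01-01")  # VIIRS monthly composites start in 2012
    .select("avg_rad")
    .median()
)

# Approximate 3 x 3 degree box over Austin, Texas (cf. Figure 1)
region = ee.Geometry.Rectangle([-99.24, 28.77, -96.24, 31.77])
```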
## 3 Methodology
We obtained DMSP-OLS and VIIRS NTL satellite imagery from the Google Earth Engine data repository. Since the data availability of these two satellite systems overlapped in 2012, it became easier to harmonize the datasets and use that year as a reference point for calibration. We utilized ClimateDownscaleSuite to transform the VIIRS data into DMSP-OLS-like data. The primary goal of this program is to help users choose and execute the best statistical model for downscaling climatic variables. For the cross-calibration of the nighttime lights images from the two satellite sensors, we selected the Swin Transformer model (Liang et al., 2021) after testing xx machine learning-based models using ClimateDownscaleSuite. The Swin Transformer builds upon the Vision Transformer (ViT) (Dosovitskiy et al., 2021), which applies the transformer architecture initially developed for natural language processing tasks to computer vision. The ViT model segments images into fixed-size patches, linearly embeds these patches, and then applies a sequence of transformer layers. The Swin Transformer enhances this approach by using a hierarchical structure that applies self-attention across local windows of image patches and then shifts these windows across layers. This strategy allows the model to capture both local and global image contexts effectively while maintaining computational efficiency. Because of its capacity to model both local and global interactions at once, the Swin Transformer has demonstrated strong performance across a wide range of applications (Y. Li et al., 2022; Liu et al., 2022).
The Swin Transformer was introduced by researchers from Microsoft Research Asia as a new type of vision transformer in 2021; the name "Swin" originates from "shifted window" transformer. The model was proposed to address the shortcomings of previous transformer-based models when applied to visual tasks (Liu et al., 2021). The Swin Transformer's crowning innovation is its utilization of non-overlapping, shifted local windows inside the input pictures. Compared to the global self-attention employed in conventional transformers, the computational cost and memory requirements are reduced with this method, since self-attention mechanisms are applied only within these small windows. These local windows are merged into bigger ones as the Swin Transformer's layers proceed, allowing the model to take in more global information (Liu et al., 2021). The equations for the Swin Transformer are as follows:
\[\begin{split}\hat{z}^{l}&=\text{W-MSA}(\text{LN}(z^{l-1}))+z^{l-1}\\ z^{l}&=\text{MLP}(\text{LN}(\hat{z}^{l}))+\hat{z}^{l}\\ \hat{z}^{l+1}&=\text{SW-MSA}(\text{LN}(z^{l}))+z^{l}\\ z^{l+1}&=\text{MLP}(\text{LN}(\hat{z}^{l+1}))+\hat{z}^{l+1}\end{split}\]
where W-MSA represents windowed multi-head self-attention using regular window partitioning, SW-MSA stands for windowed multi-head self-attention using shifted window partitioning, LN is layer normalization, and \(z^{l}\) (\(\hat{z}^{l}\)) denotes the output of the MLP (W-MSA/SW-MSA) module of layer \(l\). More details can be found in Liu et al., 2021.
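The window partitioning and cyclic shift underlying W-MSA and SW-MSA can be sketched in a few lines of PyTorch. This is an illustrative fragment only, not the full Swin block with attention, masking, and patch merging:

```python
import torch

def window_partition(x: torch.Tensor, window_size: int) -> torch.Tensor:
    # (B, H, W, C) -> (num_windows * B, window_size, window_size, C)
    B, H, W, C = x.shape
    x = x.view(B, H // window_size, window_size, W // window_size, window_size, C)
    return x.permute(0, 1, 3, 2, 4, 5).contiguous().view(-1, window_size, window_size, C)

x = torch.randn(1, 8, 8, 96)                           # toy feature map
win = window_partition(x, window_size=4)               # windows for W-MSA
shifted = torch.roll(x, shifts=(-2, -2), dims=(1, 2))  # cyclic shift for SW-MSA
swin = window_partition(shifted, window_size=4)        # windows now straddle old boundaries
print(win.shape, swin.shape)                           # torch.Size([4, 4, 4, 96]) twice
```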
There have been a few attempts in the past to create DMSP-OLS-like datasets to extend the time series of the original DMSP Nighttime Lights data through various cross-sensor calibration approaches (Ghosh et al., 2021; X. Li et al., 2013; Nechaev et al., 2021; Tu et al., 2020; Zheng et al., 2019). These used different approaches such as the pseudo-invariant functions (PIF) paradigm or regression models. However, the problem with most methods is that they assume a linear or
Figure 2: Schematic representation of the Swin Transformer model.
polynomial relationship between DMSP and VIIRS radiances, which is not always the case due to the different resolutions, dynamic ranges, and saturation effects of the sensors (X. Li et al., 2017). Ghosh et al. tried to extend the DMSP series from 2013 to 2019 based on a Convolutional Neural Network (CNN) model (Residual U-Net) discussed in their companion paper (Nechaev et al., 2021). Since the primary F18 satellite of DMSP stopped collecting usable nighttime data at the beginning of 2014, this work took products from the F15 and F16 satellites of DMSP, which have been collecting pre-dawn data, and F18 and F15 satellite images for the early-evening period, upon the discovery that these satellites slid from a day/night orbit to a dawn/dusk orbit from 2012 onwards, and calibrated them with VIIRS mid-night data to study the diurnal pattern of nighttime lights. Following this method, the authors were able to extend the timeframe from 2013 to 2019 (Nechaev et al., 2021). However, this method relies heavily on data augmentation to use the limited number of same-year DMSP and VIIRS NTL maps for network training, which is no longer possible since the DMSP mission ended completely. One of the main benefits of using the Swin Transformer, besides it being a highly sophisticated AI-based model, is that it can be trained on the spectral signatures of entities or features from previous-year datasets, and it can then accurately detect and synthesize those entities for any corresponding years. Figure 3 below shows comparative scenes from DMSP, VIIRS, and the simulated DMSP developed using the proposed methodology.
The Swin Transformer's superior performance on a wide range of visual tasks is due in large part to its hierarchical structure. In image classification, object recognition, and semantic segmentation, for example, it has achieved state-of-the-art results. Its modular design makes it an adaptable model for a variety of uses (Liu et al., 2021), and it scales well to accommodate varying amounts of available computing power. Video comprehension and 3D object identification are only two examples of downstream tasks that may make use of a transformer-based backbone, and the Swin Transformer architecture is entirely compatible with them. It offers a fresh approach to incorporating transformer-based models into computer vision tasks, opening up new avenues for investigation and innovation in the area (Liu et al., 2021). In this study we use SwinIR (Liang et al., 2021), an image restoration algorithm, to perform the image-to-image regression.
Figure 3: Comparative scenes from DMSP, VIIRS, and Simulated DMSP
## 4 Data fusion and NDUI generation
We followed the original methodology used by Zhang et al. (Zhang et al., 2015) to develop the Normalized Difference Urban Index (NDUI) dataset, and systematically infused our simulated DMSP data to extend the lifespan of the NDUI dataset. We took the readily accessible VIIRS data from GEE and utilized the Swin Transformer model to create a long-term DMSP-OLS-like nighttime lights data collection. When combined with the Landsat NDVI, this larger dataset offers a long-term, high-resolution global NDUI time series for urban research (Figure 4).
Figure 4: Workflow illustrating the data fusion and NDUI generation process.
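A minimal sketch of the index computation is given below; it assumes the normalized-difference form of Zhang et al. (2015) with DMSP-like digital numbers (0-63) rescaled to [0, 1], so the exact rescaling convention should be checked against the original paper.

```python
import numpy as np

def ndui(ntl_dn: np.ndarray, ndvi: np.ndarray) -> np.ndarray:
    # Normalized-difference form; DMSP-like digital numbers (0-63) are assumed
    # to be rescaled to [0, 1] before taking the normalized difference.
    ntl = np.clip(ntl_dn, 0, 63) / 63.0
    denom = ntl + ndvi
    with np.errstate(divide="ignore", invalid="ignore"):
        return np.where(denom > 0, (ntl - ndvi) / denom, 0.0)

rng = np.random.default_rng(0)
simulated_dmsp = rng.uniform(0, 63, (4, 4))   # VIIRS-derived DMSP-like composite
annual_max_ndvi = rng.uniform(0, 1, (4, 4))   # e.g. annual maximum Landsat NDVI
print(ndui(simulated_dmsp, annual_max_ndvi))
```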
The Swin Transformer proved to be well-suited for the cross-calibration challenge because of its ability to efficiently process high-dimensional input. The model's capacity to capture both local and global information, thanks to its hierarchical architecture and the flexibility to adjust the attention window throughout the picture, allowed a precise translation of the VIIRS data and helped extend the NDUI lifespan. Our model was able to successfully harmonize VIIRS data and capture long-term changes in urban land cover at 30 m resolution. This method provides a solid framework for tracking and evaluating urbanization trends over time by cross-calibrating different satellite datasets and making use of cutting-edge modeling approaches.
## 5 Validation
The validation of the NDUI dataset has been done by comparing the trends in urbanization index and population density as shown in Figure 6. Population density and urbanization are intricately linked concepts that have a profound impact on social, economic, and environmental aspects of life. Urbanization refers to the process by which rural areas become urban, typically involving shifts from farming-based economies to industrial and service activities. It often coincides with significant population growth and is usually accompanied by the development of infrastructure and housing to accommodate the influx of people. As urbanization progresses, it generally leads to an increase in population density, primarily due to migration from rural areas
Figure 5: Spatial maps of NDUI over Austin, Texas, United States for the years 1999 to 2017. The last subplot shows the time evolution of NDUI during the period.
and increased birth rates. This is because urban areas often provide more job opportunities, better healthcare, and educational facilities, thus attracting individuals and families from less developed areas seeking improved living conditions. On the other hand, an increase in population density can also spur urbanization. When population density in a given area increases, it often creates a demand for more infrastructure, services, and job opportunities. This demand can drive the transformation of rural areas into urban ones as local economies adapt to meet the changing needs of the population. Therefore, it's a reciprocal relationship where urbanization can drive population density and vice versa.
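One simple way to quantify this co-evolution is to correlate the city-averaged yearly NDUI with population density. The series below are hypothetical stand-ins for the Austin data in Figure 6, used only to show the computation:

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical city-averaged yearly series standing in for the Austin data in Figure 6
years = np.arange(1999, 2018)
rng = np.random.default_rng(0)
mean_ndui = np.linspace(0.10, 0.25, years.size) + rng.normal(0, 0.005, years.size)
pop_density = np.linspace(1000, 1400, years.size) + rng.normal(0, 10, years.size)

r, p = pearsonr(mean_ndui, pop_density)
print(f"Pearson r = {r:.2f} (p = {p:.1e})")  # strongly positive for co-evolving trends
```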
## 6 Applications and future work
The applications of the long-term NDUI dataset to urban climate studies are manifold. Some of them are as below:
Urban Heat Island (UHI) study, urban cluster research, and many other fields can all benefit from the expanded NDUI time series. This study also demonstrates the promising future of using AI methods like the Swin Transformer to broaden and improve urban research.
Focusing on the specific climatic and meteorological features of metropolitan regions, urban climate research is an important topic of study. Insights like these can help shape the way our cities are planned and built in the future, making them better places to live. Among the many facets of urban climate studies are:
Urban Heat Island Effect: When compared to the surrounding rural regions, metropolitan areas tend to be hotter due to the urban heat island effect. High heat absorption and re-radiation by man-made surfaces like asphalt and buildings contribute significantly to the problem.
Figure 6: Normalized Difference Urban Index (left) and population density (right) averaged over Austin, Texas, US
Urban Microclimate: The study of specific weather patterns within a city is called "urban microclimate." It is possible for urban elements like buildings, parks, and bodies of water to generate microclimates with their own individual temperature, humidity, wind, and precipitation patterns.
Air Quality: Poor air quality is a common problem in urban areas owing to emissions from cars, factories, and other man-made sources. Scientists investigate where and how much these pollutants are produced, as well as their effects on people and the planet.
Urban Hydrology: Urban hydrology is the study of how human activity alters the processes of the water cycle, from precipitation to evaporation and condensation. Flash floods and diminished groundwater recharge are only two problems that can arise when impermeable surfaces are widely used in urban areas.
Urban Wind Flow: Wind tunnels, downdrafts, and updrafts are all phenomena that may be attributed to the built environment of cities. These affect pedestrian comfort, energy consumption, and the spread of air pollution.
Urban Biometeorology: This field, known as "urban biometeorology," investigates the effects of city weather on all forms of life. Researchers may look into the effects of urban heat islands on human health or the effects of urban growth on native species, for instance.
Urban Energy Use and Climate Change: Energy use in cities accounts for a disproportionate share of total U.S. greenhouse gas emissions. Scientists are interested in learning more about ways to slow global warming through adjustments to human behavior and the built environment.
Green and Blue Infrastructure: The term "green and blue infrastructure" describes the incorporation of plants and bodies of water into urban planning to ameliorate environmental conditions within cities.
Urban Climate Adaptation and Resilience: As climate change causes more extreme weather events and increasing sea levels, cities will need to adapt to these challenges and become more resilient. Scientists look at how to make urban areas more resistant to weather changes.
Urban Climate Modeling: What we call "urban climate modeling" is the practice of employing computer simulation and prediction models to study and analyze urban weather patterns. Both small-scale representations of specific neighborhoods and macro-level representations of whole cities or metropolitan areas may be used.
Future work involves mapping Landsat 5 TM and Landsat 8 OLI to Landsat 7 ETM+ to construct a longer NDVI record and thus a longer NDUI dataset.
## 7 Conclusion
To sum up, the suggested approach successfully fills in the data gap after 2012, bringing the NDUI (Normalized Difference Urban Index) time series up to date. This novel method combines the Swin Transformer's capabilities with the extensive datasets from DMSP-OLS and VIIRS to provide a continuous, long-term urban index that is more in line with actual urbanization patterns.
Figure 7: Examples of potential applications of the extended NDUI time series in urban studies.
When AI methods are applied to this massive dataset over time, new avenues for expanding our knowledge of urban settings become available. Intricate patterns and trends in urban growth and development may now be more easily discernible, thanks to the AI's capacity to extract useful insights from complicated and large-scale data. In turn, this may help shape urban policy and planning in ways that improve sustainability and resilience in our built environments. Therefore, our research not only fills in a long-standing data gap but also ushers in a new age of urban studies driven by AI-based discoveries and innovations.
**Funding:** This study has been supported through the NASA IDS program (Grant No. NNH19ZDA001N-IDS).
|
2303.11888 | Penalty-Based Imitation Learning With Cross Semantics Generation Sensor
Fusion for Autonomous Driving | In recent times, there has been a growing focus on end-to-end autonomous
driving technologies. This technology involves the replacement of the entire
driving pipeline with a single neural network, which has a simpler structure
and faster inference time. However, while this approach reduces the number of
components in the driving pipeline, it also presents challenges related to
interpretability and safety. For instance, the trained policy may not always
comply with traffic rules, and it is difficult to determine the reason for such
misbehavior due to the lack of intermediate outputs. Additionally, the
successful implementation of autonomous driving technology heavily depends on
the reliable and expedient processing of sensory data to accurately perceive
the surrounding environment. In this paper, we provide a penalty-based imitation
learning approach combined with cross semantics generation sensor fusion
technologies (P-CSG) to efficiently integrate multiple modalities of
information and enable the autonomous agent to effectively adhere to traffic
regulations. Our model undergoes evaluation within the Town 05 Long benchmark,
where we observe a remarkable increase in the driving score by more than 12%
when compared to the state-of-the-art (SOTA) model, InterFuser. Notably, our
model achieves this performance enhancement while achieving a 7-fold increase
in inference speed and reducing the model size by approximately 30%. For more
detailed information, including code-based resources, they can be found at
https://hk-zh.github.io/p-csg/ | Hongkuan Zhou, Aifen Sui, Letian Shi, Yinxian Li | 2023-03-21T14:29:52Z | http://arxiv.org/abs/2303.11888v4 | Penalty-Based Imitation Learning With Cross Semantics Generation Sensor Fusion for Autonomous Driving
###### Abstract
In recent times, there has been a growing focus on end-to-end autonomous driving technologies. This technology involves the replacement of the entire driving pipeline with a single neural network, which has a simpler structure and faster inference time. However, while this approach reduces the number of components in the driving pipeline, it also presents challenges related to interpretability and safety. For instance, the trained policy may not always comply with traffic rules, and it is difficult to determine the reason for such misbehavior due to the lack of intermediate outputs. Additionally, the successful implementation of autonomous driving technology heavily depends on the reliable and expedient processing of sensory data to accurately perceive the surrounding environment. In this paper, we provide a penalty-based imitation learning approach combined with cross semantics generation sensor fusion technologies (P-CSG) to efficiently integrate multiple modalities of information and enable the autonomous agent to effectively adhere to traffic regulations. Our model undergoes evaluation within the Town 05 Long benchmark, where we observe a remarkable increase in the driving score by more than 12% when compared to the state-of-the-art (SOTA) model, InterFuser. Notably, our model achieves this performance enhancement while achieving a 7-fold increase in inference speed and reducing the model size by approximately 30%.
## I Introduction
Autonomous driving is an emerging field of research at the intersection of robotics and computer vision. Recently, end-to-end autonomous driving [1][2][3], which integrates the perception module and the decision-making module into one learning system to optimize, has gained popularity in research, as it has proved surprisingly powerful with minimal training data gathered from simulation environments. However, the end-to-end approach still suffers from the problem of interpretability and cannot guarantee the most important factor in autonomous driving: safety. Our primary objective is to enhance the safety of the end-to-end system through two main approaches. Firstly, we aim to enhance the reliability of the multi-sensor fusion algorithm, which will enable the system to perceive its surrounding environment with greater accuracy and robustness. Secondly, in order to enhance the interpretability of the end-to-end approach, we focus on refining the policy learning algorithm, enabling the autonomous agent to effectively adhere to traffic regulations.
The fusion of LiDAR and RGB sensors has recently shown impressive results in the context of autonomous driving. LiDAR sensors provide accurate 3D information about the surrounding environment but lack color information compared to RGB sensors; RGB sensors are more suitable for recognizing traffic lights and traffic sign patterns but are less resilient to bright light and other bad weather conditions than LiDAR sensors. Some fusion technologies [4][5] have achieved commendable results in the field of object detection. In terms of end-to-end autonomous driving, [6][7][8] focus more on attention-based approaches to extract the global context from different modalities. Despite its potential, the additional Transformer architecture leads to a significant increase in both the training time and inference time of the model. To address this issue, we fuse the information (Figure 1) obtained from LiDAR and RGB by aligning their shared semantic information with auxiliary losses. This approach requires fewer parameters, yet it still produces remarkable results.
Reinforcement learning (RL) [9] and imitation learning (IL) [10] are two learning paradigms for end-to-end autonomous driving. Even though reinforcement learning demonstrates huge potential in autonomous driving, it often confronts limitations due to low sample efficiency and the requirement for careful reward design, which poses challenges in obtaining sufficient training data for effective learning. Meanwhile, RL algorithms may also learn to take risky actions that lead to accidents or unsafe driving behaviors. Consequently, other researchers have turned to imitation learning approaches. However, current imitation learning approaches still lack an effective mechanism to ensure safety during the training process. After careful study, we found that the metric of autonomous driving and the objective function of imitation learning are not unified, which means a
Fig. 1: **Illustration. To safely navigate in the road, the ego-vehicle must capture the surrounding context from the RGB camera (left) and LiDAR (right). Our P-CSG model integrates both modalities by capturing shared semantic features via feature alignment and cross semantic generation.**
low loss of the learning objective does not guarantee good performance of the agent in the testing environment. Traffic rules, e.g., prohibitions on running red lights and stop signs, are not reflected in the objective function. Our objective is to develop a novel objective function by incorporating penalty mechanisms, with the intention of augmenting the trained model's responsiveness to traffic rule violations. This integration aims to instill a heightened awareness of traffic regulations during the training process, ultimately leading to improved overall performance of the model.
Our main contributions to this paper can be summarized as follows:
* We proposed a penalty-based imitation learning approach that leverages constrained optimization to make the end-to-end autonomous driving model more sensitive to traffic rule violations. This objective function design also unifies the metric of autonomous driving and the objective of imitation learning. We refer to this approach as penalty-based imitation learning.
* We proposed a novel multi-sensor fusion model to extract the shared features and unique features between different modalities, making it easier for the decision network to get a global context for policy generation.
## II Related Works
### _End-to-End Autonomous Driving_
Today's autonomous driving technologies have two main branches, modular and end-to-end approaches. Modular approaches apply a fine-grained pipeline of software modules working together to control the vehicle. In contrast, the entire pipeline of end-to-end driving is treated as one single learning task. End-to-end approaches have shown great success in computer vision tasks, such as object detection [11][12][13][14], object tracking [15], and semantic segmentation [16][17]. The success of these tasks builds a solid foundation for end-to-end autonomous driving. It is reasonable to believe end-to-end approaches are capable of solving autonomous driving problems in the near future. The most common learning methods for end-to-end autonomous driving are imitation learning [6, 18, 19, 3, 20, 21] and reinforcement learning [22][23].
### _Safety Mechanism in End-to-End Autonomous Driving_
In the realm of autonomous driving, a key challenge is implementing safety mechanisms that can prevent accidents and protect passengers, pedestrians, and other road users. Within the framework of imitation learning, the agent learns driving skills by emulating expert demonstrations. The quality of these demonstrations has a significant impact on the agent's ability to drive safely in traffic. To improve the safety of the autonomous driving agent, researchers in [7] focus on enhancing the quality of the expert agent, while those in [8] introduce an additional safety module that filters out potentially dangerous driving behaviors generated by the network. Our contribution is to introduce the "Penalty" concept to the imitation learning framework, which incentivizes the trained agent to adopt safer driving behaviors.
### _Multi-sensor Fusion Technologies_
Sensor Fusion technologies are commonly employed for 3D object detection and motion forecasting. Among the various types of sensors that can be integrated, the fusion of LiDAR and camera sensors is most frequently employed, where LiDAR data serves as a supplement to image data, providing additional information about the surrounding environment and improving data reliability due to its consistency in various environments. There are three branches of sensor fusion: early fusion, middle fusion, and late fusion. In early fusion, the data is fused before being fed into the learnable system, which is the most efficient approach. In middle fusion, the information is merged in the middle of the network, and the fused features are used to produce task-specific outputs. Late fusion is an ensemble learning method that combines the outputs generated by each modality into a final result.
Multi-sensor fusion has received much research attention in the field of end-to-end autonomous driving. Prior works such as LateFusion [18] used a large Multi-Layer Perceptron (MLP) network to process the features extracted by the perception networks from LiDAR and RGB inputs. This MLP layer takes on the tasks of feature weighting, selection, and fusion, which makes it hard to capture a global context of multi-modality inputs. TransFuser [6] provides an approach that leverages the attention mechanism to fuse the LiDAR and RGB information. They used the transformer architecture to obtain a multi-modality global context. The Transformer-based fusion model is applied at different resolutions between the LiDAR and RGB perception networks. TransFuser+ [7], as an extension of TransFuser, introduced more headers in the neural networks, incorporating four auxiliary tasks: depth prediction and semantic segmentation from the image branch, and HD map prediction and vehicle object detection from the BEV branch. These auxiliary tasks help to visualize the black box of the whole network. In addition, this approach also guarantees important information flow in the latent space, because the information contained in the latent space should be able to complete not only the navigation task but also the manually pre-defined auxiliary tasks.
## III Methodologies
In this section, we propose a novel multi-sensor fusion approach and a penalty-based Imitation Learning paradigm for end-to-end autonomous driving.
### _Problem Setting_
The task we concentrate on is point-to-point navigation in an urban setting where the goal is to complete a route with safe reactions to dynamic agents such as moving vehicles and pedestrians. The traffic rules should also be followed.
**Imitation Learning (IL):** Imitation learning can learn a policy \(\pi\) that clones the behavior of an expert policy \(\pi^{*}\). In our setup, the policy is conditioned on the multi-modality inputs of the current observations. We use the Behavior Cloning (BC) approach of IL. An expert policy is applied in the
environment to collect a large dataset \(\mathcal{D}=\{(\textbf{x}^{i},\textbf{w}^{i})\}_{i=1}^{Z}\) with the size of \(Z\), which contains the observation of the environment \(\textbf{x}^{i}\) and a set of waypoints \(\textbf{w}^{i}\) in the future timesteps. The objective function is defined as:
\[\mathcal{F}=\mathbb{E}_{(\mathcal{X},\mathcal{W})\sim\mathcal{D}}[\mathcal{L}( \mathcal{W},\pi(\mathcal{X}))] \tag{1}\]
where \(\mathcal{L}\) is the loss function.
In our setting, the observation \(\mathcal{X}\) consists of one RGB image and one LiDAR point cloud from the current time step. We use only a single frame, since other works [24, 25] have shown that using multiple frames does not improve the information gain much. A PID controller \(\mathcal{I}\) is applied to perform low-level control, i.e. steer, throttle, and brake, based on these predicted future waypoints.
**Global Planner:** According to CARLA [26] 0.9.10's protocol, the high-level goal locations \(G\) are provided as GPS coordinates. These goal locations are sparse (hundreds of meters apart) and can only be used as guidance. In contrast, the waypoints to be predicted are dense, only a few meters away from each other.
### _Cross Semantics Generation_
The motivation for our approach is based on the fact that multi-modal inputs carry shared semantic information as well as unique information. For instance, the geometric attributes and spatial coordinates of both vehicles and pedestrians are shared information that can be extracted from both LiDAR and RGB input. Figure 1 demonstrates the shared information of LiDAR and RGB input. Unique information refers to the complementary information that the other input does not have. In the case of RGB input, unique information often pertains to features such as the color of traffic lights, patterns on traffic signs, and similar attributes. On the other hand, in the context of LiDAR input, unique information pertains to the spatial relationships of objects. Our multi-sensor fusion approach aims to extract and align the shared features from LiDAR and RGB input sources so that the subsequent decision network can leverage the organized features to achieve better performance.
To extract the shared information from LiDAR and RGB inputs, we propose cross semantics generation sensor fusion. As Figure 2 demonstrates, the front RGB image and the top-down pre-processed LiDAR pseudo-image are first fed into two residual networks [27] to extract the corresponding RGB and LiDAR features. Note that the LiDAR point cloud is pre-processed into bird's-eye-view pseudo-images, the same setting as [6]. We use four different linear layers to extract the shared features and unique features of LiDAR and RGB. The shared features of RGB are used to generate the top-down semantic segmentation aligned with the LiDAR input; the shared features of LiDAR are used to generate the semantic segmentation of the corresponding RGB input. We refer to this approach as cross semantics generation, since the information from one modality is utilized to generate semantic representations of the other modality. In this way, the information flow is said to be 'crossed', as each modality contributes to the understanding of the other. The extracted shared information is maximized, since the information derived from one modality must strive to generate an accurate semantic segmentation of the other modality to the best of its ability. An extra L2 loss is introduced to align the shared features of RGB and LiDAR into the same latent space. In our setup, the semantic segmentation contains 4 channels: the drivable area, the non-drivable area, objects in the drivable area such as vehicles and pedestrians, and others. In terms of the unique features from the RGB input, we mainly concentrate on traffic lights and stop signs. As shown in the figure, the unique features from the RGB input are used to train the traffic light and stop sign indicators, which ensures the important information flow of traffic lights and stop signs in the neural network. These headers are also critical for the penalty-based imitation learning which we will discuss in the following sections.
### _Waypoint Prediction Network_
As shown in Figure 2, all the unique and shared features are concatenated into a 512-dimensional feature vector. This vector is fed into an MLP that reduces the dimension to 64 for computational efficiency. The hidden layer of the GRU is initialized with this 64-dimensional feature vector. The GRU's update gate controls the information flow from the hidden layer to the output. At each time step, it also takes the current location and the goal location as input. We follow the approach of [19]: a single GRU layer is followed by a linear layer which takes the state of the hidden layer and predicts the relative position of the waypoint compared to the previous waypoint for \(T=4\) time steps. Hence, the predicted future waypoints are formed as \(\{w_{t}=w_{t-1}+\delta w_{t}\}_{t=1}^{T}\). The start token for the GRU is given by (0,0).
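A minimal sketch of this auto-regressive decoder is shown below; the layer names and sizes are illustrative assumptions rather than the authors' exact implementation.

```python
import torch
import torch.nn as nn

class WaypointDecoder(nn.Module):
    """Sketch of the GRU waypoint head: the fused 64-d feature initializes the
    hidden state, and each step predicts an offset delta_w to the previous waypoint."""

    def __init__(self, hidden_dim: int = 64, steps: int = 4):
        super().__init__()
        self.steps = steps
        # input: current waypoint (2) + goal location (2), both in ego coordinates
        self.gru = nn.GRUCell(input_size=4, hidden_size=hidden_dim)
        self.delta = nn.Linear(hidden_dim, 2)

    def forward(self, fused_features: torch.Tensor, goal: torch.Tensor) -> torch.Tensor:
        h = fused_features
        w = torch.zeros(goal.size(0), 2, device=goal.device)  # start token (0, 0)
        waypoints = []
        for _ in range(self.steps):
            h = self.gru(torch.cat([w, goal], dim=1), h)
            w = w + self.delta(h)  # w_t = w_{t-1} + delta_w_t
            waypoints.append(w)
        return torch.stack(waypoints, dim=1)  # (B, T, 2)

pred = WaypointDecoder()(torch.randn(8, 64), torch.randn(8, 2))
print(pred.shape)  # torch.Size([8, 4, 2])
```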
**Controller**: Based on the predicted waypoints, we use two PID controllers for lateral and longitudinal directions respectively. We follow the settings of [2].
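For illustration, a generic version of such a controller pair could look as follows; the gains and the waypoint-to-error mapping are placeholders, not the tuned settings of [2].

```python
import numpy as np

class PID:
    """Generic PID controller; the gains below are illustrative placeholders."""
    def __init__(self, kp, ki, kd, dt=0.05):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral, self.prev = 0.0, 0.0

    def step(self, error: float) -> float:
        self.integral += error * self.dt
        deriv = (error - self.prev) / self.dt
        self.prev = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv

# Lateral: steer toward the heading of the second predicted waypoint (ego frame)
waypoints = np.array([[0.0, 1.2], [0.1, 2.4], [0.3, 3.5], [0.6, 4.5]])
heading_err = np.arctan2(waypoints[1, 0], waypoints[1, 1])  # x lateral, y forward
steer = float(np.clip(PID(1.0, 0.05, 0.2).step(heading_err), -1, 1))

# Longitudinal: throttle from desired vs. current speed (cf. Eq. 6)
desired_speed = np.linalg.norm(waypoints[1] - waypoints[0]) / 0.5  # dt between frames
speed_err = desired_speed - 3.0                                    # current speed, m/s
throttle = float(np.clip(PID(0.8, 0.1, 0.0).step(speed_err), 0, 0.75))
print(steer, throttle)
```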
### _Loss Functions_
Similar to previous works [6, 2], we also use \(L_{1}\) loss as our reconstruction loss. For each input, the loss function can be formalized as:
\[\mathcal{L}=\sum_{t=1}^{T}||w_{t}-w_{t}^{gt}||_{1} \tag{2}\]
where \(w_{t}\) is the t-th predicted waypoints and \(w_{t}^{gt}\) is the t-th ground truth waypoint produced by the expert policy.
**Auxiliary Tasks**: In our cross semantics generation approach, we have four extra auxiliary tasks along with the main imitation learning task. As explained in the section above, two of the auxiliary tasks are semantic segmentation. In order to ensure some important information flow in the network, we introduce two extra classification headers, namely traffic light classification and stop sign classification. These two headers help the neural network to capture traffic light and stop sign information, which is significant for the penalty-based imitation learning introduced later.
**Front View Semantics**. Front-view semantic segmentation has four different channels. We define \(y_{f}\) as the ground
truth 3D tensor with the dimension \(H_{f}\times W_{f}\times 4\) and \(\hat{y}_{f}\) as the output of the front view decoder with the same shape.
**Top-down View Semantics**. Like front-view semantic segmentation, top-down-view semantic segmentation also has four channels. We define \(y_{td}\) as the ground truth 3D tensor with the dimension \(H_{td}\times W_{td}\times 4\) and \(\hat{y}_{td}\) as the output of the top-down view decoder with the same shape.
**Image-LiDAR Alignment Loss**. This loss aims to align the shared semantic features of the image and LiDAR branches, denoted \(z_{r}\) and \(z_{l}\) respectively, into the same latent space. We use an L2 loss to align these features.
**Traffic Light Classification**. The output of the traffic light decoder should be a vector of length 4 indicating the four states red light, yellow light, green light, and none in the current frame. We then define \(y_{l}\) as the ground truth traffic light vector of length 4 and \(\hat{y}_{l}\) as the output of the traffic light decoder with the same shape.
**Stop Sign Classification**. The output of the stop sign decoder should be a vector of length 2 indicating whether a stop sign exists in the current frame. The ground truth stop sign vector of length 2 and the output of the stop sign decoder with the same shape are defined as \(y_{s}\) and \(\hat{y}_{s}\), respectively. Based on what we defined above, the new loss function is given by:
\[\begin{split}\mathcal{L}=&\sum_{t=1}^{T}||w_{t}-w_{t}^{gt}||_{1}+\omega_{f}\mathcal{L}_{\text{CE}}(y_{f},\hat{y}_{f})+\\ &\omega_{td}\mathcal{L}_{\text{CE}}(y_{td},\hat{y}_{td})+\omega_{l}\mathcal{L}_{\text{CE}}(y_{l},\hat{y}_{l})+\\ &\omega_{s}\mathcal{L}_{\text{CE}}(y_{s},\hat{y}_{s})+\omega_{a}\mathcal{L}_{2}(z_{r},z_{l})\end{split} \tag{3}\]
where \(\mathcal{L}_{\text{CE}}\) and \(\mathcal{L}_{2}\) are the cross-entropy loss and L2 loss, respectively, \(z_{r}\) and \(z_{l}\) are the shared features of the image and LiDAR branches, and \(\omega_{f}\), \(\omega_{td}\), \(\omega_{l}\), \(\omega_{s}\), \(\omega_{a}\) are the weights for these auxiliary losses.
### _Penalty-based Imitation Learning_
We found that the objective function design for imitation learning and the autonomous driving metric are not unified, which means a low loss of the objective function does not guarantee a high driving score and high route completion. After careful study, we identified two potential reasons.
* The expert agent still makes mistakes when generating the dataset. Sometimes, the expert agent runs a red light and violates the stop sign rule.
* The objective function is not sensitive to serious violations of the traffic rules, i.e. the violation of red lights and stop signs. The average objective function
Fig. 2: **Architecture. The top-down LiDAR pseudo image and front camera image go through two residual networks to extract 512 dimension feature vectors. We use four different MLPs to extract the shared features and the unique features. The unique features of RGB input are used to generate stop signs and traffic light indicators. The shared features of LiDAR are used to reconstruct the segmentation of RGB input while the shared features of RGB are used to reconstruct the segmentation of top-down LiDAR input. An alignment loss is used to align the shared features from LiDAR and camera inputs into the same space. These shared features and unique features are concatenated along with the measurements (velocity, throttle, steer, brake from the last frame) and then go through one MLP to reduce the size. Finally, they will be fed into one GRU decoder to predict short-term waypoints.**
loss may not increase much when the agent violates traffic rules, even though such violations may cause serious consequences, resulting in a huge drop in driving score and route completion.
Behavior Cloning (BC), as an imitation learning method, aims to clone the behavior of the expert agent. In this way, the performance of the trained agent can be no better than that of the expert agent. If the expert agent makes a mistake, the trained agent will learn to make that mistake instead of avoiding it.
Our aim is to reformulate the objective function of imitation learning in line with traffic rules, whereby the agent is penalized (with a higher loss) when it generates short-term future waypoints that violate traffic rules during the training process.
The traffic rules can be modeled as constraint functions, which are conditions of the optimization problem that the solution must satisfy. In our setting, we concentrate on two kinds of traffic rule violations and one common driving practice, namely red light violations, stop sign violations, and slowing down when turning, because these are the main problems we found in our vanilla imitation learning approach. We first define three corresponding penalties to quantify these violations. Figure 3 illustrates these penalties.
#### III-B1 Red Light Penalty
For the red light violation, we design a red light penalty as follows:
\[\mathcal{P}_{\mathrm{tl}}=\mathbb{E}_{\mathcal{X}\sim\mathcal{D}}[\mathbb{1} _{\mathrm{red}}\cdot\sum_{i=1}^{t}c_{i}\cdot\max\{0,w_{i}-\overline{p}\}] \tag{4}\]
where \(w_{i}\) is the \(i\)-th predicted waypoint of the trained agent and \(\overline{p}\) is the position of the stop line at the intersection. Both \(w_{i}\) and \(\overline{p}\) are in the coordinate system of the ego car. \(c_{i}\) is a weight parameter with \(\sum_{i}c_{i}=1\). \(\mathbb{1}_{\mathrm{red}}\) indicates whether a red light that influences the agent exists in the current frame. \(\mathcal{X}\) is the input of the current frame and \(\mathcal{D}\) is the whole dataset.
In red light scenarios, an extra red light penalty is defined by the distances of the predicted waypoints beyond the stop line at the intersection. If the predicted waypoints are within the stop line, the penalty remains zero. On the other hand, if the predicted waypoints are beyond the stop line, the sum of the distances between those waypoints and the stop line is taken as the red light penalty. The additional information for the red light penalty calculation, such as traffic light state and stop line location, is pre-processed and saved in each frame of our dataset.
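A sketch of Eq. (4) in PyTorch is given below; it assumes ego coordinates with the y-axis pointing forward, so that "beyond the stop line" reduces to a coordinate comparison, which is an illustrative simplification.

```python
import torch

def red_light_penalty(waypoints, stop_line_y, red_mask, weights):
    # Eq. (4): waypoints (B, T, 2) in ego coordinates, y assumed to point forward;
    # stop_line_y (B,) is the forward distance to the stop line; red_mask (B,) in {0, 1}.
    overshoot = torch.relu(waypoints[..., 1] - stop_line_y.unsqueeze(1))  # max{0, w_i - p}
    per_sample = (weights * overshoot).sum(dim=1)                         # sum_i c_i (.)
    return (red_mask * per_sample).mean()

B, T = 8, 4
wp = torch.randn(B, T, 2).cumsum(dim=1).abs()
c = torch.full((T,), 1.0 / T)  # uniform weights so that sum_i c_i = 1
loss = red_light_penalty(wp, torch.full((B,), 2.0), torch.ones(B), c)
print(loss)
```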
#### III-B2 Stop Sign Penalty
Similar to the red light penalty, a stop sign penalty is given when the predicted waypoints violate the stop sign rule. The penalty is formalized as follows:
\[\mathcal{P}_{\mathrm{ss}}=\mathbb{E}_{\mathcal{X}\sim\mathcal{D}}[\mathbb{1} _{\mathrm{stopsign}}\cdot\max\{v-\epsilon,0\}] \tag{5}\]
where \(v\) is the desired speed calculated by
\[v=\frac{w_{0}-w_{1}}{\Delta t} \tag{6}\]
\(w_{0}\) and \(w_{1}\) are the first and second predicted waypoints, and \(\Delta t\) is the time interval between frames when collecting the data. \(\mathbb{1}_{\mathrm{stopsign}}\) is an indicator for stop sign checking: if the vehicle drives into the area influenced by a stop sign, this indicator turns to 1; otherwise it remains zero. \(\epsilon\) is the maximum speed allowed to pass stop sign tests.
Fig. 3: **Penalty Illustration.** To ensure compliance with red light and stop sign rules, as well as promoting deceleration during turning maneuvers, our approach incorporates three distinct penalty types. The first column of the figures exemplifies the red light penalty, wherein waypoints situated beyond the stop line receive a penalty when the traffic light is red. In the second column, we demonstrate the stop sign penalty, wherein predicted waypoints within the vicinity of a stop sign are penalized if the agent fails to decelerate adequately. The speed penalty is enforced during turning actions, as shown in the last two figures. Specifically, if the predicted waypoints indicate an excessive speed, a speed penalty is imposed.
#### III-B3 Speed Penalty
A speed penalty is applied if the agent attempts to turn at excessive speed. The motivation to introduce this penalty is based on the common driving experience of human beings. Also, we observe that the agent sometimes cannot avoid hitting pedestrians when turning at high speed, since it has less time to react. The penalty is formalized as:
\[\mathcal{P}_{\mathrm{sp}}=\mathbb{E}_{\mathcal{X}\sim\mathcal{D}}[\sin(d\theta) \cdot\max\{v-v_{\mathrm{lb}},0\}] \tag{7}\]
where \(d\theta\) is the direction deviation between the current frame and the next frame. As for the stop sign penalty, \(v\) is defined in (6). \(v_{\mathrm{lb}}\) is the speed lower bound; speeds under this lower bound are not punished.
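The remaining two penalties, Eqs. (5)-(7), can be sketched analogously; the \(\Delta t\), \(\epsilon\), and \(v_{\mathrm{lb}}\) values below are placeholders, and \(d\theta\) is assumed non-negative.

```python
import torch

def stop_sign_penalty(waypoints, stop_mask, dt=0.5, eps=0.1):
    # Eqs. (5)-(6): desired speed from the first two waypoints, penalized
    # above eps inside a stop-sign zone. dt and eps are placeholder values.
    v = torch.linalg.norm(waypoints[:, 1] - waypoints[:, 0], dim=1) / dt
    return (stop_mask * torch.relu(v - eps)).mean()

def speed_penalty(waypoints, d_theta, dt=0.5, v_lb=2.0):
    # Eq. (7): speed above the lower bound v_lb is penalized, scaled by
    # sin(d_theta); d_theta is assumed non-negative here.
    v = torch.linalg.norm(waypoints[:, 1] - waypoints[:, 0], dim=1) / dt
    return (torch.sin(d_theta) * torch.relu(v - v_lb)).mean()

wp = torch.randn(8, 4, 2).cumsum(dim=1)
print(stop_sign_penalty(wp, stop_mask=torch.ones(8)),
      speed_penalty(wp, d_theta=torch.rand(8) * 0.3))
```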
With the help of these penalties, the constrained optimization can be formalized as:
\[\min \mathcal{F}\] s.t. \[\mathcal{P}_{\mathrm{tl}},\mathcal{P}_{\mathrm{ss}},\mathcal{P}_{ \mathrm{sp}}=0 \tag{8}\]
where \(\mathcal{F}\) is the objective function defined in (1).
The Lagrange multiplier strategy can be applied here. We introduce three Lagrange multipliers \(\lambda_{1}\), \(\lambda_{2}\), \(\lambda_{3}\), and the Lagrange function is defined by:
\[\min \mathcal{F}+\lambda_{1}\mathcal{P}_{\mathrm{tl}}+\lambda_{2}\mathcal{P}_{ \mathrm{ss}}+\lambda_{3}\mathcal{P}_{\mathrm{sp}} \tag{9}\]
This is the final objective function to optimize. For simplicity, the Lagrange multipliers \(\lambda_{1}\), \(\lambda_{2}\), \(\lambda_{3}\) are treated as fixed hyper-parameters. Well-chosen values are important for optimization: according to our experiments, too large a \(\lambda\) influences the behaviors in other scenarios, while too small a \(\lambda\) is not powerful enough to make the agent obey the corresponding traffic rule.
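Putting the pieces together, the training objective of Eq. (9) is just a weighted sum; the snippet below uses the best-model weights from the ablation study and dummy tensors in place of the real losses.

```python
import torch

# Dummy stand-ins; in training these come from Eq. (3) and the penalty functions above.
imitation_loss = torch.tensor(1.0, requires_grad=True)
p_tl, p_ss, p_sp = torch.tensor(0.2), torch.tensor(0.05), torch.tensor(0.1)

lam1, lam2, lam3 = 0.5, 0.01, 0.5  # best-model weights reported in the ablation study
total = imitation_loss + lam1 * p_tl + lam2 * p_ss + lam3 * p_sp
total.backward()
print(total.item())
```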
The red light indicator and stop sign indicator headers are important for the agent to learn from the stop sign and red light penalties, because the information flow of the stop sign and red light helps the agent to build the logical connection between behavior, observation, and punishment.
## IV Experiments
In this section, our experiment setup will first be described. Then we compare our model against other baselines. We also provide ablation studies to show the improvements from penalty-based imitation learning and cross semantics generation.
### _Task Description_
The task we concentrate on is a navigation task along predefined routes in different scenarios and areas. GPS signals guide the vehicle; low-signal or no-signal situations are not taken into consideration. Some predefined scenarios appear in each route to test the agent's ability to handle emergencies, such as obstacle avoidance, other vehicles running a red light, and the sudden appearance of pedestrians on the road. There is a time limit for the agent to complete the route; exceeding it is considered a failure in terms of route completion.
### _Training Dataset_
Realistic driving data is hard to obtain. Alternatively, we use the CARLA simulator [26] to collect training data generated by the expert policy. We use the same training dataset as TransFuser [6]. It includes 8 towns and around 2500 routes through junctions with an average length of 100 m and about 1000 routes along curved highways with an average length of 400 m. We used the same expert agent as TransFuser to generate these training data.
### _Test Result_
#### IV-C1 Benchmark
We use the Town05 Long benchmark to evaluate our model. The Town05 Long benchmark contains 10 routes, all of which are over 2.5 km. This benchmark is also used by InterFuser [8] and TransFuser.
#### IV-C2 Baselines
The baselines we chose to compare with our model are TransFuser+, TransFuser, Geometric Fusion, and LateFusion. **TransFuser** [6] introduces the Transformer into the multi-sensor fusion architecture to achieve better end-to-end autonomous driving results. **TransFuser+** [7], as an extension of TransFuser, leverages several auxiliary losses to ensure important information flows, such as traffic light and road line information, in the network. **InterFuser** [8] developed a safety control module to regulate the behaviors of the agent, preventing the agent from violating traffic rules. **LateFusion** [18] uses a simple Multi-Layer Perceptron network to integrate multi-modal information. **Geometric Fusion** [6] implements both LiDAR-to-image and image-to-LiDAR fusion to aggregate the information from LiDAR and image inputs to increase end-to-end autonomous driving ability.
As Table I shows, our model achieves the highest route completion and driving scores among all baselines. Compared to TransFuser, TransFuser+, and LateFusion, our model shows a large increase in driving score and route completion. InterFuser, the current state-of-the-art model, performs well because its safety module avoids dangerous behavior inferred by the neural networks. However, this structure modularizes the decision-making process, and conflicts between the safety module and the neural network may pose potential risks. Another disadvantage of modular approaches is that the predefined inputs and outputs of individual sub-systems might not be optimal for the driving task in different scenarios. [28] analyses the end-to-end and modular approaches to autonomous driving in detail. In contrast to InterFuser, we restrict the behaviors of the agent by introducing penalties to the objective function, so that the whole autonomous driving process remains end-to-end. As the results demonstrate, our penalty-based imitation learning can also avoid dangerous behaviors of the agent and make the agent more sensitive to traffic rules. It achieves even better performance than InterFuser.
### _Ablation Study_
In this subsection, we analyze the influence of different penalty weights for the corresponding traffic rules. As Table II demonstrates, two extra weights for each penalty are selected for comparison. We also provide the results of models
without CSG and penalties for comprehensive analysis. We notice that infractions of traffic lights and stop signs are largely reduced by adding penalties. Our proposed multi-sensor fusion technology (CSG) also decreases the possibility of hitting obstacles such as vehicles, pedestrians, and other statics. The results of different penalty weights are also listed in the table for comparison. Note that the default weights \(\lambda_{1}\), \(\lambda_{2}\), and \(\lambda_{3}\) we choose for our best model are 0.5, 0.01, and 0.5, respectively. We found that assigning greater weight to more severe violations increases the performance of our model. For instance, we apply greater penalties for red light and stop sign violations compared to overspeeding while turning, since those two violations cause more serious consequences.
### _Inference and Training Efficiency_
In this subsection, we compare the training time, inference time, and parameter count of three state-of-the-art models in end-to-end autonomous driving to gain insight into their computational characteristics. As Table III illustrates, our model has the fewest parameters and the shortest training and inference times of the three. These findings provide compelling evidence of our model's reduced complexity. With fewer parameters and faster processing, our model showcases an efficient design, delivering comparable performance while minimizing computational demands. The shorter inference time also supports safer and more efficient navigation, since the autonomous driving system can quickly detect and react to changes in the environment.
## V Discussion
In this work, we introduce novel approaches to both multi-sensor fusion and the design of the imitation learning objective function. The Cross Semantic Generation approach extracts and enhances the semantic information shared between the LiDAR and RGB inputs. We use auxiliary losses to regularize the feature space, ensuring the flow of features that are important for driving decisions according to human experience. Penalty-based imitation learning further increases the agent's compliance with traffic rules. Some other approaches use an extra module to make the agent obey traffic rules. NEAT [3] and LAV [20] introduce low-level control strategies in the PID controller to force braking at red lights. InterFuser uses a safety module to avoid dangerous actions such as collisions with other vehicles. These strategies largely increase the performance of the agent. However, these extra modules also make the network no longer end-to-end. With penalty-based
\begin{table}
\begin{tabular}{c|c c c|c c c|c c} \hline \hline \multirow{2}{*}{Model} & Driving & Route & Infraction & Collision & Collision & Collision & Red light & Stop sign \\ & Score & Completion & Score & Pedestrian & Vehicle & Static & Infraction & Infraction \\ & \(\%\), \(\uparrow\) & \(\%\), \(\uparrow\) & \(\uparrow\) & \(\#\)/km, \(\downarrow\) & \(\#\)/km, \(\downarrow\) & \(\#\)/km, \(\downarrow\) & \(\#\)/km, \(\downarrow\) & \(\#\)/km, \(\downarrow\) \\ \hline \hline P-CSG (ours) & \(\mathbf{56.38}\pm 4.18\) & \(\mathbf{94.00}\pm 1.75\) & \(\mathbf{0.61}\pm 0.05\) & \(\mathbf{0.00}\pm 0.00\) & \(\mathbf{0.08}\pm 0.02\) & \(\mathbf{0.00}\pm 0.00\) & \(\mathbf{0.03}\pm 0.01\) & \(\mathbf{0.01}\pm 0.01\) \\ No CSG & \(42.767\pm 7.78\) & \(83.23\pm 1.86\) & \(0.51\pm 0.11\) & \(0.03\pm 0.01\) & \(0.15\pm 0.04\) & \(0.04\pm 0.03\) & \(0.04\pm 0.00\) & \(0.02\pm 0.00\) \\ No penalty & \(34.98\pm 5.64\) & \(76.20\pm 13.43\) & \(0.51\pm 0.18\) & \(0.01\pm 0.00\) & \(0.15\pm 0.11\) & \(0.03\pm 0.01\) & \(0.05\pm 0.02\) & \(0.05\pm 0.01\) \\ \(\lambda_{1}=0.3\) & \(37.19\pm 8.87\) & \(82.40\pm 1.60\) & \(0.48\pm 0.09\) & \(0.01\pm 0.00\) & \(0.11\pm 0.03\) & \(0.04\pm 0.01\) & \(0.06\pm 0.01\) & \(\mathbf{0.01}\pm 0.00\) \\ \(\lambda_{1}=0.1\) & \(45.02\pm 5.54\) & \(69.35\pm 1.90\) & \(0.70\pm 0.07\) & \(0.01\pm 0.01\) & \(0.09\pm 0.07\) & \(\mathbf{0.00}\pm 0.00\) & \(0.04\pm 0.01\) & \(\mathbf{0.01}\pm 0.00\) \\ \(\lambda_{2}=0.005\) & \(53.20\pm 7.02\) & \(87.55\pm 5.73\) & \(0.62\pm 0.10\) & \(0.03\pm 0.01\) & \(0.07\pm 0.01\) & \(\mathbf{0.00}\pm 0.00\) & \(0.05\pm 0.02\) & \(0.02\pm 0.02\) \\ \(\lambda_{2}=0.05\) & \(49.43\pm 6.73\) & \(93.87\pm 5.51\) & \(0.53\pm 0.02\) & \(0.01\pm 0.01\) & \(0.09\pm 0.04\) & \(\mathbf{0.00}\pm 0.00\) & \(0.05\pm 0.02\) & \(\mathbf{0.01}\pm 0.00\) \\ \(\lambda_{3}=0.3\) & \(47.30\pm 3.58\) & \(92.21\pm 4.52\) & \(0.51\pm 0.07\) & \(\mathbf{0.00}\pm 0.00\) & \(0.11\pm 0.02\) & \(\mathbf{0.00}\pm 0.00\) & \(0.05\pm 0.02\) & \(0.03\pm 0.01\) \\ \(\lambda_{3}=0.7\) & \(47.35\pm 4.97\) & \(94.45\pm 3.95\) & \(0.50\pm 0.06\) & \(0.01\pm 0.01\) & \(0.13\pm 0.03\) & \(\mathbf{0.00}\pm 0.00\) & \(0.05\pm 0.00\) & \(\mathbf{0.01}\pm 0.01\) \\ \hline \hline \end{tabular}
\end{table} TABLE II: Ablation study on CSG and the penalty weights.
\begin{table}
\begin{tabular}{c|c c c c} \hline \hline Model & Parameter & Total Training & Training Time & Inference Time \\ & Number (M) & Frames (K) & (min / epoch) & (s / frame) \\ \hline P-CSG (ours) & 36 & 209 & 33 & 0.043 \\ InterFuser & 53 & 232 & 112 & 0.312 \\ TransFuser+ & 168 & 164 & 199 & 0.071 \\ \hline \hline \end{tabular}
\end{table} TABLE III: Parameter count, training time, and inference time.
\begin{table}
\begin{tabular}{c|c c c|c c c|c c} \hline \hline \multirow{2}{*}{Model} & Driving & Route & Infraction & Collision & Collision & Collision & Red light & Stop sign \\ & Score & Completion & Score & Pedestrian & Vehicle & Static & Infraction & Infraction \\ & \(\%\), \(\uparrow\) & \(\%\), \(\uparrow\) & \(\uparrow\) & \(\#\)/km, \(\downarrow\) & \(\#\)/km, \(\downarrow\) & \(\#\)/km, \(\downarrow\) & \(\#\)/km, \(\downarrow\) & \(\#\)/km, \(\downarrow\) \\ \hline \hline P-CSG (ours) & \(\mathbf{56.38}\pm 4.18\) & \(\mathbf{94.00}\pm 1.75\) & \(\mathbf{0.61}\pm 0.05\) & \(\mathbf{0.00}\pm 0.00\) & \(\mathbf{0.08}\pm 0.02\) & \(\mathbf{0.00}\pm 0.00\) & \(0.03\pm 0.01\) & \(0.01\pm 0.01\) \\ TransFuser & \(34.50\pm 2.54\) & \(61.16\pm 4.75\) & \(0.56\pm 0.06\) & \(0.01\pm 0.01\) & \(0.58\pm 0.07\) & \(0.38\pm 0.05\) & \(0.12\pm 0.03\) & \(0.05\pm 0.02\) \\ TransFuser+ & \(36.19\pm 0.90\) & \(70.13\pm 6.80\) & \(0.51\pm 0.03\) & \(\mathbf{0.00}\pm 0.00\) & \(0.40\pm 0.13\) & \(0.04\pm 0.03\) & \(0.11\pm 0.10\) & \(0.04\pm 0.01\) \\ InterFuser & \(50.64\pm 3.51\) & \(89.13\pm 4.12\) & \(0.57\pm 0.05\) & \(\mathbf{0.00}\pm 0.00\) & \(0.09\pm 0.04\) & \(0.01\pm 0.01\) & \(\mathbf{0.02}\pm 0.01\) & \(0.04\pm 0.01\) \\ Geometric Fusion & \(31.30\pm 5.2\) & \(57.17\pm 11.16\) & \(0.54\pm 0.04\) & \(0.01\pm 0.01\) & \(0.43\pm 0.08\) & \(0.02\pm 0.01\) & \(0.11\pm 0.0\) & \\ \hline \hline \end{tabular}
\end{table} TABLE I: Comparison with the baselines on the Town05 Long benchmark.
imitation learning, we aim to avoid such decisions being detached from the network. We use the penalty to make the agent more sensitive to traffic rules. The end-to-end nature of the network is preserved while the agent is constrained to comply with traffic regulations.
## VI Conclusion
The keys to the performance of end-to-end autonomous driving are improved fusion technologies and policy learning methods. These two points raise two important questions. How can we efficiently extract and integrate features from different modalities? How can we effectively use these features to learn a stable, well-performing policy that approaches or even surpasses the human level? In this paper, we contribute to both aspects and achieve state-of-the-art performance. Compared to modular autonomous driving technologies, end-to-end autonomous driving has lower hardware costs and less expensive maintenance. It is also adaptable to different scenarios simply by feeding it data. We believe end-to-end autonomous driving can be deployed in actual vehicles in the near future.
## VII Acknowledgement
This work is supported by Huawei Trustworthy Technology and Engineering Laboratory. We thank Prof. Fengxiang Ge and Wei Cao for the insightful discussion.
|
2305.15960 | Dirac fermion spectrum of the fractional quantum Hall states | Applying a unified approach, we study the integer quantum Hall effect (IQHE)
and fractional quantum Hall effect (FQHE) in the Hofstadter model with short
range interactions between fermions. An effective field, that takes into
account the interaction between fermions, is determined by both amplitude and
phase. Its amplitude is proportional to the interaction strength, the phase
corresponds to the minimum energy. In fact, the problem is reduced to the
Harper equation with two different scales: the first is a magnetic scale with
the cell size corresponding to a unit quantum magnetic flux, the second scale
determines the inhomogeneity of the effective field, forms the steady fine
structure of the Hofstadter spectrum and leads to the realization of fractional
quantum Hall states. In a sample of finite size with open boundary conditions,
the fine structure of the Hofstadter spectrum consists of the Dirac branches of
the fermion excitations and includes the fine structure of the edge chiral
modes. The Chern numbers of the topological Hofstadter bands are conserved
during the formation of their fine structure. The edge modes are formed into
the Hofstadter bands. They connect the nearest-neighbor subbands and determine
the conductance for the fractional filling. | I. N. Karnaukhov | 2023-05-25T11:57:26Z | http://arxiv.org/abs/2305.15960v1 |
###### Abstract
Applying a unified approach, we study the integer quantum Hall effect (IQHE) and fractional quantum Hall effect (FQHE) in the Hofstadter model with short range interactions between fermions. An effective field, that takes into account the interaction between fermions, is determined by both amplitude and phase. Its amplitude is proportional to the interaction strength, the phase corresponds to the minimum energy. In fact, the problem is reduced to the Harper equation with two different scales: the first is a magnetic scale with the cell size corresponding to a unit quantum magnetic flux, the second scale determines the inhomogeneity of the effective field, forms the steady fine structure of the Hofstadter spectrum and leads to the realization of fractional quantum Hall states. In a sample of finite size with open boundary conditions, the fine structure of the Hofstadter spectrum consists of the Dirac branches of the fermion excitations and includes the fine structure of the edge chiral modes. The Chern numbers of the topological Hofstadter bands are conserved during the formation of their fine structure. The edge modes are formed into the Hofstadter bands. They connect the nearest-neighbor subbands and determine the conductance for the fractional filling.
Keywords: Harper-Hofstadter model, quantum Hall effect, fractional quantum Hall effect

Dirac fermion spectrum of the fractional quantum Hall states

I. N. Karnaukhov, G. V. Kurdyumov Institute for Metal Physics of the NAS of Ukraine, 36 Vernadsky Boulevard, 03142 Kyiv, Ukraine
Footnote 1: Corresponding author: [email protected].
## 1 Introduction
The Harper-Hofstadter model [1, 2] plays a key role in the modern understanding and description of topological states on a 2D lattice. It allows us to describe the nontrivial behavior of fermions in an external magnetic field with arbitrary dispersion at different fillings, to determine the structure of topological bands, and to calculate the Chern numbers over a wide range of magnetic fluxes. For rational magnetic fluxes penetrating a magnetic cell of size \(q\) (\(q\) is defined in units of the lattice spacing), the Hofstadter model has an exact solution [3, 4]. In experimentally realizable magnetic fields, which correspond to the semi-classical limit with a magnetic scale \(q\simeq 10^{3}\)-\(10^{4}\), the spectrum of quasiparticle excitations is well described in the framework of the Landau levels near the edge of the spectrum [5, 6, 7], and of the Dirac levels in graphene [8, 9, 10]. Irrational magnetic fluxes can be realized only in samples of small size, when the sample size \(L\) is less than the magnetic scale \(q\)[7]. In this case, \(q\) is the maximum scale in the model.
IQHE is explained in the framework of the Hofstadter model [5, 6, 7, 8, 10, 11, 12], while the same cannot be said about FQHE. Unfortunately, the Hofstadter model is incapable of explaining FQHE, because it does not take into account the interaction between quantum particles. FQHE is not sensitive to spin degrees of freedom, so the repulsion between fermions should be taken into account first. A theory that could explain all the diversity of the FQHE is still lacking, and the nature of the FQHE remains an open question in condensed matter physics. Let us pay tribute to the ideas of [13, 14], from which it becomes clear that the effect itself is not trivial.
The purpose of this work is not to explain the numerous experimental data on measurements of the fractional Hall conductance, but to understand the nature of the FQHE. The material of the paper is presented in the following format: the original results as an example, well-known results as a counterexample.
## 2 Model Hamiltonian and method
We study FQHE in the framework of the Hofstadter model defined for interacting electrons on a square lattice with the Hamiltonian \({\cal H}={\cal H}_{0}+{\cal H}_{\rm int}\)
\[{\cal H}_{0} = -\sum_{\sigma=\uparrow,\downarrow}\sum_{n,m}\left[a_{n,m;\sigma}^{ \dagger}a_{n+1,m;\sigma}+{\rm e}^{2{\rm i}\pi n\phi}a_{n,m;\sigma}^{\dagger}a_ {n,m+1;\sigma}+H.c.\right] \tag{2.1}\] \[-\mu\sum_{\sigma=\uparrow,\downarrow}\sum_{j}n_{j;\sigma}-H\sum_ {j}\left(n_{j;\uparrow}-n_{j;\downarrow}\right),\] \[{\cal H}_{\rm int} = U\sum_{j}n_{j;\uparrow}n_{j;\downarrow}, \tag{2.2}\]
where \(a_{n,m;\sigma}^{\dagger}\) and \(a_{n,m;\sigma}\) are the fermion operators located at a site \(j=\{n,m\}\) with spin \(\sigma=\uparrow,\downarrow\), \(n_{j;\sigma}=a_{j;\sigma}^{\dagger}a_{j;\sigma}\) denotes the density operator, and \(\mu\) is a chemical potential. The Hamiltonian \({\cal H}_{0}\) describes the hoppings of fermions between the nearest-neighbor lattice sites. A magnetic flux through the unit cell \(\phi=H/\Phi_{0}\) is measured in units of the flux quantum \(\Phi_{0}=h/e\). Here, \(H\) is the magnetic field and the lattice constant is set to unity. The \({\cal H}_{\rm int}\) term is determined by the on-site Hubbard interaction \(U\).
The interaction term (2.2) can be conveniently redefined in the momentum representation \({\cal H}_{\rm int}=VU\sum_{\bf K}n_{{\bf K};\uparrow}n_{-{\bf K};\downarrow}\), where \(n_{{\bf K};\sigma}=\frac{1}{V}\sum_{j}\exp({\rm i}{\bf K}\,{\bf j})n_{j;\sigma}\), and the volume is \(V=L\times L\). Using the mean field approach, we rewrite this term as \({\cal H}_{\rm int}=V(\lambda_{{\bf K};\uparrow}n_{-{\bf K};\downarrow}+\lambda_{-{\bf K};\downarrow}n_{{\bf K};\uparrow})\) with an effective field \(\lambda_{{\bf K};\sigma}=U\langle n_{{\bf K};\sigma}\rangle\), which is determined by a fixed value of the wave vector \({\bf K}\). In this case, the value of \({\bf K}\) is a free parameter of the mean-field approximation that minimizes the energy of the electron liquid, in contrast to \(q\), the value of which is determined by an external magnetic field. In the experiments, the magnetic fields correspond to the semi-classical limit with a magnetic scale \(q\sim 10^{3}\)-\(10^{4}\), which corresponds to small values \(K\sim 10^{-3}\)-\(10^{-4}\). The density of fermions for the states near the low energy edge of the spectrum is small, \(\sim 1/q\). In the small \({\bf K}\)-limit, the expression for \(\lambda_{{\bf K};\sigma}\) simplifies to \(\lambda_{{\bf K};\sigma}=\lambda_{\sigma}+O(K^{2})\), where \(\lambda_{\sigma}=U\rho_{\sigma}\) and \(\rho_{\sigma}\) is the density of electrons with spin \(\sigma\). The Zeeman energy shifts the energies of electron bands with different spins, removes the spin degeneracy, and does not change the topological state of the electron liquid. This makes it possible to disregard the dependence of the electron energy on the spin and to consider the problem for spinless fermions. The model is reduced to a spinless fermion liquid with the interaction term
\[{\cal H}_{\rm int}=\frac{\lambda}{2}\sum_{j}\left[\exp({\rm i}{\bf K}\,{\bf j} )+\exp(-{\rm i}{\bf K}\,{\bf j})\right]n_{j}=\lambda\sum_{j}\cos({\bf K}\,{ \bf j})n_{j},\]
where \(\lambda=U\rho\), and \(n_{j}\) and \(\rho\) are the density operator of the spinless fermions and their filling, respectively.
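For clarity, this reduction follows from the standard mean-field decoupling (a sketch of the step; the constant term, which only shifts the total energy, is dropped):

\[U\,n_{{\bf K};\uparrow}n_{-{\bf K};\downarrow}\approx U\langle n_{{\bf K};\uparrow}\rangle\,n_{-{\bf K};\downarrow}+U\langle n_{-{\bf K};\downarrow}\rangle\,n_{{\bf K};\uparrow}-U\langle n_{{\bf K};\uparrow}\rangle\langle n_{-{\bf K};\downarrow}\rangle=\lambda_{{\bf K};\uparrow}n_{-{\bf K};\downarrow}+\lambda_{-{\bf K};\downarrow}n_{{\bf K};\uparrow}-{\rm const}\,.\]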
We study the 2D system in a hollow cylindrical geometry with open boundary conditions (the cylinder axis along the \(x\)-direction and the boundaries along the \(y\)-direction). The Hamiltonian \({\cal H}_{0}\) describes chains of spinless fermions oriented along the \(y\)-axis (\(n\) is the coordinate of the chain in the \(x\)-direction) connected by single-particle tunneling with the tunneling constant equal to unity. The wave functions of free fermions in the \(y\)-chains, which are determined by the wave vector \(k_{y}\), are localized in the \(x\)-direction [15, 16] (for each \(k_{y}\) value). The amplitudes of the wave functions with different values of \(k_{y}\) overlap in the \(x\)-direction, and the eigenstates of the Hamiltonian \({\cal H}_{0}\) are of the Bloch form. All states with different \(n\) are coupled via the magnetic flux. The on-site Hubbard interaction does not break the time reversal symmetry and chirality of the spectrum of the fermion liquid. Therefore, the effective Hamiltonian also should not break these symmetries for rational fluxes. These conditions are fulfilled in the case \({\bf K}=(K,0)\), where \(K\) and \(q\) form states with rational periods. Making the ansatz \(\psi(n,m)=\exp({\rm i}k_{y}m)g_{n}\) for the wave function (which determines the state with energy \(\epsilon\)), we obtain the Harper equation for the model Hamiltonian (2.1), (2.2)
\[\epsilon g_{n}=-g_{n+1}-g_{n-1}-2\cos(k_{y}+2\pi n\phi)g_{n}+\lambda\cos(Kn)g_{ n}. \tag{2.3}\]
This equation is the key in studying FQHE.
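As a concrete illustration, equation (2.3) can be diagonalized numerically by building, for each \(k_{y}\), a tridiagonal matrix in the chain index \(n\). The following is a minimal sketch; the (small) parameter values are chosen for speed and are not those used below.

```python
# Minimal numerical sketch of the Harper equation (2.3) with the
# interaction-induced field lambda*cos(K*n); illustrative parameters only.
import numpy as np

def harper_spectrum(L, p, q, r, s, lam, n_ky=101, t=1.0):
    """Diagonalize eq. (2.3) on an L-site chain (open boundaries in the
    x-direction) for each wave vector k_y; returns an (n_ky, L) array of
    eigenvalues. t is the inter-chain hopping (t = 1 in eq. (2.3))."""
    phi = p / q                      # magnetic flux per unit cell
    K = 2.0 * np.pi * r / s          # phase of the effective field
    n = np.arange(L)
    spectra = []
    for ky in np.linspace(0.0, 2.0 * np.pi, n_ky):
        diag = -2.0 * np.cos(ky + 2.0 * np.pi * phi * n) + lam * np.cos(K * n)
        H = (np.diag(diag)
             - t * np.diag(np.ones(L - 1), 1)
             - t * np.diag(np.ones(L - 1), -1))
        spectra.append(np.linalg.eigvalsh(H))
    return np.array(spectra)

# alpha = s/q = 3: each Hofstadter band is expected to split into three
# subbands separated by extremely small quasigaps.
spec = harper_spectrum(L=120, p=1, q=10, r=1, s=30, lam=0.1)
print(spec.shape)  # (101, 120)
```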
The problem is reduced to a \((1+1)\)D quantum system, where the states of fermions are determined by two phases: the first is the magnetic phase \(\phi\); the second is the phase \(K\), which is connected with the interaction. The value of \(K\) corresponds to the minimum energy of the system: it minimizes the energy of the electron liquid under the interaction (2.2). At \(T=0\) K, the model is a three-parameter model. We shall analyze the phase state of the interacting spinless fermions for arbitrary rational fluxes \(\phi=p/q\) (\(p\) and \(q\) are coprime integers), \(U\) and \(\rho\). In the Hofstadter model of noninteracting fermions, the states of fermions with different \(\phi\) are topologically similar in the following sense: the Chern numbers and the Hall conductance are determined by the magnetic flux and the filling, or by the number of filled isolated Hofstadter bands (HBs) corresponding to this filling, while they do not depend on the structure of the bands [5, 6, 7] (their widths, the values of the gaps between them).
The effect of the interaction on the behavior of the fermion liquid reduces to the appearance of an inhomogeneous \(\lambda\)-field, which is determined by the magnitude \(\lambda\) and the phase \(K\). We use the parametrization \(K=2\pi r/s\), where \(r\) and \(s\) are relatively prime integers. The trivial solutions \(K=0\) (\(s\to\infty\)) and \(K=2\pi\) (\(r=s=1\)) correspond to the maximum energy: according to (2.3), the energy \(\epsilon\) is shifted to the maximum value \(+\lambda\). In the \(K\to 0\) (or \(s\to\infty\)) limit, the solution for \(K\) corresponds to irrational fluxes, which are realized at \(s>L\)[7]. We consider the steady state of the system for rational fluxes, namely for integer \(s/q=\alpha\), when \(q\leqslant s\), or integer \(\alpha^{-1}\), when \(q>s\). The minimum energy corresponds to a nontrivial solution for \(K\) at a given magnetic flux \(\phi\). The fine structure of the Hofstadter spectrum is realized at \(\alpha>1\), when the interaction scale is the largest, \(s>q\). In the case \(\alpha<1\), the spectrum is renormalized but its structure remains the same, i.e., only the Landau levels.
## 3 Example of explanation of FQHE
### Splitting of low energy Hofstadter bands: the fine structure of the spectrum in the semi-classical limit
First of all, we provide a numerical analysis of the quasi-particle excitations near the edge of the spectrum, considering rational fluxes \(\phi\) and \(K\) in the semi-classical limit with \(p=1\) and \(r=1\) for different \(q\gg 1\), \(s\gg 1\) and filling \(\rho\ll 1\). The magnetic fields at which measurements are carried out correspond to large \(q\sim 10^{3}\)-\(10^{4}\), so there is no point in considering the case \(q\sim 1\). As a reasonable compromise for numerical calculations (large, but not very large \(q\)), we consider the splitting (due to the interaction) of the low energy fermion bands at \(q=10^{2}\). For states near the edge of the spectrum, the value of \(\lambda\) corresponds to the weak interaction limit, because the filling is \(\rho\sim 1/q\) or \(\rho\sim 1/s\).
We show that the FQHE is determined by the fine structure of the spectrum, which is formed due to the on-site repulsion in an external magnetic field. We consider the formation of the fine structure of the low energy HBs, which correspond to filling less than \(1/q\) for the first (lowest) HB and to filling \(1/q\leqslant\rho\leqslant 2/q\) for the second band, where \(q=10^{2}\) is fixed for the numerical calculations. A rather obvious consequence follows from the numerical calculations: in the weak coupling regime \(\rho U<1\), which is valid in the semi-classical limit for an arbitrary bare value of \(U\), the fine structure of the spectrum does not depend on the value of \(\lambda\). This allows us to consider the evolution of the fermion spectrum for a fixed value of \(U=1\), or \(\lambda=\rho\), and different \(s\). We fix \(q=10^{2}\), \(U=1\) and calculate the spectrum for various \(\alpha=s/q\) corresponding to rational fluxes, i.e., when \(\alpha\) or \(\alpha^{-1}\) is an integer.
Remarkably, the spectrum has a fairly simple, topologically stable structure. The number of HBs in the spectrum is equal to \(q\); at \(\alpha>1\), \(\alpha\) subbands form the fine structure of each HB. The values of the gaps between the low energy HBs, \(\Delta_{j,j+1}(\alpha)\) (\(j\) numerates the band), depend on \(q\) and \(\lambda\) and only insignificantly on \(\alpha\). At \(q=10^{2}\) and \(U=1\), \(\Delta_{1,2}(\alpha)\simeq 0.1038\), \(\Delta_{2,3}(\alpha)\simeq 0.082\) for \(2\leqslant\alpha\leqslant 7\), while \(\Delta_{1,2}=0.1237\), \(\Delta_{2,3}=0.1217\) at \(U=0\) in the Hofstadter model, for comparison (details of the calculation are presented in Appendix A). \(\alpha\) narrow subbands form the fine structure of the \(j\)-HB; the bandwidth of the \(i\)-subband in the \(j\)-HB is denoted by \(\epsilon_{j,i}(\alpha)\). According to the numerical calculations provided in Appendix A, \(\epsilon_{j,i}(\alpha)<0.02\) for \(j=1,2\) and \(1\leqslant\alpha\leqslant 7\); these values increase with the HB number and decrease with increasing \(\alpha\).
Extremely small quasigaps \(\delta\varepsilon_{j,i,i+1}(\alpha)\) separate the subbands \(i\) and \(i+1\) in the fine structure of the \(j\)-HB. Their calculated values for the two HBs are \(\sim 10^{-10}\)-\(10^{-13}\) (the calculated values of \(\delta\varepsilon_{j,i,i+1}(\alpha)\) are given
Figure 1: (Colour online) The energy density as a function of \(\alpha\) calculated at \(q=10^{2},U=1\), \(\lambda=U(\rho_{r}+\nu/q)\) (\(\rho_{r}\) is the fermion density corresponding to the \(r\) filled HBs, \(\nu\) is the fractional filling of the \(r+1\) HB) for: a) \(\frac{1}{2}\)-filling of the first (blue line) and second (brown line) HB (the points \(\alpha\geqslant 2\) characterize the unstable fine structure of the HB); b) \(\frac{1}{4}\)-filling of the first HB [\(\frac{2}{4}\)- or \(\frac{1}{2}\)-filling is shown in a)]; a steady fractional state with \(\alpha\geqslant 4\) and filling \(\frac{1}{4}\) is also realized in the second HB; c) \(\frac{1}{3}\)-filling of the first (blue lines) and second (brown lines) HB, a steady fractional state for \(\nu=\frac{1}{3}\), an unsteady state for \(\nu=\frac{2}{3}\); \(\frac{1}{7}\)-filling of the first (blue lines) d), second (brown lines) e) and third (green lines) f) HB; the steady states are shown at \(\nu=\frac{1}{7}\), \(\frac{2}{7}\), \(\frac{3}{7}\).
in Appendix A). A fine structure of the spectrum forms from the Dirac subbands.
A structure of the spectrum which includes two low energy HB has the following form for \(\alpha=3\) as an example:
\[\epsilon_{1,1}(3)=0.0017\Longrightarrow\delta\varepsilon_{1;1,2}(3)\sim 5\cdot 10^{-12}\Longrightarrow\epsilon_{1,2}(3)=0.0067\Longrightarrow\delta\varepsilon_{1;2,3}(3)\sim 9\cdot 10^{-11}\]
\[\Longrightarrow\epsilon_{1,3}(3)=0.0050\Longrightarrow\Delta_{1,2}(3)=0.1038\Longrightarrow\epsilon_{2,1}(3)=0.0066\Longrightarrow\delta\varepsilon_{2;1,2}(3)\sim 9\cdot 10^{-13}\]
\[\Longrightarrow\epsilon_{2,2}(3)=0.0166\Longrightarrow\delta\varepsilon_{2;2,3}(3)\sim 1\cdot 10^{-10}\Longrightarrow\epsilon_{2,3}(3)=0.0100\Longrightarrow\Delta_{2,3}(3)=0.0820.\]
Positive values of \(\delta\varepsilon_{j,i,i+1}(\alpha)\) correspond to quasigaps or zero density states at fractional fillings, since their values are extremely small, \(\sim 10^{-10}\)-\(10^{-13}\). In the semiclassical limit, the fine structure of the low energy HBs does not change for different \(q\). According to the numerical calculations, at a given \(\alpha\) each HB is split by quasigaps into \(\alpha\) subbands, and the fine structure of each HB is formed. Thus, the spectrum is determined by two types of gaps: the gaps \(\Delta_{j,j+1}\), which determine the insulator states of the system at integer filling \(\rho_{j}=j/q\) (where \(j\) is an integer), and \(\delta\varepsilon_{j;i,i+1}\) at fractional filling of each HB, \(\nu=i/\alpha\) (here \(i=1,\ldots,\alpha-1\)). Moreover, \(\Delta_{j,j+1}\gg\delta\varepsilon_{j;i,i+1}\). Thus, the structure of the spectrum is preserved in a fairly wide range of values of \(\lambda\), as noted above. Most likely, the quasigaps in the spectrum mark the points of tangency of the subbands (hence the term quasigaps), so the spectrum of each HB consists only of Dirac subbands.
### Fractional filled steady state
In this subsection we consider the stability of the fine structure of the Hofstadter spectrum. Let us fix the chemical potential corresponding to a fractional filling of each HB (for \(\alpha=2\), \(\nu=\frac{1}{2}\)) and numerically calculate the energy of the electron liquid for different rational fluxes, corresponding to integer \(\alpha\) and \(\alpha^{-1}\). A steady state corresponds to the minimum energy for a given filling. The energy density as a function of \(\alpha\) is shown in figure 1 a). A steady state is realized at \(\alpha=\frac{1}{20}\) for the first HB and \(\alpha=\frac{1}{25}\) for the second. As a result, the fine structure of the Hofstadter spectrum is unstable at \(\alpha=2\), i.e., at \(\frac{1}{2}\)-fractional filling. It follows from the numerical analysis of the stability of the fine structures at different \(\alpha\) that the fine structure of an HB is stable when the HB filling is \(\nu<\frac{1}{2}\). The point \(\nu=\frac{1}{2}\) is similar to a phase transition point; at this point the behavior of the electron liquid is rather critical. In figure 1 we also present the calculations of the energy density for \(\nu=\frac{1}{4},\frac{3}{4}\) for the first HB b), \(\nu=\frac{1}{3},\frac{2}{3}\) for the first and second HB c), and \(\nu=\frac{1}{7},\frac{2}{7},\frac{3}{7}\) for the first d), second e) and third f) HB. For steady fractional Hall states, the minimum energy is reached at \(\alpha_{c}=3\) for \(\nu=\frac{1}{3}\), \(\alpha_{c}=4\) for \(\nu=\frac{1}{4}\), and \(\alpha_{c}=7\) for \(\nu=\frac{1}{7},\frac{2}{7},\frac{3}{7}\). For rational fluxes, the energy density does not depend on the value of \(\alpha\) at \(\alpha>\alpha_{c}\). The Hubbard interaction shifts the HBs (decreasing the energy compared to the homogeneous state) and increases the bandwidths of the subbands (increasing the energy). The total energy results from the competition of these terms, both determined by \(U\).
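A minimal numerical sketch of this stability analysis, reusing `harper_spectrum` from the sketch in Section 2 (the parameter values and the simple way of filling states are assumptions of the sketch): at fixed filling, fill the lowest fraction of the eigenvalues and compare the resulting energy density across \(\alpha\).

```python
# Hypothetical sketch: at fixed fractional filling, the steady state is
# the alpha = s/q that minimizes the energy of the filled states.
# Reuses harper_spectrum() from the previous sketch.
import numpy as np

def energy_density(spec, filling):
    """Energy per state from filling the lowest `filling` fraction of
    all eigenvalues over the k_y grid."""
    flat = np.sort(spec.ravel())
    n_filled = int(filling * flat.size)
    return flat[:n_filled].sum() / flat.size

q, nu = 10, 1.0 / 3.0                  # filling nu of the first HB
for s in (10, 20, 30, 40, 50):         # alpha = s/q = 1, 2, 3, 4, 5
    spec = harper_spectrum(L=120, p=1, q=q, r=1, s=s, lam=0.1)
    print(f"alpha = {s / q:.0f}: E = {energy_density(spec, nu / q):.6f}")
```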
### The Dirac spectrum: edge modes and fractional Hall conductance
It is convenient to consider the behavior of the edge modes by calculating the quasiparticle excitations in the stripe geometry with open boundary conditions: the boundaries, parallel to the \(y\)-axis, are the edges of the hollow cylinder. Let us analyse the behavior of the fermion spectrum in the case of a strongly anisotropic hopping integral in equation (2.3)
\[\epsilon g_{n}=-tg_{n+1}-tg_{n-1}-2\cos(k_{y}+2\pi n\phi)g_{n}+\lambda\cos(Kn) g_{n}, \tag{3.1}\]
where the hopping integral for fermions between chains \(t\ll 1\).
The topological properties of the system are universal in the sense that they do not depend on the parameters of the Hamiltonian over a wide range of their variation. This makes it possible to analyze the spectrum of the quasi-particle excitations in the weak-\(t\) limit. The Bloch fermion states in the chains are described by the excitation energies \(\kappa(k_{y},n)=-2\cos(k_{y}+2\pi\frac{n}{q})+\rho U\cos(2\pi n/s)\)
at \(t=0\). At \(t\neq 0\), the fermions tunnel between the chains. When the energies of fermions in the chains \(n_{1}\) and \(n_{2}\) coincide, the interaction between the fermions is maximal and is determined by the distance between the chains, \(\sim t^{|n_{1}-n_{2}|}\), in the weak \(t\)-limit. For \(U=0\), the fermion spectrum is gapped at these resonance points, while in the semi-classical limit it is described by the Landau levels. The tunneling of fermions in the gaps is determined as the tunneling between Majorana fermions with different chirality, \(\chi_{n}\) and \(\nu_{n}\), located at different chains [5, 6]. At the same time, the tunneling between the Majorana fermions located at different edges is forbidden, and these chiral modes are free and localized at the different edges. The number of these chiral modes defines the Hall conductance in IQHE [5, 6].
The Hubbard interaction breaks the resonance condition at given energies between all chains. The fermion energies coincide for only three chains. As a result, the gaps inside the HBs are not formed. Chiral modes are localized at the edges; these modes are defined by the conditions \(\kappa(k_{y},n_{1})=\kappa(k_{y},n_{2})\), where \(n_{1}\) and \(n_{2}\) are the chains at different edges and \(n_{2}=L-n_{1}+1\): for example, \(n_{1}=1\) and \(n_{2}=L\); \(n_{1}=2\) and \(n_{2}=L-1\). For \(U>0\), the energy corresponding to the conditions \(\kappa(k_{y},1)=\kappa(k_{y},2)=\kappa(k_{y},L)\) is maximal, so the energies of the modes localized at the edges \(1\) and \(L\) split off from the upper edge of the HB. The conditions \(\kappa(k_{y},n_{1})=\kappa(k_{y},n_{2})\) for \(n_{1}>1\) are realized at the corresponding energies inside the HB. In other words, the edge mode moves inside the sample as the energy decreases inside the HB. The energies at which the chiral Majorana fermions are formed and localized at the edges with \(n_{1}>1\) and \(n_{2}<L\) correspond to states inside the HB. The above is illustrated by numerical calculations of the fermionic spectrum.
Let us focus on the calculation of the two low energy HBs at \(\alpha=3\) and the three HBs at \(\alpha=7\). We fix the Fermi energies corresponding to fractional filling \(\nu=\frac{1}{3}\) in the second HB and \(\nu=\frac{3}{7}\) in the third HB. In the case \(\alpha=3\), each HB splits into three subbands forming its fine structure. The quasigaps between these subbands are extremely small [see figure 2 a)]. The HBs form edge modes in the forbidden region of the spectrum between them. These modes split off from the upper and lower HBs and are localized at the boundaries. The edge modes coexist with the fine structure of each HB, except the first, in which they are not formed. In contrast to IQHE, these modes also connect the nearest-neighbor subbands in the fine structure of each HB (except the first one). The extremely small quasigaps do not destroy the topology of the HBs (the number of the edge modes is conserved at a given filling of the HB). The Hall conductance is determined by the same edge modes with different fractional fillings: \(1+\frac{1}{3}\) for \(\alpha=3\) and \(2+\frac{3}{7}\) for \(\alpha=7\). Note that in the semi-classical limit the Dirac spectrum of the fermion excitations is realized for arbitrary fractional filling.
Figure 2: (Colour online) The fine structure of the two a) and three b) lower energy HBs (as an illustration of the Dirac spectrum) calculated at \(q=100\), \(U=1\), \(\nu=\frac{1}{3}\) a) and \(\nu=\frac{3}{7}\) b) for a sample in the form of a hollow cylinder with open boundary conditions along the \(y\)-direction; \(k_{y}\) is the wave vector, red dashed lines denote the Fermi energies. The dotted lines mark the dispersion of the edge modes, and the insets illustrate them: the amplitude of the wave function is calculated as a function of the \(x\)-coordinate at \(k_{y}/2\pi=0.48,0.52\) a) (\(1\) and \(6\cdot 10^{3}\) are the boundaries).
## 4 Conclusions
The Hofstadter model with short-range repulsion is considered within the mean-field approach, which makes it possible to study FQHE. We have described IQHE and FQHE within the same approach, and it is shown that:
* short-range repulsion forms a steady fine structure of the Hofstadter spectrum when the filling of an HB is less than one half;
* at fractional filling of an HB, its fine structure is formed from Dirac subbands;
* these quasigaps do not destroy the HB topology; only the HB determines the number of the edge modes (the Chern number of the HB is conserved);
* chiral edge modes located at the boundaries connect the nearest-neighbor subbands and determine the Hall conductance at fractional filling;
* chiral edge modes are not formed in the first HB; therefore, a fractional Hall conductance is not realized for fillings within the lowest (first) HB.
The fine structure of the Landau levels (HBs) splits into Dirac fermion spectra. For the half-filled Landau level, Dirac composite fermions were proposed in [17] and were studied in [18] in the framework of a low-energy effective field theory. The numerical calculations were carried out in the semi-classical limit, which corresponds to the experimental conditions. The results obtained cannot be explained within the framework of the model and the approach to its solution proposed in [19].
## Appendix A Example
Numerical calculations of the low energy structure of the spectrum are presented in this section; the results were obtained at fixed \(q=10^{2}\) and \(U=1\). In the semi-classical limit, the Hofstadter spectrum reduces to the Landau levels, which are separated by the gaps \(\Delta_{j,j+1}(\alpha)\), where \(j\) numerates the HB and \(\alpha\) determines the splitting of the band. Below we provide the set of calculated values of \(\Delta_{j,j+1}(\alpha)\):
\[\Delta_{1,2}(1) =0.1043, \Delta_{2,3}(1) =0.0842;\] \[\Delta_{1,2}(2) =0.1039, \Delta_{2,3}(2) =0.0824;\] \[\Delta_{1,2}(3) =0.1038, \Delta_{2,3}(3) =0.0820;\] \[\Delta_{1,2}(4) =0.1037, \Delta_{2,3}(4) =0.0819;\] \[\Delta_{1,2}(5) =0.1037, \Delta_{2,3}(5) =0.0818;\] \[\Delta_{1,2}(6) =0.1037, \Delta_{2,3}(6) =0.0818;\] \[\Delta_{1,2}(7) =0.1037, \Delta_{2,3}(7) =0.0818.\]
At \(\alpha\geqslant 2\), the gaps between the HBs are practically independent of \(\alpha\).
The fine structure of an HB is determined by the value of \(\alpha\): each \(j\)-HB includes \(\alpha\) subbands with bandwidths \(\epsilon_{j,i}(\alpha)\), where \(i\) numerates the subband in the HB, \(1\leqslant i\leqslant\alpha\). The calculation results for the two low-energy HBs are presented below
\[\epsilon_{1,1}(1) = 0.0197,\quad\epsilon_{2,1}(1)=0.0381;\] \[\epsilon_{1,1}(2) = 0.0050,\quad\epsilon_{1,2}(2)=0.0100,\quad\epsilon_{2,1}(2)=0.0148,\quad\epsilon_{2,2}(2)=0.0198;\] \[\epsilon_{1,1}(3) = 0.0017,\quad\epsilon_{1,2}(3)=0.0067,\quad\epsilon_{1,3}(3)=0.0050,\quad\epsilon_{2,1}(3)=0.0066,\quad\epsilon_{2,2}(3)=0.0166,\] \[\epsilon_{2,3}(3) = 0.0100;\] \[\epsilon_{1,1}(4) = 0.0007,\quad\epsilon_{1,2}(4)=0.0035,\quad\epsilon_{1,3}(4)=0.0053,\quad\epsilon_{1,4}(4)=0.0029,\quad\epsilon_{2,1}(4)=0.0036,\] \[\epsilon_{2,2}(4) = 0.0106,\quad\epsilon_{2,3}(4)=0.0123,\quad\epsilon_{2,4}(4)=0.0058;\] \[\epsilon_{1,1}(5) = 0.0004,\quad\epsilon_{1,2}(5)=0.0020,\quad\epsilon_{1,3}(5)=0.0037,\quad\epsilon_{1,4}(5)=0.0040,\quad\epsilon_{1,5}(5)=0.0019,\] \[\epsilon_{2,1}(5) = 0.0023,\quad\epsilon_{2,2}(5)=0.0070,\quad\epsilon_{2,3}(5)=0.0099,\quad\epsilon_{2,4}(5)=0.0090,\quad\epsilon_{2,5}(5)=0.0038;\] \[\epsilon_{1,1}(6) = 0.0002,\quad\epsilon_{1,2}(6)=0.0012,\quad\epsilon_{1,3}(6)=0.0025,\quad\epsilon_{1,4}(6)=0.0033,\quad\epsilon_{1,5}(6)=0.0030,\] \[\epsilon_{1,6}(6) = 0.0013,\quad\epsilon_{2,1}(6)=0.0016,\quad\epsilon_{2,2}(6)=0.0049,\quad\epsilon_{2,3}(6)=0.0075,\quad\epsilon_{2,4}(6)=0.0083,\] \[\epsilon_{2,5}(6) = 0.0067,\quad\epsilon_{2,6}(6)=0.0027;\] \[\epsilon_{1,1}(7) = 0.0001,\quad\epsilon_{1,2}(7)=0.0008,\quad\epsilon_{1,3}(7)=0.0017,\quad\epsilon_{1,4}(7)=0.0025,\quad\epsilon_{1,5}(7)=0.0029,\] \[\epsilon_{1,6}(7) = 0.0024,\quad\epsilon_{1,7}(7)=0.0010,\quad\epsilon_{2,1}(7)=0.0011,\quad\epsilon_{2,2}(7)=0.0036,\quad\epsilon_{2,3}(7)=0.0057,\] \[\epsilon_{2,4}(7) = 0.0070,\quad\epsilon_{2,5}(7)=0.0069,\quad\epsilon_{2,6}(7)=0.0051,\quad\epsilon_{2,7}(7)=0.0020.\]
As expected, the bandwidths in the \(j\)-HB decrease with \(\alpha\) and increase with \(j\).
Narrow subbands with bandwidths \(\epsilon_{j,i}(\alpha)\ll\Delta_{j,j+1}(\alpha)\) form the fine structure of each HB. The quasigaps \(\delta\varepsilon_{j;i,i+1}(\alpha)\) between subbands \(i\) and \(i+1\) in the fine structure of the \(j\)-HB are extremely small; their values for the two low energy HBs are the following
\[\delta\varepsilon_{1;1,2}(2) \sim 3\cdot 10^{-11},\quad\delta\varepsilon_{2;1,2}(2) \sim 2\cdot 10^{-10};\] \[\delta\varepsilon_{1;1,2}(3) \sim 5\cdot 10^{-12},\quad\delta\varepsilon_{1;2,3}(3) \sim 9\cdot 10^{-11},\quad\delta\varepsilon_{2;1,2}(3)\sim 9\cdot 10^{-13}, \quad\delta\varepsilon_{2;2,3}(3)\sim 1\cdot 10^{-10};\] \[\delta\varepsilon_{1;1,2}(4) \sim 4\cdot 10^{-12},\quad\delta\varepsilon_{1;2,3}(4) \sim 3\cdot 10^{-11},\quad\delta\varepsilon_{1;3,4}(4)\sim 6\cdot 10^{-11}, \quad\delta\varepsilon_{2;1,2}(4)\sim 3\cdot 10^{-11},\] \[\delta\varepsilon_{2;2,3}(4) \sim 9\cdot 10^{-13},\quad\delta\varepsilon_{2;3,4}(4) \sim 2\cdot 10^{-11};\] \[\delta\varepsilon_{1;1,2}(5) \sim 1\cdot 10^{-11},\quad\delta\varepsilon_{1;2,3}(5) \sim 1\cdot 10^{-11},\quad\delta\varepsilon_{1;3,4}(5)\sim 2\cdot 10^{-11},\quad \delta\varepsilon_{1;4,5}(5)\sim 1\cdot 10^{-11},\] \[\delta\varepsilon_{2;1,2}(5) \sim 4\cdot 10^{-11},\quad\delta\varepsilon_{2;2,3}(5) \sim 1\cdot 10^{-10},\quad\delta\varepsilon_{2;3,4}(5)\sim 3\cdot 10^{-11}, \quad\delta\varepsilon_{2;4,5}(5)\sim 2\cdot 10^{-11};\] \[\delta\varepsilon_{1;1,2}(6) \sim 1\cdot 10^{-11},\quad\delta\varepsilon_{1;2,3}(6) \sim 4\cdot 10^{-12},\quad\delta\varepsilon_{1;3,4}(6)\sim 3\cdot 10^{-11},\quad \delta\varepsilon_{1;4,5}(6)\sim 3\cdot 10^{-11},\] \[\delta\varepsilon_{1;5,6}(6) \sim 3\cdot 10^{-11},\quad\delta\varepsilon_{2;1,2}(6) \sim 2\cdot 10^{-11},\quad\delta\varepsilon_{2;2,3}(6)\sim 9\cdot 10^{-11}, \quad\delta\varepsilon_{2;3,4}(6)\sim 1\cdot 10^{-10},\] \[\delta\varepsilon_{2;4,5}(6) \sim 3\cdot 10^{-11},\] \[\delta\varepsilon_{2;5,6}(6) \sim 5\cdot 10^{-11};\] \[\delta\varepsilon_{1;1,2}(7) \sim 3\cdot 10^{-12},\quad\delta\varepsilon_{1;2,3}(7) \sim 3\cdot 10^{-11},\quad\delta\varepsilon_{1;3,4}(7)\sim 8\cdot 10^{-12},\quad\delta\varepsilon_{1;4,5}(7)\sim 4 \cdot 10^{-11},\] \[\delta\varepsilon_{1;5,6}(7) \sim 2\cdot 10^{-11},\quad\delta\varepsilon_{1;6,7}(7) \sim 1\cdot 10^{-11},\quad\delta\varepsilon_{2;1,2}(7)\sim 1\cdot 10^{-11},\quad \delta\varepsilon_{2;2,3}(7)\sim 2\cdot 10^{-11},\] \[\delta\varepsilon_{2;3,4}(7) \sim 9\cdot 10^{-11},\quad\delta\varepsilon_{2;4,5}(7) \sim 5\cdot 10^{-11},\quad\delta\varepsilon_{2;5,6}(7)\sim 5\cdot 10^{-11},\quad \delta\varepsilon_{2;6,7}(7)\sim 3\cdot 10^{-11}.\]
Positive values of \(\delta\varepsilon_{j,i,i+1}(\alpha)\) correspond to quasigaps or zero density states at fractional fillings, since their values are extremely small, \(\sim 10^{-10}\)-\(10^{-13}\). It follows from the numerical calculations that, in the semiclassical limit, the fine structure of a low-energy HB does not change for different \(q\).
## Appendix B Counterexample
As a counterexample, we analyze the results of the paper [19]. The authors considered a similar model, namely the Hofstadter model with a periodic potential \(U_{1}\cos(2\pi x/a)+U_{2}\cos(2\pi x/b)\), for both small and large \(U_{1}/\omega\), \(U_{2}/\omega\), where \(\omega\) is the cyclotron frequency. The problem is reduced to a solution of equation (2.3) for rational magnetic fluxes \(p/q\). The authors believe that the two-dimensional periodic potential opens gaps inside the HBs and obtain the Diophantine equation for the Chern numbers at fillings that correspond to these gaps. I quote: "If the Fermi surface is located in the \(r\)-th gap of the \(N\)-th Landau level the total Hall conductance is equal to \(\sigma_{H}=(e^{2}/h)(t_{r}+N-1)\) [equation (10)], with \(t_{r}\) the solution of" \(r=s_{r}q+t_{r}p\) (\(|s_{r}|\leqslant p/2\)) [equation (9) in [19]]. The Diophantine equation (9) and formula (10) are the main result of [19]. The problem is reduced to the traditional Hofstadter model with the same Diophantine equation (11). Therefore, the Hofstadter model with the two-dimensional periodic potential does not describe the fractional Hall states.
As an illustration, the authors considered the flux equal to \(\frac{7}{11}\) with the first 11 values of \(t_{r}\): \(-3\), \(5\), \(2\), \(-1\), \(-4\), \(4\), \(1\), \(-2\), \(6\), \(3\), \(0\), so that the Hall current is proportional to \(-3\) or \(8\) in each subband. Unfortunately, this set of \(t_{r}\) values does not satisfy the spectrum symmetry, since \(-3+5+2-1-4+4+1-2+6+3+0=11=q\). This spectrum is shown in figure 3 a) for \(U_{1}=U_{2}=0\); the structure of the first HB is shown in figure 3 b) for \(U_{1}=0.002\), \(U_{2}=0\) and \(a=3q=33\). Numerical analysis shows that the gaps that would form the fine structure of each HB (see figure 3) are absent for a weak periodic potential, so one cannot speak of the Hall conductance.
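The quoted values of \(t_{r}\) and their sum are easy to check directly; a minimal sketch (the brute-force search bound on \(t\) is an implementation detail of the sketch):

```python
# Solve the Diophantine equation r = s_r*q + t_r*p of [19] for the flux
# p/q = 7/11 with |s_r| <= p/2, and check the sum of the t_r values.
p, q = 7, 11

def solve(r):
    """Return (s_r, t_r) with r = s_r*q + t_r*p and |s_r| <= p/2."""
    for t in range(-q, q + 1):
        if (r - p * t) % q == 0:
            s = (r - p * t) // q
            if abs(s) <= p / 2:
                return s, t
    raise ValueError(f"no solution for r = {r}")

t_values = [solve(r)[1] for r in range(1, q + 1)]
print(t_values)       # [-3, 5, 2, -1, -4, 4, 1, -2, 6, 3, 0]
print(sum(t_values))  # 11 == q
```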
Numerical analysis (see the calculations obtained in the semi-classical limit) shows the absence of gaps (only extremely small quasigaps, or peculiarities of the density of states at partial filling) when the Hubbard interaction is taken into account. Note that there are no calculations of the gap values in [19], only an assumption.
According to equations (9), (10) of [19], the Chern number is different for different HB fillings, so when the HB filling changes, topological phase transitions occur between topological states with different topological indices. This result follows from equations (9), (10) of [19]. However, the periodic potential does not break the time reversal symmetry (unlike a magnetic field); it has a different nature and cannot induce topological phase transitions between states with different topological indices. Numerical calculations show that the Chern numbers of the HBs are not changed (in the sense that the number of chiral edge modes remains the same when the HB is filled). The obtained results make sense, but not the topological states discussed in [19].
|
2303.17221 | Self-normalized partial sums of heavy-tailed time series | We study the joint limit behavior of sums, maxima and $\ell^p$-type moduli
for samples taken from an $\mathbb{R}^d$-valued regularly varying stationary
sequence with infinite variance. As a consequence, we can determine the
distributional limits for ratios of sums and maxima, studentized sums, and
other self-normalized quantities in terms of hybrid characteristic functions
and Laplace transforms. These transforms enable one to calculate moments of the
limits and to characterize the differences between the iid and stationary cases
in terms of indices which describe effects of extremal clustering on
functionals acting on the dependent sequence. | Muneya Matsui, Thomas Mikosch, Olivier Wintenberger | 2023-03-30T08:33:52Z | http://arxiv.org/abs/2303.17221v1 | # Self-normalized partial sums of heavy-tailed time series
###### Abstract.
We study the joint limit behavior of sums, maxima and \(\ell^{p}\)-type moduli for samples taken from an \(\mathbb{R}^{d}\)-valued regularly varying stationary sequence with infinite variance. As a consequence, we can determine the distributional limits for ratios of sums and maxima, studentized sums, and other self-normalized quantities in terms of hybrid characteristic functions and Laplace transforms. These transforms enable one to calculate moments of the limits and to characterize the differences between the iid and stationary cases in terms of indices which describe effects of extremal clustering on functionals acting on the dependent sequence.
Key words and phrases: Regularly varying sequence, extremal clusters, sums, maxima, self-normalization, ratio limits.

2020 Mathematics Subject Classification: Primary 60F05; Secondary 60E07, 60E10, 60G70, 62E20.

Muneya Matsui's research is partly supported by the JSPS Grant-in-Aid for Scientific Research C (19K11868). Thomas Mikosch's research is partly supported by the grant No 9040-00086B of the Danish Free Research Council (DFF).
## 1. Introduction
In this paper we consider the following _mixing condition_: for some integer sequences \(r_{n}\to\infty\), \(k_{n}=[n/r_{n}]\to\infty\),
\[\Psi_{n}(\mathbf{u},x):=\mathbb{E}\big{[}\exp\big{(}i\,a_{n}^{-1}\mathbf{u}^{\top}\mathbf{S}_{n}\big{)}\,\mathbf{1}\big{(}a_{n}^{-1}M_{n}^{|\mathbf{X}|}\leq x\big{)}\big{]}=\left(\mathbb{E}\big{[}\exp\big{(}i\,a_{n}^{-1}\mathbf{u}^{\top}\mathbf{S}_{r_{n}}\big{)}\,\mathbf{1}\big{(}a_{n}^{-1}M_{r_{n}}^{|\mathbf{X}|}\leq x\big{)}\big{]}\right)^{k_{n}}+o(1)\,,\qquad n\to\infty\,,\]
\[x>0\,,\qquad\mathbf{u}\in\mathbb{R}^{d}\,, \tag{2.1}\]
where \(\mathbf{S}_{n}=\mathbf{X}_{1}+\cdots+\mathbf{X}_{n}\) and \(M_{n}^{|\mathbf{X}|}=\max_{1\leq t\leq n}|\mathbf{X}_{t}|\).
Condition (2.1) ensures the asymptotic independence of the \(k_{n}\) maxima and sums over disjoint blocks of length \(r_{n}\). It follows from strong mixing properties of \((\mathbf{X}_{t})\) or by coupling arguments; see for example Rio [27] as a general reference and Section 5 below.
### The anti-clustering condition
Anti-clustering conditions are also standard for proving limit theory for sums and maxima of stationary sequences; see for example Basrak and Segers [8]. They ensure that clusters of extreme events cannot last forever. The following condition was introduced in Bartkiewicz et al. [6]; it is particularly tailored for partial sums.
An \(\mathbb{R}^{d}\)-valued stationary regularly varying sequence \((\mathbf{X}_{t})\) satisfies the **anti-clustering condition** if for some \(r_{n}\to\infty\) such that \(k_{n}=[n/r_{n}]\to\infty\),
\[\lim_{k\to\infty}\limsup_{n\to\infty}n\,\sum_{j=k}^{r_{n}}\mathbb{E}\big{[}(|a _{n}^{-1}\mathbf{X}_{j}|\wedge x)\,(|a_{n}^{-1}\mathbf{X}_{0}|\wedge x)\big{]} =0\,,\qquad x=1\,. \tag{2.2}\]
Here and throughout the paper \((a_{n})\) satisfies \(n\,\mathbb{P}(|\mathbf{X}|>a_{n})\to 1\) as \(n\to\infty\). Condition (2.2) can be checked by coupling arguments; see for example Kulik et al. [20] and Section 5 below. For \(a\geq 0\) we have \(a\wedge x\leq x\,(a\wedge 1)\,\mathbf{1}(x\geq 1)+(a\wedge 1)\,\mathbf{1}(x<1)\). Therefore (2.2) holds for every \(x>0\) if it does for \(x=1\).
Condition (2.2) is symmetric in time: since \((\mathbf{X}_{t})\) is stationary it is equivalent to
\[\lim_{k\to\infty}\limsup_{n\to\infty}n\,\sum_{j=k}^{r_{n}}\mathbb{E}\big{[}(|a _{n}^{-1}\mathbf{X}_{-j}|\wedge 1)\,(|a_{n}^{-1}\mathbf{X}_{0}|\wedge 1)\big{]}=0\,.\]
Since \(a\wedge b=a\,\mathbf{1}(a\leq b)+b\,\mathbf{1}(a>b)\), \(a,b>0\), (2.2) implies
\[\lim_{k\to\infty}\limsup_{n\to\infty}n\,\sum_{j=k}^{r_{n}}\mathbb{P}\big{(}| \mathbf{X}_{j}|>x\,a_{n}\,,\,|\mathbf{X}_{0}|>x\,a_{n}\big{)}=0\,,\qquad x>0\,,\]
hence the anti-clustering condition
\[\lim_{k\to\infty}\limsup_{n\to\infty}\,\mathbb{P}\big{(}\max_{j=k,\ldots,r_{n }}|\mathbf{X}_{j}|>x\,a_{n}\,\big{|}\,|\mathbf{X}_{0}|>x\,a_{n}\big{)}=0\,, \qquad x>0\,,\]
which is standard for proving limit theory for the extremes of a stationary sequence; see for example Davis and Hsing [13] and Basrak and Segers [8].
If \(\alpha\in(1,2)\) we have \(\mathbb{E}[a_{n}^{-1}|\mathbf{X}|\wedge 1]\leq c/a_{n}\) (here and in what follows, \(c\) denotes any positive constant whose value is not of interest) while for \(\alpha\in(0,1)\) by Karamata's theorem for regularly varying functions (see Bingham et al. [9]) \(\mathbb{E}[a_{n}^{-1}|\mathbf{X}|\wedge 1]\leq c/n\). Therefore
\[n\,r_{n}(\mathbb{E}[a_{n}^{-1}|\mathbf{X}|\wedge 1])^{2} = \left\{\begin{array}{ll}O(r_{n}/n)=o(1)\,,&\mbox{if $\alpha\in(0,1) $}\,,\\ O(r_{n}\,n/a_{n}^{2})=o(1)\,,&\mbox{if $\alpha\in(1,2)$ and also $r_{n}=o(a_{n}^{2}/n)$}. \end{array}\right.\]
Under these conditions on \((r_{n})\), (2.2) holds for an iid sequence \((\mathbf{X}_{t})\). Moreover, (2.2) turns into
\[\lim_{k\to\infty}\limsup_{n\to\infty}n\,\sum_{j=k}^{r_{n}}\operatorname{cov} \bigl{(}|a_{n}^{-1}\mathbf{X}_{j}|\wedge 1\,,|a_{n}^{-1}\mathbf{X}_{0}| \wedge 1\bigr{)}=0\,. \tag{2.3}\]
### Properties of regularly varying stationary sequences
Write \(\mathbf{\Theta}=(\mathbf{\Theta}_{t})_{t\in\mathbb{Z}}\) for the spectral tail process of \((\mathbf{X}_{t})\). Under (2.1) and (2.2) we have the property that \(\|\mathbf{\Theta}\|_{\alpha}^{\alpha}=\sum_{t\in\mathbb{Z}}|\mathbf{\Theta}_{t }|^{\alpha}<\infty\) a.s.; see Janssen [17] and Buritica et al. [11]. Then one can define the _spectral cluster process_\(\mathbf{Q}=\mathbf{\Theta}/\|\mathbf{\Theta}\|_{\alpha}\). We will also make use of a change of measure of \(\mathbf{Q}\) in \(\ell^{\alpha}(\mathbb{R}^{d})\) given by
\[\mathbb{P}(\widetilde{\mathbf{Q}}\in\cdot) = \mathbb{P}\Bigl{(}\frac{\mathbf{Q}}{\max_{t\in\mathbb{Z}}| \mathbf{Q}_{t}|}\in\cdot\,\Bigl{|}\max_{t\in\mathbb{Z}}|Y\mathbf{Q}_{t}|>1 \Bigr{)}\,, \tag{2.4}\]
where \(Y\) denotes a Pareto\((\alpha)\)-distributed random variable, \(\mathbb{P}(Y>y)=y^{-\alpha}\), \(y\geq 1\), independent of \(\mathbf{Q}\), and the extremal index of \((|{\bf X}_{t}|)\) can be expressed as
\[\theta_{|{\bf X}|}=\mathbb{P}(Y\,\max_{t\in\mathbb{Z}}|{\bf Q}_{t}|>1)=\mathbb{E} \big{[}\max_{t\in\mathbb{Z}}|{\bf Q}_{t}|^{\alpha}\big{]}\,. \tag{2.5}\]
An alternative way of defining regular variation of the stationary process \(({\bf X}_{t})\) is via the vague convergence relations on \(\mathbb{R}_{\bf 0}^{d(h+1)}=\mathbb{R}^{d(h+1)}\backslash\{{\bf 0}\}\), for \(h\geq 0\),
\[n\,\mathbb{P}\big{(}a_{n}^{-1}({\bf X}_{0},\ldots,{\bf X}_{h})\in\cdot\big{)} \stackrel{{ v}}{{\to}}\mu_{h}(\cdot)\,,\qquad n\to\infty\,, \tag{2.6}\]
where the _tail measures_\((\mu_{h})\) are non-null and have the homogeneity property \(\mu_{h}(t\,\cdot)=t^{-\alpha}\,\mu_{h}(\cdot)\), \(t>0\). We observe that, due to stationarity of \(({\bf X}_{t})\), these measures have the consistency property \(\mu_{h+1}(\mathbb{R}^{d}\times\cdot)=\mu_{h}(\cdot)=\mu_{h+1}(\cdot\times \mathbb{R}^{d})\); see Kulik and Soulier [19].
An important tool for dealing with regularly varying stationary sequences is the _time-change formula_ (see Basrak and Segers [8]), for \(h\geq 0\), \(t\in\mathbb{Z}\),
\[\mathbb{P}(({\boldsymbol{\Theta}}_{-h},\ldots,{\boldsymbol{\Theta}}_{h})\in \cdot\mid{\boldsymbol{\Theta}}_{-t}\neq{\bf 0}) = \mathbb{E}\Big{[}\frac{|{\boldsymbol{\Theta}}_{t}|^{\alpha}}{ \mathbb{E}\big{[}|{\boldsymbol{\Theta}}_{t}|^{\alpha}\big{]}}{\bf 1}\Big{(}\frac{({ \boldsymbol{\Theta}}_{t-h},\ldots,{\boldsymbol{\Theta}}_{t+h})}{|{\boldsymbol{ \Theta}}_{t}|}\in\cdot\Big{)}\Big{]}\,. \tag{2.7}\]
## 3. Joint convergence of sums, maxima and norms
### Joint convergence of sums and maxima
In this section we prove our main result about the joint convergence of normalized maxima and sums based on a regularly varying stationary sequence. We restrict ourselves to indices \(\alpha\in(0,2)\backslash\{1\}\). The main reason for this is that centering of sums for \(\alpha=1\) is complicated and stretches the proofs due to much technical detail. Here and in the subsequent sections we use transform techniques (hybrid characteristic functions, Laplace transforms). This is in contrast to some of the more recent developments in limit theory for regularly varying stationary sequences; various authors prefer to use point process techniques (first proving the weak convergence of the processes of the points \((a_{n}^{-1}{\bf X}_{t})\), then applying a.s. continuous mappings to these points); see for example Davis and Hsing [13], Resnick [26], Kulik and Soulier [19], Krizmanic [18]. They obtain the series representation \((0<\alpha<1)\) of Lepage et al. [21]
\[a_{n}^{-1}(M_{n}^{|{\bf X}|},{\bf S}_{n})\stackrel{{ d}}{{\to}} \Big{(}\sup_{i\geq 1}\Gamma_{i}^{-1/\alpha}\sup_{j\in\mathbb{Z}}|{\bf Q}_{ ij}|,\sum_{i\geq 1}\Gamma_{i}^{-1/\alpha}\sum_{j\in\mathbb{Z}}{\bf Q}_{ij}\Big{)}\,,\qquad n\to \infty\,. \tag{3.1}\]
When dealing with partial sums for \(\alpha\in(1,2)\), a difficulty is the _vanishing-small-values condition_, i.e., the sums of the truncated quantities \(({\bf X}_{t}/a_{n})\) have to be negligible. We avoid these conditions. A further advantage of transform techniques is that it is easier to identify the (joint) limit distribution of sums, maxima, self-normalized sums, and their distributional characteristics such as moments.
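In the iid univariate case the clusters \((\mathbf{Q}_{ij})_{j}\) in (3.1) reduce to a single point mass, and the limit pair can be simulated directly from the series. A minimal Monte Carlo sketch, assuming \(\alpha<1\) and a fixed truncation level for the series:

```python
# Monte Carlo sketch of the LePage series (3.1) in the iid univariate
# case (each cluster reduces to the single point 1), with alpha < 1;
# the truncation level N is an assumption of the sketch.
import numpy as np

rng = np.random.default_rng(1)
alpha = 0.7

def lepage_sample(N=10_000):
    """One draw of (sup_i Gamma_i^{-1/alpha}, sum_i Gamma_i^{-1/alpha}),
    where Gamma_1 < Gamma_2 < ... are standard Poisson arrival times."""
    gamma = np.cumsum(rng.exponential(size=N))
    terms = gamma ** (-1.0 / alpha)   # decreasing in i
    return terms[0], terms.sum()

draws = np.array([lepage_sample() for _ in range(2000)])
emp_median = np.median(draws[:, 0])
frechet_median = np.log(2.0) ** (-1.0 / alpha)  # median of Phi_alpha
print(f"median of max: {emp_median:.3f}  (Frechet: {frechet_median:.3f})")
```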
**Theorem 3.1**.: _Consider an \(\mathbb{R}^{d}\)-valued regularly varying stationary process \(({\bf X}_{t})\) with index \(\alpha\in(0,2)\backslash\{1\}\). If \(\alpha\in(1,2)\) we also assume that \(\mathbb{E}[{\bf X}]=0\). Choose the normalizing constants \((a_{n})\) such that \(n\,\mathbb{P}(|{\bf X}|>a_{n})\to 1\) as \(n\to\infty\). Assume the mixing condition (2.1) and the anti-clustering condition (2.2) for the same integer sequences \(r_{n}\to\infty\), \(k_{n}=[n/r_{n}]\to\infty\) as \(n\to\infty\). Then_
\[a_{n}^{-1}(M_{n}^{|{\bf X}|},{\bf S}_{n})\stackrel{{ d}}{{\to}}( \eta_{\alpha},{\boldsymbol{\xi}}_{\alpha})\,,\qquad n\to\infty\,,\]
_where \(\eta_{\alpha}\) is Frechet-distributed with a positive extremal index \(\theta_{|{\bf X}|}\) and \({\boldsymbol{\xi}}_{\alpha}\) is \(\alpha\)-stable with characteristic function_
\[\varphi_{{\boldsymbol{\xi}}_{\alpha}}({\bf u})=\exp\big{(}-c_{\alpha}\,\sigma ^{\alpha}({\bf u})\big{(}1-i\,\beta({\bf u})\tan(\alpha\,\pi/2)\big{)}\big{)} \,,\qquad{\bf u}\in\mathbb{R}^{d}\,, \tag{3.2}\]
_constant \(c_{\alpha}=\Gamma(2-\alpha)\,\cos(\alpha\,\pi/2)\,/(1-\alpha)\), and parameter functions_
\[\beta({\bf u}) = \frac{\mathbb{E}\big{[}({\bf u}^{\top}\sum_{t\in\mathbb{Z}}{\bf Q }_{t})_{+}^{\alpha}-({\bf u}^{\top}\sum_{t\in\mathbb{Z}}{\bf Q}_{t})_{-}^{ \alpha}\big{]}}{\mathbb{E}\big{[}|{\bf u}^{\top}\sum_{t\in\mathbb{Z}}{\bf Q}_{ t}|^{\alpha}\big{]}}\,,\qquad\sigma^{\alpha}({\bf u})=\mathbb{E}\Big{[}\Big{|}{\bf u}^{ \top}\sum_{t\in\mathbb{Z}}{\bf Q}_{t}\Big{|}^{\alpha}\Big{]}\,. \tag{3.3}\]
_Moreover, the joint distribution of \((\eta_{\alpha},\boldsymbol{\xi}_{\alpha})\) is characterized by the hybrid characteristic function_
\[\mathbb{E}\big{[}{\rm e}\,^{i\,\mathbf{u}^{\top}\boldsymbol{\xi}_{ \alpha}}\mathbf{1}(\eta_{\alpha}\leq x)\big{]}\] \[= \varphi_{\boldsymbol{\xi}_{\alpha}}(\mathbf{u})\,\exp\Big{(}- \int_{0}^{\infty}\mathbb{E}\Big{[}{\rm e}\,^{i\,y\,\mathbf{u}^{\top}\sum_{t=- \infty}^{\infty}\mathbf{Q}_{t}}\,\mathbf{1}\Big{(}y\,\max_{t\in\mathbb{Z}}| \mathbf{Q}_{t}|>x\Big{)}\Big{]}\,d(-y^{-\alpha})\Big{)}\] \[= \varphi_{\boldsymbol{\xi}_{\alpha}}(\mathbf{u})\,\Phi_{\alpha}^{ \theta_{|\mathbf{X}|}}(x)\,\exp\Big{(}-\theta_{|\mathbf{X}|}\int_{x}^{\infty }\mathbb{E}\big{[}{\rm e}\,^{i\,y\,\mathbf{u}^{\top}\sum_{t=-\infty}^{\infty} \widetilde{\mathbf{Q}}_{t}}-1\big{]}\,d(-y^{-\alpha})\Big{)}\,,\;\mathbf{u} \in\mathbb{R}^{d}\,,x>0\,, \tag{3.4}\]
_where \(\mathbf{Q}=\boldsymbol{\Theta}/\|\boldsymbol{\Theta}\|_{\alpha}\) and \(\widetilde{\mathbf{Q}}\) is defined in (2.4) and the extremal index \(\theta_{|\mathbf{X}|}\) is given in (2.5)._
In the iid univariate case and for general maximum domains of attraction, Chow and Teugels [12] showed a result of the type (3.4).
Setting \(x=\infty\) in (3.4), we conclude that \(a_{n}^{-1}\mathbf{S}_{n}\stackrel{{ d}}{{\to}}\boldsymbol{\xi}_{\alpha}\). Similarly, setting \(\mathbf{u}=\mathbf{0}\), we obtain for \(x>0\),
\[\mathbb{P}(a_{n}^{-1}M_{n}^{|\mathbf{X}|}\leq x) \to \mathbb{P}\big{(}\eta_{\alpha}\leq x\big{)}\] \[= \exp\Big{(}-\int_{0}^{\infty}\mathbb{P}\Big{(}y\,\max_{t\in \mathbb{Z}}|\mathbf{Q}_{t}|>x\Big{)}\,d(-y^{-\alpha})\Big{)}\] \[= \exp\Big{(}-x^{-\alpha}\,\mathbb{E}\Big{[}\max_{t\in\mathbb{Z}}| \mathbf{Q}_{t}|^{\alpha}\Big{]}\Big{)}\] \[= \Phi_{\alpha}^{\theta_{|\mathbf{X}|}}(x)\,,\qquad n\to\infty\,.\]
In the last step we used the identity (2.5). Therefore the factor
\[\exp\Big{(}-\theta_{|\mathbf{X}|}\int_{x}^{\infty}\mathbb{E}\big{[}{\rm e}\, ^{i\,y\,\mathbf{u}^{\top}\sum_{t=-\infty}^{\infty}\widetilde{\mathbf{Q}}_{t} }-1\big{]}\,d(-y^{-\alpha})\Big{)}\]
in the limit hybrid characteristic function (3.4) describes the dependence between \(\eta_{\alpha}\) and \(\boldsymbol{\xi}_{\alpha}\).
In the iid and asymptotically independent cases, i.e., when \(\boldsymbol{\Theta}_{t}=\mathbf{Q}_{t}=\mathbf{0}\), \(t\neq 0\), \(\mathbf{Q}_{0}=\boldsymbol{\Theta}_{0}\), (3.4) turns into
\[\mathbb{E}\big{[}{\rm e}\,^{i\,\mathbf{u}\,\boldsymbol{\xi}_{ \alpha}}\,\mathbf{1}(\eta_{\alpha}\leq x)\big{]} = \varphi_{\boldsymbol{\xi}_{\alpha}}(\mathbf{u})\,\exp\Big{(}- \int_{x}^{\infty}\mathbb{E}\big{[}{\rm e}\,^{i\,y\,\mathbf{u}^{\top} \boldsymbol{\Theta}_{0}}\big{]}\,d(-y^{-\alpha})\Big{)}\,,\] \[\mathbf{u}\in\mathbb{R}^{d}\,,\qquad x>0\,.\]
The right-hand side does not factorize into \(\varphi_{\boldsymbol{\xi}_{\alpha}}(\mathbf{u})\mathbb{P}(\eta_{\alpha}\leq x)\) for all \(\mathbf{u}\in\mathbb{R}^{d}\) and \(x>0\). Therefore \(\eta_{\alpha}\) and \(\boldsymbol{\xi}_{\alpha}\) are dependent. The hybrid characteristic function \(\Psi\) of \((\eta_{\alpha},\boldsymbol{\xi}_{\alpha})\) factorizes, i.e., for all \(\mathbf{u}\in\mathbb{R}^{d}\), \(x>0\),
\[\Psi(\mathbf{u},x)=\mathbb{E}\big{[}{\rm e}\,^{i\,\mathbf{u}^{\top}\boldsymbol{\xi}_{\alpha}}\,\mathbf{1}(\eta_{\alpha}\leq x)\big{]}=\varphi_{\boldsymbol{\xi}_{\alpha}}(\mathbf{u})\,\mathbb{P}(\eta_{\alpha}\leq x)\,,\]
if and only if \(\eta_{\alpha}\), \(\boldsymbol{\xi}_{\alpha}\) are independent since the quantity \(\Psi\) determines the distribution of \((\eta_{\alpha},\boldsymbol{\xi}_{\alpha})\). In view of (3.4), \(\eta_{\alpha}\) and \(\boldsymbol{\xi}_{\alpha}\) are independent if and only if
\[\int_{x}^{\infty}\mathbb{E}\big{[}{\rm e}\,^{i\,y\,\mathbf{u}^{\top}\sum_{t=- \infty}^{\infty}\widetilde{\mathbf{Q}}_{t}}-1\big{]}\,d(-y^{-\alpha})=0\,, \qquad\mathbf{u}\in\mathbb{R}^{d}\,,\qquad x>0\,.\]
If \(Y\) is Pareto\((\alpha)\) and independent of \((\widetilde{\mathbf{Q}}_{t})\) the latter condition for \(x=1\) implies that the characteristic function of \(Y\sum_{t=-\infty}^{\infty}\widetilde{\mathbf{Q}}_{t}\) is \(1\) which in turn implies that \(\sum_{t\in\mathbb{Z}}\mathbf{Q}_{t}=\sum_{t\in\mathbb{Z}}\boldsymbol{\Theta}_{t }=\mathbf{0}\) a.s. We conclude from the form of \(\varphi_{\boldsymbol{\xi}_{\alpha}}\) that \(\boldsymbol{\xi}_{\alpha}=\mathbf{0}\) a.s. Thus, \(\eta_{\alpha}\) and \(\boldsymbol{\xi}_{\alpha}\) are independent if and only if \(\boldsymbol{\xi}_{\alpha}\) degenerates.
Proof.: It suffices to show that
\[\Psi_{n}(\mathbf{u},x)\to\Psi(\mathbf{u},x)\,,\qquad n\to\infty\,,\qquad\mathbf{ u}\in\mathbb{R}^{d}\,,\;x>0\,. \tag{3.5}\]
The mixing condition (2.1) and a Taylor expansion yield for fixed \(\mathbf{u}\in\mathbb{R}^{d}\), \(x>0\),
\[\log\Psi_{n}(\mathbf{u},x) = k_{n}\left(\mathbb{E}\!\left[\mathrm{e}^{\,i\,a_{n}^{-1}\mathbf{u} ^{\top}\mathbf{S}_{n}}\,\mathbf{1}\big{(}a_{n}^{-1}M_{r_{n}}^{|\mathbf{X}|}\leq x \big{)}\right]-1\right)\left(1+o(1)\right),\ n\to\infty\,. \tag{3.6}\]
By a telescoping sum argument we will prove that, for every \(\mathbf{u}\in\mathbb{R}^{d}\) and \(x>0\),
\[\lim_{k\to\infty}\limsup_{n\to\infty}\Big{|}k_{n}\left(\mathbb{E} \!\left[\mathrm{e}^{\,i\,a_{n}^{-1}\mathbf{u}^{\top}\mathbf{S}_{n}}\,\mathbf{1 }\big{(}a_{n}^{-1}M_{r_{n}}^{|\mathbf{X}|}\leq x\big{)}\right]-1\right)\] \[-n\left(\mathbb{E}\!\left[\mathrm{e}^{\,i\,a_{n}^{-1}\mathbf{u}^ {\top}\mathbf{S}_{k}}\,\mathbf{1}(a_{n}^{-1}M_{k}^{|\mathbf{X}|}\leq x) \right]-\mathbb{E}\!\left[\mathrm{e}^{\,i\,a_{n}^{-1}\mathbf{u}^{\top} \mathbf{S}_{k-1}}\,\mathbf{1}(a_{n}^{-1}M_{k-1}^{|\mathbf{X}|}\leq x)\right] \right)\Big{|}=0\,. \tag{3.7}\]
We write \(\mathbf{S}_{-j}=\mathbf{X}_{-j}+\cdots+\mathbf{X}_{-1}\) and \(M_{-j}^{|\mathbf{X}|}=\max_{-j\leq t\leq-1}|\mathbf{X}_{t}|\) for \(j\geq 1\) as well as, for any integers \(a\leq b\), \(M_{a,b}^{|\mathbf{X}|}=\max_{a\leq t\leq b}|\mathbf{X}_{t}|\). The term inside the absolute values in (3.7) can be expressed as the sum of two negligible residual terms (which we omit in the sequel) and the main term given by
\[J_{n}:= k_{n}\,\sum_{j=1}^{r_{n}-k}\mathbb{E}\!\left[\mathrm{e}^{\,i\,a_{n }^{-1}\mathbf{u}^{\top}\mathbf{S}_{-k-j}}\mathbf{1}(a_{n}^{-1}M_{-k-j}^{| \mathbf{X}|}\leq x)-\mathrm{e}^{\,i\,a_{n}^{-1}\mathbf{u}^{\top}(\mathbf{S}_ {-k-j}-\mathbf{S}_{-j})}\mathbf{1}(a_{n}^{-1}M_{-k-j,-j-1}^{|\mathbf{X}|}\leq x)\right.\] \[\qquad\qquad\left.-\mathrm{e}^{\,i\,a_{n}^{-1}\mathbf{u}^{\top} \mathbf{S}_{-k-j+1}}\mathbf{1}(a_{n}^{-1}M_{-k-j+1}^{|\mathbf{X}|}\leq x)\right.\] \[\qquad\qquad\left.+\mathrm{e}^{\,i\,a_{n}^{-1}\mathbf{u}^{\top} (\mathbf{S}_{-k-j+1}-\mathbf{S}_{-j})}\mathbf{1}(a_{n}^{-1}M_{-k-j+1,-j-1}^{| \mathbf{X}|}\leq x)\right]. \tag{3.8}\]
Our goal is to approximate \(J_{n}\) by some simpler quantities and to show that those are negligible as \(n\to\infty\) and \(k\to\infty\). Suppressing the dependence on \(x\), we write
\[A_{n}^{c}=\{a_{n}^{-1}|\mathbf{X}_{-k-j}|>x,a_{n}^{-1}M_{-j}^{|\mathbf{X}|}>x \}\,.\]
We first observe that
\[k_{n}\,\sum_{j=1}^{r_{n}-k}\mathbb{P}\!\left(A_{n}^{c}\right) = k_{n}\,\sum_{j=1}^{r_{n}-k}\mathbb{P}\!\left(a_{n}^{-1}|\mathbf{ X}_{0}|>x,a_{n}^{-1}M_{k,k+j-1}^{|\mathbf{X}|}>x\right)\] \[\leq k_{n}\,\sum_{j=1}^{r_{n}-k}\sum_{l=k}^{k+j-1}\mathbb{P}\!\left(a_ {n}^{-1}|\mathbf{X}_{0}|>x,a_{n}^{-1}|\mathbf{X}_{l}|>x\right)\] \[\leq n\,\sum_{l=k}^{r_{n}}\mathbb{P}\!\left(a_{n}^{-1}|\mathbf{X}_{0}| >x,a_{n}^{-1}|\mathbf{X}_{l}|>x\right). \tag{3.9}\]
Following the discussion after the anti-clustering condition (2.2), we conclude that the right-hand side converges to zero by first letting \(n\to\infty\) and then \(k\to\infty\). Hence, under the same limit regime, \(J_{n}\) can be made arbitrarily close to
\[k_{n}\,\sum_{j=1}^{r_{n}-k}\mathbb{E}\!\left[\left(1-\mathbf{1}( A_{n}^{c})\right)\right.\] \[\times\!\left(\mathrm{e}^{\,i\,a_{n}^{-1}\mathbf{u}^{\top} \mathbf{S}_{-k-j}}\mathbf{1}(a_{n}^{-1}M_{-k-j}^{|\mathbf{X}|}\leq x)-\mathrm{e }^{\,i\,a_{n}^{-1}\mathbf{u}^{\top}(\mathbf{S}_{-k-j}-\mathbf{S}_{-j})} \mathbf{1}(a_{n}^{-1}M_{-k-j,-j-1}^{|\mathbf{X}|}\leq x)\right.\] \[\left.\quad-\mathrm{e}^{\,i\,a_{n}^{-1}\mathbf{u}^{\top}\mathbf{ S}_{-k-j+1}}\mathbf{1}(a_{n}^{-1}M_{-k-j+1}^{|\mathbf{X}|}\leq x)\right.\] \[\left.\quad+\mathrm{e}^{\,i\,a_{n}^{-1}\mathbf{u}^{\top}(\mathbf{ S}_{-k-j+1}-\mathbf{S}_{-j})}\mathbf{1}(a_{n}^{-1}M_{-k-j+1,-j-1}^{|\mathbf{X}|}\leq x) \right)\right]. \tag{3.10}\]
Writing
\[A_{n1}=\{a_{n}^{-1}|\mathbf{X}_{-k-j}|\leq x\}\,,A_{n2}=\{a_{n}^{-1}M_{-j}^{| \mathbf{X}|}\leq x\}\,,A_{n3}=\{a_{n}^{-1}|\mathbf{X}_{-k-j}|\leq x\,,a_{n}^{-1 }M_{-j}^{|\mathbf{X}|}\leq x\}\,,\]
we may replace the multiplier \(1-{\bf 1}(A_{n}^{c})\) in (3.10) by \({\bf 1}(A_{n1})+{\bf 1}(A_{n2})-{\bf 1}(A_{n3})\). Then we obtain
\[k_{n}\,\sum_{j=1}^{r_{n}-k}\Big{(}\mathbb{E}\big{[}\big{(}{\rm e} \,^{i\,a_{n}^{-1}{\bf u}^{\top}{\bf S}_{-k-j}}-{\rm e}\,^{i\,a_{n}^{-1}{\bf u}^{ \top}{\bf S}_{-k-j+1}}\big{)}{\bf 1}(a_{n}^{-1}M_{-k-j}^{|{\bf X}|}\leq x)-\] \[\qquad\qquad\big{(}{\rm e}\,^{i\,a_{n}^{-1}{\bf u}^{\top}({\bf S}_ {-k-j}-{\bf S}_{-j})}-{\rm e}\,^{i\,a_{n}^{-1}{\bf u}^{\top}({\bf S}_{-k-j+1}-{ \bf S}_{-j})}\big{)}{\bf 1}(a_{n}^{-1}M_{-k-j,-j-1}^{|{\bf X}|}\leq x)\big{]}\] \[\qquad\qquad+\mathbb{E}\big{[}\big{(}{\rm e}\,^{i\,a_{n}^{-1}{\bf u }^{\top}{\bf S}_{-k-j}}-{\rm e}\,^{i\,a_{n}^{-1}{\bf u}^{\top}({\bf S}_{-k-j}-{ \bf S}_{-j})}\big{)}{\bf 1}(a_{n}^{-1}M_{-k-j}^{|{\bf X}|}\leq x)-\] \[\qquad\qquad\big{(}{\rm e}\,^{i\,a_{n}^{-1}{\bf u}^{\top}{\bf S}_ {-k-j+1}}-{\rm e}\,^{i\,a_{n}^{-1}{\bf u}^{\top}({\bf S}_{-k-j+1}-{\bf S}_{-j}) }\big{)}{\bf 1}(a_{n}^{-1}M_{-k-j+1}^{|{\bf X}|}\leq x)\big{]}\] \[\qquad\qquad-\mathbb{E}\big{[}(1-{\rm e}\,^{-i\,a_{n}^{-1}{\bf u}^ {\top}{\bf S}_{-j}})\,({\rm e}\,^{i\,a_{n}^{-1}{\bf u}^{\top}S_{-k-j}}-{\rm e} \,^{i\,a_{n}^{-1}{\bf u}^{\top}S_{-k-j+1}})\,{\bf 1}(a_{n}^{-1}\,M_{-k-j}^{|{ \bf X}|}\leq x)\big{]}\Big{)}\] \[=: I_{n1}+I_{n2}-I_{n3}\,,\]
and \(I_{n1}\) turns into
\[I_{n1} = k_{n}\,\sum_{j=1}^{r_{n}-k}\mathbb{E}\big{[}\big{(}{\rm e}\,^{i \,a_{n}^{-1}{\bf u}^{\top}({\bf S}_{-k-j}-{\bf S}_{-j})}-{\rm e}\,^{i\,a_{n}^{ -1}{\bf u}^{\top}({\bf S}_{-k-j+1}-{\bf S}_{-j})}\big{)}\] \[\qquad\qquad\big{(}{\rm e}\,^{i\,a_{n}^{-1}{\bf u}^{\top}{\bf S}_ {-j}}\,{\bf 1}(a_{n}^{-1}M_{-j}^{|{\bf X}|}\leq x)-1\big{)}{\bf 1}(a_{n}^{-1}M_{-k-j,-j -1}^{|{\bf X}|}\leq x)\big{]}\] \[= k_{n}\,\sum_{j=1}^{r_{n}-k}\mathbb{E}\big{[}\big{(}{\rm e}\,^{-i \,a_{n}^{-1}{\bf u}^{\top}{\bf S}_{-j}}\big{(}{\rm e}\,^{i\,a_{n}^{-1}{\bf u}^ {\top}{\bf S}_{-k-j}}-{\rm e}\,^{i\,a_{n}^{-1}{\bf u}^{\top}{\bf S}_{-k-j+1}} \big{)}\big{)}\] \[\qquad\qquad\big{(}({\rm e}\,^{i\,a_{n}^{-1}{\bf u}^{\top}{\bf S}_ {-j}}-1)\,{\bf 1}(a_{n}^{-1}M_{-j}^{|{\bf X}|}\leq x)-{\bf 1}(a_{n}^{-1}M_{-j}^{|{ \bf X}|}>x)\big{)}\] \[\qquad\qquad{\bf 1}(a_{n}^{-1}M_{-k-j,-j-1}^{|{\bf X}|}\leq x) \big{]}\,.\]
Using stationarity and the fact that \(x\mapsto{\rm e}\,^{i\,x}\) is a Lipschitz function bounded by \(1\), we obtain for \({\bf u}\neq{\bf 0}\),
\[|I_{n1}| \leq k_{n}\,\sum_{j=1}^{r_{n}-k}\mathbb{E}\Big{[}(|a_{n}^{-1}{\bf u}^ {\top}{\bf X}_{-k-j}|\wedge 2)\,\Big{(}(|a_{n}^{-1}{\bf u}^{\top}{\bf S}_{-j}| \wedge 2)+{\bf 1}(a_{n}^{-1}M_{-j}^{|{\bf X}|}>x)\Big{)}\Big{]}\] \[\leq n\,|{\bf u}|^{2}\sum_{l=k}^{r_{n}}\mathbb{E}\big{[}(|a_{n}^{-1}{ \bf X}_{0}|\wedge(3/|{\bf u}|))\,(|a_{n}^{-1}{\bf X}_{l}|\wedge(3/|{\bf u}|)) \big{]}\] \[+n\,|{\bf u}|\,\sum_{l=k}^{r_{n}}\mathbb{E}\big{[}(|a_{n}^{-1}{ \bf X}_{0}|\wedge(3/|{\bf u}|))\,{\bf 1}(a_{n}^{-1}|{\bf X}_{l}|>x)\big{]}\] \[\leq n\,|{\bf u}|^{2}\sum_{l=k}^{r_{n}}\mathbb{E}\big{[}(|a_{n}^{-1}{ \bf X}_{0}|\wedge(3/|{\bf u}|))\,(|a_{n}^{-1}{\bf X}_{l}|\wedge(3/|{\bf u}|)) \big{]}\] \[+n\,|{\bf u}|\,\sum_{l=k}^{r_{n}}\mathbb{E}\big{[}(|a_{n}^{-1}{ \bf X}_{0}|\wedge(3/|{\bf u}|))\,(|(xa_{n})^{-1}{\bf X}_{l}|\wedge 1)\big{]}\,.\]
Here we also used sub-additivity. Under the anti-clustering condition (2.2) the right-hand side converges to zero by first letting \(n\to\infty\) and then \(k\to\infty\).
We also have
\[I_{n2} = k_{n}\,\sum_{j=1}^{r_{n}-k}\mathbb{E}\big{[}{\rm e}\,^{i\,a_{n}^{- 1}{\bf u}^{\top}{\bf S}_{-k-j+1}}\,\big{(}1-{\rm e}\,^{-i\,a_{n}^{-1}{\bf u}^{ \top}{\bf S}_{-j}}\big{)}\] \[\qquad\qquad\big{(}({\rm e}\,^{i\,a_{n}^{-1}{\bf u}^{\top}{\bf X}_ {-k-j}}-1)\,{\bf 1}(a_{n}^{-1}|{\bf X}_{-k-j}|\leq x)-{\bf 1}(a_{n}^{-1}|{\bf X}_{-k-j}|>x) \big{)}\]
\[\mathbf{1}(a_{n}^{-1}M_{-k-j+1}^{|\mathbf{X}|}\leq x)\big{]}\,.\]
Now, a similar argument as for \(I_{n1}\) shows that \(|I_{n2}|\) is negligible by first letting \(n\to\infty\) and then \(k\to\infty\), and a similar argument also applies to \(|I_{n3}|\). Thus we proved (3.7).
Recalling (3.5)-(3.7), it remains to characterize the limit \(\Psi(\mathbf{u},x)\):
\[\log\Psi(\mathbf{u},x)\] \[= \lim_{k\to\infty}\lim_{n\to\infty}n\left(\mathbb{E}\big{[}\mathrm{ e}^{\,i\,a_{n}^{-1}\mathbf{u}^{\top}\mathbf{S}_{k}}\,\mathbf{1}(a_{n}^{-1}M_{k}^{| \mathbf{X}|}\leq x)\big{]}-\mathbb{E}\big{[}\mathrm{e}^{\,i\,a_{n}^{-1}\mathbf{ u}^{\top}\mathbf{S}_{k-1}}\,\mathbf{1}(a_{n}^{-1}M_{k-1}^{|\mathbf{X}|}\leq x )\big{]}\right)\] \[= \lim_{k\to\infty}\lim_{n\to\infty}n\,\mathbb{E}\big{[}\mathrm{e}^ {\,i\,a_{n}^{-1}\mathbf{u}^{\top}(\mathbf{X}_{0}+\mathbf{S}_{k})}\,\mathbf{1} \big{(}a_{n}^{-1}(|\mathbf{X}_{0}|\lor M_{k}^{|\mathbf{X}|})\leq x\big{)}\] \[\qquad\qquad-\mathrm{e}^{\,i\,a_{n}^{-1}\mathbf{u}^{\top}\mathbf{ S}_{k}}\,\mathbf{1}\big{(}a_{n}^{-1}M_{k}^{|\mathbf{X}|}\leq x\big{)}\big{]}\] \[= \lim_{k\to\infty}\lim_{n\to\infty}n\left(\mathbb{E}\big{[} \mathrm{e}^{\,i\,a_{n}^{-1}\mathbf{u}^{\top}(\mathbf{X}_{0}+\mathbf{S}_{k})} \big{]}-\mathbb{E}\big{[}\mathrm{e}^{\,i\,a_{n}^{-1}\mathbf{u}^{\top}\mathbf{ S}_{k}}\big{]}\right)\] \[\quad+\lim_{k\to\infty}\lim_{n\to\infty}n\,\mathbb{E}\big{[} \mathrm{e}^{\,i\,a_{n}^{-1}\mathbf{u}^{\top}\mathbf{S}_{k}}\,\mathbf{1}\big{(} a_{n}^{-1}M_{k}^{|\mathbf{X}|}>x\big{)}\] \[\qquad\qquad\qquad-\mathrm{e}^{\,i\,a_{n}^{-1}\mathbf{u}^{\top}( \mathbf{X}_{0}+\mathbf{S}_{k})}\,\mathbf{1}\big{(}a_{n}^{-1}(|\mathbf{X}_{0}| \lor M_{k}^{|\mathbf{X}|})>x\big{)}\big{]}\] \[=: I_{4}+I_{5}\,,\qquad\mathbf{u}\in\mathbb{R}^{d}\,,\qquad x>0\,.\]
We will employ the regular variation and stationarity of \((\mathbf{X}_{t})\) to deal with \(I_{4}\) and \(I_{5}\). In the last part of the proof we will show that
\[I_{4}=\log\varphi_{\mathbf{\xi}_{\alpha}}(\mathbf{u})\,. \tag{3.11}\]
As regards \(I_{5}\), we observe that as \(n\to\infty\),
\[n\,\mathbb{E}\big{[}\mathrm{e}^{\,i\,a_{n}^{-1}\mathbf{u}^{\top} \mathbf{S}_{k}}\,\mathbf{1}\big{(}a_{n}^{-1}M_{k}^{|\mathbf{X}|}>x\big{)}- \mathrm{e}^{\,i\,a_{n}^{-1}\mathbf{u}^{\top}(\mathbf{X}_{0}+\mathbf{S}_{k})} \,\mathbf{1}\big{(}a_{n}^{-1}(|\mathbf{X}_{0}|\lor M_{k}^{|\mathbf{X}|})>x \big{)}\big{]}\] \[= \int_{\mathbb{R}^{dk}}\mathrm{e}^{\,i\,\mathbf{u}^{\top}\sum_{t =0}^{k-1}\mathbf{x}_{t}}\,\mathbf{1}\Big{(}\max_{0\leq t\leq k-1}|\mathbf{x}_ {t}|>x\Big{)}\,\big{[}n\,\mathbb{P}(a_{n}^{-1}(\mathbf{X}_{0},\ldots,\mathbf{ X}_{k-1})\in d\mathbf{x})\big{]}\] \[-\int_{\mathbb{R}^{dk+1}}\mathrm{e}^{\,i\,\mathbf{u}^{\top}\sum_{t =0}^{k}\mathbf{x}_{t}}\,\mathbf{1}\Big{(}\max_{0\leq t\leq k}|\mathbf{x}_{t}|>x \Big{)}\,\big{[}n\,\mathbb{P}(a_{n}^{-1}(\mathbf{X}_{0},\ldots,\mathbf{X}_{k}) \in d\mathbf{x})\big{]}\] \[\to \int_{\mathbb{R}^{dk}}\mathrm{e}^{\,i\,\mathbf{u}^{\top}\sum_{t =0}^{k-1}\mathbf{x}_{t}}\,\mathbf{1}\Big{(}\max_{0\leq t\leq k-1}|\mathbf{x}_ {t}|>x\Big{)}\,\mu_{k-1}(d\mathbf{x})\] \[-\int_{\mathbb{R}^{dk+1}}\mathrm{e}^{\,i\,\mathbf{u}^{\top}\sum_{t =0}^{k}\mathbf{x}_{t}}\,\mathbf{1}\Big{(}\max_{0\leq t\leq k}|\mathbf{x}_{t}|>x \Big{)}\,\mu_{k}(d\mathbf{x})\] \[= \int_{\mathbb{R}^{dk+1}}\Big{[}\mathrm{e}^{\,i\,\mathbf{u}^{\top} \sum_{t=1}^{k}\mathbf{x}_{t}}\,\mathbf{1}\Big{(}\max_{1\leq t\leq k}|\mathbf{x} _{t}|>x\Big{)}-\mathrm{e}^{\,i\mathbf{u}^{\top}\sum_{t=0}^{k}\mathbf{x}_{t}}\, \mathbf{1}\Big{(}\max_{0\leq t\leq k}|\mathbf{x}_{t}|>x\Big{)}\,\Big{]}\,\mu_{k} (d\mathbf{x})\,.\]
In the limit relation we used the vague convergence to the tail measure \(\mu_{k}(\cdot)\) in \(\mathbb{R}^{d(k+1)}_{\mathbf{0}}\) defined in (2.6) for \(k=0,1,\ldots\), and the facts that the integrands vanish in some neighborhood of the origin and are continuous with respect to any homogeneous measure. The last identity follows from the consistency property of the tail measures \((\mu_{k})\). Now, expressing the tail measures in terms of the tail process \((Y\,\mathbf{\Theta}_{t})\) of \((\mathbf{X}_{t})\) and using dominated convergence, \(I_{5}\) turns into
\[I_{5} = \lim_{k\to\infty}\int_{0}^{\infty}\mathbb{E}\Big{[}\mathrm{e}^{\,i\, y\,\mathbf{u}^{\top}\sum_{t=1}^{k}\mathbf{\Theta}_{t}}\,\mathbf{1}\Big{(}y\,\max_{1 \leq t\leq k}|\mathbf{\Theta}_{t}|>x\Big{)}\] \[\qquad\qquad-\mathrm{e}^{\,iy\,\mathbf{u}^{\top}\sum_{t=0}^{k} \mathbf{\Theta}_{t}}\,\mathbf{1}\Big{(}y\,\max_{0\leq t\leq k}|\mathbf{ \Theta}_{t}|>x\Big{)}\Big{]}\,d\big{(}-y^{-\alpha}\big{)}\] \[= \int_{0}^{\infty}\mathbb{E}\Big{[}\mathrm{e}^{\,iy\,\mathbf{u}^{ \top}\sum_{t=1}^{\infty}\mathbf{\Theta}_{t}}\,\mathbf{1}\Big{(}y\,\max_{t \geq 1}|\mathbf{\Theta}_{t}|>x\Big{)}\] \[\qquad\qquad-\mathrm{e}^{\,i\,y\mathbf{u}^{\top}\sum_{t=0}^{\infty} \mathbf{\Theta}_{t}}\,\mathbf{1}\Big{(}y\,\max_{t\geq 0}|\mathbf{\Theta}_{t}|>x\Big{)}\Big{]}\,d \big{(}-y^{-\alpha}\big{)}\,.\]
Using the definition of \({\mathbf{Q}}={\boldsymbol{\Theta}}/\|{\boldsymbol{\Theta}}\|_{\alpha}\) together with the time-change formula (2.7), the right-hand side can be written as
\[I_{5} = \int_{0}^{\infty}\sum_{j\in{\mathbb{Z}}}{\mathbb{E}}\Big{[}|{\boldsymbol{\Theta}}_{j}|^{\alpha}\Big{(}{\rm e}^{\,i\,y\,{\mathbf{u}}^{\top}\sum_{t=1}^{\infty}{\mathbf{Q}}_{t}}\,{\mathbf{1}}\Big{(}y\,\max_{t\geq 1}|{\mathbf{Q}}_{t}|>x\Big{)}\] \[\qquad\qquad\qquad-{\rm e}^{\,i\,y\,{\mathbf{u}}^{\top}\sum_{t=0}^{\infty}{\mathbf{Q}}_{t}}\,{\mathbf{1}}\Big{(}y\,\max_{t\geq 0}|{\mathbf{Q}}_{t}|>x\Big{)}\Big{)}\Big{]}\,d\big{(}-y^{-\alpha}\big{)}\] \[= \int_{0}^{\infty}\sum_{j\in{\mathbb{Z}}}{\mathbb{E}}\Big{[}\Big{(}{\rm e}^{\,i\,y\,{\mathbf{u}}^{\top}\sum_{t=-j+1}^{\infty}{\mathbf{Q}}_{t}}\,{\mathbf{1}}\Big{(}y\,\max_{t\geq-j+1}|{\mathbf{Q}}_{t}|>x\Big{)}\] \[\qquad\qquad\qquad-{\rm e}^{\,i\,y\,{\mathbf{u}}^{\top}\sum_{t=-j}^{\infty}{\mathbf{Q}}_{t}}\,{\mathbf{1}}\Big{(}y\,\max_{t\geq-j}|{\mathbf{Q}}_{t}|>x\Big{)}\Big{)}\Big{]}\,d\big{(}-y^{-\alpha}\big{)}\] \[= -\int_{0}^{\infty}{\mathbb{E}}\Big{[}{\rm e}^{\,i\,y\,{\mathbf{u}}^{\top}\sum_{t=-\infty}^{\infty}{\mathbf{Q}}_{t}}\,{\mathbf{1}}\Big{(}y\,\max_{t\in{\mathbb{Z}}}|{\mathbf{Q}}_{t}|>x\Big{)}\Big{]}\,d(-y^{-\alpha})\,.\]
Thus we proved that
\[\log\Psi({\mathbf{u}},x) = I_{4}+I_{5}\] \[= \log\varphi_{{\boldsymbol{\xi}}_{\alpha}}({\mathbf{u}})-\int_{0}^ {\infty}{\mathbb{E}}\Big{[}{\rm e}^{\,i\,y\,{\mathbf{u}}^{\top}\sum_{t=-\infty }^{\infty}{\mathbf{Q}}_{t}}\,{\mathbf{1}}\Big{(}y\,\max_{t\in{\mathbb{Z}}}|{ \mathbf{Q}}_{t}|>x\Big{)}\Big{]}\,d(-y^{-\alpha})\,.\]
This is the logarithm of the desired limit hybrid characteristic function in terms of \({\mathbf{Q}}\). Changing variables and appealing to the definition of \(\widetilde{{\mathbf{Q}}}\) in (2.4), we also have
\[\Psi({\mathbf{u}},x)\] \[= \varphi_{{\boldsymbol{\xi}}_{\alpha}}({\mathbf{u}})\,\exp\Big{(}-\int_{0}^{\infty}{\mathbb{E}}\Big{[}{\rm e}^{\,i\,y\,{\mathbf{u}}^{\top}\sum_{t=-\infty}^{\infty}{\mathbf{Q}}_{t}}\,{\mathbf{1}}\Big{(}y\,\max_{t\in{\mathbb{Z}}}|{\mathbf{Q}}_{t}|>x\Big{)}\Big{]}\,d(-y^{-\alpha})\Big{)}\] \[= \varphi_{{\boldsymbol{\xi}}_{\alpha}}({\mathbf{u}})\,\Phi_{\alpha}^{\theta_{|{\mathbf{X}}|}}(x)\,\exp\Big{(}-\int_{0}^{\infty}{\mathbb{E}}\Big{[}\Big{(}{\rm e}^{\,i\,y\,{\mathbf{u}}^{\top}\sum_{t=-\infty}^{\infty}{\mathbf{Q}}_{t}}-1\Big{)}\,{\mathbf{1}}\Big{(}y\,\max_{t\in{\mathbb{Z}}}|{\mathbf{Q}}_{t}|>x\Big{)}\Big{]}\,d(-y^{-\alpha})\Big{)}\] \[= \varphi_{{\boldsymbol{\xi}}_{\alpha}}({\mathbf{u}})\,\Phi_{\alpha}^{\theta_{|{\mathbf{X}}|}}(x)\,\exp\Big{(}-\theta_{|{\mathbf{X}}|}\int_{x}^{\infty}{\mathbb{E}}\big{[}{\rm e}^{\,i\,y\,{\mathbf{u}}^{\top}\sum_{t=-\infty}^{\infty}\widetilde{{\mathbf{Q}}}_{t}}-1\big{]}\,d(-y^{-\alpha})\Big{)}\,,\]
and the desired result follows.
_Proof of (3.11)._ An application of the continuous mapping theorem for regularly varying random vectors ensures that \({\mathbf{S}}_{k}\) inherits regular variation with index \(\alpha\): for \(k\geq 1\),
\[n\,{\mathbb{P}}(a_{n}^{-1}{\mathbf{S}}_{k}\in\cdot)\ \stackrel{{ v}}{{\to}}\ \ \widetilde{\mu}_{{\mathbf{S}}_{k}}(\cdot):=\mu_{k-1}\big{(}\big{\{}({ \mathbf{x}}_{0},\ldots,{\mathbf{x}}_{k-1})\in{\mathbb{R}}^{dk}:\,{\mathbf{x}}_{0} +\cdots+{\mathbf{x}}_{k-1}\in\cdot\big{\}}\big{)}\,,\qquad n\to\infty\,.\]
Therefore the distribution of \({\mathbf{S}}_{k}\) belongs to the domain of attraction of an \(\alpha\)-stable law (possibly degenerate), hence there exists an \(\alpha\)-stable random vector \({\boldsymbol{\xi}}_{\alpha}^{(k)}\) such that for iid copies \(({\mathbf{S}}_{k,i})\) of \({\mathbf{S}}_{k}\),
\[(a_{n}^{(k)})^{-1}\sum_{i=1}^{n}{\mathbf{S}}_{k,i}\stackrel{{ d}}{{\to}}{\boldsymbol{\xi}}_{\alpha}^{(k)}\,,\]
where
\[a_{n}^{(k)}=a_{n}\,\big{[}\mu_{k-1}\big{(}\big{\{}({\mathbf{x}}_{0},\ldots,{ \mathbf{x}}_{k-1})\in{\mathbb{R}}^{dk}:\,|{\mathbf{x}}_{0}+\cdots+{\mathbf{x}} _{k-1}|>1\big{\}}\big{)}\big{]}^{1/\alpha}=:a_{n}\,C_{k}^{1/\alpha}\,,\]
such that
\[n\,{\mathbb{P}}(|{\mathbf{S}}_{k}|>a_{n}^{(k)}) \sim n\,{\mathbb{P}}(|{\mathbf{X}}|>a_{n}^{(k)})\,C_{k}\to 1\,,\qquad n\to\infty\,.\]
Since \(\mathbf{S}_{k}\) is regularly varying one can also define its tail measure as the vague limit
\[\frac{\mathbb{P}(x^{-1}\mathbf{S}_{k}\in\cdot)}{\mathbb{P}(|\mathbf{S}_{k}|>x)}= \frac{\mathbb{P}(|\mathbf{X}|>x)}{\mathbb{P}(|\mathbf{S}_{k}|>x)}\frac{\mathbb{ P}(x^{-1}\mathbf{S}_{k}\in\cdot)}{\mathbb{P}(|\mathbf{X}|>x)}\stackrel{{ v}}{{\to}}\mu_{\mathbf{S}_{k}}(\cdot)=C_{k}^{-1} \widetilde{\mu}_{\mathbf{S}_{k}}(\cdot)\,.\]
An application of Lemma 3.5 in Petrov [24] yields the equivalent relation
\[n\left(\varphi_{a_{n}^{-1}\mathbf{S}_{k}}(\mathbf{u})-1\right)=n\left( \varphi_{(a_{n}^{(k)}/a_{n})\,(a_{n}^{(k)})^{-1}\mathbf{S}_{k}}(\mathbf{u})-1 \right)\to\log\varphi_{C_{k}^{1/\alpha}\mathbf{\xi}_{\alpha}^{(k)}}(\mathbf{u })\,,\qquad\mathbf{u}\in\mathbb{R}^{d}\,. \tag{3.12}\]
The log-characteristic function of the \(\alpha\)-stable \(\mathbf{\xi}_{\alpha}^{(k)}\) can be written as
\[\log\varphi_{C_{k}^{1/\alpha}\mathbf{\xi}_{\alpha}^{(k)}}(\mathbf{ u}) = \log\varphi_{\mathbf{\xi}_{\alpha}^{(k)}}(C_{k}^{1/\alpha}\, \mathbf{u})\] \[= \int_{\mathbb{R}_{0}^{d}}\left(\mathrm{e}^{i\,C_{k}^{1/\alpha}\, \mathbf{u}^{\top}\,\mathbf{y}}-1-i\,C_{k}^{1/\alpha}\,\mathbf{u}^{\top}\, \mathbf{y}\,\,\mathbf{1}_{(1,2)}(\alpha)\right)\mu_{\mathbf{S}_{k}}(d\mathbf{ y})\] \[=: -c_{\alpha}\,\sigma_{k}^{\alpha}(C_{k}^{1/\alpha}\,\mathbf{u}) \left(1-i\,\beta_{k}(C_{k}^{1/\alpha}\,\mathbf{u})\,\tan(\alpha\,\pi/2)\right),\qquad\mathbf{u}\in\mathbb{R}^{d}\,,\]
where, by homogeneity of the tail measure,
\[\sigma_{k}^{\alpha}(C_{k}^{1/\alpha}\,\mathbf{u}) := \mu_{\mathbf{S}_{k}}\big{(}\big{\{}\mathbf{x}\in\mathbb{R}^{d}:\, |C_{k}^{1/\alpha}\,\mathbf{u}^{\top}\mathbf{x}|>1\big{\}}\big{)}=\widetilde{ \mu}_{\mathbf{S}_{k}}\big{(}\big{\{}\mathbf{x}\in\mathbb{R}^{d}:\,|\mathbf{u}^ {\top}\mathbf{x}|>1\big{\}}\big{)}\,,\] \[\beta_{k}(C_{k}^{1/\alpha}\mathbf{u}) := \frac{\widetilde{\mu}_{\mathbf{S}_{k}}\big{(}\big{\{}\mathbf{x} \in\mathbb{R}^{d}:\,\mathbf{u}^{\top}\mathbf{x}>1\big{\}}\big{)}-\widetilde{ \mu}_{\mathbf{S}_{k}}\big{(}\big{\{}\mathbf{x}\in\mathbb{R}^{d}:\,\mathbf{u}^{ \top}\mathbf{x}<-1\big{\}}\big{)}}{\widetilde{\mu}_{\mathbf{S}_{k}}\big{(} \big{\{}\mathbf{x}\in\mathbb{R}^{d}:\,|\mathbf{u}^{\top}\mathbf{x}|>1\big{\}} \big{)}}\,.\]
Then, using the definition of \(\widetilde{\mu}_{\mathbf{S}_{k}}\) and the consistency of the tail measures \((\mu_{k})\), we have
\[\sigma_{k}^{\alpha}(C_{k}^{1/\alpha}\mathbf{u}) = \mu_{k-1}\big{(}\big{\{}(\mathbf{x}_{0},\ldots,\mathbf{x}_{k-1}) \in\mathbb{R}^{dk}:|\mathbf{u}^{\top}(\mathbf{x}_{0}+\cdots+\mathbf{x}_{k-1}) |>1\big{\}}\big{)}\] \[= \mu_{k}\big{(}\big{\{}(\mathbf{x}_{0},\ldots,\mathbf{x}_{k})\in \mathbb{R}^{d(k+1)}:|\mathbf{u}^{\top}(\mathbf{x}_{0}+\cdots+\mathbf{x}_{k-1}) |>1\big{\}}\,,\mathbf{x}_{k}\in\mathbb{R}^{d}\big{\}}\big{)}\] \[= \mu_{k}\big{(}\big{\{}(\mathbf{x}_{0},\ldots,\mathbf{x}_{k})\in \mathbb{R}^{d(k+1)}:|\mathbf{u}^{\top}(\mathbf{x}_{1}+\cdots+\mathbf{x}_{k}) |>1\big{\}}\,,\mathbf{x}_{0}\in\mathbb{R}^{d}\big{\}}\big{)}\,,\]
and the quantities \(\beta_{k}(\mathbf{u})\) can be expressed similarly. Keeping this remark in mind and exploiting (3.12), we have as \(n\to\infty\),
\[I_{4} = n\left(\varphi_{a_{n}^{-1}\mathbf{S}_{k+1}}(\mathbf{u})-\varphi_{a_{n}^{-1}\mathbf{S}_{k}}(\mathbf{u})\right)\] \[\to \log\varphi_{C_{k+1}^{1/\alpha}\mathbf{\xi}_{\alpha}^{(k+1)}}(\mathbf{u})-\log\varphi_{C_{k}^{1/\alpha}\mathbf{\xi}_{\alpha}^{(k)}}(\mathbf{u})\] \[= -c_{\alpha}\,\Big{(}\underbrace{\big{(}\sigma_{k+1}^{\alpha}(C_{k+1}^{1/\alpha}\mathbf{u})-\sigma_{k}^{\alpha}(C_{k}^{1/\alpha}\mathbf{u})\big{)}}_{=:\Delta_{1}(k)}\] \[\qquad\quad-i\,\tan(\alpha\,\pi/2)\underbrace{\big{(}\beta_{k+1}(C_{k+1}^{1/\alpha}\mathbf{u})\sigma_{k+1}^{\alpha}(C_{k+1}^{1/\alpha}\mathbf{u})-\beta_{k}(C_{k}^{1/\alpha}\mathbf{u})\sigma_{k}^{\alpha}(C_{k}^{1/\alpha}\mathbf{u})\big{)}}_{=:\Delta_{2}(k)}\Big{)}\,.\]
Our next goal is to identify \(\Delta_{1}(k)\) and \(\Delta_{2}(k)\) as follows:
\[\Delta_{1}(k) = \mathbb{E}\Big{[}\Big{|}\mathbf{u}^{\top}\sum_{i=0}^{k}\mathbf{ \Theta}_{i}\Big{|}^{\alpha}-\Big{|}\mathbf{u}^{\top}\sum_{i=1}^{k}\mathbf{ \Theta}_{i}\Big{|}^{\alpha}\Big{]}\,,\] \[\Delta_{2}(k) = \mathbb{E}\Big{[}\Big{(}\Big{(}\mathbf{u}^{\top}\sum_{i=0}^{k} \mathbf{\Theta}_{i}\Big{)}_{+}^{\alpha}-\Big{(}\mathbf{u}^{\top}\sum_{i=1}^{k} \mathbf{\Theta}_{i}\Big{)}_{+}^{\alpha}\Big{)}-\Big{(}\Big{(}\mathbf{u}^{\top} \sum_{i=0}^{k}\mathbf{\Theta}_{i}\Big{)}_{-}^{\alpha}-\Big{(}\mathbf{u}^{\top} \sum_{i=1}^{k}\mathbf{\Theta}_{i}\Big{)}_{-}^{\alpha}\Big{)}\Big{]}\,.\]
We give the details only for \(\Delta_{1}(k)\). Rewriting the tail measures \(\mu_{k}\) in terms of the tail process \((Y\,\boldsymbol{\Theta}_{t})\), we have
\[\Delta_{1}(k) = \int_{\mathbb{R}_{\boldsymbol{0}}^{d(k+1)}}\Big{(}\boldsymbol{1}\Big{(}\Big{|}\mathbf{u}^{\top}\sum_{i=0}^{k}\mathbf{x}_{i}\Big{|}>1\Big{)}-\boldsymbol{1}\Big{(}\Big{|}\mathbf{u}^{\top}\sum_{i=1}^{k}\mathbf{x}_{i}\Big{|}>1\Big{)}\Big{)}\,\boldsymbol{1}(|\mathbf{x}_{0}|>0)\,\mu_{k}(d\mathbf{x})\] \[= \int_{0}^{\infty}\Big{(}\mathbb{P}\Big{(}y\,\Big{|}\mathbf{u}^{\top}\sum_{i=0}^{k}\boldsymbol{\Theta}_{i}\Big{|}>1\Big{)}-\mathbb{P}\Big{(}y\,\Big{|}\mathbf{u}^{\top}\sum_{i=1}^{k}\boldsymbol{\Theta}_{i}\Big{|}>1\Big{)}\Big{)}\,d(-y^{-\alpha})\] \[= \mathbb{E}\Big{[}\Big{|}\mathbf{u}^{\top}\sum_{i=0}^{k}\boldsymbol{\Theta}_{i}\Big{|}^{\alpha}-\Big{|}\mathbf{u}^{\top}\sum_{i=1}^{k}\boldsymbol{\Theta}_{i}\Big{|}^{\alpha}\Big{]}\,.\]
Our next step is to show that the following limits exist
\[\Delta_{1}:=\lim_{k\to\infty}\Delta_{1}(k) = \mathbb{E}\Big{[}\Big{|}\mathbf{u}^{\top}\sum_{i=0}^{\infty}\boldsymbol{\Theta}_{i}\Big{|}^{\alpha}-\Big{|}\mathbf{u}^{\top}\sum_{i=1}^{\infty}\boldsymbol{\Theta}_{i}\Big{|}^{\alpha}\Big{]}\] \[\Delta_{2}:=\lim_{k\to\infty}\Delta_{2}(k) = \mathbb{E}\Big{[}\Big{(}\Big{(}\mathbf{u}^{\top}\sum_{i=0}^{\infty}\boldsymbol{\Theta}_{i}\Big{)}_{+}^{\alpha}-\Big{(}\mathbf{u}^{\top}\sum_{i=1}^{\infty}\boldsymbol{\Theta}_{i}\Big{)}_{+}^{\alpha}\Big{)}\] \[-\Big{(}\Big{(}\mathbf{u}^{\top}\sum_{i=0}^{\infty}\boldsymbol{\Theta}_{i}\Big{)}_{-}^{\alpha}-\Big{(}\mathbf{u}^{\top}\sum_{i=1}^{\infty}\boldsymbol{\Theta}_{i}\Big{)}_{-}^{\alpha}\Big{)}\Big{]}\,. \tag{3.13}\]
Then the limit
\[\log\varphi_{C_{k+1}^{1/\alpha}\,\boldsymbol{\xi}_{\alpha}^{(k+1)}}(\mathbf{u})-\log\varphi_{C_{k}^{1/\alpha}\,\boldsymbol{\xi}_{\alpha}^{(k)}}(\mathbf{u})\to\log\varphi_{\boldsymbol{\xi}_{\alpha}}(\mathbf{u})\,,\qquad k\to\infty\,,\]
exists and is equal to \(-c_{\alpha}(\Delta_{1}-i\,\tan(\alpha\,\pi/2)\Delta_{2})\). This expression agrees with the characteristic function \(\varphi_{\boldsymbol{\xi}_{\alpha}}\) given in (3.2), (3.3). We restrict ourselves to the calculations for \(\Delta_{1}\); the case \(\Delta_{2}\) is similar. Using the following consequence of the mean value theorem
\[|a+b|^{\alpha}-|b|^{\alpha}\leq(\alpha\lor 1)(|a|+|b|)^{(\alpha-1)\lor 0}|a|^{ \alpha\wedge 1}\]
and the fact that \(|\boldsymbol{\Theta}_{0}|=1\), we have for fixed \(\mathbf{u}\in\mathbb{R}^{d}\),
\[|\Delta_{1}(k)| \leq c\,\mathbb{E}\Big{[}\Big{(}1+\Big{|}\sum_{i=1}^{k}\mathbf{u}^{\top}\boldsymbol{\Theta}_{i}\Big{|}\Big{)}^{(\alpha-1)\lor 0}\Big{]}\] \[\leq c\,\mathbb{E}\Big{[}\Big{(}\sum_{i=0}^{\infty}|\boldsymbol{\Theta}_{i}|\Big{)}^{(\alpha-1)\lor 0}\Big{]}<\infty\,. \tag{3.14}\]
The right-hand side is trivially finite for \(\alpha\in(0,1)\) while for \(\alpha\in(1,2)\) we employ the anti-clustering condition (2.2); see Lemma A.1. In view of (3.14) we are allowed to use dominated convergence and conclude that (3.13) holds. In the limit we introduce the spectral cluster process \(\mathbf{Q}=\boldsymbol{\Theta}/\|\boldsymbol{\Theta}\|_{\alpha}\):
\[\Delta_{1} = \mathbb{E}\Big{[}\|\boldsymbol{\Theta}\|_{\alpha}^{\alpha}\left( \Big{|}\mathbf{u}^{\top}\sum_{i=0}^{\infty}\frac{\boldsymbol{\Theta}_{i}}{\| \boldsymbol{\Theta}\|_{\alpha}}\Big{|}^{\alpha}-\Big{|}\mathbf{u}^{\top}\sum_{ i=1}^{\infty}\frac{\boldsymbol{\Theta}_{i}}{\|\boldsymbol{\Theta}\|_{\alpha}} \Big{|}^{\alpha}\right)\Big{]}\] \[= \sum_{t\in\mathbb{Z}}\mathbb{E}\Big{[}\Big{|}\mathbf{u}^{\top} \sum_{i=-t}^{\infty}\frac{\boldsymbol{\Theta}_{i}}{\|\boldsymbol{\Theta}\|_{ \alpha}}\Big{|}^{\alpha}-\Big{|}\mathbf{u}^{\top}\sum_{i=1-t}^{\infty}\frac{ \boldsymbol{\Theta}_{i}}{\|\boldsymbol{\Theta}\|_{\alpha}}\Big{|}^{\alpha} \Big{]}\,.\]
The last identity follows by repeated application of the time-change formula (2.7). We observe that we have a telescoping sum structure, resulting in
\[\Delta_{1} = \mathbb{E}\Big{[}\Big{|}\mathbf{u}^{\top}\sum_{i\in\mathbb{Z}}\frac {\boldsymbol{\Theta}_{i}}{\|\boldsymbol{\Theta}\|_{\alpha}}\Big{|}^{\alpha} \Big{]}=\mathbb{E}\Big{[}\Big{|}\mathbf{u}^{\top}\sum_{i\in\mathbb{Z}}\mathbf{ Q}_{i}\Big{|}^{\alpha}\Big{]}\,.\]
A similar argument yields the identity
\[\mathbb{E}\Big{[}\Big{(}\sum_{i\in\mathbb{Z}}|\mathbf{Q}_{i}|\Big{)}^{\alpha }\Big{]}=\mathbb{E}\Big{[}\Big{(}\sum_{i=0}^{\infty}|\boldsymbol{\Theta}_{i}| \Big{)}^{\alpha}-\Big{(}\sum_{i=1}^{\infty}|\boldsymbol{\Theta}_{i}|\Big{)}^{ \alpha}\Big{]}\,. \tag{3.15}\]
In particular, the expectation on the left-hand side is finite.
### Joint convergence of sums, maxima and \(\ell^{p}\)-norms
Our next goal is to prove joint convergence of \(a_{n}^{-1}(\mathbf{S}_{n},M_{n}^{|\mathbf{X}|},\gamma_{n,p})\). We work under the following mixing condition slightly stronger than (2.1):
\[\Psi_{n,p}(\mathbf{u},x,\lambda) := \mathbb{E}\Big{[}\exp\big{(}i\,a_{n}^{-1}\mathbf{u}^{\top} \mathbf{S}_{n}-a_{n}^{-p}\lambda\gamma_{n,p}^{p}\big{)}\,\mathbf{1}\big{(}a_{n }^{-1}M_{n}^{|\mathbf{X}|}\leq x\big{)}\Big{]}\] \[= \Big{(}\mathbb{E}\Big{[}\exp\big{(}i\,a_{n}^{-1}\mathbf{u}^{\top }\mathbf{S}_{r_{n}}-a_{n}^{-p}\lambda\gamma_{r_{n},p}^{p}\big{)}\,\mathbf{1} \big{(}a_{n}^{-1}M_{r_{n}}^{|\mathbf{X}|}\leq x\big{)}\Big{]}\Big{)}^{k_{n}}+o( 1), \tag{3.16}\] \[\qquad n\to\infty,\qquad(\mathbf{u},x,\lambda)\in\mathbb{R}^{d} \times\mathbb{R}^{2}_{+}\,.\]
This mixing condition can be derived by strong mixing or coupling arguments; see Section 5 below.
**Theorem 3.2**.: _Assume the conditions of Theorem 3.1 and the mixing condition (3.16). Then, with the notation of the latter result, for \(\alpha<p\),_
\[a_{n}^{-1}(\mathbf{S}_{n},M_{n}^{|\mathbf{X}|},\gamma_{n,p})\stackrel{{ d}}{{\to}}(\boldsymbol{\xi}_{\alpha},\eta_{\alpha},\zeta_{\alpha,p})\,, \qquad n\to\infty\,,\]
_where the joint distribution is described by the joint hybrid characteristic function-Laplace transform of \((\boldsymbol{\xi}_{\alpha},\eta_{\alpha},\zeta_{\alpha,p}^{p})\) given by_
\[\mathbb{E}\big{[}\mathrm{e}^{\,i\,\mathbf{u}^{\top}\, \boldsymbol{\xi}_{\alpha}}\,\mathbf{1}(\eta_{\alpha}\leq x)\,\mathrm{e}^{\,- \lambda\,\zeta_{\alpha,p}^{p}}\big{]}\] \[= \exp\Big{(}\int_{0}^{\infty}\mathbb{E}\Big{[}\mathrm{e}^{\,i\,y \,\mathbf{u}^{\top}\sum_{t=-\infty}^{\infty}\mathbf{Q}_{t}-y^{p}\lambda\sum_{ t=-\infty}^{\infty}|\mathbf{Q}_{t}|^{p}}\,\mathbf{1}\Big{(}y\,\max_{t\in \mathbb{Z}}|\mathbf{Q}_{t}|\leq x\Big{)}\] \[-1-i\,y\,\mathbf{u}^{\top}\sum_{t\in\mathbb{Z}}\mathbf{Q}_{t}\, \mathbf{1}_{(1,2)}(\alpha)\Big{]}\,d(-y^{-\alpha})\Big{)}\,,\qquad\mathbf{u} \in\mathbb{R}^{d},\,x>0,\,\lambda>0\,. \tag{3.17}\]
Analogous results about the joint convergence of maxima, partial sums and finitely many \(\gamma_{n,p_{k}}\) for \(p_{k}>0\) can be proved by similar techniques as below. In Theorem 3.1 the joint convergence of sums and maxima was described via the convergence of hybrid characteristic functions. Here we appeal to the convergence of an extension of these: we use the point-wise convergence of the joint hybrid characteristic function-Laplace transform of \(a_{n}^{-1}(\mathbf{S}_{n},M_{n}^{|\mathbf{X}|})\) and \(a_{n}^{-p}\gamma_{n,p}^{p}\).
**Remark 3.3**.: Notice that \(\mathbb{E}[\|\mathbf{Q}\|_{p}^{\alpha}]<\infty\) for \(\alpha/p<1\) since, by concavity of the function \(x\mapsto x^{\alpha/p}\), \(x>0\), we obtain \(\|\mathbf{Q}\|_{p}^{\alpha}\leq\|\mathbf{Q}\|_{\alpha}^{\alpha}=1\) a.s. Therefore the limit Laplace transform of \((a_{n}^{-p}\gamma_{n,p}^{p})\) is well defined. We consider a Fréchet random variable \(Y_{p}\) with index \(p>0\) which is independent of \(\mathbf{Q}\). Then the distribution of \(\zeta_{\alpha,p}\) is provided by the Laplace transform
\[\mathbb{E}\big{[}\mathrm{e}\,^{-\lambda\,\zeta_{\alpha,p}^{p}}\big{]} = \exp\Big{(}\int_{0}^{\infty}\mathbb{E}\Big{[}\mathrm{e}\,^{-y^{p}\lambda\sum_{t=-\infty}^{\infty}|\mathbf{Q}_{t}|^{p}}-1\Big{]}\,d(-y^{-\alpha})\Big{)}\] \[= \exp\Big{(}-\int_{0}^{\infty}\mathbb{E}\big{[}\mathbb{P}\big{(}y\,\lambda^{1/p}\,Y_{p}\,\|\mathbf{Q}\|_{p}>1\mid\mathbf{Q}\big{)}\big{]}\,d(-y^{-\alpha})\Big{)}\] \[= \exp\big{(}-\Gamma(1-\alpha/p)\,\mathbb{E}[\|\mathbf{Q}\|_{p}^{\alpha}]\,\lambda^{\alpha/p}\big{)}\,,\qquad\lambda>0\,.\]

Proof.: Let \((Y_{p,t})_{t\in\mathbb{Z}}\) be iid Fréchet random variables with index \(p\), independent of \((\mathbf{X}_{t})\), and write \(\widetilde{M}_{r_{n}}=\max_{1\leq t\leq r_{n}}Y_{p,t}\,|\mathbf{X}_{t}|\). Since \(\mathbb{P}(Y_{p,t}\leq y)=\mathrm{e}\,^{-y^{-p}}\), \(y>0\), conditioning on \((\mathbf{X}_{t})\) yields

\[\mathbb{E}\big{[}\mathrm{e}\,^{i\,a_{n}^{-1}\mathbf{u}^{\top}\mathbf{S}_{r_{n}}-a_{n}^{-p}\lambda\gamma_{r_{n},p}^{p}}\,\mathbf{1}\big{(}a_{n}^{-1}M_{r_{n}}^{|\mathbf{X}|}\leq x\big{)}\big{]} = \mathbb{E}\big{[}\mathrm{e}\,^{i\,a_{n}^{-1}\,\mathbf{u}^{\top}\,\mathbf{S}_{r_{n}}}\,\mathbf{1}\big{(}a_{n}^{-1}\widetilde{M}_{r_{n}}\leq\lambda^{-1/p}\,,a_{n}^{-1}M_{r_{n}}^{|\mathbf{X}|}\leq x\big{)}\big{]}\,.\]
We interpret the right-hand side as hybrid characteristic function of the process
\[\mathbf{Z}_{t}:=(\mathbf{X}_{t},Y_{p,t}|\mathbf{X}_{t}|,|\mathbf{X}_{t}|)\,, \qquad t\in\mathbb{Z}\,.\]
Since \(p>\alpha\) the random variable \(Y_{p,t}\) has moments of order \(p^{\prime}\in(\alpha,p)\). Therefore an application of the multivariate Breiman theorem of Basrak et al. [7] and regular variation of \((\mathbf{X}_{t})\) yield that \((\mathbf{Z}_{t})\) is an \(\mathbb{R}^{d+2}\)-valued regularly varying stationary sequence with index \(\alpha\). Writing \((\mathbf{\Theta}_{t})\) for the
spectral tail process of \((\mathbf{X}_{t})\) and exploiting the independence of \((\mathbf{X}_{t})\) and \((Y_{p,t})\), we also have for \(h\geq 0\),
\[\mathbb{P}\left(\frac{1}{|\mathbf{X}_{0}|}\left(\left(\begin{array}[] {c}\mathbf{X}_{0}\\ Y_{p,0}|\mathbf{X}_{0}|\\ |\mathbf{X}_{0}|\end{array}\right)\,,\ldots,\left(\begin{array}{c}\mathbf{X} _{h}\\ Y_{p,h}|\mathbf{X}_{h}|\\ |\mathbf{X}_{h}|\end{array}\right)\right)\in\cdot\left|\,|\mathbf{X}_{0}|>x\right)\] \[\stackrel{{ w}}{{\to}} \mathbb{P}\left(\left(\left(\begin{array}{c}\boldsymbol{\Theta}_{0} \\ Y_{p,0}|\boldsymbol{\Theta}_{0}|\\ |\boldsymbol{\Theta}_{0}|\end{array}\right)\,,\ldots,\left(\begin{array}{c} \boldsymbol{\Theta}_{h}\\ Y_{p,h}|\boldsymbol{\Theta}_{h}|\\ |\boldsymbol{\Theta}_{h}|\end{array}\right)\right)\in\cdot\right)\,,\qquad x\to \infty\,.\]
The limit highlights the regular variation structure of \((\mathbf{Z}_{t})\).
The anti-clustering condition (2.2) is easily checked on \((\mathbf{Z}_{t})\) by exploiting the corresponding properties of \((\mathbf{X}_{t})\) and using a domination argument on the conditional expectation given \((Y_{p,t})\). Exploiting the alternative mixing condition (3.16), we may proceed as in the proof of Theorem 3.1 to conclude that, as \(n\to\infty\),
\[k_{n}\log\left(\mathbb{E}\big{[}\mathrm{e}^{\,i\,a_{n}^{-1}\,\mathbf{u}^{\top}\,\mathbf{S}_{r_{n}}}\,\mathbf{1}\big{(}a_{n}^{-1}\widetilde{M}_{r_{n}}\leq\lambda^{-1/p}\,,a_{n}^{-1}M_{r_{n}}^{|\mathbf{X}|}\leq x\big{)}\big{]}\right)\] \[\to \int_{0}^{\infty}\mathbb{E}\Big{[}\mathrm{e}^{\,i\,y\,\sum_{t\in\mathbb{Z}}\mathbf{u}^{\top}\mathbf{Q}_{t}}\,\mathbf{1}\Big{(}y\,\max_{t\in\mathbb{Z}}\,Y_{p,t}\,|\mathbf{Q}_{t}|\leq\lambda^{-1/p}\,,y\,\max_{t\in\mathbb{Z}}\,|\mathbf{Q}_{t}|\leq x\Big{)}\] \[\qquad-1-i\,y\,\sum_{t\in\mathbb{Z}}\,\mathbf{u}^{\top}\mathbf{Q}_{t}\,\mathbf{1}_{(1,2)}(\alpha)\Big{]}\,d(-y^{-\alpha})\] \[= \int_{0}^{\infty}\mathbb{E}\Big{[}\mathrm{e}^{\,i\,y\,\mathbf{u}^{\top}\sum_{t\in\mathbb{Z}}\,\mathbf{Q}_{t}\,-y^{p}\,\lambda\,\sum_{t\in\mathbb{Z}}|\mathbf{Q}_{t}|^{p}}\,\mathbf{1}\Big{(}y\,\max_{t\in\mathbb{Z}}|\mathbf{Q}_{t}|\leq x\Big{)}\] \[\qquad-1-i\,y\,\mathbf{u}^{\top}\sum_{t\in\mathbb{Z}}\,\mathbf{Q}_{t}\,\mathbf{1}_{(1,2)}(\alpha)\Big{]}\,d(-y^{-\alpha})\,. \tag{3.18}\]
In the last identity we again used the distribution of the iid Fréchet variables \(Y_{p,t}\), \(t\in\mathbb{Z}\). The desired result follows.
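The closed form of the Laplace transform in Remark 3.3 can be probed numerically in the iid case, where \(\mathbb{E}[\|\mathbf{Q}\|_{p}^{\alpha}]=1\). The sketch below is ours and assumes, purely for illustration, iid Pareto(\(\alpha\)) data with \(a_{n}=n^{1/\alpha}\); it compares the empirical Laplace transform of \(a_{n}^{-p}\gamma_{n,p}^{p}\) with the limit \(\exp(-\Gamma(1-\alpha/p)\,\lambda^{\alpha/p})\).

```python
import numpy as np
from math import gamma

# Empirical Laplace transform of a_n^{-p} gamma_{n,p}^p for iid Pareto(alpha)
# versus the limit exp(-Gamma(1 - alpha/p) * lam^{alpha/p}) of Remark 3.3.
rng = np.random.default_rng(1)
alpha, p, n, reps = 1.2, 2.0, 5_000, 2_000
X = rng.uniform(size=(reps, n)) ** (-1 / alpha)   # Pareto(alpha) samples
gnp = (X**p).sum(axis=1) / n ** (p / alpha)       # a_n^{-p} gamma_{n,p}^p
for lam in (0.5, 1.0, 2.0):
    print(lam, np.exp(-lam * gnp).mean(),
          np.exp(-gamma(1 - alpha / p) * lam ** (alpha / p)))
```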
## 4. Ratio limit theorems for self-normalized quantities
### Ratios of sums and maxima
A direct consequence of Theorem 3.1 and the continuous mapping theorem is the following characterization of the limit distribution of the ratios \((\mathbf{S}_{n}/M_{n}^{|\mathbf{X}|})\).
**Corollary 4.1**.: _Under the assumptions of Theorem 3.1 we have that_
\[\frac{\mathbf{S}_{n}}{M_{n}^{|\mathbf{X}|}}\stackrel{{ d}}{{\to}}\frac{\boldsymbol{\xi}_{\alpha}}{\eta_{ \alpha}}=:\mathbf{R}_{\alpha}\,,\qquad n\to\infty\,,\]
_where the random vector \(\mathbf{R}_{\alpha}\) has characteristic function, for every \(\mathbf{u}\in\mathbb{R}^{d}\),_
\[\varphi_{\mathbf{R}_{\alpha}}(\mathbf{u})=\frac{\mathbb{E}\big{[}\mathrm{e}^{\,i\,\mathbf{u}^{\top}\,\sum_{t=-\infty}^{\infty}\widetilde{\mathbf{Q}}_{t}}\big{]}}{\int_{0}^{\infty}\mathbb{E}\Big{[}1+i\,y\,\mathbf{u}^{\top}\,\sum_{t\in\mathbb{Z}}\widetilde{\mathbf{Q}}_{t}\,\mathbf{1}_{(1,2)}(\alpha)-\mathrm{e}^{\,i\,y\,\mathbf{u}^{\top}\,\sum_{t=-\infty}^{\infty}\widetilde{\mathbf{Q}}_{t}}\,\mathbf{1}(y\leq 1)\Big{]}\,d(-y^{-\alpha})}\,.\]
**Remark 4.2**.: In the expression for \(\varphi_{\mathbf{R}_{\alpha}}\) we need to ensure that \(\sum_{t\in\mathbb{Z}}\widetilde{\mathbf{Q}}_{t}\) is well defined. If \(1<\alpha<2\) and the anti-clustering condition (2.2) holds we have \(\mathbb{E}\Big{[}\Big{(}\sum_{t\in\mathbb{Z}}|\mathbf{Q}_{t}|\Big{)}^{\alpha} \Big{]}<\infty\); see (3.15).
By the definition of \((\widetilde{\mathbf{Q}}_{t})\) (cf. (2.4)), the fact that \(\max_{t\in\mathbb{Z}}|\mathbf{Q}_{t}|\leq 1\) and Jensen's inequality we get
\[\mathbb{E}\Big{[}\sum_{t\in\mathbb{Z}}|\widetilde{\mathbf{Q}}_{t}| \Big{]} = \theta_{|\mathbf{X}|}^{-1}\mathbb{E}\Big{[}\big{(}\max_{t\in \mathbb{Z}}|\mathbf{Q}_{t}|\big{)}^{\alpha-1}\sum_{t\in\mathbb{Z}}|\mathbf{Q}_ {t}|\Big{]}\leq\theta_{|\mathbf{X}|}^{-1}\mathbb{E}\Big{[}\sum_{t\in\mathbb{Z} }|\mathbf{Q}_{t}|\Big{]}\] \[\leq \theta_{|\mathbf{X}|}^{-1}\Big{(}\mathbb{E}\Big{[}\Big{(}\sum_{t \in\mathbb{Z}}|\mathbf{Q}_{t}|\Big{)}^{\alpha}\Big{]}\Big{)}^{1/\alpha}<\infty\,.\]
For \(0<\alpha<1\) similar calculations yield
\[\mathbb{E}\Big{[}\sum_{t\in\mathbb{Z}}|\widetilde{\mathbf{Q}}_{t}|\Big{]}= \theta_{|\mathbf{X}|}^{-1}\mathbb{E}\Big{[}\frac{\sum_{t\in\mathbb{Z}}| \mathbf{\Theta}_{t}|}{\big{(}\max_{t\in\mathbb{Z}}|\mathbf{\Theta}_{t}|\big{)} ^{1-\alpha}\sum_{t\in\mathbb{Z}}|\mathbf{\Theta}_{t}|^{\alpha}}\Big{]}\leq \theta_{|\mathbf{X}|}^{-1}<\infty\,.\]
We can also derive
\[\mathbb{E}[\mathbf{R}_{\alpha}]=\frac{\mathbb{E}\Big{[}\sum_{t\in\mathbb{Z}} \widetilde{\mathbf{Q}}_{t}\Big{]}}{1-\alpha}\,, \tag{4.1}\]
see Appendix C. The right-hand side term is well defined since \(\sum_{t\in\mathbb{Z}}|\widetilde{\mathbf{Q}}_{t}|\) has finite expectation. For iid centered random variables \((X_{i})\) this is in agreement with the known fact that \(\mathbb{E}[R_{\alpha}]=(1-\alpha)^{-1}=\lim_{n\to\infty}\mathbb{E}[S_{n}/M_{n}]\); see Bingham et al. [9]. The convergence of \((\mathbb{E}[\mathbf{S}_{n}/M_{n}^{|\mathbf{X}|}])\) for dependent \((\mathbf{X}_{t})\) is an open question.
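The iid benchmark is easy to reproduce numerically. The following minimal sketch is ours and assumes iid Pareto(\(\alpha\)) data with \(\alpha<1\) (so no centering is needed); the sample mean of \(S_{n}/M_{n}\) should then be close to \((1-\alpha)^{-1}\), although the convergence is slow.

```python
import numpy as np

# Sample mean of S_n/M_n for iid Pareto(alpha), alpha < 1; it approaches
# 1/(1 - alpha), here 1/0.3 ~ 3.33.
rng = np.random.default_rng(2)
alpha, n, reps = 0.7, 5_000, 2_000
X = rng.uniform(size=(reps, n)) ** (-1 / alpha)
ratio = X.sum(axis=1) / X.max(axis=1)
print(ratio.mean(), 1 / (1 - alpha))
```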
Proof.: By virtue of Theorem 3.1 and the continuous mapping theorem the limit relation \(\mathbf{S}_{n}/M_{n}^{|\mathbf{X}|}\stackrel{{ d}}{{\to}}\mathbf{R}_{\alpha}\) follows. It remains to identify the characteristic function of \(\mathbf{R}_{\alpha}\).
We re-write (3.4) as follows: for \(x>0\) and \(\mathbf{u}\in\mathbb{R}^{d}\),
\[\Psi(\mathbf{u},x) = \mathbb{E}\big{[}\mathrm{e}^{\,i\mathbf{u}^{\top}\boldsymbol{ \xi}_{\alpha}}\mathbf{1}(\eta_{\alpha}\leq x)\big{]}\] \[= \varphi_{\boldsymbol{\xi}_{\alpha}}(\mathbf{u})\exp\Big{(}-\theta _{|\mathbf{X}|}\int_{x}^{\infty}\mathbb{E}\big{[}\mathrm{e}^{\,i\,y\,\mathbf{u }^{\top}\sum_{t=-\infty}^{\infty}\widetilde{\mathbf{Q}}_{t}}\big{]}\,d(-y^{- \alpha})\Big{)}\,.\]
Denote the density of \(\eta_{\alpha}\) by \(f_{\eta_{\alpha}}\). We differentiate both sides of the latter identity with respect to \(x\):
\[\mathbb{E}\big{[}\mathrm{e}^{\,i\,\mathbf{u}^{\top}\boldsymbol{ \xi}_{\alpha}}\bigm{|}\eta_{\alpha}=x\big{]}\,f_{\eta_{\alpha}}(x) = \Psi(\mathbf{u},x)\,\,\alpha\,x^{-\alpha-1}\,\theta_{|\mathbf{X}| }\underbrace{\mathbb{E}\big{[}\mathrm{e}^{\,i\,x\,\mathbf{u}^{\top}\sum_{t=- \infty}^{\infty}\widetilde{\mathbf{Q}}_{t}}\big{]}}_{=:g(x\,\mathbf{u})}\,.\]
In particular, we obtain
\[\mathbb{E}\big{[}\mathrm{e}^{\,i\mathbf{u}^{\top}\boldsymbol{ \xi}_{\alpha}/x}\bigm{|}\eta_{\alpha}=x\big{]}\,f_{\eta_{\alpha}}(x) = \Psi(\mathbf{u}/x,x)\,\,\alpha\,x^{-\alpha-1}\theta_{|\mathbf{X}| }\,\,g(\mathbf{u})\,.\]
Integration yields the following expression for \(\varphi_{\mathbf{R}_{\alpha}}(\mathbf{u})\):
\[\int_{0}^{\infty}\mathbb{E}\big{[}\mathrm{e}^{\,i\mathbf{u}^{ \top}\boldsymbol{\xi}_{\alpha}/\eta_{\alpha}}\bigm{|}\eta_{\alpha}=x\big{]}\,f_ {\eta_{\alpha}}(x)\,dx\] \[= g(\mathbf{u})\,\theta_{|\mathbf{X}|}\,\int_{0}^{\infty}\Psi( \mathbf{u}/x,x)\,d(-x^{-\alpha})\] \[= g(\mathbf{u})\,\theta_{|\mathbf{X}|}\,\int_{0}^{\infty}\varphi_{ \boldsymbol{\xi}_{\alpha}}(\mathbf{u}/x)\,\exp\Big{(}-\theta_{|\mathbf{X}|} \int_{x}^{\infty}\mathbb{E}\big{[}\mathrm{e}^{\,i\,(y/x)\,\mathbf{u}^{\top} \sum_{t=-\infty}^{\infty}\widetilde{\mathbf{Q}}_{t}}\big{]}\,d(-y^{-\alpha}) \Big{)}\,d(-x^{-\alpha})\] \[= g(\mathbf{u})\,\theta_{|\mathbf{X}|}\,\int_{0}^{\infty}\exp \Big{(}x^{-\alpha}\Big{(}\log\varphi_{\boldsymbol{\xi}_{\alpha}}(\mathbf{u}) -\theta_{|\mathbf{X}|}\int_{1}^{\infty}\mathbb{E}\big{[}\mathrm{e}^{\,i\,y\, \mathbf{u}^{\top}\sum_{t=-\infty}^{\infty}\widetilde{\mathbf{Q}}_{t}}\big{]} \,d(-y^{-\alpha})\Big{)}\Big{)}\,d(-x^{-\alpha})\,.\]
Plugging in the characteristic function \(\varphi_{\boldsymbol{\xi}_{\alpha}}\) and changing variables, we obtain
\[g(\mathbf{u})\,\theta_{|\mathbf{X}|}\,\int_{0}^{\infty}\exp\Big{(}z\,\theta_{|\mathbf{X}|}\int_{0}^{\infty}\mathbb{E}\Big{[}\mathrm{e}^{\,i\,y\,\mathbf{u}^{\top}\sum_{t=-\infty}^{\infty}\widetilde{\mathbf{Q}}_{t}}\,\mathbf{1}(y\leq 1)-1-i\,y\,\mathbf{u}^{\top}\,\sum_{t\in\mathbb{Z}}\widetilde{\mathbf{Q}}_{t}\,\mathbf{1}_{(1,2)}(\alpha)\Big{]}\,d(-y^{-\alpha})\Big{)}\,dz\]
\[=\frac{\mathbb{E}\big{[}\mathrm{e}^{\,i\,\mathbf{u}^{\top}\sum_{t=-\infty}^{\infty}\widetilde{\mathbf{Q}}_{t}}\big{]}}{\int_{0}^{\infty}\mathbb{E}\Big{[}1+i\,y\,\mathbf{u}^{\top}\,\sum_{t\in\mathbb{Z}}\widetilde{\mathbf{Q}}_{t}\,\mathbf{1}_{(1,2)}(\alpha)-\mathrm{e}^{\,i\,y\,\mathbf{u}^{\top}\sum_{t=-\infty}^{\infty}\widetilde{\mathbf{Q}}_{t}}\,\mathbf{1}(y\leq 1)\Big{]}\,d(-y^{-\alpha})}\,.\]
This is the desired formula for the characteristic function of \(\mathbf{R}_{\alpha}\).
**Remark 4.3**.: By the continuous mapping theorem and a similar argument as in the proof above we have
\[\lim_{m\to\infty}\lim_{n\to\infty}\mathbb{E}\big{[}\mathrm{e}^{\, i\mathbf{u}^{T}\mathbf{S}_{n}/M_{n}^{|\mathbf{X}|}}\mid M_{n}^{|\mathbf{X}|}>m\,a_{n} \big{]} = \lim_{m\to\infty}\mathbb{E}\big{[}\mathrm{e}^{\,i\mathbf{u}^{T} \mathbf{R}_{\alpha}}\mid\eta_{\alpha}>m\big{]}\] \[= \lim_{m\to\infty}\frac{g(\mathbf{u})\int_{m}^{\infty}\Psi( \mathbf{u}/x,x)\,d(-x^{-\alpha})}{g(\mathbf{0})\int_{m}^{\infty}\Psi(\mathbf{ 0},x)d(-x^{-\alpha})}\] \[= g(\mathbf{u})\lim_{m\to\infty}\frac{\int_{0}^{m^{-\alpha}}\Psi( \mathbf{u}\,y^{1/\alpha},y^{-1/\alpha})\,dy}{\int_{0}^{m^{-\alpha}}\Psi( \mathbf{0},y^{-1/\alpha})dy}\] \[= g(\mathbf{u})\lim_{m\to\infty}\frac{\Psi(\mathbf{u}/m,m)}{\Psi( \mathbf{0},m)}\] \[= g(\mathbf{u})\,.\]
Here we also used l'Hospital's rule and the explicit form of the hybrid characteristic function \(\Psi\), satisfying \(\Psi(\mathbf{u}/m,m)\to 1\) as \(m\to\infty\). Because \(g(\mathbf{u})=\mathbb{E}\big{[}\mathrm{e}^{\,i\,\mathbf{u}^{\top}\sum_{t=- \infty}^{\infty}\widetilde{\mathbf{Q}}_{t}}\big{]}\), \(\mathbf{u}\in\mathbb{R}^{d}\), this implies that as \(m\to\infty\),
\[\lim_{n\to\infty}\mathbb{P}\Big{(}\frac{\mathbf{S}_{n}}{M_{n}^{| \mathbf{X}|}}\in\cdot\Big{|}M_{n}^{|\mathbf{X}|}>m\,a_{n}\Big{)} = \mathbb{P}\Big{(}\frac{\boldsymbol{\xi}_{\alpha}}{\eta_{\alpha}} \in\cdot\Big{|}\eta_{\alpha}>m\Big{)}\stackrel{{ w}}{{\to}} \mathbb{P}\Big{(}\sum_{t=-\infty}^{\infty}\widetilde{\mathbf{Q}}_{t}\in\cdot \Big{)}\,.\]
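This conditional limit expresses a single-big-jump effect: in the iid positive case \(\sum_{t}\widetilde{\mathbf{Q}}_{t}\) degenerates at \(1\), so given a very large maximum the ratio \(S_{n}/M_{n}^{|\mathbf{X}|}\) collapses towards \(1\). A minimal sketch, under our own illustrative assumption of iid Pareto(\(\alpha\)) data:

```python
import numpy as np

# Conditionally on an unusually large maximum, S_n/M_n concentrates near 1
# (one observation dominates the sum); the unconditional mean of S_n/M_n
# is close to 1/(1 - alpha) instead.
rng = np.random.default_rng(3)
alpha, n, reps = 0.7, 5_000, 2_000
a_n = n ** (1 / alpha)
X = rng.uniform(size=(reps, n)) ** (-1 / alpha)
S, M = X.sum(axis=1), X.max(axis=1)
big = M > 10 * a_n
print("conditional:", (S / M)[big].mean(), "unconditional:", (S / M).mean())
```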
The case \(\boldsymbol{\xi}_{\alpha}=\mathbf{0}\) corresponds to \(\sum_{t=-\infty}^{\infty}\widetilde{\mathbf{Q}}_{t}=\mathbf{0}\) a.s. In what follows, we assume that this condition is not satisfied. Then \(|\boldsymbol{\xi}_{\alpha}|\) and \(\eta_{\alpha}\) are tail-equivalent. Using the regular variation property of \(\eta_{\alpha}\), we get the convergence of
\[\mathbb{P}\big{(}m^{-1}(\boldsymbol{\xi}_{\alpha},\eta_{\alpha})\in\{( \mathbf{u},x)\in\mathbb{R}^{d}\times(0,\infty):(\mathbf{u}/x,x)\in A\times(y, \infty)\}\big{)}/\mathbb{P}(\eta_{\alpha}>m)\,,\qquad m\to\infty\,,\]
for every continuity set \(A\) of the distribution of \(\sum_{t=-\infty}^{\infty}\widetilde{\mathbf{Q}}_{t}\) and \(y>0\). The sets \(A\times(y,\infty)\) generate the vague convergence in \(\mathbb{R}^{d}\times(0,\infty)\) of these measures, implying that \((\boldsymbol{\xi}_{\alpha},\eta_{\alpha})\) is an \(\mathbb{R}^{d}\times\mathbb{R}_{+}\)-valued regularly varying vector with index \(\alpha>0\). The regular variation properties of \((\boldsymbol{\xi}_{\alpha},\eta_{\alpha})\) then follows from the vague convergence on \(\mathbb{R}^{d}\times(0,\infty)\) extended to \((\mathbb{R}^{d}\times\mathbb{R}_{+})\setminus\{\mathbf{0}\}\), using the regular variation properties of the \(\alpha\)-stable distribution of \(\boldsymbol{\xi}_{\alpha}\).
### Studentized sums
Corollary 4.1 deals with a special case of self-normalization of the (possibly centered) sum processes \((\mathbf{S}_{n})\) constructed from an \(\mathbb{R}^{d}\)-valued regularly varying sequence \((\mathbf{X}_{t})\) with index \(\alpha\in(0,2)\backslash\{1\}\). Indeed, by virtue of the joint convergence of the partial sums and maxima in Theorem 3.1 with the same normalization \((a_{n})\) the latter sequence can be replaced by \((M_{n}^{|\mathbf{X}|})\), leading to a ratio limit theorem for sums and maxima. An advantage of this approach is that the _unknown_ sequence \((a_{n})\) is replaced by the observed maxima \((M_{n}^{|\mathbf{X}|})\). The price one has to pay for this is that the limit distribution of the ratio is an unfamiliar distribution which does not belong to the \(\alpha\)-stable class and, for its evaluation, one has to employ Monte-Carlo simulations or numerical techniques.
The classical example of self-normalization is studentization: the standard deviation in the normalization of the classical central limit theorem is replaced by an empirical estimator. Our goal is to derive limit theory for the partial sums \((\mathbf{S}_{n})\) with the \(\ell^{p}\)-norm-type moduli \(\gamma_{n,p}\) for \(p>0\) defined in (1.1). We assume \(\alpha<2\) and \(p>\alpha\). Then \((|\mathbf{X}_{t}|^{p})\) is a regularly varying sequence with index \(\alpha/p\). In particular, we can always choose \(p=2\), corresponding to the classical studentization.
Following the ideas of the proof of Corollary 4.1, we obtain a limit for the ratios \((\gamma_{n,p}/M_{n}^{|{\mathbf{X}}|})\).
**Corollary 4.4**.: _Under the assumptions of Theorem 3.2 we have_
\[\frac{\gamma_{n,p}}{M_{n}^{|{\mathbf{X}}|}}\overset{d}{\to}\frac{\zeta_{\alpha, p}}{\eta_{\alpha}}\,,\qquad n\to\infty.\]
_For \(p>\alpha\), \((\zeta_{\alpha,p}/\eta_{\alpha})^{p}\) has Laplace transform_
\[\mathbb{E}[\mathrm{e}^{\,-\lambda\,(\zeta_{\alpha,p}/\eta_{\alpha})^{p}}] = \frac{\mathbb{E}\big{[}\mathrm{e}^{\,-\lambda\,\sum_{t=-\infty}^{ \infty}|\widetilde{\mathbf{Q}}_{t}|^{p}}\big{]}}{\int_{0}^{\infty} \mathbb{E}\big{[}1-\mathrm{e}^{\,-y^{p}\,\lambda\,\sum_{t=-\infty}^{\infty}| \widetilde{\mathbf{Q}}_{t}|^{p}}\mathbf{1}(y\leq 1)\big{]}\,d(-y^{-\alpha})}\,, \qquad\lambda>0\,.\]
The proof follows the lines of the one for Corollary 4.1; it is omitted. Similarly to Remark 4.3 we can also derive that \((\zeta_{\alpha,p},\eta_{\alpha})\) is an \(\mathbb{R}_{+}^{2}\)-valued regularly varying vector with index \(\alpha\), satisfying
\[\mathbb{P}\Big{(}\frac{\zeta_{\alpha,p}}{\eta_{\alpha}}\in\cdot\Big{|}\eta_{ \alpha}>m\Big{)}\overset{w}{\to}\mathbb{P}\Big{(}\sum_{t=-\infty}^{\infty}| \widetilde{\mathbf{Q}}_{t}|^{p}\in\cdot\Big{)},\qquad m\to\infty\,.\]
The studentized sums also converge
\[\frac{\mathbf{S}_{n}}{\gamma_{n,p}}\overset{d}{\to}\frac{\boldsymbol{\xi}_{ \alpha}}{\zeta_{\alpha,p}}=:\mathbf{R}_{\alpha,p}\,,\qquad n\to\infty\,. \tag{4.2}\]
However, it is even more difficult to describe the limit distribution. We can still derive the first moment of the ratio \(\mathbf{R}_{\alpha,p}\) from the joint characteristic function-Laplace transform \(\Phi_{\boldsymbol{\xi}_{\alpha},\zeta_{\alpha,p}^{p}}(\mathbf{u},\lambda):=\mathbb{E}\big{[}\mathrm{e}^{\,i\,\mathbf{u}^{\top}\,\boldsymbol{\xi}_{\alpha}-\lambda\,\zeta_{\alpha,p}^{p}}\big{]}\), \((\mathbf{u},\lambda)\in\mathbb{R}^{d}\times\mathbb{R}_{+}\), given in Theorem 3.2.
**Proposition 4.5**.: _Under the assumptions of Theorem 3.2, (4.2) holds and_
\[\mathbb{E}[\mathbf{R}_{\alpha,p}]=\frac{\Gamma((1-\alpha)/p)}{\Gamma(1/p) \Gamma(1-\alpha/p)}\mathbb{E}\Big{[}\frac{\|\mathbf{Q}\|_{p}^{\alpha}}{ \mathbb{E}[\|\mathbf{Q}\|_{p}^{\alpha}]}\frac{\sum_{t=-\infty}^{\infty}\mathbf{ Q}_{t}}{\|\mathbf{Q}\|_{p}}\Big{]}\,. \tag{4.3}\]
Proof.: We start by verifying the finiteness of \(\mathbb{E}[\|\mathbf{Q}\|_{p}^{\alpha-1}\|\mathbf{Q}\|_{1}]\), \(\alpha<p\), ensuring the integrability properties needed throughout this proof. For \(\alpha\geq 1\) we use the monotonicity of the \(\ell^{p}\)-norms \(\|\mathbf{Q}\|_{p}\leq\|\mathbf{Q}\|_{1}\) for \(p\geq 1\). We obtain \(\mathbb{E}[\|\mathbf{Q}\|_{p}^{\alpha-1}\|\mathbf{Q}\|_{1}]\leq\mathbb{E}[\| \mathbf{Q}\|_{1}^{\alpha}]\) which is finite; see Remark 3.3. For \(\alpha<1\) we have \(\|\mathbf{Q}\|_{p}\geq\|\mathbf{Q}\|_{\infty}\) hence
\[\|\mathbf{Q}\|_{p}^{\alpha-1}\|\mathbf{Q}\|_{1}\leq\|\mathbf{Q}\|_{\infty}^{ \alpha-1}\|\mathbf{Q}\|_{1}\leq\|\mathbf{Q}\|_{\alpha}^{\alpha}=1\,,\]
by definition of the spectral cluster process \((\mathbf{Q}_{t})\).
The following formula is immediate from the definition of the \(\Gamma\)-function:
\[\frac{1}{x^{1/p}}=\frac{1}{\Gamma(1/p)}\int_{0}^{\infty}\lambda^{1/p-1}\mathrm{ e}^{\,-\lambda x}d\lambda=\frac{p}{\Gamma(1/p)}\int_{0}^{\infty}\mathrm{e}^{\,- \lambda^{p}x}d\lambda\,,\qquad x>0\,. \tag{4.4}\]
We need the following lemma whose proof is postponed to Appendix B.
**Lemma 4.6**.: _Under the assumptions of Theorem 3.2 we have \(\mathbb{E}\big{[}|\boldsymbol{\xi}_{\alpha}|\mathrm{e}^{\,-\lambda^{p}\zeta_{ \alpha,p}^{p}}\big{]}<\infty\)._
Then an application of Fubini's theorem and (4.4) yields
\[\mathbb{E}[\mathbf{R}_{\alpha,p}] = \mathbb{E}\Big{[}\frac{\boldsymbol{\xi}_{\alpha}}{(\zeta_{\alpha, p}^{p})^{1/p}}\Big{]}=\frac{p}{\Gamma(1/p)}\int_{0}^{\infty}\mathbb{E}\big{[} \boldsymbol{\xi}_{\alpha}\mathrm{e}^{\,-\lambda^{p}\zeta_{\alpha,p}^{p}} \big{]}d\lambda\,.\]
By another application of Lemma 4.6 the integrand coincides with
\[\frac{1}{i}\frac{\partial\Phi_{\boldsymbol{\xi}_{\alpha},\zeta_{ \alpha,p}^{p}}(\mathbf{0},\lambda^{p})}{\partial\mathbf{u}} = \int_{0}^{\infty}\mathbb{E}\Big{[}y\,\sum_{t=-\infty}^{\infty}\mathbf{ Q}_{t}\left(\mathrm{e}^{\,-(\lambda y)^{p}\|\mathbf{Q}\|_{p}^{p}}-\mathbf{1}_{(1,2)}( \alpha)\right)\Big{]}\,d(-y^{-\alpha})\,\,\mathbb{E}\big{[}\mathrm{e}^{\,- \lambda^{p}\zeta_{\alpha,p}^{p}}\big{]}\,.\]
Exploiting (4.4) for \(0<\alpha<1\), the formula
\[\int_{0}^{\infty}z\left(\mathrm{e}\,^{-z^{p}\,x}-1\right)d(-z^{-\alpha})=(\alpha/ p)\Gamma((1-\alpha)/p)\,x^{(\alpha-1)/p}\,,\]
for \(\alpha>1\), and changing the variable to \(z=\lambda\,y\,\|\mathbf{Q}\|_{p}\), we obtain
\[\int_{0}^{\infty}\mathbb{E}\Big{[}y\sum_{t=-\infty}^{\infty}\mathbf{Q}_{t}\,\big{(}\mathrm{e}\,^{-(\lambda y)^{p}\|\mathbf{Q}\|_{p}^{p}}-\mathbf{1}_{(1,2)}(\alpha)\big{)}\Big{]}d(-y^{-\alpha})\] \[= \frac{\alpha}{p}\Gamma((1-\alpha)/p)\lambda^{\alpha-1}\mathbb{E}\Big{[}\|\mathbf{Q}\|_{p}^{\alpha}\frac{\sum_{t=-\infty}^{\infty}\mathbf{Q}_{t}}{\|\mathbf{Q}\|_{p}}\Big{]}\,.\]
Combining these results, one obtains
\[\mathbb{E}[\mathbf{R}_{\alpha,p}] = \frac{\Gamma((1-\alpha)/p)}{\Gamma(1/p)}\mathbb{E}\Big{[}\| \mathbf{Q}\|_{p}^{\alpha}\frac{\sum_{t=-\infty}^{\infty}\mathbf{Q}_{t}}{\| \mathbf{Q}\|_{p}}\Big{]}\int_{0}^{\infty}\alpha\lambda^{\alpha-1}\mathbb{E} \big{[}\mathrm{e}\,^{-\lambda^{p}\zeta_{\alpha,p}^{p}}\big{]}d\lambda\,.\]
We recall the Laplace transform of \(\zeta_{\alpha,p}^{p}\) derived in Remark 3.3, i.e.,
\[\mathbb{E}\big{[}\mathrm{e}\,^{-\lambda^{p}\zeta_{\alpha,p}^{p}}\big{]}=\exp \big{(}-\Gamma(1-\alpha/p)\,\mathbb{E}[\|\mathbf{Q}\|_{p}^{\alpha}]\,\lambda^ {\alpha}\big{)}\,,\qquad\lambda>0\,.\]
Plugging in the previous expression, we arrive at
\[\mathbb{E}[\mathbf{R}_{\alpha,p}] = \frac{\Gamma((1-\alpha)/p)}{\Gamma(1/p)}\mathbb{E}\Big{[}\| \mathbf{Q}\|_{p}^{\alpha}\frac{\sum_{t=-\infty}^{\infty}\mathbf{Q}_{t}}{\| \mathbf{Q}\|_{p}}\Big{]}\int_{0}^{\infty}\mathrm{e}\,^{-\Gamma(1-\alpha/p) \mathbb{E}[\|\mathbf{Q}\|_{p}^{\alpha}]\lambda^{\alpha}}d(\lambda^{\alpha})\] \[= \frac{\Gamma((1-\alpha)/p)}{\Gamma(1/p)}\mathbb{E}\Big{[}\| \mathbf{Q}\|_{p}^{\alpha}\frac{\sum_{t=-\infty}^{\infty}\mathbf{Q}_{t}}{\| \mathbf{Q}\|_{p}}\Big{]}\frac{1}{\Gamma(1-\alpha/p)\mathbb{E}[\|\mathbf{Q}\| _{p}^{\alpha}]}\,.\]
The desired result follows.
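Formula (4.3) can be probed by simulation in the iid positive case, where \(\mathbf{Q}\) reduces to the single point \(Q_{0}=1\) and the \(\mathbf{Q}\)-expectation on the right-hand side equals \(1\). The sketch below is ours, is purely heuristic (the text does not claim convergence of the means of \(\mathbf{S}_{n}/\gamma_{n,p}\)), and assumes iid Pareto(\(\alpha\)) data:

```python
import numpy as np
from math import gamma

# Heuristic check of (4.3) for iid Pareto(alpha), alpha < 1, p = 2: the
# sample mean of S_n/gamma_{n,2} is compared with
# Gamma((1-alpha)/p) / (Gamma(1/p) * Gamma(1-alpha/p)).
rng = np.random.default_rng(4)
alpha, p, n, reps = 0.5, 2.0, 5_000, 2_000
X = rng.uniform(size=(reps, n)) ** (-1 / alpha)
R = X.sum(axis=1) / (X**p).sum(axis=1) ** (1 / p)
print(R.mean(), gamma((1 - alpha) / p) / (gamma(1 / p) * gamma(1 - alpha / p)))
```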
_The case \(p=2\)._ It will be convenient to introduce the spectral cluster process \(\widehat{\mathbf{Q}}\) of the regularly varying sequence \((\mathbf{X}_{t})\) by a change of measure
\[\mathbb{P}(\widehat{\mathbf{Q}}\in\cdot)=\mathbb{E}\Big{[}\frac{\|\mathbf{Q} \|_{2}^{\alpha}}{\mathbb{E}[\|\mathbf{Q}\|_{2}^{\alpha}]}\mathbf{1}\Big{(} \frac{\mathbf{Q}}{\|\mathbf{Q}\|_{2}}\in\cdot\Big{)}\Big{]}\,.\]
Then (4.3) turns into
\[\mathbb{E}[\mathbf{R}_{\alpha,2}]=\frac{\Gamma((1-\alpha)/2)}{\Gamma(1/2) \Gamma(1-\alpha/2)}\mathbb{E}\Big{[}\sum_{t=-\infty}^{\infty}\widehat{ \mathbf{Q}}_{t}\Big{]}\,.\]
### Greenwood statistics
We consider a univariate positive regularly varying stationary time series \((X_{t})\) with index \(\alpha\in(0,1)\). Its spectral tail process is denoted by \((\Theta_{t})\) and the corresponding spectral cluster process by \(Q=(Q_{t})=(\Theta_{t}/\|\Theta\|_{\alpha})\). The _Greenwood statistic_ (see Greenwood [15]) is the ratio statistic
\[\frac{\gamma_{n,2}^{2}}{S_{n}^{2}}=\frac{X_{1}^{2}+\cdots+X_{n}^{2}}{(X_{1}+ \cdots+X_{n})^{2}}\,,\qquad n\geq 1\,.\]
Under the conditions of Theorem 3.2 a continuous mapping argument yields the more general result for \(\alpha<p\)
\[T_{n,p}:=\frac{X_{1}^{p}+\cdots+X_{n}^{p}}{(X_{1}+\cdots+X_{n})^{p}}\stackrel{{ d}}{{\to}}\frac{\zeta_{\alpha,p}^{p}}{\xi_{\alpha}^{p}}\,,\]

_where \(\zeta_{\alpha,p}^{p}\) is \(\alpha/p\)-stable and totally skewed to the right, and \(\xi_{\alpha}\) is \(\alpha\)-stable._
**Corollary 4.7**.: _Under the conditions of Theorem 3.2 we have for \(\alpha<p\wedge 1\),_
\[\mathbb{E}[T_{n,p}]\to\mathbb{E}\Big{[}\frac{\zeta_{\alpha,p}^{p}}{ \xi_{\alpha}^{p}}\Big{]} = \frac{\Gamma(p-\alpha)}{\Gamma(p)\,\Gamma(1-\alpha)}\,\mathbb{E} \Big{[}\frac{\|Q\|_{1}^{\alpha}}{\mathbb{E}[\|Q\|_{1}^{\alpha}]}\frac{\|Q\|_{p }^{p}}{\|Q\|_{1}^{p}}\Big{]}\,.\]
_In particular,_
\[\mathbb{E}\Big{[}\frac{\zeta_{\alpha,2}^{2}}{\xi_{\alpha}^{2}} \Big{]} = (1-\alpha)\,\mathbb{E}\Big{[}\frac{\|Q\|_{1}^{\alpha}}{\mathbb{E} [\|Q\|_{1}^{\alpha}]}\frac{\|Q\|_{2}^{2}}{\|Q\|_{1}^{2}}\Big{]}.\]
In the case of asymptotic independence when \(\Theta_{t}=0\) for \(t\neq 0\) the right-hand side turns into \(\Gamma(p-\alpha)/(\Gamma(p)\,\Gamma(1-\alpha))\), in agreement with Albrecher et al. [1, 2] in the iid case.
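The iid benchmark constant is straightforward to reproduce numerically; a minimal sketch, under our own assumption of iid Pareto(\(\alpha\)) data and \(p=2\), for which the limit of \(\mathbb{E}[T_{n,2}]\) equals \(1-\alpha\):

```python
import numpy as np

# Greenwood statistic for iid Pareto(alpha), alpha < 1: since T_{n,2} <= 1,
# E[T_{n,2}] -> Gamma(2-alpha)/(Gamma(2)*Gamma(1-alpha)) = 1 - alpha.
rng = np.random.default_rng(5)
alpha, n, reps = 0.5, 5_000, 2_000
X = rng.uniform(size=(reps, n)) ** (-1 / alpha)
T = (X**2).sum(axis=1) / X.sum(axis=1) ** 2
print(T.mean(), 1 - alpha)
```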
Proof.: Considering \(X_{t}^{\prime}=X_{t}^{p}\) we obtain a regularly varying sequence of index \(\alpha/p\) with spectral cluster process \((Q_{t}^{p})\). The limit can be interpreted as the ratio \(R_{\alpha/p,1/p}^{\prime}=\xi_{\alpha/p}^{\prime}/\zeta_{\alpha/p,1/p}^{\prime}\) with joint characteristic function-Laplace transform of \((\xi_{\alpha/p}^{\prime},\zeta_{\alpha/p,1/p}^{\prime})\) given by
\[\mathbb{E}\big{[}\mathrm{e}^{\,i\,u\,\xi_{\alpha/p}^{\prime}- \lambda\,(\zeta_{\alpha/p,1/p}^{\prime})^{1/p}}\big{]} = \exp\Big{(}\int_{0}^{\infty}\mathbb{E}\Big{[}\mathrm{e}^{\,i\,y \,u\|Q\|_{p}^{p}-y^{1/p}\lambda\,\|Q\|_{1}}-1\Big{]}d(-y^{-\alpha/p})\Big{)} \,,\qquad u\in\mathbb{R},\,\lambda>0\,.\]
The expression of the first moment follows from an application of Proposition 4.5.
**Example 4.8**.: Consider the scaled sample kurtosis for the stationary sequence \((X_{t})_{t\in\mathbb{Z}}\):
\[\frac{\sum_{t=1}^{n}|X_{t}|^{4}}{(\sum_{t=1}^{n}|X_{t}|^{2})^{2}}=\frac{\|X\|_{4}^{4}}{\|X\|_{2}^{4}}\leq 1\,.\]
Assume that \((X_{t})\) is regularly varying with index \(0<\alpha<2\). Then \((|X_{t}|^{2})\) is also regularly varying with index \(0<\alpha/2<1\) and if it satisfies the assumptions of Corollary 4.7 we obtain, keeping the same notation,
\[\frac{\|X\|_{4}^{4}}{\|X\|_{2}^{4}}\stackrel{{ d}}{{\to}}\frac{\zeta_{\alpha/2,2}^{2}}{\xi_{\alpha/2}^{2}}\quad\text{and}\quad\mathbb{E}\Big{[}\frac{\|X\|_{4}^{4}}{\|X\|_{2}^{4}}\Big{]}\to\mathbb{E}\Big{[}\frac{\zeta_{\alpha/2,2}^{2}}{\xi_{\alpha/2}^{2}}\Big{]}=(1-\alpha/2)\mathbb{E}\Big{[}\frac{\|Q\|_{2}^{\alpha}}{\mathbb{E}[\|Q\|_{2}^{\alpha}]}\frac{\|Q\|_{4}^{4}}{\|Q\|_{2}^{4}}\Big{]}.\]
### Ratios of norms
Consider a regularly varying stationary process \((\mathbf{X}_{t})\) with index \(\alpha>0\). For \(q>0\) we consider the norm-type modulus of the sample \(\mathbf{X}_{1},\ldots,\mathbf{X}_{n}\) given by \(\|\mathbf{X}\|_{q}=(\sum_{t=1}^{n}|\mathbf{X}_{t}|^{q})^{1/q}\). Here we suppress the dependence on \(n\) in the notation.
For \(q,r>0\) we are interested in the limit behavior of the ratios \(\|\mathbf{X}\|_{q}/\|\mathbf{X}\|_{r}\). We rephrase this ratio in terms of the regularly varying stationary sequence \((Z_{t})=(|\mathbf{X}_{t}|^{q})\) with index \(\alpha/q\). By a continuous mapping argument the tail process of \((Z_{t})\) is given by \((|\boldsymbol{\Theta}_{t}|^{q})\). Adapting the notation to the \(Z\)-sequence, we obtain
\[\|\mathbf{X}\|_{q}/\|\mathbf{X}\|_{r}=\big{(}\|\mathbf{Z}\|_{1}/\|\mathbf{Z}\| _{r/q}\big{)}^{1/q}\,.\]
If \((Z_{t})\) satisfies the conditions of Theorem 3.2, in particular \(\alpha<q\wedge r\), then the continuous mapping theorem yields
\[\|\mathbf{X}\|_{q}/\|\mathbf{X}\|_{r}\stackrel{{ d}}{{\to}}R_{ \alpha/q,r/q}^{1/q}\stackrel{{ d}}{{=}}\xi_{\alpha/q}^{1/q}/\zeta_{ \alpha/q,r/q}^{1/q}\,,\qquad n\to\infty\,.\]
Here \((\xi_{\alpha/q},\zeta_{\alpha/q,r/q}^{r/q})\) have the joint characteristic function-Laplace transform
\[\mathbb{E}\big{[}\mathrm{e}^{\,i\,x\,\xi_{\alpha/q}-\lambda\, \zeta_{\alpha/q,r/q}^{\,r/q}}\big{]}\] \[= \exp\Big{(}\int_{0}^{\infty}\mathbb{E}\Big{[}\mathrm{e}^{\,i\,y\, x\,\sum_{t=-\infty}^{\infty}Q_{t}-\lambda\,y^{r/q}\,\sum_{t=-\infty}^{\infty}Q_{t}^{r/q} }-1\Big{]}d(-y^{-\alpha/q})\Big{)}\,,\qquad(x,\lambda)\in\mathbb{R}\times \mathbb{R}_{+}\,,\]
and \((Q_{t})=(|\boldsymbol{\Theta}_{t}|^{q}/\|\boldsymbol{\Theta}\|_{\alpha}^{q})\) is the spectral cluster process of \((Z_{t})\). We also observe that for \(\alpha<r\leq q\) we have \(\|\mathbf{X}\|_{q}/\|\mathbf{X}\|_{r}\leq 1\) a.s., hence uniform integrability yields the convergence of the moments
\[\mathbb{E}\big{[}\|\mathbf{X}\|_{q}/\|\mathbf{X}\|_{r}\big{]}\to\mathbb{E} \big{[}\xi_{\alpha/q}^{1/q}/\zeta_{\alpha/q,r/q}^{1/q}\big{]}\,,\qquad n\to \infty\,.\]
It is desirable to derive a more explicit expression for the limit.
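To see the nondegenerate limit in a concrete dependent case, the following simulation sketch (ours; the AR(1) dynamics and all parameters are illustrative choices satisfying \(\alpha<r\leq q\)) samples \(\|\mathbf{X}\|_{q}/\|\mathbf{X}\|_{r}\) repeatedly and reports its empirical mean.

```python
# Simulation sketch: distribution of ||X||_q / ||X||_r for a heavy-tailed
# AR(1) sequence with alpha < r <= q, so the ratio lies in (0, 1].
import numpy as np

rng = np.random.default_rng(1)
alpha, phi, q, r = 0.8, 0.5, 4.0, 2.0
n, trials = 20_000, 100

def ar1_abs(n):
    z = (rng.pareto(alpha, size=n) + 1.0) * rng.choice([-1.0, 1.0], size=n)
    x = np.empty(n)
    x[0] = z[0]
    for t in range(1, n):
        x[t] = phi * x[t - 1] + z[t]
    return np.abs(x)

ratios = []
for _ in range(trials):
    x = ar1_abs(n)
    ratios.append(np.sum(x**q) ** (1 / q) / np.sum(x**r) ** (1 / r))

print("empirical mean of ||X||_q/||X||_r:", np.mean(ratios))
```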
## 5. Examples
### Sufficient conditions via coupling
The anti-clustering condition (2.2) and the mixing condition (3.16) can be checked by using a coupled version of \((\mathbf{X}_{t})_{t\in\mathbb{Z}}\).
**Proposition 5.1**.: _We assume the following conditions:_
1. _There exists a coupled version_ \((\mathbf{X}_{t}^{*})\) _of the regularly varying stationary process_ \((\mathbf{X}_{t})\) _with index_ \(\alpha\in(0,1)\cup(1,2)\)_:_ \((\mathbf{X}_{t}^{*})\) _is distributed as_ \((\mathbf{X}_{t})\) _and_ \((\mathbf{X}_{t}^{*})_{t\geq 1}\) _is independent of_ \((\mathbf{X}_{t})_{t\leq 0}\)_._
2. _For some integer sequences_ \((r_{n})\) _and_ \((\ell_{n})\) _such that_ \(r_{n}=o((a_{n}^{2}/n)\wedge n)\)_,_ \(\ell_{n}=o(r_{n})\)_, and for_ \(q<\alpha\wedge 1\)_,_ \(p>\alpha\)_, we have_ (5.1) \[k_{n}\,a_{n}^{-q}\sum_{t=\ell_{n}}^{r_{n}}\big{(}\mathbb{E}\big{[}|\mathbf{X}_ {t}-\mathbf{X}_{t}^{*}|^{q}\big{]}\big{)}^{(1/p)\lor 1}\to 0\,,\qquad n \to\infty\,.\]
3. _For the same sequence_ \((r_{n})\) _and the same_ \(q<\alpha\wedge 1\) _we have_ (5.2) \[\lim_{k\to\infty}\limsup_{n\to\infty}n\,\sum_{t=k}^{r_{n}}\mathbb{E}\big{[}\big{(}\big{|}a_{n}^{-1}\big{(}\mathbf{X}_{t}-\mathbf{X}_{t}^{*}\big{)}\big{|}^{q}\wedge 1\big{)}\big{(}\big{|}a_{n}^{-1}\mathbf{X}_{0}\big{|}^{q}\wedge 1\big{)}\big{]}=0\,.\] _Then the anti-clustering condition (_2.2_) and the mixing condition (_3.16_) with_ \(p\) _as in (_5.1_) hold for every_ \((\mathbf{u},x,\lambda)\in\mathbb{R}^{d}\times\mathbb{R}^{2}_{+}\) _and Theorem_ 3.2 _applies._
Proof.: We start by checking (2.2). Because \(r_{n}=o((a_{n}^{2}/n)\wedge n)\) it is enough to show that (2.3) holds. Using the basic inequality
\[|\mathrm{cov}(V,W)|\leq\mathbb{E}[|W-W^{*}||V|]\,,\]
where \(W^{*}\) is a copy of \(W\) independent of \(V\), valid whenever \(|V|\vee|W|\leq 1\) a.s., we easily obtain the sufficient condition
\[\lim_{k\to\infty}\limsup_{n\to\infty}n\,\sum_{j=k}^{r_{n}}\mathbb{E}\big{[} \big{(}\big{|}a_{n}^{-1}\big{(}\mathbf{X}_{j}-\mathbf{X}_{j}^{*}\big{)}\big{|} \wedge 1\big{)}(|a_{n}^{-1}\mathbf{X}_{0}|\wedge 1)\big{]}=0\,.\]
The desired result follows by observing that \(|x|\wedge 1\leq|x|^{q^{\prime}}\wedge 1\) for \(0<q^{\prime}\leq 1\).
Next we verify (3.16). We start by showing that (5.1) implies
\[k_{n}\,a_{n}^{-q}\sum_{t=\ell_{n}}^{r_{n}}\mathbb{E}\big{[}|\mathbf{X}_{t}- \mathbf{X}_{t}^{*}|^{q}+\big{|}|\mathbf{X}_{t}|^{p}-|\mathbf{X}_{t}^{*}|^{p} \big{|}^{q/p}\big{]}\to 0\,,\qquad n\to\infty\,. \tag{5.3}\]
For \(\alpha<p\leq 1\) this follows from the subadditivity of \(x\mapsto x^{p}\) on \([0,\infty)\):
\[\big{|}|\mathbf{X}_{t}|^{p}-|\mathbf{X}_{t}^{*}|^{p}\big{|}^{q/p}\leq|\mathbf{ X}_{t}-\mathbf{X}_{t}^{*}|^{q}\,,\qquad t\geq 1\,.\]
For \(p\geq 1\) we apply the mean value theorem: there exists some \(0<\xi<1\) such that
\[\big{|}|\mathbf{X}_{t}|^{p}-|\mathbf{X}_{t}^{*}|^{p}\big{|}^{q/p}=\Big{(}p\,\big{|}|\mathbf{X}_{t}|+\xi(|\mathbf{X}_{t}^{*}|-|\mathbf{X}_{t}|)\big{|}^{p-1}\,\big{|}|\mathbf{X}_{t}|-|\mathbf{X}_{t}^{*}|\big{|}\Big{)}^{q/p}\,.\]
Then Hölder's inequality yields
\[\mathbb{E}\big{[}\big{|}|\mathbf{X}_{t}|^{p}-|\mathbf{X}_{t}^{*}|^{p}\big{|}^{q/p}\big{]} \leq c_{1}\,\big{(}\mathbb{E}\big{[}\big{|}|\mathbf{X}_{t}|+\xi(|\mathbf{X}_{t}^{*}|-|\mathbf{X}_{t}|)\big{|}^{q}\big{]}\big{)}^{(p-1)/p}\big{(}\mathbb{E}\big{[}|\mathbf{X}_{t}-\mathbf{X}_{t}^{*}|^{q}\big{]}\big{)}^{1/p}\] \[\leq c_{2}\big{(}\mathbb{E}\big{[}|\mathbf{X}_{t}-\mathbf{X}_{t}^{*}|^{q}\big{]}\big{)}^{1/p}\,,\qquad t\geq 1\,,\]
where \(c_{1}\), \(c_{2}>0\) do not depend on \(t\). This proves (5.3) for \(p\geq 1\) as well.
Next we introduce the intermediate sequence \((\ell_{n})\) in the mixing condition (3.16) by an application of an asymptotic negligibility argument to the functionals on the small blocks \((\mathbf{X}_{r_{n}(j-1)+t})_{1\leq t\leq\ell_{n}}\), \(1\leq j\leq k_{n}\). Because the anti-clustering condition (5.2) is also satisfied for the intermediate sequence \((\ell_{n})\) we deduce similarly as in the proof of Theorem 3.2, see equation (3.18), that the sequences
\[\Big{(}\frac{n}{\ell_{n}}\log\big{(}\mathbb{E}\big{[}\exp\big{(}i\,a_{n}^{-1} \mathbf{u}^{\top}\mathbf{S}_{\ell_{n}}-a_{n}^{-p}\lambda\gamma_{\ell_{n},p}^{p} \big{)}\,\mathbf{1}\big{(}a_{n}^{-1}M_{\ell_{n}}^{|\mathbf{X}|}\leq x\big{)} \big{]}\big{)}\Big{)}\]
converge for every \((\mathbf{u},x,\lambda)\). Remembering that \(\ell_{n}/r_{n}\to 0\) as \(n\to\infty\) we achieve that
\[\lim_{n\to\infty}\log\Big{(}\mathbb{E}\big{[}\exp\big{(}i\,a_{n}^ {-1}\mathbf{u}^{\top}\mathbf{S}_{\ell_{n}}-a_{n}^{-p}\lambda\gamma_{\ell_{n},p }^{p}\big{)}\,\mathbf{1}\big{(}a_{n}^{-1}M_{\ell_{n}}^{|\mathbf{X}|}\leq x \big{)}\big{]}^{k_{n}}\Big{)}\] \[= \lim_{n\to\infty}\frac{\ell_{n}}{r_{n}}\frac{n}{\ell_{n}}\log \big{(}\mathbb{E}\big{[}\exp\big{(}i\,a_{n}^{-1}\mathbf{u}^{\top}\mathbf{S}_{ \ell_{n}}-a_{n}^{-p}\lambda\gamma_{\ell_{n},p}^{p}\big{)}\,\mathbf{1}\big{(}a_ {n}^{-1}M_{\ell_{n}}^{|\mathbf{X}|}\leq x\big{)}\big{]}\big{)}=0\,.\]
We immediately deduce the asymptotic negligibility of \(k_{n}\) independent copies of the functionals on small blocks \((a_{n}^{-1}\mathbf{S}_{\ell_{n}},a_{n}^{-p}\gamma_{\ell_{n},p}^{p},a_{n}^{-1}M_{\ell_{n}}^{|\mathbf{X}|})\). Arguments similar to the ones developed after equation (5.4) yield that
\[\mathbb{E}\Big{[}\prod_{j=1}^{k_{n}}\exp\big{(}i\,a_{n}^{-1} \mathbf{u}^{\top}\mathbf{S}_{jr_{n}-\ell_{n},jr_{n}}-a_{n}^{-p}\lambda\gamma_ {jr_{n}-\ell_{n},jr_{n},p}^{p}\big{)}\mathbf{1}\big{(}a_{n}^{-1}M_{jr_{n}- \ell_{n},jr_{n}}^{|\mathbf{X}|}\leq x\big{)}\Big{]}\] \[- \Big{(}\mathbb{E}\big{[}\exp\big{(}i\,a_{n}^{-1}\mathbf{u}^{\top }\mathbf{S}_{\ell_{n}}-a_{n}^{-p}\lambda\gamma_{\ell_{n},p}^{p}\big{)}\, \mathbf{1}\big{(}a_{n}^{-1}M_{\ell_{n}}^{|\mathbf{X}|}\leq x\big{)}\big{]} \Big{)}^{k_{n}}=o(1)\,.\]
We conclude the asymptotic negligibility of the functionals on the small blocks \((\mathbf{X}_{r_{n}(j-1)+t})_{1\leq t\leq\ell_{n}}\), \(1\leq j\leq k_{n}\), under condition (5.1) with \((\ell_{n})\) replaced by \((r_{n}-\ell_{n})\).
Then the mixing condition (3.16) turns into one based on large blocks only
\[\mathbb{E}\Big{[}\prod_{j=1}^{k_{n}}\exp\big{(}i\,a_{n}^{-1}\mathbf{u}^{\top}\mathbf{S}_{(j-1)r_{n}+\ell_{n},jr_{n}}-a_{n}^{-p}\lambda\gamma_{(j-1)r_{n}+\ell_{n},jr_{n},p}^{p}\big{)}\mathbf{1}\big{(}a_{n}^{-1}M_{(j-1)r_{n}+\ell_{n},jr_{n}}^{|\mathbf{X}|}\leq x\big{)}\Big{]}\] \[- \Big{(}\mathbb{E}\big{[}\exp\big{(}i\,a_{n}^{-1}\mathbf{u}^{\top}\mathbf{S}_{\ell_{n},r_{n}}-a_{n}^{-p}\lambda\gamma_{\ell_{n},r_{n},p}^{p}\big{)}\,\mathbf{1}\big{(}a_{n}^{-1}M_{\ell_{n},r_{n}}^{|\mathbf{X}|}\leq x\big{)}\big{]}\Big{)}^{k_{n}}=o(1)\,. \tag{5.4}\]
We use a telescoping sum argument over \(1\leq j\leq k_{n}\) on the difference and show that it converges to zero as \(n\to\infty\). Using the properties of the coupled version, this difference can be expressed as a sum of \(k_{n}\) summands plus negligible terms which we ignore. Up to a constant multiplier the absolute value of a typical summand is bounded by
(5.5) \[\mathbb{E}\Big{[}\Big{|}\mathrm{e}^{\,i\,a_{n}^{-1}\mathbf{u}^{\top}\mathbf{S}_{\ell_{n},r_{n}}-a_{n}^{-p}\lambda\gamma_{\ell_{n},r_{n},p}^{p}}\,\mathbf{1}\big{(}a_{n}^{-1}M_{\ell_{n},r_{n}}^{|\mathbf{X}|}\leq x\big{)}-\mathrm{e}^{\,i\,a_{n}^{-1}\mathbf{u}^{\top}\mathbf{S}_{\ell_{n},r_{n}}^{*}-a_{n}^{-p}\lambda\gamma_{\ell_{n},r_{n},p}^{*\,p}}\,\mathbf{1}\big{(}a_{n}^{-1}M_{\ell_{n},r_{n}}^{|\mathbf{X}^{*}|}\leq x\big{)}\Big{|}\Big{]}\,,\]
where \(\mathbf{S}_{\ell_{n},r_{n}}^{*}=\sum_{t=\ell_{n}}^{r_{n}}\mathbf{X}_{t}^{*}\), \(\gamma_{\ell_{n},r_{n},p}^{*\,p}=\sum_{t=\ell_{n}}^{r_{n}}|\mathbf{X}_{t}^{*}|^ {p}\) and \(M_{\ell_{n},r_{n}}^{|\mathbf{X}^{*}|}=\max_{\ell_{n}\leq t\leq r_{n}}|\mathbf{X} _{t}^{*}|\). We observe that (5.5) is bounded by
\[\mathbb{E}\big{[}\big{|}{\rm e}\,^{i\,a_{n}^{-1}\mathbf{u}^{\top}\mathbf{S}_{\ell_{n},r_{n}}}-{\rm e}\,^{i\,a_{n}^{-1}\mathbf{u}^{\top}\mathbf{S}_{\ell_{n},r_{n}}^{*}}\big{|}\big{]}+\mathbb{E}\big{[}\big{|}{\rm e}\,^{-a_{n}^{-p}\,\lambda\,\gamma_{\ell_{n},r_{n},p}^{p}}-{\rm e}\,^{-a_{n}^{-p}\,\lambda\,\gamma_{\ell_{n},r_{n},p}^{*\,p}}\big{|}\big{]}\] \[+\mathbb{E}\big{[}\big{|}\mathbf{1}\big{(}a_{n}^{-1}M_{\ell_{n},r_{n}}^{|\mathbf{X}|}\leq x\big{)}-\mathbf{1}\big{(}a_{n}^{-1}M_{\ell_{n},r_{n}}^{|\mathbf{X}^{*}|}\leq x\big{)}\big{|}\big{]}\] \[\leq \mathbb{E}\big{[}\big{|}\big{(}a_{n}^{-1}\mathbf{u}^{\top}(\mathbf{S}_{\ell_{n},r_{n}}-\mathbf{S}_{\ell_{n},r_{n}}^{*})\big{)}\wedge 1\big{|}^{q}\big{]}+\mathbb{E}\big{[}\big{|}\big{(}a_{n}^{-p}\,\lambda\,(\gamma_{\ell_{n},r_{n},p}^{p}-\gamma_{\ell_{n},r_{n},p}^{*\,p})\big{)}\wedge 1\big{|}^{q^{\prime}}\big{]}\] \[+\Big{[}\mathbb{P}\big{(}a_{n}^{-1}M_{\ell_{n},r_{n}}^{|\mathbf{X}|}>x\,,a_{n}^{-1}M_{\ell_{n},r_{n}}^{|\mathbf{X}^{*}|}\leq x\big{)}+\mathbb{P}\big{(}a_{n}^{-1}M_{\ell_{n},r_{n}}^{|\mathbf{X}^{*}|}>x\,,a_{n}^{-1}M_{\ell_{n},r_{n}}^{|\mathbf{X}|}\leq x\big{)}\Big{]}\] \[=: I_{1}+I_{2}+I_{3}\,.\]
Here we choose \(q^{\prime}=q/p<1\). Then we have for some constant \(c=c(\mathbf{u},\lambda)>0\),
\[k_{n}(I_{1}+I_{2}) \leq c\,k_{n}\,a_{n}^{-q}\,\sum_{t=\ell_{n}}^{r_{n}}\mathbb{E}\big{[}|\mathbf{X}_{t}-\mathbf{X}_{t}^{*}|^{q}+\big{|}|\mathbf{X}_{t}|^{p}-|\mathbf{X}_{t}^{*}|^{p}\big{|}^{q/p}\big{]}\,,\]
and the right-hand side converges to zero in view of (5.3).
By a symmetry argument it suffices to bound the first term in \(I_{3}\); the second one can be treated analogously. Since the anti-clustering condition is satisfied by \((|\mathbf{X}_{t}|)\), we may apply Theorem 3 of Segers [29]: there exists \(\theta>0\) such that, for every \(x>0\),
\[\theta_{n}=\frac{\mathbb{P}(M_{r_{n}}^{|\mathbf{X}|}>x\,a_{n})}{r_{n}\,\mathbb{ P}(|\mathbf{X}|>x\,a_{n})}\to\theta\,,\qquad n\to\infty\,.\]
Thus, for every \(\varepsilon>0\),
\[k_{n}\mathbb{P}\Big{(}a_{n}^{-1}M_{r_{n}}^{|\mathbf{X}|}>x\,,a_{ n}^{-1}M_{r_{n}}^{|\mathbf{X}^{*}|}\leq x\,,\max_{1\leq t\leq r_{n}}|\mathbf{X}_{t }-\mathbf{X}^{*}_{t}|\leq x\,\varepsilon\,a_{n}\Big{)}\] \[\leq k_{n}\left(\mathbb{P}\big{(}M_{r_{n}}^{|\mathbf{X}|}>x\,a_{n} \big{)}-\mathbb{P}\big{(}M_{r_{n}}^{|\mathbf{X}|}>x\,(1+\varepsilon)\,a_{n} \big{)}\right)\] \[= \theta\,n\left(\mathbb{P}(|\mathbf{X}|>x\,a_{n})-\mathbb{P}(| \mathbf{X}|>x(1+\varepsilon)\,a_{n})\right)+o(1)\] \[\to \theta\,x^{-\alpha}\,(1-(1+\varepsilon)^{-\alpha})\,,\qquad n\to \infty\,,\]
and the right-hand side converges to \(0\) as \(\varepsilon\downarrow 0\). Moreover, for every \(x,\varepsilon>0\),
\[\lim_{n\to\infty}k_{n}\,\mathbb{P}\big{(}\max_{1\leq t\leq r_{n}}|\mathbf{X}_ {t}-\mathbf{X}^{*}_{t}|>x\,\varepsilon\,a_{n}\big{)}=0\,.\]
This follows from (5.1) and an application of Markov's inequality of order \(q\). Thus we have proved that \(\lim_{n\to\infty}k_{n}\,I_{3}=0\). The proof is finished.
### Iterated random Lipschitz functions
Assume \((\mathbf{X}_{t})\) is the solution of a system of iterated random functions: there exists a function \(g\) and an iid sequence \((\varepsilon_{t})\) such that
\[\mathbf{X}_{t}=g(\mathbf{X}_{t-1},\varepsilon_{t}),\qquad t\in\mathbb{Z}\,. \tag{5.6}\]
We assume that this system is _contractive_, i.e., there exist \(0<\rho<1\) and \(q>0\) such that
\[\mathbb{E}[|g(\mathbf{x}_{0},\varepsilon_{0})-g(\mathbf{x}_{1},\varepsilon_{0 })|^{q}]\leq\rho\,|\mathbf{x}_{0}-\mathbf{x}_{1}|^{q}\,,\qquad\mathbf{x}_{0}, \mathbf{x}_{1}\in\mathbb{R}^{d}\,. \tag{5.7}\]
If there exists \(\mathbf{x}\in\mathbb{R}^{d}\) such that
\[\mathbb{E}[|g(\mathbf{x},\varepsilon_{0})|^{q}]<\infty\,, \tag{5.8}\]
the fixed point theorem in the complete space \(L^{q}\) ensures the existence of a unique stationary solution \((\mathbf{X}_{t})\) which admits finite moments of order \(q>0\).
A coupled version \((\mathbf{X}^{*}_{t})\) is easily obtained as follows:
\[\mathbf{X}^{*}_{t}=\begin{cases}g(\mathbf{X}^{*}_{t-1},\varepsilon_{t}),&t\geq 1\,,\\ g(\mathbf{X}^{*}_{t-1},\varepsilon^{\prime}_{t}),&t\leq 0\,,\end{cases}\]
where \((\varepsilon^{\prime}_{t})\) is an independent copy of \((\varepsilon_{t})\).
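The construction is easy to implement; the sketch below (our own illustration, with Gaussian innovations and an AR(1)-type map as placeholder choices) simulates both chains and exhibits the geometric decay of \(|\mathbf{X}_{t}-\mathbf{X}_{t}^{*}|\) in \(t\).

```python
# Simulation sketch of the coupling (illustrative; g and the Gaussian
# innovations are placeholder choices, not taken from the development above).
import numpy as np

rng = np.random.default_rng(2)

def coupled_paths(g, x0, m, n):
    """Simulate (X_t) and the coupled version (X*_t) for t = 1, ..., n.

    Both chains share the innovations eps_t for t >= 1; over a past window of
    length m the starred chain is driven by independent innovations eps'_t,
    so that (X*_t)_{t>=1} is independent of (X_t)_{t<=0}.
    """
    eps_past = rng.standard_normal(m)        # eps_t, t <= 0
    eps_past_star = rng.standard_normal(m)   # eps'_t, t <= 0 (independent copy)
    eps = rng.standard_normal(n)             # eps_t, t >= 1 (shared)
    x, x_star = x0, x0
    for e, e_star in zip(eps_past, eps_past_star):
        x, x_star = g(x, e), g(x_star, e_star)
    xs, xs_star = [], []
    for e in eps:                            # shared innovations from t = 1 on
        x, x_star = g(x, e), g(x_star, e)
        xs.append(x)
        xs_star.append(x_star)
    return np.array(xs), np.array(xs_star)

g = lambda x, e: 0.7 * x + e                 # an AR(1)-type contraction
x, x_star = coupled_paths(g, x0=0.0, m=500, n=20)
print(np.abs(x - x_star))                    # |X_t - X*_t| decays geometrically in t
```

We then have the following result.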
**Proposition 5.2**.: _Assume that the unique stationary solution \((\mathbf{X}_{t})\) of the iterated random function system (5.6) is regularly varying with index \(\alpha>0\) and satisfies (5.7), (5.8) for some \(\rho\in(0,1)\) and \(q<\alpha\wedge 1\). Then the conditions of Proposition 5.1 are satisfied._
Proof.: A recursive argument yields for some constant \(c>0\),
\[\mathbb{E}\big{[}|\mathbf{X}_{t}-\mathbf{X}^{*}_{t}|^{q}]\leq c\,\rho^{t}\,, \qquad t\geq 1\,.\]
Then (2) in Proposition 5.1 follows by choosing \(C>0\) sufficiently large in the intermediate sequence \(\ell_{n}=[C\log n]\): assuming additionally that \(r_{n}\geq n^{\delta}\) for some \(\delta>0\), we obtain
\[k_{n}\,a_{n}^{-q}\sum_{t=\ell_{n}}^{r_{n}}\left(\mathbb{E}\big{[}|\mathbf{X}_ {t}-\mathbf{X}^{*}_{t}|^{q}\big{]}\right)^{(1/p)\lor 1}\leq c\rho^{\ell_{n}} \frac{n}{r_{n}a_{n}^{q}}\leq c\frac{n^{C\log\rho+1-\delta}}{a_{n}^{q}}\to 0\,, \qquad n\to\infty\,.\]
It remains to verify (3) in Proposition 5.1. We have
\[\mathbb{E}\big{[}\big{(}\big{|}a_{n}^{-1}\big{(}\mathbf{X}_{t}- \mathbf{X}_{t}^{*}\big{)}\big{|}^{q}\wedge 1\big{)}\mid\mathbf{X}_{t-1},\mathbf{X}_{t-1}^{*} \big{]} \leq \mathbb{E}\big{[}|a_{n}^{-1}(\mathbf{X}_{t}-\mathbf{X}_{t}^{*})|^{q }\mid\mathbf{X}_{t-1},\mathbf{X}_{t-1}^{*}\big{]}\wedge 1\] \[\leq \big{(}\rho\,\big{|}a_{n}^{-1}\big{(}\mathbf{X}_{t-1}-\mathbf{X}_ {t-1}^{*}\big{)}\big{|}^{q}\big{)}\wedge 1\,.\]
Thus, using the filtration \(\mathcal{F}_{t}=\sigma(\varepsilon_{t},\ldots,\varepsilon_{1},\mathbf{X}_{0},\mathbf{X}_{0}^{*})\), \(t\geq 1\), and the Markov property we obtain
\[\mathbb{E}\big{[}\big{(}\big{|}a_{n}^{-1}\big{(}\mathbf{X}_{t}-\mathbf{X}_{t}^{*}\big{)}\big{|}^{q}\wedge 1\big{)}\big{(}\big{|}a_{n}^{-1}\mathbf{X}_{0}\big{|}^{q}\wedge 1\big{)}\big{]}\] \[\leq \mathbb{E}\big{[}\big{(}\big{(}\rho\,\big{|}a_{n}^{-1}\big{(}\mathbf{X}_{t-1}-\mathbf{X}_{t-1}^{*}\big{)}\big{|}^{q}\big{)}\wedge 1\big{)}\big{(}\big{|}a_{n}^{-1}\mathbf{X}_{0}\big{|}^{q}\wedge 1\big{)}\big{]}\] \[\leq \mathbb{E}\big{[}\big{(}\big{(}\rho^{t}\,\big{|}a_{n}^{-1}\big{(}\mathbf{X}_{0}-\mathbf{X}_{0}^{*}\big{)}\big{|}^{q}\big{)}\wedge 1\big{)}\big{(}\big{|}a_{n}^{-1}\mathbf{X}_{0}\big{|}^{q}\wedge 1\big{)}\big{]}\,,\]
again using a recursive argument in the last step. Observing that \(\mathbf{X}_{0}\) and \(\mathbf{X}_{0}^{*}\) are independent, we obtain
\[\mathbb{E}\big{[}\big{(}\big{|}a_{n}^{-1}\big{(}\mathbf{X}_{t}-\mathbf{X}_{t}^{*}\big{)}\big{|}^{q}\wedge 1\big{)}\big{(}\big{|}a_{n}^{-1}\mathbf{X}_{0}\big{|}^{q}\wedge 1\big{)}\big{]}\] \[\leq \mathbb{E}\big{[}\big{(}\big{(}\rho^{t}\,\big{|}a_{n}^{-1}\mathbf{X}_{0}\big{|}^{q}\big{)}\wedge 1\big{)}\big{(}\big{|}a_{n}^{-1}\mathbf{X}_{0}\big{|}^{q}\wedge 1\big{)}\big{]}+\mathbb{E}\big{[}\big{(}\rho^{t}\,\big{|}a_{n}^{-1}\mathbf{X}_{0}^{*}\big{|}^{q}\big{)}\wedge 1\big{]}\,\mathbb{E}\big{[}\big{|}a_{n}^{-1}\mathbf{X}_{0}\big{|}^{q}\wedge 1\big{]}\] \[\leq \rho^{\alpha t/(2q)}\,\Big{(}\mathbb{E}\big{[}\big{|}a_{n}^{-1}\mathbf{X}_{0}\big{|}^{\alpha/2}\big{(}\big{|}a_{n}^{-1}\mathbf{X}_{0}\big{|}^{q}\wedge 1\big{)}\big{]}+\mathbb{E}\big{[}\big{|}a_{n}^{-1}\mathbf{X}_{0}\big{|}^{\alpha/2}\big{]}\,\mathbb{E}\big{[}\big{|}a_{n}^{-1}\mathbf{X}_{0}\big{|}^{q}\wedge 1\big{]}\Big{)}\,,\] where we used \(|\mathbf{X}_{0}-\mathbf{X}_{0}^{*}|^{q}\leq|\mathbf{X}_{0}|^{q}+|\mathbf{X}_{0}^{*}|^{q}\), independence, and the elementary bound \((\rho^{t}y)\wedge 1\leq\rho^{\alpha t/(2q)}y^{\alpha/(2q)}\), valid for \(q\geq\alpha/2\).
By Karamata's theorem there exists a positive constant \(c>0\) such that
\[\mathbb{E}\big{[}\big{(}\big{|}a_{n}^{-1}\mathbf{X}_{0}\big{|}^{q }\wedge 1\big{)}^{2}\big{]} \leq c\,\mathbb{P}(|X_{0}|>a_{n})\,.\]
For \(q>\alpha/2\) we achieve
\[n\mathbb{E}\big{[}\big{(}\big{|}a_{n}^{-1}\big{(}\mathbf{X}_{t}- \mathbf{X}_{t}^{*}\big{)}\big{|}^{q}\wedge 1\big{)}\big{(}\big{|}a_{n}^{-1} \mathbf{X}_{0}\big{|}^{q}\wedge 1\big{)}\big{]} \leq c\rho^{\alpha t/(2q)}+o(1)\,.\]
Then (3) in Proposition 5.1 and the desired result follow.
### Examples
In this section we consider two examples of regularly varying stationary time series: an autoregressive process of order \(1\) (AR(1)) and the solution to an affine stochastic recurrence equation (SRE).
**Example 5.3**.: **A regularly varying AR(1) process.** We consider the causal stationary solution of the AR(1) equations \(X_{t}=\varphi X_{t-1}+Z_{t}\), \(t\in\mathbb{Z}\), for some \(\varphi\in(-1,1)\backslash\{0\}\) and an iid regularly varying noise sequence \((Z_{t})\). This means that a generic element \(Z\) satisfies the tail balance condition, for \(q_{\pm}\geq 0\) such that \(q_{+}+q_{-}=1\),
\[\frac{\mathbb{P}(\pm Z>x)}{\mathbb{P}(|Z|>x)}\to q_{\pm}\,,\qquad x\to \infty\,.\]
Then \((Z_{t})\) has spectral tail process \(\mathbb{P}(\Theta_{0}^{Z}=\pm 1)=q_{\pm}\), \(\Theta_{t}^{Z}=0\), \(t\neq 0\). It is well known (e.g. Kulik and Soulier [19]) that \((X_{t})\) is regularly varying with index \(\alpha\) and spectral tail process
\[\Theta_{t} = \Theta_{0}^{Z}\,\mathrm{sign}(\varphi^{J+t})\,|\varphi|^{t}\,\mathbf{1}(J+t\geq 0)=\Theta_{0}^{Z}\,\mathrm{sign}(\varphi^{J})\,\varphi^{t}\,\mathbf{1}(t\geq-J)=\Theta_{0}\,\varphi^{t}\,\mathbf{1}(t\geq-J)\,,\qquad t\in\mathbb{Z}\,,\]
where \(\mathbb{P}(\Theta_{0}=\pm 1)=p_{\pm}\), the variables \(J\) and \(\Theta_{0}^{Z}\) are independent, and
\[p_{\pm} = q_{\pm}\,\mathbf{1}(\varphi>0)+\frac{q_{\pm}+q_{\mp}|\varphi|^{ \alpha}}{1+|\varphi|^{\alpha}}\,\mathbf{1}(\varphi<0)\] \[\mathbb{P}(J=j) = |\varphi|^{\alpha\,j}\,(1-|\varphi|^{\alpha})\,,\qquad j=0,1,\ldots\,.\]
The _forward spectral tail process_ is given by \(\Theta_{t}=\Theta_{0}\,\varphi^{t}\), \(t\geq 0\), and the spectral cluster process \((Q_{t})\) by \(Q_{t}=\Theta_{t}/\|\Theta\|_{\alpha}=\Theta_{t}\,(1-|\varphi|^{\alpha})^{1/\alpha}\), \(t\in\mathbb{Z}\), and the extremal index by \(\theta_{|X|}=1-|\varphi|^{\alpha}\).
We observe that \(X_{t}=g(X_{t-1},Z_{t})=\varphi\,X_{t-1}+Z_{t}\), \(t\in\mathbb{Z}\), constitute a contractive iterated random function system: for \(q<\alpha\),
\[\mathbb{E}\big{[}|g(x_{0},Z_{0})-g(x_{1},Z_{0})|^{q}\big{]}=|\varphi|^{q}\,|x_{0 }-x_{1}|^{q}\,.\]
Hence the conditions of Proposition 5.2 are satisfied and all mixing and anti-clustering conditions needed for the results in this paper hold.
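These closed-form quantities can be checked by simulation; the sketch below (ours, with illustrative parameters) compares the blocks estimator \(\theta_{n}\) appearing in the proof of Proposition 5.1 with the theoretical extremal index \(1-|\varphi|^{\alpha}\). Agreement is only rough at finite sample sizes.

```python
# Numerical illustration: blocks estimator of the extremal index for a
# heavy-tailed AR(1) process versus the theoretical value 1 - |phi|^alpha.
import numpy as np

rng = np.random.default_rng(3)
alpha, phi, n, block = 1.5, 0.6, 200_000, 200

z = (rng.pareto(alpha, size=n) + 1.0) * rng.choice([-1.0, 1.0], size=n)
x = np.empty(n)
x[0] = z[0]
for t in range(1, n):
    x[t] = phi * x[t - 1] + z[t]

absx = np.abs(x)
u = np.quantile(absx, 1 - 1 / block)          # about one exceedance per block
blocks = absx[: n - n % block].reshape(-1, block)
theta_hat = np.mean(blocks.max(axis=1) > u) / (block * np.mean(absx > u))

print("blocks estimator     :", theta_hat)
print("theory 1-|phi|^alpha :", 1 - abs(phi) ** alpha)
```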
**Example 5.4**.: **A regularly varying solution to an SRE.** We consider the causal solution to an affine SRE \(X_{t}=A_{t}X_{t-1}+B_{t}\), \(t\in\mathbb{Z}\), where \((A_{t},B_{t})\), \(t\in\mathbb{Z}\), is an iid \(\mathbb{R}^{2}\)-valued sequence. We further assume that a generic element \((A,B)\) of this sequence satisfies the following conditions: (i) there exists \(\alpha>0\) such that \(\mathbb{E}[|A|^{\alpha}]=1\), \(\mathbb{E}[|A|^{\alpha}\log^{+}|A|]<\infty\), \(\mathbb{E}[|B|^{\alpha}]<\infty\), (ii) the conditional law of \(\log|A|\) given \(\{A\neq 0\}\) is non-arithmetic, (iii) \(\mathbb{P}(A\,x+B=x)<1\) for every \(x\in\mathbb{R}\). Then there exists an a.s. unique causal solution to the SRE with the property \(\mathbb{P}(\pm X_{0}>x)\sim c_{\pm}\,x^{-\alpha}\) as \(x\to\infty\) for constants \(c_{\pm}\) such that \(c_{+}+c_{-}>0\). This follows from classical Kesten-Goldie theory; cf. Theorem 2.4.7 in Buraczewski et al. [10]. The sequence \((X_{t})\) is regularly varying with index \(\alpha\), forward spectral tail process \(\Theta_{t}=\Theta_{0}\,A_{1}\cdots A_{t}\), \(t\geq 0\), where \(\mathbb{P}(\Theta_{0}=\pm 1)=c_{\pm}/(c_{+}+c_{-})\) and extremal index \(\theta_{|X|}=\mathbb{E}\big{[}\big{(}1-\sup_{t\geq 1}|A_{1}\cdots A_{t}|^{\alpha}\big{)}_{+}\big{]}\); see Basrak and Segers [8].
We observe that \(X_{t}=g(X_{t-1},(A_{t},B_{t}))=A_{t}X_{t-1}+B_{t}\), \(t\in\mathbb{Z}\), constitute a contractive iterated random function system: for \(0<q<\alpha\), with \(\rho=\mathbb{E}[|A_{0}|^{q}]\), \(\mathbb{E}[|g(x_{0},(A_{0},B_{0}))-g(x_{1},(A_{0},B_{0}))|^{q}]=\rho\,|x_{0}-x_{1}|^{q}\,.\) We have \(\rho<1\) by convexity of the function \(f(q)=\mathbb{E}[|A_{0}|^{q}]\), since \(f(0)=1\) and \(f(\alpha)=1\). Hence the conditions of Proposition 5.2 are satisfied and all mixing and anti-clustering conditions needed for the results in this paper hold.
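As a numerical illustration (ours; the lognormal law for \(A\) is a convenience assumption), note that for \(\log|A|\sim N(\mu,\sigma^{2})\) with \(\mu<0\) the Kesten condition \(\mathbb{E}[|A|^{\alpha}]=1\) gives \(\alpha=-2\mu/\sigma^{2}\), which can be compared with a Hill estimate computed from a simulated path.

```python
# Sketch: simulate the SRE X_t = A_t X_{t-1} + B_t with lognormal A and
# compare a Hill estimate of the tail index with the Kesten value alpha.
import numpy as np

rng = np.random.default_rng(4)
mu, sigma, n = -0.3, 0.6, 200_000
alpha = -2 * mu / sigma**2                  # solves E[|A|^alpha] = 1; here 5/3

a = np.exp(mu + sigma * rng.standard_normal(n))
b = rng.standard_normal(n)
x = np.zeros(n)
for t in range(1, n):
    x[t] = a[t] * x[t - 1] + b[t]

k = 2_000                                   # number of upper order statistics
order = np.sort(np.abs(x))[::-1]
hill = 1.0 / np.mean(np.log(order[:k] / order[k]))
print("Kesten alpha:", alpha, " Hill estimate:", hill)
```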
## Appendix A Some auxiliary results
**Lemma A.1**.: _Consider an \(\mathbb{R}^{d}\)-valued stationary regularly varying sequence \((\mathbf{X}_{t})\) with index \(\alpha\in(1,2)\). If the anti-clustering condition (2.2) is satisfied then_
(A.1) \[J:=\mathbb{E}\Big{[}\Big{(}\sum_{j=0}^{\infty}|\mathbf{\Theta}_{j}|\Big{)}^{ \alpha-1}\Big{]}<\infty\,.\]
Proof.: By sub-additivity we have
\[J \leq \mathbb{E}\Big{[}\Big{(}\sum_{j=0}^{\infty}|\mathbf{\Theta}_{j}| \,\mathbf{1}(|\mathbf{\Theta}_{j}|\leq 1)\Big{)}^{\alpha-1}\Big{]}+\mathbb{E} \Big{[}\sum_{j=0}^{\infty}|\mathbf{\Theta}_{j}|^{\alpha-1}\,\mathbf{1}(| \mathbf{\Theta}_{j}|>1)\Big{]}=:I_{1}+I_{2}\,.\]
By Jensen's inequality,
\[I_{1}\leq\Big{(}\mathbb{E}\Big{[}\sum_{j=0}^{\infty}|\mathbf{\Theta}_{j}|\, \mathbf{1}(|\mathbf{\Theta}_{j}|\leq 1)\Big{]}\Big{)}^{\alpha-1}\,.\]
We prove that the right-hand side is finite by showing
(A.2) \[\sum_{j=0}^{\infty}\mathbb{E}\big{[}|\mathbf{\Theta}_{j}|\wedge 1\big{]}<\infty\,.\]
Using the decomposition
\[|a_{n}^{-1}\mathbf{X}_{j}|\wedge 1 = |a_{n}^{-1}\mathbf{X}_{j}|\,\mathbf{1}(|\mathbf{X}_{j}|\leq a_{n} )+\mathbf{1}(|\mathbf{X}_{j}|>a_{n})\,,\]
condition (2.2) implies
\[\lim_{k\to\infty}\limsup_{n\to\infty}\sum_{j=k}^{r_{n}}\,n\,\,\mathbb{E}\big{[} (|a_{n}^{-1}\mathbf{X}_{j}|\wedge 1)\,\mathbf{1}(|\mathbf{X}_{0}|>a_{n})\big{]}=0\,.\]
Hence for every \(\varepsilon>0\) there exists an integer \(k_{0}\) sufficiently large such that
\[\limsup_{n\to\infty}\sum_{j=k}^{k+h}\,n\,\,\mathbb{E}\big{[}(|a_{n}^{-1}{\bf X}_{ j}|\wedge 1)\,{\bf 1}(|{\bf X}_{0}|>a_{n})\big{]}\leq\varepsilon\,,\qquad k\geq k_{0} \,,\qquad h\geq 0\,.\]
Using the regular variation of \(({\bf X}_{t})\) with tail process \((Y\,{\boldsymbol{\Theta}}_{t})\), we can determine the limit of each summand
\[\lim_{n\to\infty}n\,\,\mathbb{E}\big{[}(|a_{n}^{-1}{\bf X}_{j}| \wedge 1)\,{\bf 1}(|{\bf X}_{0}|>a_{n})\big{]} = \lim_{n\to\infty}\mathbb{E}\big{[}(|a_{n}^{-1}{\bf X}_{j}|\wedge 1) \big{|}\,\,|{\bf X}_{0}|>a_{n}\big{]}\] \[= \mathbb{E}\big{[}|Y\,{\boldsymbol{\Theta}}_{j}|\wedge 1\big{]}\,.\]
Thus the Cauchy criterion for the sequence \((\sum_{j=0}^{k}\mathbb{E}\big{[}|Y\,{\boldsymbol{\Theta}}_{j}|\wedge 1\big{]})_{k\geq 0}\) holds. Since \(Y>1\) a.s., (A.2) follows.
It remains to show that \(I_{2}<\infty\). By stationarity we can show similarly to (A.2) that
\[\sum_{j=0}^{\infty}\mathbb{E}[|{\boldsymbol{\Theta}}_{-j}|\wedge 1]<\infty.\]
By the time-change formula (2.7) on p. 4 we obtain that
\[\mathbb{E}[|{\boldsymbol{\Theta}}_{-j}|\wedge 1] = \mathbb{E}[|{\boldsymbol{\Theta}}_{-j}|\wedge 1\mid{ \boldsymbol{\Theta}}_{-j}\neq{\boldsymbol{0}}]\mathbb{P}({\boldsymbol{\Theta}}_ {-j}\neq{\boldsymbol{0}})\] \[= \mathbb{E}[|{\boldsymbol{\Theta}}_{-j}|\wedge|{\boldsymbol{\Theta }}_{0}|\,\mid{\boldsymbol{\Theta}}_{-j}\neq{\boldsymbol{0}}]\mathbb{E}\big{[}|{ \boldsymbol{\Theta}}_{j}|^{\alpha}\big{]}\] \[= \mathbb{E}[|{\boldsymbol{\Theta}}_{j}|^{\alpha}\left(|{\boldsymbol {\Theta}}_{j}|^{-1}\wedge 1\right)]\,.\]
We conclude that
\[\infty > \sum_{j=0}^{\infty}\mathbb{E}[|{\boldsymbol{\Theta}}_{-j}|\wedge 1]=\sum_{j=0}^{\infty}\mathbb{E}[|{\boldsymbol{\Theta}}_{j}|^{\alpha}\left(|{\boldsymbol{\Theta}}_{j}|^{-1}\wedge 1\right)]\] \[= \sum_{j=0}^{\infty}\mathbb{E}[|{\boldsymbol{\Theta}}_{j}|^{\alpha-1}\wedge|{\boldsymbol{\Theta}}_{j}|^{\alpha}]\geq\sum_{j=0}^{\infty}\mathbb{E}[|{\boldsymbol{\Theta}}_{j}|^{\alpha-1}{\bf 1}(|{\boldsymbol{\Theta}}_{j}|>1)],\]
implying \(I_{2}<\infty\).
## Appendix B Proof of Lemma 4.6
Proof.: For \(1<\alpha<2\), \({\boldsymbol{\xi}}_{\alpha}\) is \(\alpha\)-stable and \(\mathbb{E}[|{\boldsymbol{\xi}}_{\alpha}|]<\infty\), which implies the statement of Lemma 4.6.
Next we consider the case \(0<\alpha<1\). We use the series representation (3.1):
\[|{\boldsymbol{\xi}}_{\alpha}|=\Big{|}\sum_{i=1}^{\infty}\Gamma_{i}^{-1/\alpha }\sum_{j\in\mathbb{Z}}{\bf Q}_{ij}\Big{|}\leq\sum_{i=1}^{\infty}\Gamma_{i}^{- 1/\alpha}\sum_{j\in\mathbb{Z}}|{\bf Q}_{ij}|=:\xi_{\alpha}^{\prime}\,.\]
The right-hand side represents a positive \(\alpha\)-stable random variable \(\xi_{\alpha}^{\prime}\). Indeed, \((\sum_{j\in\mathbb{Z}}|{\bf Q}_{ij}|)_{i\in\mathbb{Z}}\) is a sequence of iid random variables satisfying \(\mathbb{E}[(\sum_{j\in\mathbb{Z}}|{\bf Q}_{j}|)^{\alpha}]<\infty\): for \(\alpha<1\) we have \(\|{\bf Q}\|_{1}^{\alpha}\leq\|{\bf Q}\|_{\alpha}^{\alpha}=1\) by definition. We apply Fubini's theorem for positive random variables and obtain
\[\mathbb{E}\Big{[}\frac{\xi_{\alpha}^{\prime}}{(\zeta_{\alpha,p}^ {p})^{1/p}}\Big{]} = \frac{p}{\Gamma(1/p)}\int_{0}^{\infty}\mathbb{E}\big{[}\xi_{ \alpha}^{\prime}{\rm e}^{-\lambda^{p}\zeta_{\alpha,p}^{p}}\big{]}d\lambda\,.\]
We will show that the integrand coincides with
\[\lim_{x\to 0^{+}}-\frac{\partial\Psi_{\xi_{\alpha}^{\prime},\zeta_{\alpha,p}^{p}}(x,\lambda^{p})}{\partial x}\,,\qquad\Psi_{\xi_{\alpha}^{\prime},\zeta_{\alpha,p}^{p}}(x,\lambda^{p}):=\mathbb{E}\big{[}{\rm e}^{-x\,\xi_{\alpha}^{\prime}-\lambda^{p}\zeta_{\alpha,p}^{p}}\big{]}\,.\]
From the expression of the characteristic function - Laplace transform of Theorem 3.2 we have
\[\Psi_{\xi^{\prime}_{\alpha},\zeta^{p}_{\alpha,p}}(x,\lambda^{p}) = \exp\Big{(}\int_{0}^{\infty}\mathbb{E}\Big{[}{\rm e}\,^{-y\,x\,\|{\bf Q}\|_{1}-(\lambda y)^{p}\|{\bf Q}\|_{p}^{p}}-1\Big{]}d(-y^{-\alpha})\Big{)}\] \[-\frac{\partial\Psi_{\xi^{\prime}_{\alpha},\zeta^{p}_{\alpha,p}}(x,\lambda^{p})}{\partial x} = \int_{0}^{\infty}\mathbb{E}\Big{[}y\,\|{\bf Q}\|_{1}{\rm e}\,^{-xy\|{\bf Q}\|_{1}-(\lambda y)^{p}\|{\bf Q}\|_{p}^{p}}\Big{]}d(-y^{-\alpha})\,\,\mathbb{E}\big{[}{\rm e}\,^{-x\xi^{\prime}_{\alpha}-\lambda^{p}\zeta^{p}_{\alpha,p}}\big{]}\] \[= \mathbb{E}\Big{[}\|{\bf Q}\|_{p}^{\alpha}\frac{\|{\bf Q}\|_{1}}{\|{\bf Q}\|_{p}}\int_{0}^{\infty}y\,{\rm e}\,^{-xy\|{\bf Q}\|_{1}/\|{\bf Q}\|_{p}-(\lambda y)^{p}}d(-y^{-\alpha})\Big{]}\,\,\mathbb{E}\big{[}{\rm e}\,^{-x\xi^{\prime}_{\alpha}-\lambda^{p}\zeta^{p}_{\alpha,p}}\big{]}\] \[\to \frac{\alpha}{p}\Gamma((1-\alpha)/p)\lambda^{\alpha-1}\mathbb{E}\Big{[}\|{\bf Q}\|_{p}^{\alpha}\frac{\|{\bf Q}\|_{1}}{\|{\bf Q}\|_{p}}\Big{]}\,\,\mathbb{E}\big{[}{\rm e}\,^{-\lambda^{p}\zeta^{p}_{\alpha,p}}\big{]}\,,\qquad x\to 0^{+}\,,\]
where we exploit (4.4) in the last step. By monotone convergence we also have
\[\lim_{x\to 0^{+}}-\frac{\partial\Psi_{\xi^{\prime}_{\alpha},\zeta^{p}_{ \alpha,p}}(x,\lambda^{p})}{\partial x}=\lim_{x\to 0^{+}}\mathbb{E}\big{[}\xi^{ \prime}_{\alpha}{\rm e}\,^{-x\,\xi^{\prime}_{\alpha}-\lambda^{p}\,\zeta^{p}_{ \alpha,p}}\big{]}=\mathbb{E}\big{[}\xi^{\prime}_{\alpha}{\rm e}\,^{-\lambda^{p }\,\zeta^{p}_{\alpha,p}}\big{]}\]
since the limit exists. Thus \(\mathbb{E}\big{[}\xi^{\prime}_{\alpha}{\rm e}\,^{-\lambda^{p}\,\zeta^{p}_{ \alpha,p}}\big{]}<\infty\) and we conclude the proof of Lemma 4.6 using the domination \(|\boldsymbol{\xi}_{\alpha}|\leq\xi^{\prime}_{\alpha}\).
## Appendix C Proof of the integrability of \({\bf R}_{\alpha}\)
We proceed as in Appendix B, dominating \(|{\bf R}_{\alpha}|\leq\xi^{\prime}_{\alpha}/\eta_{\alpha}\) for \(\alpha\in(0,1)\). The integrability of \({\bf R}_{\alpha}\), \(\alpha\in(1,2)\), follows easily from \(\mathbb{E}[|\boldsymbol{\xi}_{\alpha}|]<\infty\). We introduce the hybrid Laplace transform
\[\Psi_{\xi^{\prime}_{\alpha},\eta_{\alpha}}(u,x):=\mathbb{E}\big{[}{\rm e}\,^{ -u\,\xi^{\prime}_{\alpha}}{\bf 1}(\eta_{\alpha}\leq x)\big{]}\,,\qquad u,x>0.\]
From the expression of the hybrid characteristic function of Theorem 3.1 we have
\[\Psi_{\xi^{\prime}_{\alpha},\eta_{\alpha}}(u,x)=\exp\Big{(}\int_{0}^{\infty} \mathbb{E}\big{[}{\rm e}\,^{-yu\|{\bf Q}\|_{1}}{\bf 1}(y\|{\bf Q}\|_{\infty} \leq x)-1\big{]}d(-y^{-\alpha})\Big{)}\,,\qquad u,x>0\,.\]
Then by monotone convergence we obtain
\[\lim_{u\to 0^{+}}-\frac{\partial\Psi_{\xi^{\prime}_{\alpha},\eta_{ \alpha}}(u,x)}{\partial u} = \lim_{u\to 0^{+}}\int_{0}^{\infty}\mathbb{E}\big{[}y\|{\bf Q}\|_{1}{ \rm e}\,^{-yu\|{\bf Q}\|_{1}}{\bf 1}(y\|{\bf Q}\|_{\infty}\leq x)]d(-y^{- \alpha})\Psi_{\xi^{\prime}_{\alpha},\eta_{\alpha}}(u,x)\] \[= \alpha\mathbb{E}\Big{[}\|{\bf Q}\|_{1}\int_{0}^{x/\|{\bf Q}\|_{ \infty}}y^{-\alpha}dy\Big{]}\Psi_{\xi^{\prime}_{\alpha},\eta_{\alpha}}(0,x)\] \[= \frac{\alpha}{1-\alpha}\mathbb{E}\Big{[}\frac{\|{\bf Q}\|_{1}}{\| {\bf Q}\|_{\infty}^{1-\alpha}}\Big{]}x^{1-\alpha}\exp(-\theta_{|{\bf X}|}x^{- \alpha})\,.\]
The limit is finite since the integrability of \(\sum_{t\in\mathbb{Z}}|\widetilde{\bf Q}_{t}|\) is shown in Remark 4.2 and
\[\mathbb{E}\Big{[}\frac{\|{\bf Q}\|_{1}}{\|{\bf Q}\|_{\infty}^{1-\alpha}} \Big{]}=\mathbb{E}\Big{[}\|{\bf Q}\|_{\infty}^{\alpha}\frac{\|{\bf Q}\|_{1}}{ \|{\bf Q}\|_{\infty}}\Big{]}=\theta_{|{\bf X}|}\mathbb{E}[\|\widetilde{\bf Q}\| _{1}]<\infty\,.\]
Then we proved the finiteness of
\[\mathbb{E}[\xi^{\prime}_{\alpha}{\bf 1}(\eta_{\alpha}\leq x)]=\lim_{u\to 0^{+}}- \frac{\partial\Psi_{\xi^{\prime}_{\alpha},\eta_{\alpha}}(u,x)}{\partial u}=\frac{ \alpha}{1-\alpha}\theta_{|{\bf X}|}\mathbb{E}[\|\widetilde{\bf Q}\|_{1}]x^{1- \alpha}\exp(-\theta_{|{\bf X}|}x^{-\alpha})\,,\qquad x>0\,.\]
Applying Fubini's theorem we achieve
\[\mathbb{E}\Big{[}\frac{\xi^{\prime}_{\alpha}}{\eta_{\alpha}} \Big{]} = \mathbb{E}\Big{[}\xi^{\prime}_{\alpha}\int_{0}^{\infty}{\bf 1}(\eta_{ \alpha}\leq x)\frac{dx}{x^{2}}\Big{]}\] \[= \int_{0}^{\infty}\mathbb{E}[\xi^{\prime}_{\alpha}{\bf 1}(\eta_{ \alpha}\leq x)]\frac{dx}{x^{2}}\]
\[= \frac{\mathbb{E}[\|\widetilde{\mathbf{Q}}\|_{1}]}{1-\alpha}\int_{0}^{ \infty}\alpha\theta_{|\mathbf{X}|}x^{1-\alpha}\exp(-\theta_{|\mathbf{X}|}x^{- \alpha})\frac{dx}{x^{2}}\] \[= \frac{\mathbb{E}[\|\widetilde{\mathbf{Q}}\|_{1}]}{1-\alpha}\,.\]
The left-hand side term is finite which proves that \(\mathbf{R}_{\alpha}\) is integrable for \(0<\alpha<1\).
|
2306.12587 | ARIES: A Corpus of Scientific Paper Edits Made in Response to Peer
Reviews | We introduce the task of automatically revising scientific papers based on
peer feedback and release ARIES, a dataset of review comments and their
corresponding paper edits. The data is drawn from real reviewer-author
interactions from computer science, and we provide labels linking each reviewer
comment to the specific paper edits made by the author in response. We
automatically create a high-precision silver training set, as well as an
expert-labeled test set that shows high inter-annotator agreement. In
experiments with 10 models covering the state of the art, we find that they
struggle even to identify which edits correspond to a comment -- especially
when the relationship between the edit and the comment is indirect and requires
reasoning to uncover. We also extensively analyze GPT-4's ability to generate
edits given a comment and the original paper. We find that it often succeeds on
a superficial level, but tends to rigidly follow the wording of the feedback
rather than the underlying intent, and lacks technical details compared to
human-written edits. | Mike D'Arcy, Alexis Ross, Erin Bransom, Bailey Kuehl, Jonathan Bragg, Tom Hope, Doug Downey | 2023-06-21T22:00:03Z | http://arxiv.org/abs/2306.12587v2 | # ARIES: A Corpus of Scientific Paper Edits Made in Response to Peer Reviews
###### Abstract
Revising scientific papers based on peer feedback is a challenging task that requires not only deep scientific knowledge and reasoning, but also the ability to recognize the implicit requests in high-level feedback and to choose the best of many possible ways to update the manuscript in response. We introduce this task for large language models and release ARIES, a dataset of review comments and their corresponding paper edits, to enable training and evaluating models. We study two versions of the task: comment-edit alignment and edit generation, and evaluate several baselines, including GPT-4. We find that models struggle even to identify the edits that correspond to a comment, especially in cases where the comment is phrased in an indirect way or where the edit addresses the spirit of a comment but not the precise request. When tasked with generating edits, GPT-4 often succeeds in addressing comments on a surface level, but it rigidly follows the wording of the feedback rather than the underlying intent, and includes fewer technical details than human-written edits. We hope that our formalization, dataset, and analysis will form a foundation for future work in this area.
## 1 Introduction
With remarkable recent advances in natural language processing capabilities, there has been increasing interest in systems that can reason about scientific content and help accelerate scholarly work Hope et al. (2023). This includes assisting in tasks such as literature review Luu et al. (2021); Li et al. (2022), reading Chang et al. (2023), writing Fok and Weld (2023); Shen et al. (2023); Mahlow (2023); Gmeiner and Yildirim (2023) and hypothesis formation Kang et al. (2022).
In this paper we focus on a task that encapsulates multiple challenges in reasoning about scientific text: revising papers in response to peer review feedback. This task provides a testbed for evaluating NLP systems on important and understudied capabilities needed for effective scientific assistants--performing the task requires a deep understanding of the full text of a scientific paper, and the ability to infer the intent behind technical human feedback and act upon it (revise the paper).
Feedback on paper drafts, whether from co-authors, readers, or reviewers, can be challenging to interpret and address because it often includes complex critiques of a paper's substance and can be phrased in an indirect way. For example, consider a reviewer who wants authors to use a more realistic dataset in their evaluation. This could be expressed in a variety of ways; it could be stated as a direct request (_"Apply the method to a realistic dataset"_), or more indirectly as a criticism (_"The evaluation is only on a synthetic dataset"_) or as a question (_"Is the current dataset truly representative of the real-world?"_). Similarly, an author editing the manuscript in response has several options: they could simply comply with the request, or they could clarify that no realistic datasets are publicly available, or they might even argue that the reviewer is mistaken and add a justification of their dataset's realism.
In this work, we evaluate whether large language models (LLMs) possess the reasoning abilities required to model the relationship between feedback and edits. We release **ARIES** (**A**ligned, **R**eview-**I**nformed **E**dits of **S**cientific Papers), a real-world dataset of computer science paper drafts, the corresponding reviewer feedback, and the author responses and revisions that address the feedback.1
Footnote 1: The dataset and code are available at: [https://github.com/allenai/aries](https://github.com/allenai/aries)
Using this dataset, we formulate two novel tasks, shown in Figure 1: **comment-edit alignment**, in which a model must determine which review comments made about a paper correspond to each of the edits made after the feedback, and **edit generation**, in which a model must generate edits directly
from a given reviewer comment and paper text.
In addition to serving as challenging testbeds for LLM evaluation, these tasks have the potential to advance assisted reading and writing applications. Automatic alignment could enable tools that allow readers to quickly find parts of a document that address particular questions or comments (ter Hoeve et al., 2020; Dasigi et al., 2021) or that help authors, reviewers, and area chairs more easily track revisions. Edit generation could power collaborative writing tools that allow authors to rapidly iterate on their manuscripts in response to feedback.
We evaluate ten baseline methods and find that the alignment task is challenging for existing models, including even large models such as GPT-4, and that comments and edits with indirect relationships are especially difficult. For the generation task, we find that GPT-4 does produce edits that are coherent and on-topic on a surface level, but fails to model the underlying intent; unlike real authors, it almost never makes edits that suggest the feedback is mistaken, often paraphrases the feedback rather than tightly integrating edits into the context of the paper, and tends to include less technical detail.
In summary, our contributions are as follows:
* We propose the novel tasks of (1) aligning high-level draft feedback to specific edits and (2) generating revisions for scientific papers given reviewer feedback (section 3).
* We construct ARIES, a real-world dataset containing 196 human-labeled review comments matched to their corresponding paper edits, as well as 3.9K reviewer comments automatically matched to edits using author responses from OpenReview, with 92% precision (section 4).
* We evaluate a wide range of baseline methods on our comment-edit alignment task, finding that it is challenging even for modern LLMs. The best model (GPT-4) achieves only 27.0 micro-F1 compared to human performance of 70.7 (section 5).
* We conduct a thorough analysis of edit generation with GPT-4, detailing several systemic differences between generated and real edits, and suggest future work directions (section 6).
## 2 Related work
To our knowledge, our work is the first to study contentful edits conditioned on complex feedback in a highly technical domain (scientific papers). Previous work on edit modeling either focuses on stylistic and grammatical edits or incorporates no feedback or very different kinds of feedback--such as explicit instructions or descriptions of edits created post-hoc. Those settings don't present the same challenging reasoning requirements as our tasks. Figure 2 illustrates how the content and linguistic complexity of review comments differs substantially from that of the conditioning information used in past work.
Figure 1: Overview of our tasks. In comment-edit alignment, a model is given a review comment and set of candidate edits derived from a source paper and a revised target paper, and it must align the comment to the edit(s) that are associated with it. In edit generation, a model is given a review comment and a source paper and must generate an edit that addresses the comment, possibly using placeholders for missing information.
Style and Grammar EditsEarly work on edit modeling focused on grammatical error correction (GEC), which aims to identify and correct grammatically incorrect or misspelled text, and work in this area dates back several decades (Kukich, 1992; Wang et al., 2021). With the increase in language modeling capabilities in recent years, there has been progress in making more sophisticated edits such as rewriting a sentence to improve clarity, style, or structure (Malmi et al., 2019; Mallinson et al., 2022; Kim et al., 2022). However, these areas of research do not target the kinds of substantive revisions often made to papers in response to reviews, such as adding an entire sentence or paragraph to discuss a result or justify a design choice.
Assisted Writing SystemsSeveral works develop writing assistants that incorporate human input to guide the edits. In some cases the human input is restricted to specific actions, such as marking words that the system should omit (Grangier and Auli, 2018) or selecting proposed edits to apply (Lee et al., 2022; Du et al., 2022), while in other cases the user can provide a natural language instruction (Ito et al., 2020; Reif et al., 2022; Liu et al., 2022; Yuan et al., 2022; Raheja et al., 2023). However, the kinds of instructions found in these works are different from the draft feedback we investigate in that they are written by humans who know they are interacting with an automated system, resulting in more direct and specific instructions than the open-ended feedback that authors often receive for a draft.
Much of the previous research on edit modeling focuses on Wikipedia, using Wikipedia edit messages as a proxy for instructions when generating edits (Faltings et al., 2021; Schick et al., 2022; Reid and Neubig, 2022). Wikipedia edit messages are generally written post-hoc and provide varying levels of information about the content of the edit, often giving only a very vague summary like "add reference". In contrast, review comments generally provide enough information for a human to identify the content of the necessary edit, as in many cases their purpose is in part to guide the authors' revisions.
Lee and Webster (2012) create a corpus of essays by English-as-a-second-language students with sentences aligned to feedback from teachers and the corresponding revisions. Their task has a similar structure to ours, but in practice the vast majority of the feedback in their data is focused on simple word-level grammatical issues. ArgRewrite (Zhang et al., 2017; Kashefi et al., 2022) is also a dataset of student essay revisions with teacher feedback, and contains some contentful comments, but the essays are very short (~500 words) compared to scientific papers (~5000 words) and the comments are not aligned to specific edits.
Scientific EditsSome work does explore scientific-domain edits, but these don't associate edits with reviewer comments and often focus on classification rather than generation. Jiang et al. (2022) and Du et al. (2022) analyze and tag edit intentions on ArXiv papers but do not use feedback. Du et al. (2022) develop a system for human-in-the-loop editing in several domains, including
Figure 2: Representative examples of the kinds of conditioning information used to guide edits in our work (review comments) compared to previous work which considered Wikipedia edits (Faltings et al., 2021) and author-provided instructions (Ito et al., 2020; Yuan et al., 2022; Liu et al., 2022; Raheja et al., 2023). Review comments are longer and less direct, requiring more knowledge and reasoning to interpret.
Wikipedia and Arxiv, but the feedback is limited to accepting/rejecting suggested edits, and the focus is on fluency and style edits. Mita et al. (2022) construct a dataset and evaluation framework for scientific document revision, and they do consider some document-level revisions such as reordering sentences. Nonetheless, the aim of the revisions is to improve writing quality rather than to alter the semantics of the text, and peer review comments are not used.
Finally, Kuznetsov et al. (2022) identify edits between paper versions and separately align reviewer comments to referenced text in the source paper, but do not explore the connection between feedback and edits. We note that linking comments to source text is insufficient to study feedback-based editing due to both spurious edits and our finding in subsection A.2 that most feedback-based edits add a new paragraph or section instead of modifying existing text.
## 3 Task Definitions
As shown in Figure 1, we consider two versions of the task of determining how a document should be edited to address a given piece of feedback: comment-edit alignment and edit generation. Both tasks express the differences between an original (source) document and revised (target) document as a list of _edits_, where each edit represents a specific change from source text at some location in the paper into new text in the target paper. Specifically, an edit consists of a paragraph in the source and its corresponding revised paragraph in the target, where either paragraph (but not both) can be null in the case of deletions or additions.
In the **comment-edit alignment** task, the goal is to identify the edit(s) that correspond to a given review comment. The input is a comment and a list of edits, which include both original and revised text. In our evaluation, we derive the list of input edits by using a paper's gold revisions, but they could consist of any candidate revisions. The output is a set of binary classifications over the list of edits, indicating whether each edit addresses the comment. Note that this results in a many-to-many mapping; one comment may result in several edits to the paper, and (less commonly in our data) multiple comments may be addressed by one edit.
In the **edit generation** task, the objective is to generate appropriate edits to a paper based on feedback. The input for this task consists of a comment and the original paper text. The output is the generated edit, which should address the reviewer's feedback and be coherent within the context of the paper.
## 4 Dataset Construction
Both the comment-edit alignment and edit generation tasks require a dataset with paper edits aligned to specific feedback comments. In this section, we describe our approach for collecting and annotating ARIES, a corpus of computer science paper revisions and reviews with both manual and synthetic annotations of comment-edit alignments.
At a high level, the construction process is as follows: First, we obtain a corpus of paper draft PDFs, their peer reviews, and revised drafts from OpenReview (subsection 4.1). Next, we manually identify spans in reviews that represent actionable comments (subsection 4.2). Then, we manually identify the edits that correspond to each review comment to obtain a small but high-quality dataset for evaluating models (subsection 4.3). Finally, we develop a synthetic labeling approach to automatically extract comments and align them to edits using author responses (subsection 4.4). This approach results in edits with high precision (but low recall), and with it we create a much larger dataset suitable for training models. Statistics of our final dataset are in Table 1.
### Collecting papers and reviews
We obtain papers, reviews, and author responses from computer science conferences on OpenReview.2 For each paper, we use the latest PDF that was uploaded before the first review as the original version and the latest available PDF as the revised version. We omit papers that do not have a revised version uploaded after reviews were posted, resulting in a set of 6,501 paper records. We use Grobid (GRO, 2008-2023) and S2ORC (Lo et al., 2020) to parse the paper PDFs.
Footnote 2: [https://openreview.net](https://openreview.net)
We identify edits between the source and target papers by finding pairs of paragraphs with high bigram overlap. More details can be found in Appendix C.
On average, a paper revision typically has 40% of its paragraphs unchanged, 14% "minor" edits (with less than 10 tokens changed, usually fixing typos or grammar), 14% "major" edits, 8% fully deleted paragraphs, and 23% fully new paragraphs.
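A simplified version of this pairing heuristic is sketched below (the helper names and the 0.5 threshold are illustrative simplifications; see Appendix C and our released code for the exact procedure).

```python
# Sketch: pair source and target paragraphs by bigram overlap; unmatched
# source paragraphs become deletions and unmatched targets become additions.
from collections import Counter

def bigrams(text):
    toks = text.lower().split()
    return Counter(zip(toks, toks[1:]))

def overlap(a, b):
    ca, cb = bigrams(a), bigrams(b)
    inter = sum((ca & cb).values())
    return inter / max(1, min(sum(ca.values()), sum(cb.values())))

def pair_paragraphs(source_paras, target_paras, threshold=0.5):
    edits, used = [], set()
    for s in source_paras:
        scored = [(overlap(s, t), j) for j, t in enumerate(target_paras)
                  if j not in used]
        score, j = max(scored, default=(0.0, None))
        if j is not None and score >= threshold:
            used.add(j)
            edits.append((s, target_paras[j]))   # unchanged or edited paragraph
        else:
            edits.append((s, None))              # deleted paragraph
    edits += [(None, t) for j, t in enumerate(target_paras) if j not in used]
    return edits
```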
### Identifying Actionable Feedback
To create our manually-annotated evaluation data (196 instances), we first extract sentences from reviews which constitute actionable feedback. We define _actionable feedback_ as feedback that states or implies a specific request that could be addressed by an edit to the paper. Reviews generally consist of a summary of the paper in question, some comments on the strengths of the work, the weaknesses of the work (which may include some specific suggestions for improvement), and an overall opinion of whether the paper should be accepted or rejected. In this work we care primarily about the weaknesses and suggestions, although actionable feedback can sometimes appear elsewhere. Actionable feedback can be phrased in a wide variety of ways, including as questions or as implicitly negative remarks. However, a positive comment (_"The paper is sound and of certain interest"_) or one that simply summarizes the paper is _not_ considered actionable for our purposes.
Two annotators manually annotated 42 reviews to extract the token spans corresponding to actionable feedback (details in Appendix D), ultimately resulting in 196 comments. In some cases, a comment might only make sense in the context of some other sentence from the review. For example, in _"The paper is missing several things: (1) a definition of L, (2) ImageNet baseline, (3)..."_, the phrase "ImageNet baseline" is only interpretable in the context of the top-level comment. Where this occurs (9% of comments), we annotate both the context and comment spans and concatenate them into a single comment.
Inter-annotator agreement was measured on a set of 10 reviews that were annotated by both annotators, with a total of 60 non-overlapping spans between the two annotators. We find that 88% of spans overlap between annotators, but due to differences in amounts of included context the token-level Jaccard overlap is 65%. In subsection A.1, we conduct further analysis on the types of actionable review comments in our extracted data.
### Aligning Comments to Edits
The extracted actionable comments (subsection 4.2) were mapped to their corresponding edits in the paper by an expert annotator (an author of this paper). For each comment, the annotator was given the original and revised paper PDFs and the list of edits and asked to identify which edits were made in response to the comment. As additional context, the annotator was given the responses authors made to the reviewers on the OpenReview forum to assist with finding all of the intended edits, as authors often state in their response where they made edits to address each point made by the reviewer. Agreement was calculated against a second annotator on a sample of 25 comments, obtaining a Cohen's \(\kappa\) of 0.8.
In total, 78% of comments were addressed by the authors. However, 28% were addressed only in the author response and not with edits to the paper, and 7% were addressed in the paper but not visible in the parsed text (either because of a parsing error, or because the edit was purely visual, such as changing a figure), leaving 43% (85 comments) aligned to textual edits (the comments without edits are still included as challenging examples for our comment-edit alignment task). The aligned comments each correspond to 2.1 edits on average.
### Creating Synthetic Data
To produce a large training set with high-quality comment-edit alignments, manual annotation is not feasible; each review takes approximately 30 minutes to fully process and requires annotators with extensive domain expertise, and our corpus contains 24k reviews. Thus, we automatically generate a large silver dataset of comment-edit alignments by leveraging the fact that authors often quote reviewer comments directly in author responses, and the edits that correspond to a comment are often highly similar to the author response text discussing the comment.
We automatically identify the quoted review comments in author responses by searching for lines with a small edit distance to a contiguous span of review text (with a minimum length of 40 characters, to eliminate spurious matches). The corresponding response text for each comment is matched to edits with high textual overlap; we
\begin{table}
\begin{tabular}{l c c} \hline \hline
**Statistic** & **Manual** & **Synthetic** \\ \hline Papers & 42 & 1678 \\ Comments & 196 & 3892 \\ Aligned Edits & 131 & 3184 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Statistics for manually- and synthetically-labeled data. Papers, reviews, and aligned edits are counted only when they correspond to included comments. Edits are counted only once, even if they correspond to multiple comments.
informally observe that edits with at least 25% bigram overlap to the response text almost always correspond to the quoted comment. Using this threshold, we link responses and edits to obtain a set of 3892 high-precision alignments from the training corpus.
Unlike the manually-annotated data, the synthetic data has low recall; applying the synthetic labeling algorithm to our hand-labeled data identifies only 2% of the matches. However, they have high precision: We manually checked 50 sampled alignments and found that 46 were correct. Furthermore, we find that the synthetically-aligned data has similar statistics to the manually-annotated data; see subsection A.3 for details.
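The core of the silver-labeling logic can be summarized as follows (the helper names are ours, and apart from the 40-character minimum and the 25% bigram-overlap threshold stated above, the details are illustrative simplifications of the released pipeline).

```python
# Sketch: locate review comments quoted in an author response, then link the
# surrounding response text to paper edits with sufficient bigram overlap.
import difflib

def find_quoted_comment(response_line, review_text, min_len=40):
    """Return the review span that the response line quotes, if any."""
    if len(response_line) < min_len:
        return None
    matcher = difflib.SequenceMatcher(None, review_text, response_line)
    i, _, size = matcher.find_longest_match(0, len(review_text),
                                            0, len(response_line))
    if size >= 0.9 * len(response_line):     # near-verbatim quote
        return review_text[i:i + size]
    return None

def links_to_edit(response_text, edit_text, threshold=0.25):
    # `overlap` is the bigram-overlap helper from the earlier sketch
    return overlap(response_text, edit_text) >= threshold
```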
## 5 Comment-Edit Alignment
In this section, we evaluate models on the comment-edit alignment task using our constructed dataset. As described in section 3, the comment-edit alignment task is a binary classification task where the input is a review comment and a list of candidate edits, and the output is binary for each comment-edit pair, specifying whether the comment and edit are aligned. In model inputs, edits are textually represented using a "diff" format with additions and deletions enclosed in [+ +] and [- -] brackets, respectively.
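For concreteness, a minimal helper that renders an edit in this bracketed format might look as follows (an illustrative sketch, not the exact implementation used in our experiments).

```python
# Render a (source, target) paragraph pair as a token-level diff string with
# additions in [+ +] and deletions in [- -], as described above.
import difflib
from typing import Optional

def render_edit(source: Optional[str], target: Optional[str]) -> str:
    src, tgt = (source or "").split(), (target or "").split()
    out = []
    for op, i1, i2, j1, j2 in difflib.SequenceMatcher(None, src, tgt).get_opcodes():
        if op == "equal":
            out.extend(src[i1:i2])
        if op in ("delete", "replace"):
            out.append("[- " + " ".join(src[i1:i2]) + " -]")
        if op in ("insert", "replace"):
            out.append("[+ " + " ".join(tgt[j1:j2]) + " +]")
    return " ".join(out)

print(render_edit("we use a synthetic dataset", "we use a large realistic dataset"))
# -> we use a [- synthetic -] [+ large realistic +] dataset
```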
For manually-annotated data, for a given comment, we consider all edits for the corresponding paper as candidate edits, labeled as positive if the edit was annotated as addressing the comment and negative otherwise. Given the low recall of the synthetic data (discussed in subsection 4.4), we can only use the synthetic labels to produce positive comment-edit alignment pairs; thus, we pair comments with edits sampled from other documents as negative candidates. Additional details are provided in Appendix E.
### Models
We consider four kinds of model architectures, detailed below. For all models that produce similarity scores or probability outputs, we tune a decision threshold on the dev set to maximize micro-F1. In addition, we use a version of BM25 tuned for high recall (>90%) on the dev set as a first-pass candidate filter for the GPT-4 based methods, which increases evaluation speed and reduces GPT-4 API costs.
Bi-encoder: The model separately consumes each review comment and edit to create an embedding for each, with a goal that embeddings for corresponding comments and edits are closer to each other than those for non-corresponding pairs are. We prefix the comments with "review comment:" and the edits with "edit:" to allow the model to treat the two text types differently. For fine-tuning, we use a triplet loss; given a triplet consisting of a comment \(c\), a positive edit \(x_{+}\), and a negative edit \(x_{-}\), the loss is
\[\mathcal{L}=\text{max}(0,\text{sim}(c,x_{-})-\text{sim}(c,x_{+})+0.5)\]
where \(\text{sim}(\cdot,\cdot)\) is cosine similarity.3
Footnote 3: This loss is similar to the one used to train the SPECTER2 base model we use in our experiments, although we found cosine similarity to work slightly better than Euclidean distance in our preliminary experiments.
The bi-encoder models we use are DeBERTaV3-large (He et al., 2021) and SPECTER2 (Singh et al., 2022).
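A condensed sketch of this fine-tuning objective is shown below (the pooling strategy and other details are illustrative; see our released code for the exact setup).

```python
# Sketch of the bi-encoder triplet objective: pull the comment embedding
# toward the positive edit and away from the negative edit.
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("microsoft/deberta-v3-large")
enc = AutoModel.from_pretrained("microsoft/deberta-v3-large")

def embed(texts):
    batch = tok(texts, padding=True, truncation=True, return_tensors="pt")
    return enc(**batch).last_hidden_state[:, 0]   # first-token pooling

def triplet_loss(comment, pos_edit, neg_edit, margin=0.5):
    c = embed(["review comment: " + comment])
    xp = embed(["edit: " + pos_edit])
    xn = embed(["edit: " + neg_edit])
    return F.relu(F.cosine_similarity(c, xn)
                  - F.cosine_similarity(c, xp) + margin).mean()
```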
Pairwise cross-encoder: The model consumes a comment-edit pair separated by a [SEP] token and outputs a score representing the likelihood of a positive label. DeBERTaV3-large (He et al., 2021), LinkBERT (Yasunaga et al., 2022), and GPT-4 (OpenAI, 2023) models are used with this format. For GPT-4, we try both a zero-shot setting where only instructions are given and a (2-way) one-shot setting where one positive and one negative example are given in the prompt.
Multi-edit cross-encoder: The model consumes all edits for a paper at once, including unchanged paragraphs as "edits" for context; in essence, this is a full "diff" of the paper with an edit ID number attached to each paragraph. We additionally feed all comments for a paper at once, each with a unique ID. The output is formatted as a list of JSON objects, each containing a comment ID and a list of edit IDs. In practice, a diff of the full paper is often too long to fit model length limitations, and in these cases we split the paper into 2-3 chunks and merge the output lists. We use GPT-4 (OpenAI, 2023) for this variant, with a maximum input size of 7,500 tokens (the maximum total length is 8,192, and we allow roughly 700 tokens for the response).4
Footnote 4: OpenAI has indicated plans for a 32k-sized model, but that has not been released as of this work.
Bag of words: We try a simple BM25 ranker (Robertson and Zaragoza, 2009) that scores a comment against the post-revision text of an edit. As an additional baseline, we apply BM25 using generated edits from GPT-4 (discussed in section 6)
and refer to this as BM25-generated. As we show in section 6, GPT-generated edits are competitive with human edits in terms of the overall comprehensiveness with which they address comments, but they also differ substantially from human edits in style and content. The BM25-generated baseline serves as a way to empirically probe the similarity of the two kinds of edits.
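A minimal version of the BM25 scoring used both as a ranker and as the first-pass candidate filter is sketched below (our own sketch using the rank_bm25 package; the high-recall threshold tuning is omitted).

```python
# Rank a paper's edits against a review comment with BM25 over the
# post-revision edit text; return the indices of the top-k candidates.
from rank_bm25 import BM25Okapi

def bm25_candidates(comment, edit_texts, top_k=10):
    corpus = [e.lower().split() for e in edit_texts]
    scores = BM25Okapi(corpus).get_scores(comment.lower().split())
    return sorted(range(len(edit_texts)), key=lambda i: -scores[i])[:top_k]
```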
Human: As a strong baseline, we evaluate how well an expert human annotator can perform on this task given the same inputs as the models. That is, the human is shown a comment and a full diff of the parsed source and target papers, but--unlike the annotators who labeled the task data--does not have access to author responses with which to identify unintuitive responses or to the PDFs with which to identify parsing errors.
### Results
Table 2 reports precision, recall, and F1 scores for models. The micro- scores are computed over all comment-edit pairs, while the macro- scores are macro-averaged by comment5 to down-weight cases where a model incorrectly predicts many edits for one difficult comment. In addition to results over the full dataset, we also run experiments on just edits that add a full paragraph as addition-only F1 (AO-F1); this setting is easier because it does not require models to understand which tokens have been added, removed, or unchanged, and is a better fit for BM25, which cannot represent the differences between these tokens. Results are averaged over three trials with different random seeds for training. The prompt templates used for GPT-4 can be found in Appendix B.
Footnote 5: Implementation note: F1 is considered 100 for comments where the model correctly predicts that there are no corresponding edits.
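The two aggregation schemes can be summarized as follows (an illustrative sketch of the scoring conventions; per-comment F1 is set to 100 when a comment correctly receives no predicted edits).

```python
# Micro-F1 pools true/false positives and false negatives over all
# comment-edit pairs; macro-F1 averages a per-comment F1.
def f1(tp, fp, fn):
    return 100.0 * 2 * tp / max(1e-9, 2 * tp + fp + fn)

def micro_macro_f1(per_comment):   # list of (tp, fp, fn) tuples, one per comment
    tp = sum(t for t, _, _ in per_comment)
    fp = sum(f for _, f, _ in per_comment)
    fn = sum(f for _, _, f in per_comment)
    micro = f1(tp, fp, fn)
    macro = sum(100.0 if c == (0, 0, 0) else f1(*c) for c in per_comment)
    macro /= len(per_comment)
    return micro, macro
```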
We find that the task is challenging, with none of the models reaching human-level performance. GPT-4 methods are best, but interestingly it appears that giving GPT-4 a full chunk of the document at once (GPT-4 multi-edit) results in slightly worse performance than the pairwise approach, aside from an improvement in efficiency.
For LinkBERT and DeBERTa, we surprisingly find poor micro-AO-F1 performance; it appears that the models sometimes assign similar scores to several instances, making it likely that the decision threshold on the dev set will be suboptimal. Nonetheless, the models can still obtain good macro-AO-F1 scores, and this issue is far less prevalent on the full dataset results.
For DeBERTa, we find that the cross-encoder and bi-encoder variants have similar performance. However, the Specter-based bi-encoder substantially outperforms both DeBERTa and LinkBERT cross-encoders, which is especially notable because Specter has only about a quarter of the parameters of those models. We conjecture that Specter's pretraining makes it an especially good fit for this task; the citation prediction objective it pretrains on, which constrains papers that cite each other to have similar embeddings, is similar to the comment-edit alignment task in that two texts may be "similar" for purposes of the task even if they are semantically and syntactically very different.
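A sketch of the bi-encoder scoring described here, using the public allenai/specter checkpoint and its standard [CLS] pooling; the decision threshold tuned on dev data is omitted:

```python
# Comment and edit are embedded independently and scored by cosine similarity.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("allenai/specter")
encoder = AutoModel.from_pretrained("allenai/specter")

def embed(text: str) -> torch.Tensor:
    inputs = tokenizer(text, truncation=True, max_length=512,
                       return_tensors="pt")
    with torch.no_grad():
        # Specter uses the [CLS] token embedding as the document vector.
        return encoder(**inputs).last_hidden_state[:, 0]

def similarity(comment: str, edit_text: str) -> float:
    return torch.cosine_similarity(embed(comment), embed(edit_text)).item()
```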
The results of BM25-generated indicate that using generated edits as inputs provides only a slight improvement to micro-AO-F1, and actually worsens macro-AO-F1 (although the harm to macro-F1 may be amplified by the fact that the decision threshold is tuned on micro-F1). This suggests that the differences in style and content between GPT-4 and human generated edits are large enough to prevent effective alignment despite GPT's edits appearing plausible in many cases. We discuss the differences in more detail in section 6.
Across all methods, including human performance, we observe that macro-F1 is substantially higher than micro-F1, suggesting that some comments are especially error-prone. For example, 55% of GPT-4 multi-edit's errors correspond to just 20% of the comments. Nuanced comments on documents with many edits may lead to several incorrect predictions--_e.g.,_ if they involve many small changes to technical details and equations--whereas other instances may be more straightforward. In the next section, we analyze specific failure modes that we observe.
### Failure Modes
#### 5.3.1 False Positives
One author examined 50 randomly-sampled false positives of the best-performing model, GPT-4 multi-edit, and identified four common categories of mistakes that it makes. The categories and their frequencies are described in the following paragraphs. Note that the categories are partially overlapping, so the total exceeds 100%, and 10% of the errors did not fit clearly into any category.
**Too-Topical (40%):** In some cases, the model assigns a positive label to an edit that contains some words that are topically or lexically similar to the words in the review comment, but do not actually address the comment. In many cases, this happens even when the words are only part of the original text and were not added or deleted in the edit.
**Diff-ignorance (28%):** In some cases, a comment asks about something that is already present in the paper in some form--_e.g., "add CIFAR10 experiments"_ when there is already one CIFAR10 experiment in the paper, or asking to remove a misleading claim. The model sometimes aligns these comments to edits of paragraphs with preexisting (or deleted) content that is relevant to the comment, failing to correctly account for the add/delete markup.
**Over-Generation (28%):** This failure mode is unique to the multi-edit task format, in which models attempt to generate a full list of all comment-edit alignments for a paper in one pass. We observe some cases where GPT-4 outputs more consecutive edits in a list than it should; for example, if edits 17 and 18 are relevant to some comment, the model might add 19, 20, 21, 22 and so on. In rare cases, the list extends beyond the highest edit id in the input. Although it is difficult to precisely determine the factors that influence GPT-4's output, we hypothesize that GPT-4 may be suffering in part from exposure bias, and as it begins to generate a consecutive sequence it gets stuck in a loop and fails to stop at the correct place. This phenomenon has previously been studied in smaller models (Chiang and Chen, 2021), and may be occurring to a much lesser degree with GPT-4 as well.
**Bad Parsing (12%):** Some errors are simply the result of the PDF parser extracting text differently for different versions of a paper, causing text to appear edited when it was not. In some of these cases, the "edits" in question do look as though they partially address the comment, similar to the errors in the "diff-ignorance" category, and the model erroneously (albeit reasonably) aligns to those edits without realizing they were already in the original paper.
#### 5.3.2 False Negatives
Similarly to how many false positives arise when an edit uses terms similar to the ones the reviewer used in their comment, we observe that false negatives often occur when there is _low_ overlap between the language of the comment and the edit. For example, a comment may ask how a method was implemented and the corresponding edit adds a link to a code release, or a comment asks for a proof and the corresponding edit adds an equation. In such cases the model must understand e.g. that adding a link to code is a way of addressing a request for implementation details.
We attempt to quantify how the explicitness of the relationship between a comment and edit affects alignment performance. We leverage two metrics:
| **Model** | Micro AO-F1 | Micro P | Micro R | Micro F1 | Macro AO-F1 | Macro P | Macro R | Macro F1 |
|---|---|---|---|---|---|---|---|---|
| BM25 | 13.3 | 12.2 | 10.5 | 11.3 | 77.1 | 73.8 | 62.4 | 43.8 |
| BM25-generated | 14.7 | 4.6 | 40.3 | 8.3 | 50.7 | 7.6 | 80.3 | 9.6 |
| Specter (no finetuning) | 14.0 | 8.1 | 14.4 | 10.3 | 68.6 | 63.0 | 62.8 | 39.9 |
| Specter bi-encoder | 19.6 | 17.0 | 29.3 | 21.5 | 67.8 | 55.5 | 70.5 | 38.5 |
| DeBERTa bi-encoder | 3.1 | 9.9 | 12.2 | 10.8 | 72.6 | 47.5 | 61.8 | 31.9 |
| LinkBERT cross-encoder | 2.8 | 10.1 | 28.4 | 14.4 | 71.3 | 39.2 | 70.8 | 26.8 |
| DeBERTa cross-encoder | 8.5 | 7.4 | 25.6 | 10.0 | 70.9 | 30.2 | 71.5 | 22.5 |
| GPT-4 cross-encoder 0-shot | 38.7 | - | - | - | 70.6 | - | - | - |
| GPT-4 cross-encoder 1-shot | 42.1 | - | - | - | 74.8 | - | - | - |
| GPT-4 multi-edit | 36.2 | 24.2 | 30.4 | 27.0 | 74.6 | 62.0 | 70.4 | 46.2 |
| Human | 70.6 | 65.6 | 76.8 | 70.7 | 89.2 | 92.7 | 86.2 | 82.7 |

Table 2: Precision (P), Recall (R), and F1 of comment-edit alignment on test data. The micro-average is over all comment-edit pairs, while the macro-average is grouped by comment. Addition-Only F1 (AO-F1) is the F1 score when only addition-only edits are considered; due to budget constraints, this is the only feasible setting for pairwise cross-encoder GPT. Overall, GPT-4 methods are all much better than the smaller locally-trained models, but none reach human performance.
The first is a measure of **edit compliance**: Specifically, we annotate how directly an edit obeys a given comment on a 1-3 scale (1 being least compliant, 3 being most compliant). More details on the metric and compliance annotations are in section 6. The second is a measure of **comment directness**: how "direct" or "indirect" the comments are. A direct comment is one that indicates a clear action; this could be phrased in the negative, but still explicitly specifies what needs to be done (_"It is unfortunate that you didn't [do experiments on imagenet]"_). An indirect comment does not state the specific request, and is usually a statement of fact or observation that requires an understanding of linguistic and scientific norms to interpret (_"Only one dataset was used"_).
We measure the performance impact of indirectness and compliance on the multi-edit GPT-4 method in Table 3, and we find that both factors result in a substantial difference. GPT's micro-F1 is 30% lower on indirect comments compared to direct ones, and 24% lower when edits are non-compliant. These results suggest that GPT-4 struggles to understand complex comment-edit interactions and performs better on those with simple, direct relationships.
## 6 Edit Generation
In this section, we explore the edit generation task introduced in section 4.
### Experimental Setup
Our goal is to understand the differences in style and content between the kinds of edits human authors write and those that models generate, which will provide insight into model behavior and point to directions for future improvements. However, we note that evaluating the _correctness_ of generated edits is beyond the scope of our analysis, as it is both difficult to do fairly (models may lack information such as lab notes and raw data) and difficult to do correctly (robust judgements require a very deep expertise in a given paper). Nonetheless, in our preliminary analysis we observed that almost all model-generated edits would appear plausible to a reader with only cursory knowledge of the paper (the title and abstract).
We generate edits with GPT-4, which was the best model for comment-edit alignment and is known to be a powerful general-purpose generation model [1]. The prompt template is provided in subsection B.4.
### Manual Analysis
We explore the differences between GPT-written and author-written edits more deeply with an analysis by two expert judges (authors of this paper, with multiple CS/ML publications) on 85 comments. The comments were divided between the two judges, except for 10 instances that were annotated by both in order to measure agreement. Each instance includes the original paper, the review comment, and both GPT's generated edits and the set of real edits that were made to the paper in response to the comment. The judges are aware of which edits are model-generated and which are real, as it would be impossible to conceal the stylistic differences; however, we do not believe this impacts our goal of understanding the trends between the two edit types, as the judges scored edits using several specific factors described in the following rubric. Examples of these factors can be found in Table 4:
* **Compliance (1-3):** The edit might argue that the comment is irrelevant or infeasible to address (1), address the spirit of the comment but not specifically what was asked (2), or directly comply with the reviewer's advice (3).
* **Promises (true/false):** The edit promises to address part of the comment in future work or a future revision; we include cases where the model says it provides information elsewhere (e.g., in its Appendix) but does not give the corresponding edit for that section.
* **Paraphrases (true/false):** The edit reuses the wording from the comment itself.
* **Technical details (true/false):** The edit contains specific details or placeholders for details such as citations, mathematical expressions, or numerical results.

| | **GPT-4** | **Human** |
|---|---|---|
| Direct comment | 40.4 | 78.6 |
| Indirect comment | 28.2 | 61.3 |
| Compliance = 3 | 39.5 | 71.5 |
| Compliance < 3 | 30.1 | 77.3 |

Table 3: Alignment micro-F1 for GPT and humans on direct/indirect comments and compliant/non-compliant edits. Note that the values are higher than in Table 2 because comments with no corresponding edits were not annotated. GPT and humans both do much worse with indirectly-phrased comments than direct ones. GPT also struggles to match to non-compliant edits, whereas humans are unaffected.
We note that the edit generation task is made technically impossible by the fact that some edits may require information that the model does not have, such as the results of additional experiments. We mitigate this by instructing the model to use placeholders or to hallucinate technical details that it does not know (details in Appendix B). In addition, for each comment we measure **answerability**: whether it can be addressed _without_ placeholders or hallucinations. In other words, a perfect model should be able to address answerable comments using just the original paper and background knowledge.
Additionally, for each (GPT, real) edit pair, we evaluate which has greater **comprehensiveness** in addressing the reviewer's comment, as there are many cases where one edit is more thorough or goes beyond what the reviewer asked, even though both have the same compliance. This is not the same as correctness; instead, comprehensiveness measures how thoroughly an edit _attempts_ to address a comment, possibly using placeholders or hallucinating unavailable information.
### Results
From an initial inspection of GPT's generated edits, we find that the model almost always produces coherent and on-topic edits that respond to the given review comments. Table 5 shows that GPT-generated edits are competitive with human-authored edits in comprehensiveness, often being rated as more comprehensively addressing the given comment when sufficient information is available but doing worse for comments that require additional data to address. On average, GPT almost matches real edits in this regard.
However, we observe in Table 6 that the kinds of edits generated by GPT-4 are very different from those produced by human authors. The most striking difference we observe is the tendency for GPT-4 to paraphrase the comment when writing its edit (48% for GPT-4 vs. 4% for human edits). Qualitatively, we notice that GPT-4's edits are often written as though they are meant to be a standalone response, whereas the real edits are more tightly integrated into the context of the paper. In addition, real edits are more likely to use specific technical details as opposed to a high-level response, an effect which is understated in Table 6 due to the cases where both edits contain some technical details but one contains substantially more. To account for these cases, we additionally record relative technicality judgements for each (GPT, real) edit pair and find that the difference grows: the real edits are more technical in 38% of cases compared to only 12% for GPT (p=\(10^{-3}\)). Overall, the reduced level of technicality and the tendency to paraphrase may make GPT-4's edits preferable for those who simply want clear responses to their questions and feedback, but they also make edits less informative for the most engaged readers who care about technical details.
We also note that while most edits from both GPT-4 and humans follow the reviewer's specific instructions, human edits deviate from the reviewer's request more often: 94% of GPT-4 edits are highly compliant (compliance = 3), while only 68% of human edits are. The actual discrepancy in this factor may be even higher, as real authors often choose not to make an edit at all when they disagree with a comment, opting instead to discuss it on the OpenReview forum.
The high compliance of the model is not especially surprising given that GPT-4 is trained to follow instructions, but it does have implications for GPT-4's suitability as an editing assistant. Often, the proper edit requires thinking critically about the reviewer's critique rather than simply following it, and GPT-4's output is less suitable in those cases.
## 7 Conclusion and Future work
In this work, we have introduced the novel tasks of comment-edit alignment and edit generation for scientific paper revisions based on high-level draft feedback from reviewers. We have constructed and released a dataset containing pairs of computer science paper drafts with edits aligned at the paragraph level, along with their corresponding reviews and author responses. We hope the dataset will enable research on assisted writing technologies and serve as a challenging testbed for large language models.
It is interesting that models (including GPT-4) do so poorly on the comment-edit alignment task despite GPT being able to generate plausible edits in the generation task. As our analysis shows, the kinds of edits produced by GPT can be very different from the real edits authors make to their papers, and the fact that GPT fails to recognize many of the
real comment-edit pairs suggests that it may have gaps in its reasoning that would be interesting to explore further in future work. We hope that the insights from our analyses can help motivate and guide future studies.
A shortcoming of the generated GPT edits is their relative lack of technical details. However, this may be caused in part by their lack of access to information about detailed experimental results, code, and lab notes for the paper, which the authors have when doing their revisions. As a long-term goal, we believe that an ideal writing assistant would observe the entire research process and consume all relevant information when writing an edit; in some cases, this might even include suggesting additional experiments for humans to run. However, this requires further work both to create applications that can collect this information and to develop efficient methods to provide this information to large language models, which are currently limited in input size and expensive to run.
### Limitations
Our study is limited to scientific papers in English from the field of AI, and future work should consider a broader set of scientific disciplines and languages. Our evaluations are limited to measuring the correctness and types of alignments and generations produced by today's large language models (LLMs); future work should apply the techniques within real assisted writing applications and evaluate their impact on users. We use proprietary LLMs like GPT-4 in certain experiments, and those results may be difficult to reproduce if changes to the proprietary services occur.
| | **GPT** | **Real** | \(\kappa\) | p |
|---|---|---|---|---|
| Compliance | 2.9 | 2.6 | 0.6 | \(10^{-4}\) |
| Promises | 21% | 6% | 1.0 | \(10^{-2}\) |
| Paraphrases | 48% | 4% | 0.7 | \(10^{-11}\) |
| Technical details | 38% | 53% | 0.7 | 0.06 |

Table 6: Edit generation analysis. We report average Compliance and fraction of examples that include each of the other factors. We report Cohen’s \(\kappa\) for all factors on 10 instances and report p-values using Wilcoxon’s signed-rank test for Compliance and Fisher’s exact test for others. GPT is more compliant, often paraphrases the comment directly in its edits, and tends to include fewer technical details than real edits.
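For reference, the significance tests named in the caption could be computed as in the sketch below, using scipy; the paired score arrays and 2x2 contingency counts are schematic placeholders.

```python
# Wilcoxon's signed-rank test for the paired 1-3 compliance ratings and
# Fisher's exact test for the binary factors.
from scipy.stats import wilcoxon, fisher_exact

def compliance_pvalue(gpt_scores, real_scores):
    # Paired ratings for the same comments; assumes not all pairs are ties.
    return wilcoxon(gpt_scores, real_scores).pvalue

def binary_factor_pvalue(gpt_true, gpt_false, real_true, real_false):
    # 2x2 table: rows = edit source, columns = factor present / absent.
    _, p = fisher_exact([[gpt_true, gpt_false], [real_true, real_false]])
    return p
```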
| **Factor** | **Comment** | **Edit** |
|---|---|---|
| Compliance=1 | ... Isn’t this percentage too much? Can’t we use, | [+... our split of 80%-10%-10% is a standard split+] |
| Compliance=2 | ... there is a hyperparameter in the radius decay, | [+... this learnable radius is not effective the performance compared to that the predefined radius decay+] |
| Compliance=3 | the experimental setup requires significantly more details on the hardware... | [+We conducted our experiments using NVIDIA Tesla V100 GPUs...+]* |
| Promises | it would be interesting to know how the proposed method would work, for instance, for node classification (e.g., Cora, Citeseer) | [+... the performance of our method on node classification tasks is beyond the scope of this paper and is left as an interesting direction for future work.+]* |
| Paraphrases | ... it should be investigated... with respect to more natural perturbations, e.g. noisy input, blurring,... | [+... we also investigate their performance with respect to more natural perturbations, such as noisy input, blurring,...+]* |
| Technical details | ... This does put into question whether the full closed loop model is actually useful in practice | [+... we evaluated the performance of a closed-loop N-CODE model... Here, the control parameters are a matrix of dynamic weights, \(\theta(t)\in\mathbb{R}^{m\times m}\)...+] |

Table 4: Examples of comment-edit pairs exhibiting each scored factor in the edit generation analysis (subsection 6.2). Edits marked with an asterisk (*) are generated by GPT, while the others are real. Text is ellipsized for brevity.
| | **Ans.** | **Non-ans.** | **All** |
|---|---|---|---|
| GPT | 31% | 19% | 25% |
| Real | 19% | 40% | 29% |
| Same | 50% | 42% | 46% |
| Frequency | 51% | 49% | 100% |

Table 5: Fraction of the time that a given model’s generated edits were deemed more comprehensive (but not necessarily correct), broken down by answerability. The Frequency is the fraction of comments that fall into each category. Overall, GPT generations are comparable to real edits, with GPT being better for comments that don’t require additional data and real edits being better for those that do.
## Acknowledgements
This work was supported in part by NSF Grant 2033558.
|
2308.04891 | Introducing the ASSESS project: Episodic Mass Loss in Evolved Massive
Stars -- Key to Understanding the Explosive Early Universe | Episodic mass loss is not understood theoretically, nor accounted for in
state-of-the-art models of stellar evolution, which has far-reaching
consequences for many areas of astronomy. We introduce the ERC-funded ASSESS
project (2018-2024), which aims to determine whether episodic mass loss is a
dominant process in the evolution of the most massive stars, by conducting the
first extensive, multi-wavelength survey of evolved massive stars in the nearby
Universe. It hinges on the fact that mass-losing stars form dust and are bright
in the mid-infrared. We aim to derive physical parameters of $\sim$1000 dusty,
evolved massive stars in $\sim$25 nearby galaxies and estimate the amount of
ejected mass, which will constrain evolutionary models, and quantify the
duration and frequency of episodic mass loss as a function of metallicity. The
approach involves applying machine-learning algorithms to select dusty,
luminous targets from existing multi-band photometry of nearby galaxies. We
present the first results of the project, including the machine-learning
methodology for target selection and results from our spectroscopic
observations so far. The emerging trend for the ubiquity of episodic mass loss,
if confirmed, will be key to understanding the explosive early Universe and
will have profound consequences for low-metallicity stars, reionization, and
the chemical evolution of galaxies. | A. Z. Bonanos, G. Maravelias, M. Yang, F. Tramper, S. de Wit, E. Zapartas, K. Antoniadis, E. Christodoulou, G. Munoz-Sanchez | 2023-08-09T11:42:25Z | http://arxiv.org/abs/2308.04891v1 | [
###### Abstract
Episodic mass loss is not understood theoretically, nor accounted for in state-of-the-art models of stellar evolution, which has far-reaching consequences for many areas of astronomy. We introduce the ERC-funded ASSESS project (2018-2024), which aims to determine whether episodic mass loss is a dominant process in the evolution of the most massive stars, by conducting the first extensive, multi-wavelength survey of evolved massive stars in the nearby Universe. It hinges on the fact that mass-losing stars form dust and are bright in the mid-infrared. We aim to derive physical parameters of \(\sim\)1000 dusty, evolved massive stars in \(\sim\)25 nearby galaxies and estimate the amount of ejected mass, which will constrain evolutionary models, and quantify the duration and frequency of episodic mass loss as a function of metallicity. The approach involves applying machine-learning algorithms to select dusty, luminous targets from existing multi-band photometry of nearby galaxies. We present the first results of the project, including the machine-learning methodology for target selection and results from our spectroscopic observations so far. The emerging trend for the ubiquity of episodic mass loss, if confirmed, will be key to understanding the explosive early Universe and will have profound consequences for low-metallicity stars, reionization, and the chemical evolution of galaxies.
Keywords: Stars: massive - Stars: mass loss - Stars: evolution - Stars: fundamental parameters - Supergiants - Surveys

# Introducing the ASSESS project: Episodic Mass Loss in Evolved Massive Stars - Key to Understanding the Explosive Early Universe

A.Z. Bonanos\({}^{1}\), G. Maravelias\({}^{1,3}\), M. Yang\({}^{1,5}\), F. Tramper\({}^{4}\), S. de Wit\({}^{1,2}\), E. Zapartas\({}^{1}\), K. Antoniadis\({}^{1,2}\), E. Christodoulou\({}^{1,2}\), G. Munoz-Sanchez\({}^{1,2}\)

_Massive Stars Near and Far_ (2022), N. St-Louis, J. S. Vink & J. Mackey, eds.
## 1 Introduction
The role of mass loss from massive stars, especially episodic mass loss in evolved massive stars, is one of the outstanding open questions facing stellar evolution theory (Smith 2014). While the upper limit to the masses of stars is thought to be 150 M\({}_{\odot}\)(Figer 2005; Oey & Clarke 2005), and was even claimed to exceed 300 M\({}_{\odot}\)(Crowther et al. 2010; Banerjee et al. 2012), the masses of hydrogen-deficient Wolf-Rayet (WR) stars do not exceed 20 M\({}_{\odot}\)(Crowther 2007). Classical line-driven wind theory (Kudritzki & Puls 2000), once thought to be responsible for removing the envelopes of massive stars, has been shown inadequate, both on theoretical grounds (due to clumping, Owocki & Puls 1999) and estimations based on spectral lines (Bouret et al. 2005; Fullerton et al. 2006; Cohen et al. 2014), which demand reductions in the mass-loss rates by a factor of \(\sim\)2-3. So how do massive stars shed their envelopes? Binary interactions via Roche-Lobe overflow (RLOF) are predicted to occur in 70% of massive stars and strip the envelopes in \(\sim 30\)% of O stars, given the high binarity fraction (\(\sim 70\)%) of massive stars (Sana et al. 2012).
Episodic mass loss is possibly the dominant process that operates in single stars; however, the physical mechanism responsible remains a mystery (Smith 2014).
The importance of episodic mass loss has come to the forefront in both the massive star and supernova (SN) communities. _Spitzer_ images have revealed numerous circumstellar shells surrounding massive, evolved stars in our Galaxy (Gvaramadze et al. 2010; Wachter et al. 2010). Episodes of enhanced mass loss have been recorded not only in luminous blue variables (LBVs), but also in extreme red supergiants (RSGs, e.g. VY CMa, Decin et al. 2006). Moreover, untargeted supernova surveys have found dusty circumstellar material around superluminous supernovae (SLSN, Gal-Yam 2012), and mysterious optical transients with luminosities intermediate between novae and supernovae. The presence of circumstellar material implies a central role of episodic mass loss in the evolution of massive stars and this proposal aims to confirm this hypothesis. Tantalizing evidence suggests that SLSN occur in low-metallicity host galaxies (Neill et al. 2011), implying that such supernovae dominated the metal-poor early Universe. The overluminous Type IIn SN 2010jl is a well-studied example of a SLSN, with a massive progenitor star (30 M\({}_{\odot}\)) surrounded by a dense circumstellar shell (Smith et al. 2011; Zhang et al. 2012), which exploded in a low-metallicity galaxy (Stoll et al. 2011). SN2008S, a well-studied example of the class of intermediate-luminosity optical transients, was found to have a dust-enshrouded progenitor (8-10 M\({}_{\odot}\), Prieto et al. 2008) in pre-explosion _Spitzer_ images of the host galaxy NGC 300. Finally, the remarkable SN2009ip involves a 50-80 M\({}_{\odot}\) progenitor that underwent a series of episodic mass loss events. Its spectacular finale included a series of eruptions in 2009 and 2010 until its final explosion in 2012 as a Type IIn supernova (Mauerhan et al. 2013), although this was contested (Pastorello et al. 2013; Fraser et al. 2013). These examples strongly suggest that episodic mass loss in massive stars is central to their evolution and therefore has profound consequences for the enrichment of the interstellar medium and the chemical evolution of the early Universe.
The physics of LBV eruptions, pre-SN eruptions and extreme RSG mass-loss is still in its infancy and, as stated in the review by Smith (2014), "is a major unsolved problem in astrophysics". Models of single-star evolution adopt empirical, constant mass-loss prescriptions, which highly influence the outcome (Meynet et al. 2015). The ASSESS project tackles the role of episodic mass loss in massive stars by using the fact that mass-losing stars form dust and are bright in the mid-infrared (mid-IR). Physically, there are a number of ways a massive star can become a source of significant mid-IR emission. First, dust can form in a dense, but relatively steady stellar wind. In the most extreme cases, such as in the progenitors of the SN 2008S and the NGC300-OT 2008 transient (Bond et al. 2009), the wind is optically thick even in the near-IR and the source star is seen only in the mid-IR (Prieto et al. 2008). Second, a very massive star can have an impulsive mass ejection or eruption with dust forming in the ejected shell of material. Initially the optical depth and dust temperatures are high, but then drop as the shell expands. The most famous example is the "great eruption" of \(\eta\) Carinae in the 19th century (Humphreys & Davidson 1994; Davidson & Humphreys 1997; Smith & Frew 2011), which ejected several solar masses of material. Third, the dust can be located in a circumstellar disk and emit over a broad range of temperatures, as is seen in supergiant B[e] stars (sgB[e]) stars (Zickgraf 2006).
While stars with significant mid-IR emission are intrinsically rare, many of the most interesting massive "superstars", such as \(\eta\) Car or "Object X" in M33 (Khan et al. 2011), belong to this class. It is clear that searching for analogs of these interesting stars using mid-IR photometry of nearby galaxies is the way to go. The existing mid-IR "roadmaps" for interpreting luminous massive stars (Bonanos et al. 2009, 2010) are based on known massive stars in the LMC and the SMC. They have identified LBVs, sgB[e], and RSGs among the brightest mid-IR sources, due to their intrinsic brightness and due to being surrounded by their own dust. What is new about ASSESS is the idea of conducting - for the first time - a systematic study of mass loss in massive stars, by selecting targets using mid-IR photometry of nearby galaxies obtained with _Spitzer_.
## 2 Methodology
We have collected recently published mid-IR photometric catalogs from _Spitzer_ of galaxies with high star-formation rates within 5 Mpc: (a) seven dwarf galaxies within 1.5 Mpc from the DUSTiNGS project (Boyer et al., 2015): IC 10, IC 1613, Phoenix, Pegasus, Sextans A, Sextans B, and WLM, (b) 13 galaxies within 5 Mpc (Khan et al., 2015; Khan, 2017): M31, M33, NGC 247, NGC 300, NGC 1313, NGC 2403, M81, M83, NGC 3077, NGC 4736, NGC 4826, NGC 6822, and NGC 7793, and (c) five galaxies within 4 Mpc (Williams & Bonanos, 2016): NGC 55, NGC 253, NGC 2366, NGC 4214, and NGC 5253. The mid-IR photometry made available by the SAGE surveys of the LMC (Meixner et al., 2006) and SMC (Gordon et al., 2011) has been also searched for undetected, dust-obscured targets in our nearest neighbor galaxies. These catalogs contain mid-IR photometry of over 5 million point sources in 27 nearby galaxies, 19 of which have Pan-STARRS1 coverage (Chambers et al., 2016), providing an ideal dataset for a systematic study of luminous, dusty, evolved massive stars. We have compiled mid-IR photometric catalogs for these galaxies, including their counterparts in Pan-STARRS1 (\(g,r,i,z,y-\)bands), 2MASS (Cutri et al., 2003), VISTA Science Archive, WISE (Cutri & et al., 2012) and other archival surveys of particular galaxies to construct their spectral energy distributions (SEDs) out to 24 \(\mu\)m. The single epoch 5\(\sigma\) depth of Pan-STARRS1 ranges from 22nd magnitude in \(g-\)band to 20th magnitude in \(y-\)band, corresponding to absolute magnitudes brighter than \(-6\) in \(g\) and \(-8\) in \(y\) at 3.5 Mpc, respectively, which include the most luminous, evolved targets.
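Constructing the multi-survey SEDs requires positional cross-matching between catalogs; a minimal sketch of how such a match might be done with astropy is shown below. The 1-arcsec matching radius is an illustrative choice, not necessarily the one adopted for these catalogs.

```python
# Positional cross-match between a Spitzer source list and a Pan-STARRS1
# catalog; inputs are arrays of RA/Dec in degrees.
import astropy.units as u
from astropy.coordinates import SkyCoord

def crossmatch(spitzer_ra, spitzer_dec, ps1_ra, ps1_dec,
               radius=1.0 * u.arcsec):
    spitzer = SkyCoord(ra=spitzer_ra * u.deg, dec=spitzer_dec * u.deg)
    ps1 = SkyCoord(ra=ps1_ra * u.deg, dec=ps1_dec * u.deg)
    idx, sep2d, _ = spitzer.match_to_catalog_sky(ps1)
    matched = sep2d < radius  # boolean mask over the Spitzer sources
    return idx, matched      # idx[i] = nearest PS1 source to Spitzer source i
```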
Based on these catalogs, we have selected over 1000 luminous and red sources (selected by their colors in \([3.6]-[4.5]\)) in these 27 galaxies and are conducting follow-up low-resolution spectroscopy of these sources, mainly with FORS2 on VLT and OSIRIS on GTC. The spectra yield stellar types, luminosity classes, effective temperatures and an estimate of the reddening. High-resolution spectra will be obtained for particularly interesting targets for further analysis.
SED modeling with DUSTY (Ivezic & Elitzur, 1997) will provide radii and age estimates of the circumstellar shell, as well as the dust temperature, ejected mass, and bolometric luminosity. SED shapes will be quantified to estimate the timescales of episodic mass loss and lifetimes of the various evolved stages as a function of spectral type and metallicity. Evidence of binarity (from spectra, SEDs, light curves) will provide an estimation of the relative contribution of RLOF to the observed dusty evolved stages of massive stars. Armed with all these parameters for a sample of \(\sim 1000\) dusty, evolved stars, spanning a range of metallicity (\(\sim\)1/15 \(-\) 2 Z\({}_{\odot}\)), we will perform a comparison with state-of-the-art stellar evolutionary models (Brott et al., 2011; Ekstrom et al., 2012; Georgy et al., 2013; Meynet et al., 2015) to evaluate the input mass-loss rates and predicted outcomes. We plan to reverse-engineer the target stars to quantify and confirm the amount of "input" episodic mass loss needed to match the measurements.
## 3 Results
### Photometric Classifier
We have employed state-of-the-art machine-learning algorithms to automatically classify and select types of mass-losing stars, thereby accelerating and systematizing the investigation of multi-wavelength photometry. We developed a classifier for evolved massive stars based on known massive stars in M31 and M33 and using color indices as features to classify evolved massive stars into the following categories: blue, yellow, red supergiants, LBVs, classical Wolf-Rayet stars, sgB[e]. We also included a class for outliers (e.g. background galaxies, AGNs). The classifier is found to be on average 83% accurate (Maravelias et al., 2022). We are currently applying this classifier to classify over one million sources in 25 nearby galaxies (see Maravelias et al., this volume).
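For illustration only, the sketch below shows the general shape of such a color-index classifier using scikit-learn; the random-forest choice, feature set, and hyperparameters are assumptions and not the published classifier of Maravelias et al. (2022).

```python
# Color indices as features, evolved-star classes as labels.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# X: (n_sources, n_color_indices), e.g. g-r, r-i, J-K, [3.6]-[4.5], ...
# y: class labels (BSG, YSG, RSG, LBV, WR, sgB[e], outlier)
def train_classifier(X: np.ndarray, y: np.ndarray):
    clf = RandomForestClassifier(n_estimators=300, class_weight="balanced")
    acc = cross_val_score(clf, X, y, cv=5).mean()  # cross-validated accuracy
    clf.fit(X, y)
    return clf, acc
```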
### Observational survey
We have prioritized our targets based on their luminosity and IR excess, specifically, giving highest priority to targets with \(m_{3.6}-m_{4.5}\geq 0.5\) mag and \(M_{3.6}\leq-9.75\) mag. We have obtained multi-object spectroscopy with both the VLT and GTC starting in 2020, giving priority to the
galaxies that had enough high-priority targets to justify multi-object spectroscopy. We used the FORS2 spectrograph (Tramper et al., in prep.) and obtained spectra of over 400 high-priority and over 500 "filler" stars in M83, NGC 55, NGC 247, NGC 253, NGC 300, NGC 7793, Sextans A and WLM over 43h. The spectra have a resolving power of \(R=1000\) and a wavelength coverage around \(5400-8200\) Å, which is suitable for classification and parameter estimation. Figure 1 shows examples of dusty, evolved massive stars identified in NGC 55, NGC 300 and NGC 7793.
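The two priority cuts above can be expressed directly as a photometric selection; a minimal sketch follows, where the per-galaxy distance needed for the distance modulus is supplied by the caller.

```python
# IR-excess and absolute-magnitude cuts for high-priority targets.
import numpy as np

def high_priority(m36, m45, distance_mpc):
    """Boolean mask of high-priority targets.

    m36, m45     : apparent [3.6] and [4.5] magnitudes (arrays)
    distance_mpc : distance of the host galaxy in Mpc
    """
    dist_modulus = 5 * np.log10(distance_mpc * 1e6) - 5  # pc -> mag
    abs_m36 = m36 - dist_modulus
    return (m36 - m45 >= 0.5) & (abs_m36 <= -9.75)
```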
We also used the GTC OSIRIS spectrograph (for details see Munoz-Sanchez et al., this volume) and have so far obtained spectra of 48 high-priority stars in NGC 6822 and 33 in IC 10. The GTC spectra have a resolving power of \(R\sim 500-700\) and a wavelength coverage around \(5200-9200\) Å, which are being used to classify the sources and obtain their parameters.
In the Magellanic Clouds, we have similarly selected dusty, evolved sources and obtained spectra with the MagE spectrograph on Magellan and identified 8 new RSGs. Among them is a luminous, extreme RSG, with similar properties to WOH G64. Our results are presented by de Wit et al. (2022, this volume).
### Mass loss rates
We have set out to determine the mass loss rates (MLR) of red supergiants in the Small Magellanic Cloud, based on the catalogs of Yang et al. (2020) and Ren et al. (2021). Comprehensive photometry in over 50 bands (from the UV to \(24\mu m\)) for over 2000 RSG has been compiled and a grid of DUSTY models (Ivezic & Elitzur, 1997) was created for both silicate and carbon dust. This grid was used to perform a \(\chi^{2}\) fit of the SEDs and determine the dust parameters, optical depth and the mass loss rate for each supergiant.
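A schematic of such a grid-based \(\chi^{2}\) fit is sketched below; the analytic per-model flux scaling and the array shapes are illustrative, and the actual fit also varies the DUSTY grid parameters (dust chemistry, optical depth, temperature).

```python
# Scale each model SED to the observed fluxes and keep the best grid point.
import numpy as np

def fit_sed(obs_flux, obs_err, model_grid):
    """obs_flux, obs_err : arrays over photometric bands
    model_grid : (n_models, n_bands) array of DUSTY model fluxes
    Returns the index of the best model and its chi-squared."""
    # Analytic best-fit scale factor per model (linear least squares).
    scale = (model_grid * obs_flux / obs_err**2).sum(axis=1) / \
            (model_grid**2 / obs_err**2).sum(axis=1)
    resid = obs_flux - scale[:, None] * model_grid
    chi2 = (resid**2 / obs_err**2).sum(axis=1)
    best = int(np.argmin(chi2))
    return best, chi2[best]
```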
From the distribution of MLR, we find a typical value of \(\sim 10^{-6}\,\mathrm{M}_{\odot}\,yr^{-1}\), with a few outliers at around \(\sim 10^{-4}\) and \(10^{-3}\,\mathrm{M}_{\odot}\,yr^{-1}\). We determine a new MLR vs. \(L\) relation based on an unbiased sample of RSG in the SMC, which shows an upturn at around \(\log(L/\mathrm{L}_{\odot})=4.7\), with enhanced mass loss occurring at higher \(L\). Compared to previously determined relations in the SMC, our result (Yang et al., 2022, in prep.) is most similar to the relations of Feast (1992) and van Loon et al. (2005). We plan to apply the procedure to all our program galaxies and determine the MLR at a range of metallicities.
## 4 Conclusions
We have presented the first results of an ambitious systematic study of episodic mass loss in \(\sim 1000\) evolved massive stars. This survey is timely, given the recent availability of mid-IR catalogs, and ambitious, as it plans to increase the number of evolved massive stars in nearby galaxies by a factor of 5. The _James Webb Space Telescope_ is operating concurrently with this project. The enormous boost in sensitivity and angular resolution will revolutionize our understanding of these nearby objects. However, to fully exploit this we need to be able to tie the JWST results into the more general population. This project provides this anchor.

Figure 1: Newly identified spectra of evolved massive stars in our FORS2 data from the VLT, including a candidate LBV and RSG in NGC 55 (top row), a sgB[e] in NGC 300 and a RSG in NGC 7793 (bottom row).
The results of this study will not only provide the first quantitative inventory and characterization of dusty massive stars in 27 galaxies in the nearby Universe at a range of metallicities, but may also reveal new classes of enshrouded stars and rare transitional objects. A byproduct of the survey will be the release of multi-wavelength photometric catalogs of luminous sources in 27 galaxies, including their classifications, which will be valuable for various scientific projects.
## Acknowledgments
We acknowledge funding support from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (Grant agreement No. 772086).
|
2309.01418 | Social Factors in P2P Energy Trading Using Hedonic Games | Lately, energy communities have gained a lot of attention as they have
the potential to significantly contribute to the resilience and flexibility of
the energy system, facilitating widespread integration of intermittent
renewable energy sources. Within these communities the prosumers can engage in
peer-to-peer trading, fostering local collaborations and increasing awareness
about energy usage and flexible consumption. However, even under these
favorable conditions, prosumer engagement levels remain low, requiring trading
mechanisms that are aligned with their social values and expectations. In this
paper, we introduce an innovative hedonic game coordination and cooperation
model for P2P energy trading among prosumers which considers the social
relationships within an energy community to create energy coalitions and
facilitate energy transactions among them. We defined a heuristic that
optimizes the prosumers coalitions, considering their social and energy price
preferences and balancing the energy demand and supply within the community. We
integrated the proposed hedonic game model into a state-of-the-art
blockchain-based P2P energy flexibility market and evaluated its performance
within an energy community of prosumers. The evaluation results on a
blockchain-based P2P energy flexibility market show the effectiveness in
considering social factors when creating coalitions, increasing the total
amount of energy transacted in a market session by 5% compared with other game
theory-based solutions. Finally, it shows the importance of the social
dimensions of P2P energy transactions, the positive social dynamics in the
energy community increasing the amount of energy transacted by more than 10%
while contributing to a more balanced energy demand and supply within the
community. | Dan Mitrea, Viorica Chifu, Tudor Cioara, Ionut Anghel, Cristina Pop | 2023-09-04T08:02:49Z | http://arxiv.org/abs/2309.01418v1 | # Social Factors in P2P Energy Trading Using Hedonic Games
###### Abstract
Lately, energy communities have gained a lot of attention as they have the potential to significantly contribute to the resilience and flexibility of the energy system, facilitating widespread integration of intermittent renewable energy sources. Within these communities, prosumers can engage in peer-to-peer trading, fostering local collaborations and increasing awareness about energy usage and flexible consumption. However, even under these favorable conditions, prosumer engagement levels remain low, requiring trading mechanisms that are aligned with their social values and expectations. In this paper, we introduce an innovative hedonic game coordination and cooperation model for P2P energy trading among prosumers, which considers the social relationships within an energy community to create energy coalitions and facilitate energy transactions among them. We defined a heuristic that optimizes the prosumers' coalitions, considering their social and energy price preferences and balancing the energy demand and supply within the community. We integrated the proposed hedonic game model into a state-of-the-art blockchain-based P2P energy flexibility market and evaluated its performance within an energy community of prosumers. The evaluation results on a blockchain-based P2P energy flexibility market show the effectiveness in considering social factors when creating coalitions, increasing the total amount of energy transacted in a market session by 5% compared with other game theory-based solutions. Finally, it shows the importance of the social dimensions of P2P energy transactions, the positive social dynamics in the energy community increasing the amount of energy transacted by more than 10% while contributing to a more balanced energy demand and supply within the community.
hedonic games, peer to peer energy trading, social factors, prosumers coalition, energy community.
## 1 Introduction
The energy transition towards renewables is changing the way we produce, consume, and share energy, setting the stage for decentralized local energy systems [1]. In Europe, the war in Ukraine has accelerated this process because it highlighted the security risks associated with centralized energy production and the energy dependence impact on energy prices, encouraging the governments to take more proactive steps towards decentralized and resilient energy systems [2]. Within such a framework, promising new concepts such as prosumers and energy communities have gained a lot of attention as they have the potential to significantly contribute to the resilience and flexibility of the energy system, facilitating the large-scale rollout of intermittent renewable energy technologies without requiring expensive infrastructure upgrades [3].
A prosumer is an individual household that consumes and produces renewable electricity and may inject the surplus into the grid and withdraw electricity when the self-production is not sufficient [4]. Prosumers may participate in the management of energy communities, by trading energy and providing services like demand flexibility and decentralized energy storage [5]. In this context, peer-to-peer (P2P) energy trading enables direct transactions between prosumers, fostering a sense of community by allowing neighbors to share their energy resources. It empowers prosumers to put a value on their energy flexibility increasing their awareness while at the same time promoting energy self-sufficiency and encouraging local
collaboration [6]. P2P energy trading platforms are often implemented using blockchain to record energy transactions transparently and in a tampered-proof manner reinforcing the trust among prosumers. The blockchain's decentralized nature and cryptographic algorithms can provide the needed security for data sharing providing support for the interactions within the energy communities [7]. With the help of IoT devices, real-time data on prosumers' energy production and consumption can be collected, allowing for identifying energy-saving opportunities and optimizing the usage of energy resources using Artificial Intelligence (AI) [8]. Innovative incentivization and financing models enable prosumers to directly trade excess energy with each other, pushing the development of a more economically viable and community-driven decentralized energy system [9].
However, even under these favorable conditions, one issue still needs to be addressed: the low level of engagement of prosumers in P2P energy trading, which may jeopardize the autonomy and reliability of community energy delivery [10]. Prosumer engagement is seen as one of the effective tools to unlock the potential of P2P energy trading and energy communities, by providing mechanisms that are aligned with their social values and expectations (e.g., relations, prices, and other values). Consideration of social relations may foster a sense of community and cooperation, encouraging prosumers to actively trade their energy inside the energy community [11]. They are more willing to participate in P2P energy trading if they have positive social relationships with other peers within the energy community [12]. They tend to be more open to creating local coalitions with prosumers who have strong social connections, such as friends or family members. This can be particularly important in small communities, where players may already have established social ties [13]. Similarly, prosumers are more likely to find mutually acceptable solutions if they value their positive relationships with each other; thus, avoiding energy transactions among peers with negative relations can also be beneficial. Prosumers may also prefer forming coalitions with other prosumers with whom they have a history of cooperation, as this can help to mitigate the risk of conflicts [14]. Social relations can provide information and support to newcomers, making P2P trading more accessible; prosumers are more willing to join in trading with community members who share similar environmental values [11].
At the same time, from a technical perspective, a significant body of literature is dedicated to employing cooperative game theory for managing prosumer participation in P2P energy trading [15]. They provide optimized trading strategies by considering the collective interests of prosumers, which increases engagement levels [16]. The cooperative games enable prosumers to form coalitions over P2P trading platforms, the reward being provided by the collective actions of the members and not solely by individual actions [17]. Moreover, they ensure a fair distribution of energy resources, ensuring that the benefits derived from P2P energy trading are shared fairly among participants [18]. However, most of the approaches discussed in existing literature tend to overlook social aspects such as the social connections among prosumers within the community in the cooperative game models developed for P2P energy trading.
In this paper, we address the gaps identified in the literature by providing a prosumer coordination and cooperation model over a P2P energy flexibility market set up within an energy community. The cooperation model is based on hedonic games addressing the creation of coalitions of prosumers who can express preferences over the peers they are (or are not) willing to collaborate with and over the coalitions they may belong to. The prosumers' preferences are expressed as social connections with other peers within the community, while the created coalitions are optimized, matched for trading, and energy balanced using a genetic heuristic. The model implementation was done in the context of a P2P energy flexibility market based on blockchain and smart contracts. The paper contributes to a broader understanding of P2P energy trading and its social dimensions by showing how social connections can be used to guide the process of coalition formation in P2P energy trading and that it is possible to create coalitions that satisfy the social
preferences of prosumers, while also minimizing the overall difference between surplus and deficit of energy at the community level.
The novel contributions of the paper are the following:
* A cooperation model for P2P energy trading using a hedonic game that considers the social relationships among prosumers within an energy community to create energy coalitions and facilitate energy transactions among them.
* A heuristic to optimize the coalitions of prosumers in the hedonic game model for P2P energy trading, considering the prosumers' social and price preferences and balancing the energy demand and supply within the community.
* The hedonic game model integration with a state-of-the-art blockchain-based P2P energy flexibility market and evaluation in the context of an energy community of prosumers.
The rest of the paper is structured as follows: Section 2 presents the state of the art on cooperative games for peer-to-peer energy trading considering social connections among prosumers, section 3 presents the hedonic game-based cooperative trading model emphasizing the incorporation of social preferences of prosumers in creation and optimization of coalitions, section 4 presents the relevant results in the context of blockchain-based P2P energy flexibility market, section 5 discusses the impact of various parameters on the hedonic game outcomes while section 6 presents conclusions and future work.
## 2 Related Work
The state-of-the-art game theory applications in peer-to-peer energy trading focus on cooperative games to facilitate market-level interactions and prosumer engagement [15, 19]. They investigate how and why certain prosumers might relax some of their goals, and form market-level coalitions collaborating to achieve a better collective outcome [20]. Cooperative games over peer-to-peer energy trading systems are used to guarantee the efficiency, stability, and safety of community-level energy operations, and enforce operational constraints [17, 21]. Prosumers' primary objective is to reduce their energy expenses and increase their profit while using different update strategies to reduce communication burdens between them [22]. However, a limited number of strategies consider social factors as motivators for encouraging collaboration among prosumers within energy communities and none to our knowledge apply hedonic games to capture the social preferences of prosumers [11-13].
Wang et al. [23] propose a two-level hierarchical incentive mechanism to motivate prosumers to join electricity peak-shifting in a P2P decentralized energy market. The incentivization is based on prosumers having to meet the energy-shifting values they have agreed upon and a reward penalty approach. Luo et al. [24] use a game-theory-based decentralized trading scheme to separate the original coordination problem, which would be the job of a market coordinator, into several sub-problems for each prosumer. The updates on global prices and quantities are done in a sequential manner, such that the next prosumers can use the latest information to optimize their actions, increasing their economic benefits and reducing electricity costs. Intermittent renewable production might cause instability in P2P energy trading and cooperative games can be used to ensure stability [25]. Opportunistic usage of prosumer batteries for peer-to-peer trading is studied in [20] considering that prosumers want to maximize the usage of renewable. Prosumers can either sell their energy surplus without discharging their battery or use their batteries for trading purposes to maximize their utility by considering a charging / discharging action. To form coalitions, a prosumer first must meet its demand from its solar panels, then calculate its surplus or deficit, and then, based on price thresholds of coalitions and available energy prices will be engaged in P2P trading. Coalitional game theory is used in [26] to find the winning coalitions that will play the game of optimizing the energy transfer within a community of prosumers that aim to operate independently
from the main grid. Frequent changes in energy demand and generation are considered as well as a utility function that computes the coalition energy availability when all prosumers act together. The management of energy communities based on variations of demand and supply is addressed in [27]. To join coalitions, each prosumer generates a list of preferences that might change over time and uses the Bayesian theorem to update these beliefs about others. Zu et al. [28] integrate energy trading and energy management, such that prosumers can manage their consumption and schedule their green energy storage. The energy control is done using the Lyapunov Theory, allowing prosumers to independently determine their energy order for each time slot, based on their current energy supply conditions. These allow prosumers to join a coordinated mechanism of influencing their energy consumption and green energy charge/discharge and find the best solution for the entire community. Li et al. [29] use game theory to construct two computationally efficient mechanisms for generating stable coalitions of prosumers in P2P energy trading, one that involves a benefit distribution scheme, and another that deals with a novel pricing mechanism. How prosumers decide which coalition to join is based on a dissatisfaction level, which can change as they join different coalitions, determine potential benefits from other coalitions, or consider the option of remaining independent. In [30] the authors proposed a P2P energy trading scheme to establish a grand energy coalition of prosumers and a suitable incentivization mechanism. Cooperative optimization of energy storage units is required, with the cost function quantizing the energy cost saving. The Shapley Value method is used to define a unique distribution of the total monetary benefit of the grand coalition, to all its prosumers, showing a decrease in the energy price. A model based on game theoretic approaches that incentivize prosumers to actively interact with the smart grid, all while preserving the privacy of participants, is presented in [31]. The game assumes that prosumers first choose a price, then consumers observe the prices and decide the amount of power to be purchased. Quadratic functions are used to model the Nash equilibrium assuming that users are willing to consume as much power as needed to balance the generation. Jin et al. [32] use game theory for P2P energy flexibility trading in energy communities. The leader of the game is the producer, and the followers are the consumers. The leader will propose the trading price, and then, the trading quantity, based on multiple factors such as load profile and demand. The price is continuously adjusted as in a Stackelberg game, to reach an optimal value, by comparing the demand with the supply. The solution is beneficial to the entire community because if the seller sets a high price, the buyer will react by reducing his allocated traded quantity. This is done multiple times until the prices and quantities converge. Lee et al. [33] combine three types of games that are played sequentially until the convergence point is met. An evolutionary game is used among buyers with strategies that might evolve, a non-cooperative game between sellers, and a Stackelberg game between sellers and buyers. In the evolutionary game, the community manager updates all members with a loss function that helps them improve over time.
Behavioral models and social motivational models for prosumers to join P2P energy trading schemes are considered in [34]. The norm activation theory is used to model the peers' behavior and to define multiple stages of behavioral change starting with awareness, then responsibility, and finally, personal norms (i.e., the usage of renewable energy). Psychological factors for prosumers joining energy trading schemes relate to trust, ecosystem friendliness, and fair distribution models. Game theory is used to consider them at the community level to minimize the gap in supply and demand. Tushar et al. [35] discuss motivational models and social factors for engaging prosumers in energy trading. Aspects such as attitude, rational-economic, information, or positive reinforcement are considered concerning environmental, social, or economic advantages. A trading scheme is designed based on a canonical coalition game to obtain the best energy price and to target the social aspects. In [36], an incentivization solution is proposed using a two-level game-theoretic approach, where the lower level uses a hedonic coalition formation game for the servers to share their resources. Cluster heads are randomly assigned, and they offer reward pools for workers to join them, with these reward pools being different based on the cluster heads' available
budgets. As more workers join a cluster head, the reward gets smaller, so the workers distribute themselves more intelligently such that their reward is the highest possible. Yap et al. [37] use a motivational game-theory-based approach to solve efficient P2P energy trading among prosumers, both for multi-city and intra-city scenarios. The market price is set as the average price for residential, commercial, and industrial customers. A cooperative energy market model using a generalized Nash bargaining scheme is proposed in [38], considering social welfare maximization and optimal energy trading. The network operator can trade with prosumers, and the prosumers can also trade P2P among themselves. The socioeconomic optimization problems are transformed from nonconvex problems to linear ones, using a grid propagation algorithm to increase social welfare and fairness in profit allocation. A key challenge in P2P energy trading is designing pricing schemes that motivate prosumers to cooperate and participate in managing network congestion [39]. Long et al. address this challenge by proposing a P2P energy trading solution using cooperative games [40]. The parameters of the game are the prosumer load, the energy quantities bought or sold, energy prices, the prosumer electricity bill, and battery energy. Coalitions are formed, driven by the income prosumers receive when trading with the supplier, while the Shapley value method is used to allocate resources and fairly distribute cost savings among prosumers. Malik et al. [25] consider multiple time intervals in energy trading, and various priorities such as energy quantity, geographic location, and pricing mechanism. After pairs are created, a grand coalition is generated to maximize social welfare and energy savings. Annual profit and an energy reliability index are used to compute a multi-objective optimization function in [41] and to perform planning for P2P and P2G energy trading. Game theory and a particle swarm optimization algorithm are used to form coalitions, find the optimal sizes of the players, and compute payoff values, showing that profits are maximized when both criteria are considered.
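Several of the schemes above ([30], [40]) rely on the Shapley value to split a coalition's monetary benefit fairly among its members. As a concrete illustration, the following sketch computes exact Shapley values for a toy coalition game; the characteristic function and its numbers are invented for illustration and are not taken from the cited works.

```python
from itertools import combinations
from math import factorial

def shapley_values(players, value):
    """Exact Shapley values for a coalition game.

    players: list of player ids; value: characteristic function mapping a
    frozenset of players to the coalition's monetary benefit. Exact
    enumeration is exponential, so this is practical only for small
    communities.
    """
    n = len(players)
    phi = {p: 0.0 for p in players}
    for p in players:
        others = [q for q in players if q != p]
        for k in range(n):
            for coalition in combinations(others, k):
                s = frozenset(coalition)
                # Weight of coalition s in p's marginal-contribution average.
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi[p] += weight * (value(s | {p}) - value(s))
    return phi

# Toy example: three prosumers whose joint operation saves more than the
# sum of their individual savings (values are purely illustrative).
savings = {frozenset(): 0, frozenset("A"): 2, frozenset("B"): 3,
           frozenset("C"): 1, frozenset("AB"): 7, frozenset("AC"): 4,
           frozenset("BC"): 5, frozenset("ABC"): 10}
print(shapley_values(list("ABC"), lambda s: savings[s]))
```

The allocations sum to the grand coalition's value, which is the efficiency property that makes the Shapley value a natural benefit distribution scheme in these works.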
## 3 Hedonic Game for Cooperative Trading
In this section, we present how hedonic games can be used for prosumer coordination over a peer-to-peer energy flexibility market implemented at the energy community level. We describe the cooperation model considering the prosumers' social preferences and how the coalitions are optimized and matched using a genetic heuristic.
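As a concrete illustration of this setup, the sketch below pairs a hedonic utility, where each prosumer's payoff depends only on who shares its coalition, with a simple genetic search over partitions. The preference matrix, the fixed number of coalitions, and the genetic-algorithm parameters are illustrative assumptions, not the exact model and heuristic detailed in the following subsections.

```python
import random

def coalition_utility(coalition, preference):
    # Hedonic games: a prosumer's utility depends only on the members of
    # its own coalition; preference[i][j] encodes i's (social) value for
    # being grouped with j.
    return {i: sum(preference[i][j] for j in coalition if j != i) for i in coalition}

def genetic_partition(n, preference, pop=30, gens=200, n_coalitions=3):
    """Search for a partition of n prosumers into coalitions maximizing
    total utility. Encoding: each individual assigns every prosumer a
    coalition index."""
    def fitness(ind):
        total = 0.0
        for c in range(n_coalitions):
            members = [i for i in range(n) if ind[i] == c]
            total += sum(coalition_utility(members, preference).values())
        return total

    population = [[random.randrange(n_coalitions) for _ in range(n)] for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop // 2]          # elitist selection
        children = []
        while len(survivors) + len(children) < pop:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, n)            # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.2:               # mutation
                child[random.randrange(n)] = random.randrange(n_coalitions)
            children.append(child)
        population = survivors + children
    return max(population, key=fitness)

random.seed(0)
n = 8
pref = [[random.uniform(-1, 1) for _ in range(n)] for _ in range(n)]
print(genetic_partition(n, pref))
```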
id: 2310.14610
title: That was the last straw, we need more: Are Translation Systems Sensitive to Disambiguating Context?
abstract: The translation of ambiguous text presents a challenge for translation systems, as it requires using the surrounding context to disambiguate the intended meaning as much as possible. While prior work has studied ambiguities that result from different grammatical features of the source and target language, we study semantic ambiguities that exist in the source (English in this work) itself. In particular, we focus on idioms that are open to both literal and figurative interpretations (e.g., goose egg), and collect TIDE, a dataset of 512 pairs of English sentences containing idioms with disambiguating context such that one is literal (it laid a goose egg) and another is figurative (they scored a goose egg, as in a score of zero). In experiments, we compare MT-specific models and language models for (i) their preference when given an ambiguous subsentence, (ii) their sensitivity to disambiguating context, and (iii) the performance disparity between figurative and literal source sentences. We find that current MT models consistently translate English idioms literally, even when the context suggests a figurative interpretation. On the other hand, LMs are far more context-aware, although there remain disparities across target languages. Our findings underline the potential of LMs as a strong backbone for context-aware translation.
authors: Jaechan Lee, Alisa Liu, Orevaoghene Ahia, Hila Gonen, Noah A. Smith
published_date: 2023-10-23T06:38:49Z
link: http://arxiv.org/abs/2310.14610v1

# That was the last straw, we need more: Are Translation Systems Sensitive to Disambiguating Context?
###### Abstract
The translation of ambiguous text presents a challenge for translation systems, as it requires using the surrounding context to disambiguate the intended meaning as much as possible. While prior work has studied ambiguities that result from different _grammatical_ features of the source and target language, we study semantic ambiguities that exist in the source (English in this work) itself. In particular, we focus on idioms that are open to both literal and figurative interpretations (e.g., _goose egg_), and collect Tide,1 a dataset of 512 pairs of English sentences containing idioms with disambiguating context such that one is literal (_it laid a goose egg_) and another is figurative (_they scored a goose egg_, as in a score of zero). In experiments, we compare MT-specific models and language models for (i) their **preference** when given an ambiguous subsentence, (ii) their **sensitivity** to disambiguating context, and (iii) the performance **disparity** between figurative and literal source sentences. We find that current MT models consistently translate English idioms literally, even when the context suggests a figurative interpretation. On the other hand, LMs are far more context-aware, although there remain disparities across target languages. Our findings underline the potential of LMs as a strong backbone for context-aware translation.
Footnote 1: Data and code can be found at [https://github.com/jaechan-repo/mt-ambiguity](https://github.com/jaechan-repo/mt-ambiguity).
## 1 Introduction
Natural language is inherently ambiguous due to the competing pressures of efficiency and clarity in communication Zipf (1949); Piantadosi et al. (2012). As communicators, we disambiguate meanings on the basis of a wide range of contextual factors, or ask clarifying questions when such context is not available. Though sometimes overlooked, the role of ambiguity in NLP has gained growing interest in recent work Min et al. (2020); Liu et al. (2023); Stengel-Eskin et al. (2023).
In machine translation (MT), it has long been recognized that ambiguities arise when the source language does not encode grammatical attributes that the target language requires Bar-Hillel (1953); Prates et al. (2019); Savoldi et al. (2021); Gonen and Webster (2020), i.a.). For instance, the English sentence "_I am a doctor_" would require disambiguating the doctor's gender for translation to German, which has no gender-neutral word for "_doctor_." Prior work created contrastive test sets for such phenomena, to evaluate whether MT models correctly translate an ambiguous word (here, "_doctor_") when disambiguating context is available (e.g., "_She is a doctor_") Muller et al. (2018); Bawden et al. (2018); Voita et al. (2019).
In contrast with _grammatical ambiguity_ with respect to a target language, it is relatively less understood how MT systems handle _semantic ambiguity_ present in the source text itself. For instance, _"I have bigger fish to fry"_ is ambiguous between figurative ("... _at work_") and literal ("... _for dinner_") interpretations in English, outside of the
Figure 1: Tide consists of pairs of contrastive sentences that contain the same idiomatic expression in different contexts, such that one uses the figurative meaning of the idiom (left), and another uses its literal meaning (right). On this set of inputs, ChatGPT is sensitive to the disambiguating context when translating the idiom, while NLLB is not.
context of translation. Therefore, we extend the line of work on context-aware translation to semantically ambiguous phrases in English.
To this end, we create Tide, **T**ranslations of **I**dioms in **D**isambiguating context in **E**nglish, a dataset of \(512\) example triples. Each triple consists of an ambiguous subsentence and a pair of contrastive sentences that contain the subsentence but add disambiguating context: one to produce a figurative interpretation of the idiom, and another to produce a literal interpretation of it (see Figure 1 for an example). Our creation process for the triples combines automatic text generation with human annotation: we use GPT-4 to draft the triples, which are then scrutinized by human annotators. Following this, we engage native speakers of four languages to craft reference translations for a subset of the dataset.
In our experiments, we evaluate both traditional neural MT models and language models (LMs). MT-specific models are trained on large corpora of parallel sentences, and have formed the foundation of translation research; LMs are trained without any explicit supervision for translation, yet recently demonstrate impressive translation ability (Hendy et al., 2023). Using Tide, we compare how these two types of systems handle ambiguity, and evaluate their sensitivity to disambiguating context. We find that on ambiguous input, LMs demonstrate roughly balanced preference between literal and figurative interpretations, whereas MT-specific models consistently prefer literal ones (§4.1). Given disambiguating context, LMs are substantially more context-aware, though this sensitivity declines for more low-resource target languages; in contrast, MT-specific models tend to translate idioms literally irrespective of context (§4.2). Finally, MT-specific models are better at translation of literal text than figurative text, whereas this disparity in LMs is much narrower (§4.3).
We summarize our contributions as follows: (1) We formalize the challenge of ambiguous idiomatic language in MT; (2) we create a new translation benchmark, Tide, that includes sentences with idioms along with disambiguating contexts (literal and figurative); (3) we analyze MT systems' behavior with and without disambiguating contexts, pointing to interesting trends and differences between LMs and MT-specific models.
## 2 Creating Tide
Idioms, though commonplace in daily communication, pose a challenge for MT systems due to their inherent ambiguity between literal and non-literal meanings. Generating the most appropriate translation among potential disambiguations of an idiom involves an understanding that extends beyond the idiom itself, as an MT system must use broader context clues to discern the most fitting translation.
We present Tide, a dataset of \(512\) example triples. Each triple consists of an _ambiguous subsentence_, a _figurative sentence_, and a _literal sentence_ in English, all including the same idiom. The ambiguous subsentence permits both figurative and literal interpretations of the idiom, while the figurative and literal sentences introduce additional context that resolves the ambiguity to figurative and literal readings, respectively. We design subsentences (e.g., _"had a card up his sleeve"_) to be more than the idiom itself (here, _"card up sleeve"_), as idioms alone can often be unnatural as standalone input to an MT system.
We construct Tide through a human-AI collaborative approach following a line of recent work (Liu et al., 2022; Chakrabarty et al., 2022). We first manually select candidate idioms from two large idiom corpora (§2.1). Next, we leverage the generative power of GPT-4 to efficiently produce diverse and high-quality text, by prompting it to write a complete triple for each idiom (§2.2). To ensure quality and correctness, we then involve human annotators to filter out invalid triples (§2.3). Finally, we collect gold translations for a subset of the dataset among native speakers (§2.4).
### Collection of Idioms
To collect idioms, we scrape The Idioms dictionary2 to obtain 1409 idioms, and additionally use a dataset of 905 idioms from Rabinovich et al. (2020); both sources contain corresponding idiom definitions. We discard duplicate idioms (including those that appear in different conjugations) and proverbs (e.g., _All that glitters is not gold_), which are often too self-contained to be disambiguated with context. Then, we manually select idioms that admit a natural and plausible _literal_ interpretation, in addition to their figurative meanings. This results in a set of 700 idioms with definitions.
### Generation of Idioms in Context
Next, we draft an example triple for each idiom by prompting GPT-4 with a fixed prompt, containing two in-context examples along with additional guidelines (details in Appendix A). We write a set of heuristics to automatically identify some types of ill-formed output, such as when the subsentence is not an exact substring of the full sentences. When a rule is violated, we add an additional turn of dialogue instructing the model to revise its output to follow the broken rule. We repeat this until all rules are followed, or when two revisions are attempted without success. After this, we have 700 English triples, each associated with a unique idiom.
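A minimal sketch of this draft-check-revise loop is shown below. Only the exact-substring rule is stated explicitly above, so the second check, the prompt wording, and the `llm` wrapper are illustrative placeholders.

```python
def check_triple(triple):
    """Return the list of violated rules for a generated triple. Only the
    exact-substring rule is from the paper; the other check is a
    plausible stand-in."""
    errors = []
    sub = triple["ambiguous_subsentence"]
    for key in ("figurative_sentence", "literal_sentence"):
        if sub not in triple[key]:
            errors.append(f"the subsentence must appear verbatim in the {key}")
    if triple["figurative_sentence"] == triple["literal_sentence"]:
        errors.append("the two full sentences must differ")
    return errors

def generate_triple(idiom, llm, max_revisions=2):
    """Draft a triple, then request revisions until all rules pass or the
    revision budget is exhausted. `llm` is an assumed callable wrapping a
    chat model that returns a parsed triple."""
    messages = [{"role": "user",
                 "content": f"Write an ambiguous subsentence, a figurative "
                            f"sentence, and a literal sentence for the idiom: {idiom}"}]
    triple = llm(messages)
    for _ in range(max_revisions):
        errors = check_triple(triple)
        if not errors:
            break
        # Add one more turn of dialogue pointing at the broken rules.
        messages.append({"role": "assistant", "content": str(triple)})
        messages.append({"role": "user",
                         "content": "Revise your output to satisfy: " + "; ".join(errors)})
        triple = llm(messages)
    return triple, check_triple(triple)
```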
### Human Annotation
Of course, the triples collected in §2.2 may not correctly use idioms literally and figuratively, and generated text is susceptible to fluency and coherence issues. To ensure data quality, we recruit crowdworkers on Amazon Mechanical Turk to label each of the full sentences as using either the literal or the figurative sense of an idiom. We present each full sentence independently (not as a pair) to two different crowdworkers, who are asked to label it as _figurative_, _literal_, or _ambiguous_ with respect to how it uses the given idiom. They may also indicate that the sentence is invalid if it is offensive or has fluency issues (see Appendix B for details).
The annotators achieved substantial agreement on this task, with a Fleiss \(\kappa\) score of 0.721. Furthermore, for 82.9% of examples, there is a complete agreement between both annotators and the intended label (the label which we ask GPT-4 to follow when generating triples).
Based on the annotations, we discard triples where the intended-figurative sentence received no votes for figurative, or the intended-literal sentence received any vote other than literal. This asymmetry in the filtering heuristic reflects our observation that GPT-4 was far more reliable at generating figurative uses of idioms than literal ones, so we enforce a lower bar for retaining figurative sentences. We also discard all triples for which at least one sentence received a vote to discard as invalid. In this way, we obtain the 512 English triples which constitute Tide.
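The filtering rule can be summarized in a few lines. This sketch assumes the two annotator votes per sentence are encoded as strings, with "invalid" standing in for the option to discard a sentence:

```python
def keep_triple(fig_votes, lit_votes):
    """Asymmetric filtering rule described above. fig_votes / lit_votes
    hold the two annotator labels for the intended-figurative and
    intended-literal sentences, each drawn from
    {"figurative", "literal", "ambiguous", "invalid"}."""
    if "invalid" in fig_votes or "invalid" in lit_votes:
        return False                        # any vote to discard
    if "figurative" not in fig_votes:
        return False                        # needs at least one figurative vote
    if any(v != "literal" for v in lit_votes):
        return False                        # stricter bar for literal sentences
    return True

# keep_triple(["figurative", "ambiguous"], ["literal", "literal"])  -> True
# keep_triple(["figurative", "figurative"], ["literal", "ambiguous"]) -> False
```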
### Collecting Translations
Finally, for a randomly selected subset of 50 idioms, we gather reference translations for the contrastive pairs of figurative and literal sentences from native speakers of Hebrew, Yoruba, Korean, and Chinese.
## 3 Experimental Setup
In this section we outline the models (SS3.1) and languages (SS3.2) we evaluate, our automatic metrics (SS3.3), and our setup for collecting human evaluations of generated translations (SS3.4).
### Models
We evaluate two classes of translation systems: MT-specific models and LMs. Here, the MT-specific models use an encoder-decoder architecture and are trained on large amounts of parallel data, whereas the LMs are decoder-only models trained to maximize likelihood (i.e., next-token prediction) on predominantly-English text.
**MT-Specific Models** We evaluate NLLB (Meta, 2022) and Opus MT (Tiedemann and Thottingal, 2020; Tiedemann, 2020). NLLB is trained on partially synthetic parallel data, and covers 202 languages.3 Opus MT is a collection of models, each with a fixed source and target language.4 For both models, we decode the translation greedily.

| **Idiom** | **Figurative Sentence** | **Literal Sentence** |
| --- | --- | --- |
| tip of the iceberg (_to only know a very small part of the problem_) | The problems we discovered were **just the tip of the iceberg** in this company. | As we approached the glacier, we saw **just the tip of the iceberg** above the water. |
| fall between the cracks (_be ignored or unobserved_) | His request for a promotion **fell between the cracks** due to the company's restructuring. | The small toy **fell between the cracks** of the wooden floor. |
| foam at the mouth (_be extremely angry_) | He **was foaming at the mouth** when he found out about the betrayal. | The rabid dog **was foaming at the mouth** and needed to be isolated. |
| foot in the door (_succeed with a first step_) | By volunteering at the company, she **got a foot in the door** for a full-time position. | When the door was closing, he quickly **got a foot in the door** to prevent it from shutting. |

Table 1: **Examples in Tide.** A figurative and literal sentence disambiguates the idiom by adding context that demands figurative and literal interpretations, respectively.
Footnote 3: [https://huggingface.co/facebook/nllb-200-3.3B](https://huggingface.co/facebook/nllb-200-3.3B)

Footnote 4: The most recent model for each language pair was downloaded from [https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models): transformer-big for De, Es, and Hu, transformer-align for He, Hi, and Yo. Their most recent English to Chinese models by July 2023 do not produce coherent outputs, so we proceed with the earlier version available on HuggingFace: [https://huggingface.co/Helsinki-NLP/opus-mt-en-zh](https://huggingface.co/Helsinki-NLP/opus-mt-en-zh). English to Korean models are not evaluated due to an issue with their PyTorch implementation, as reported by multiple users.
**Language Models** We evaluate ChatGPT (gpt-3.5-turbo; OpenAI, 2022)5 and PaLM 2 (text-bison-001; Google et al., 2023).6 We do not include GPT-4 as it partially authored the examples in the dataset.
Footnote 5: API last accessed on June 18, 2023.
Both models were trained on a mixture of different languages, and in particular PaLM 2's training corpus included parallel data for hundreds of languages. However, both LMs are trained for the next-token-prediction objective.
We prompt the LM to generate translations zero-shot with the prompt "Translate the following English sentence to [target language]: [source sentence]," and greedily decode the continuation. We do not provide in-context examples or further instructions about figurative language, in order to create a setting comparable to the evaluation of MT-specific models.
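For reference, a zero-shot call of this form might look as follows with the OpenAI Python client. The prompt template and model name follow the setup above, while the client code and `temperature=0`, used here as a stand-in for greedy decoding, are our own assumptions.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def translate(sentence: str, target_language: str) -> str:
    """Zero-shot translation prompt, no in-context examples."""
    prompt = (f"Translate the following English sentence to "
              f"{target_language}: {sentence}")
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return response.choices[0].message.content.strip()

print(translate("When the door was closing, he quickly got a foot in the door.", "German"))
```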
**Google Translate** We also include Google Translate7 for reference due to its popularity in commercial use. We do not classify it as either an MT-specific model or LM due to the lack of public understanding of how it works.
Footnote 7: [https://translate.google.com/](https://translate.google.com/). API last accessed on June 14, 2023.
### Languages
We consider the eight target languages: Spanish (Es), Hindi (Hi), German (De), Hungarian (Hu), Korean (Ko), Chinese (Zh), Hebrew (He), and Yoruba (Yo), which vary in resource-availability and are typologically and culturally diverse. When the evaluation requires a gold translation, we focus on the last four languages for which Tide contains human-written references.
### Automatic Metrics
We use different sets of metrics to evaluate translations for their literalness and for the overall translation quality.
**Literalness** Following Hendy et al. (2023), we use two metrics to assess the literalness of a translation: (1) _Unaligned Source Words_ (USW), the number of source words unaligned with words in the translation, and (2) _Non-Monotonicity_ (NM; Schioppa et al., 2021), the extent of reordering in the word-to-word alignments from the source sentence to its translation. For both metrics, we use the bitext alignments from the awesome-align framework (Dou and Neubig, 2021), which extracts word alignments from mBERT embeddings.
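A sketch of the USW computation, assuming the aligner returns word-level index pairs:

```python
def unaligned_source_words(src_tokens, alignments):
    """USW: count source words with no alignment link to the translation.
    `alignments` is assumed to be a set of (src_idx, tgt_idx) pairs, as
    produced by a word aligner such as awesome-align."""
    aligned_src = {i for i, _ in alignments}
    return sum(1 for i in range(len(src_tokens)) if i not in aligned_src)

src = "he had a card up his sleeve".split()
align = {(0, 0), (1, 1), (3, 2), (6, 3)}   # toy alignment
print(unaligned_source_words(src, align))  # -> 3
```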
**Translation quality** We evaluate translation quality based on sentence similarity between reference and predicted translations. We use chrF (Popović, 2015), BERTScore (Sun et al., 2022), and BLEURT (Sellam et al., 2020). chrF measures precision, recall, and F-score of character \(n\)-grams. BERTScore is a contextual embedding-based evaluation metric that leverages a pretrained language model.8 BLEURT is a learned regression metric for automatic evaluation of generated text, which utilizes BERT, trained on pairwise comparisons of reference and candidate sentences and calibrated on human quality judgments.
Footnote 8: We use XLM-RoBERTa-base embeddings for BERTScore. (Conneau et al., 2020)
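For instance, sentence-level chrF and BERTScore can be computed with the sacrebleu and bert_score packages; the snippet below is a usage sketch rather than our exact evaluation script, and the example strings are invented.

```python
from sacrebleu.metrics import CHRF
from bert_score import score as bert_score

hyp = "Er hatte ein Ass im Ärmel."
ref = "Er hatte noch ein Ass im Ärmel."

chrf = CHRF()                                 # character n-gram F-score, 0-100
print(chrf.sentence_score(hyp, [ref]).score)

# Precision/recall/F1 from contextual embeddings; the model choice
# mirrors footnote 8 but is otherwise illustrative.
P, R, F1 = bert_score([hyp], [ref], model_type="xlm-roberta-base")
print(F1.item())
```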
### Human Evaluation
Due to the documented limitations of automatic evaluation for translation (Kasai et al., 2022), we additionally perform human evaluation of model-generated translations for Chinese, Korean, Hebrew, and Yoruba. We recruit one native speaker for each language, who are presented with the source sentences in each triple, along with generated translations from NLLB, Opus MT, ChatGPT, and PalM 2. The model-generated translations are presented in a random order not shown to the annotator. For each sentence, they are asked: (1) Does the translation use the figurative meaning of the idiom, the literal meaning of the idiom, preserve the ambiguity due to an equivalent idiom in their language, or is it too nonsensical to determine? (2) Overall, is the translation perfectly correct, containing slight errors, or containing major errors? We use the same subset
of 50 triples from §2.4. With 3 sentences per triple and 4 source models for each triple, annotators each evaluate \(600\) translations.
## 4 Experimental Results
In our experiments, we explore MT-specific and LM systems' translation behavior on ambiguous subsentences (§4.1), their sensitivity to disambiguating context (§4.2), and their overall competence at translating literal versus figurative input (§4.3).
### RQ1: How do MT systems translate ambiguous subsentences?
First, we investigate how MT systems behave on ambiguous subsentences _without_ disambiguating context, in order to measure their preference for translating them figuratively or literally. We hypothesize that LMs are more likely to produce less literal translations of ambiguous subsentences than MT-specific systems, based on recent findings in Raunak et al. (2023). Unlike their setting, here the source sentences are always ambiguous, so both literal and figurative translations are correct.
**Automatic Evaluation** We measure the literalness of translations using USW and NM, where higher values mean less literal translations (§3.3). Within each language, we normalize values by the average across systems in that language. This is because the metrics are not comparable across target languages, as they depend on linguistic properties of each target language. As shown in Figure 2, LMs (in blue) produce translations with higher USW scores than MT-specific models (in orange), across all target languages. In particular, Opus MT is the most literal model across all target languages. Moreover, we observe that the differences between LMs and MT-specific models become less pronounced for more under-resourced languages (the languages are ordered left to right based on count of pages in Common Crawl9).
Figure 3: **Human Evaluation of Translations of Ambiguous Subsentences**, where annotators are asked to evaluate whether each translation is figurative, literal, ambiguous due to an equivalent idiom, or nonsensical. ChatGPT and PaLM 2 are more balanced in their preference between figurative and literal translations; Opus MT and NLLB overwhelmingly prefer literal translations.
Figure 2: **Non-Literalness of Translations of Ambiguous Subsentences, as measured by the number of _unaligned source words_ (USW) between the source sentence and its translation, normalized by the within-language average. Translations from pretrained LMs are less literal than those of MT-specific models, suggesting that LMs prefer less literal translations of ambiguous input (i.e., without disambiguating context). En \(\rightarrow\) Ko Opus MT models are not evaluated due to an issue with their implementation.**
Results based on \(\mathsf{NM}\) (shown in Appendix C) corroborate our findings for SVO languages. This metric is inherently limited to target languages with the same word order as the source language (English in this work, with SVO order).
**Human Evaluation** In Figure 3, we show the human judgments of translations of ambiguous subsentences, indicating whether the translation is ambiguous, literal, figurative or nonsense. These results corroborate findings from automatic evaluation, and show even clearer distinctions. Overall, ChatGPT and PaLM 2 demonstrate much more balanced preferences between figurative and literal translations, compared to Opus MT and NLLB. For the target language Chinese, ChatGPT prefers a figurative translation 62% of the time; however, that preference declines dramatically as the target language becomes more low-resource, dropping to 6% for Yoruba. PaLM 2 demonstrates more robust preferences across target languages, consistently preferring figurative translations 28% to 46% of the time. In contrast, Opus MT and NLLB overwhelmingly prefer literal translations, choosing a figurative translation only 4% to 20% of the time.
### RQ2: How sensitive are MT systems to disambiguating context?
We next explore to what extent the predicted translation of an ambiguous subsentence changes when disambiguating context is available.
**Automatic Evaluation** Intuitively, if the LM is not sensitive to context, then the translation of the ambiguous subsentence, \(p_{a}\), should be equally contained in the translation \(p_{\ell}\) for the literal sentence and the translation \(p_{f}\) for the figurative sentence. That is, the way the ambiguous subsentence \(a\) is translated should not be affected by the added context. On the other hand, if \(p_{a}\) is more contained in \(p_{\ell}\) than in \(p_{f}\) (or vice versa), that would mean that how the model handles \(a\) changes with the context.
Therefore, we operationalize the sensitivity to disambiguating context as
\[|\mathsf{contained\_in}(p_{a},p_{l})-\mathsf{contained\_in}(p_{a},p_{f})|\]
where \(\mathsf{contained\_in}()\) is a measure of unidirectional sentence similarity. Here, we use \(\mathsf{chrP}\) and
Figure 4: **Sensitivity to Disambiguating Context**, as measured by BERTScore-P, describes how well an MT model adapts to disambiguating context for an otherwise ambiguous subsentence. The metric is based on how the translation of the ambiguous subsentence changes between the two full sentences, and is normalized by the in-language mean. LMs generally demonstrate greater context-awareness than MT-specific models.
Figure 5: **Human Evaluation of Sensitivity.** A set of generated translations is considered context-sensitive when it uses the figurative (or literal) sense of an idiom given figurative (or literal) disambiguating context. ChatGPT and PaLM 2 are much more context-sensitive than Opus MT and NLLB, which tend to translate idioms literally irrespective of context.
BERTScore-P, the precision outputs of chrF and BERTScore, both ranging from 0 to 1. A higher value of sensitivity (close to 1) indicates high sensitivity to disambiguating contexts.
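The sensitivity computation is straightforward once contained_in is fixed. The sketch below uses a simple character n-gram precision as a stand-in for the chrP component; the actual experiments use the precision outputs of chrF and BERTScore.

```python
def char_ngram_precision(hyp: str, ref: str, n: int = 4) -> float:
    """contained_in(hyp, ref): fraction of hyp's character n-grams that
    also occur in ref, a simplified precision in the spirit of chrF."""
    hyp_ngrams = [hyp[i:i + n] for i in range(len(hyp) - n + 1)]
    if not hyp_ngrams:
        return 0.0
    ref_ngrams = {ref[j:j + n] for j in range(len(ref) - n + 1)}
    return sum(g in ref_ngrams for g in hyp_ngrams) / len(hyp_ngrams)

def sensitivity(p_a: str, p_lit: str, p_fig: str) -> float:
    """|contained_in(p_a, p_lit) - contained_in(p_a, p_fig)|: how much the
    translation of the ambiguous subsentence changes with the context."""
    return abs(char_ngram_precision(p_a, p_lit) -
               char_ngram_precision(p_a, p_fig))
```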
Figure 4 shows the sensitivity results for the different models. The LMs, PaLM 2 and ChatGPT, generally exhibit a higher degree of sensitivity across most language pairs. Comparatively, the MT-specific models, Opus MT and NLLB, show less sensitivity. Opus MT, in particular, consistently demonstrates the lowest context sensitivity for all target languages.
**Human Evaluation** In human evaluation, a model is considered context-sensitive on a triple if annotators indicate that the idiom is translated figuratively for the figurative sentence, and literally for the literal sentence. Otherwise, the model is insensitive. As shown in Figure 5, both ChatGPT and PaLM 2 are very sensitive to context, though there is still room for improvement. For instance, for En\(\rightarrow\)Zh translation, ChatGPT and PaLM 2 translate correctly for \(76\%\) and \(72\%\) of idioms, respectively. Yet, the sensitivity of both models declines monotonically as the target language becomes more low-resource. In particular, for En\(\rightarrow\)Yo translation, ChatGPT translations are entirely nonsensical, and are qualitatively reported as frequently containing hallucinations completely unrelated to the source.
Nonetheless, Opus MT and NLLB are substantially less context-aware, correctly adapting to disambiguating context only \(11.5\%\) and \(34.5\%\) of the time, respectively. Yet, their more consistent performance across languages suggests that dedicated training for translation leads to better results on low-resource languages.
### RQ3: Are there performance disparities between figurative and literal translations?
Finally, we investigate if translation systems have systematic performance gaps between translating figurative versus literal input.
**Automatic Evaluation** We use the reference translations collected in §2.4, and measure text similarity between predicted and reference translations with BLEURT.
The results are shown in Figure 6. Across the board, models are more capable at literal translation than figurative translation. Yet, the gap is more pronounced for MT-specific models compared to LMs. ChatGPT and PaLM 2 exhibit performance gaps of \(2.92\%\) and \(4.85\%\), respectively, between literal (higher) and figurative translations, on average across languages. For Opus MT and NLLB this disparity is higher: \(16.4\%\) and \(11.7\%\), respectively.
Overall, MT-specific models and LMs demonstrate comparable performance on literal translations, while NMT models lag behind LMs on figurative translations.
**Human Evaluation** In Figure 7, we compare how human annotators evaluate the correctness of translations overall, with the options _perfect_, _minor mistakes_, and _major mistakes_. Consistent with findings from automatic evaluation, ChatGPT and PaLM 2 demonstrate more consistent performance across
Figure 6: **Overall Translation Quality for Literal (left) and Figurative (right) Sentences, as measured by BLEURT between the reference and prediction. While LMs and MT-specific models show comparable performance in translating literal sentences, NMT models are much weaker on figurative source sentences.**
literal and figurative translations. However, \(\mathtt{Opus}\) and \(\mathtt{NLLB}\) are notably stronger at literal translations than figurative ones.
We additionally observe that on Yoruba, the most low-resource language we study, Opus MT and NLLB actually far outperform ChatGPT and PaLM 2. We speculate that pretrained LMs are particularly strong on languages that were well-represented during pretraining; when this is not the case, they may produce degenerate text by entirely failing to grasp the translation task.
## 5 Related Work
**Ambiguity in translation** Context-aware translation usually focuses on grammatical features that the source language does not encode but the target language requires, such as formality (e.g., Chinese has a formal and informal "_you_"; Voita et al., 2019), gendered pronouns (e.g., French has a male and female "_it_"; Muller et al., 2018; Yin et al., 2021), verb form (e.g., Spanish has six verb forms for past tense; Fernandes et al., 2023), and ellipses (e.g., "_We all did_" in English cannot be translated to Russian without identifying the elided verb; Voita et al., 2019). Another well-studied issue is lexical cohesion, where the same phrase in the source sentence (e.g., a named entity like "_Julia_") should be translated consistently each time (Wong and Kit, 2012; Kuang et al., 2018). In contrast, our work extends the study of context-aware translation to expressions which are ambiguous _in the source language alone_, focusing on idiomatic expressions. Tide joins a family of contrastive datasets that test model sensitivity to contextual information (Muller et al., 2018; Bawden et al., 2018; Voita et al., 2019, i.a.).
**Translation of figurative language** Figurative language has received considerable attention in MT research. Some work has studied the hidden representations or attention patterns of MT-specific models when processing multi-word expressions (Rikters and Bojar, 2017; Garcia et al., 2021; Dankers et al., 2022), or proposed methods to improve translation of these expressions (Zaninello and Birch, 2020; Gamallo and Garcia, 2019). In particular, Baziotis et al. (2023) show that monolingual pretraining improves figurative translation, which may explain our finding that pretrained LMs generate less literal translations and are more sensitive to disambiguating context.
The most closely related work, Raunak et al. (2023), compares how LMs and MT-specific systems translate sentences with idiomatic expressions, and similarly finds that LMs produce substantially less literal translations. We go further by evaluating how these models handle _ambiguous_ input and their _sensitivity_ to disambiguating context.
**Datasets for idiom translation** Fadaee et al. (2018) introduced the first extensive dataset for idiom translation, identifying data scarcity as one of the core challenges in this domain. EPIE (Saxena and Paul, 2020) is a large-scale corpus with 25K potentially idiomatic expressions (PIEs), with representation of both figurative and literal usages. MAGPIE (Haagsma et al., 2020) is a more expansive dataset of 50K samples that also contains genre labels. PECTI (Tang, 2022) curated a parallel English translation dataset of Chinese idioms. While these datasets offer a general-purpose testbed, the contrastive sentence pairs in Tide enable finer-grained analysis, while the fluency of source sentences matches (if not exceeds) that of naturally-occurring datasets.

Figure 7: **Human Evaluation of Overall Translation Quality**, reported separately for figurative versus literal source sentences. Opus MT and NLLB are substantially better at literal translation than figurative translation overall, whereas ChatGPT and PaLM 2 exhibit a much smaller disparity between literal and figurative translation quality.
## 6 Conclusion
In this work we focus on semantic ambiguity in machine translation, specifically when using idiomatic language. We introduce a new benchmark (Tide) of sentences that include idioms, along with disambiguating contexts (both literal and figurative). We then use Tide to investigate the behavior of different translation systems on ambiguous input and their sensitivity to disambiguating context, uncovering new strengths of pretrained LMs compared to MT-specific models.
Our findings point to pretrained LMs as a promising backbone for translation systems, and we foresee a future that combines the strong language understanding of LMs with dedicated supervision for translation.
## Acknowledgments
We would like to thank the UW NLP community for valuable discussion of this work. We are grateful to Weijia Shi, Jiacheng (Gary) Liu, and Xiaochuang Han for their help in writing and evaluating Chinese translations, and Zhaofeng Wu for feedback on the draft and figures.
We thank the reviewers for their valuable feedback and suggestions, and OpenAI for offering access to their models through the API.
## Limitations
In this work we study ambiguous source sentences specifically through idioms that are available to both literal and figurative interpretations. While this allows us to efficiently collect a dataset and perform focused evaluation, ambiguity occurs in more diverse forms, and we encourage future work to collect more data in the form of Tide. Contemporary work collects a dataset of ambiguous sentences (with direct disambiguations, rather than disambiguating context), and is a promising start (Liu et al., 2023).
In addition, we only study the behavior of translation systems when English is the source language, due to the availability of English idiom collections. Yet figurative expressions vary greatly across languages (Kabra et al., 2023), and our conclusions may not necessarily generalize to translation from other languages.
id: 2308.01731
title: Quantification of Predictive Uncertainty via Inference-Time Sampling
abstract: Predictive variability due to data ambiguities has typically been addressed via construction of dedicated models with built-in probabilistic capabilities that are trained to predict uncertainty estimates as variables of interest. These approaches require distinct architectural components and training mechanisms, may include restrictive assumptions and exhibit overconfidence, i.e., high confidence in imprecise predictions. In this work, we propose a post-hoc sampling strategy for estimating predictive uncertainty accounting for data ambiguity. The method can generate different plausible outputs for a given input and does not assume parametric forms of predictive distributions. It is architecture agnostic and can be applied to any feed-forward deterministic network without changes to the architecture or training procedure. Experiments on regression tasks on imaging and non-imaging input data show the method's ability to generate diverse and multi-modal predictive distributions, and a desirable correlation of the estimated uncertainty with the prediction error.
authors: Katarína Tóthová, Ľubor Ladický, Daniel Thul, Marc Pollefeys, Ender Konukoglu
published_date: 2023-08-03T12:43:21Z
link: http://arxiv.org/abs/2308.01731v1

# Quantification of Predictive Uncertainty via Inference-Time Sampling
###### Abstract
Predictive variability due to data ambiguities has typically been addressed via construction of dedicated models with built-in probabilistic capabilities that are trained to predict uncertainty estimates as variables of interest. These approaches require distinct architectural components and training mechanisms, may include restrictive assumptions and exhibit overconfidence, i.e., high confidence in imprecise predictions. In this work, we propose a post-hoc sampling strategy for estimating predictive uncertainty accounting for data ambiguity. The method can generate different plausible outputs for a given input and does not assume parametric forms of predictive distributions. It is architecture agnostic and can be applied to any feed-forward deterministic network without changes to the architecture or training procedure. Experiments on regression tasks on imaging and non-imaging input data show the method's ability to generate diverse and multi-modal predictive distributions, and a desirable correlation of the estimated uncertainty with the prediction error.
Keywords: Uncertainty Quantification · Deep Learning · Metropolis-Hastings MCMC.
## 1 Introduction
Estimating uncertainty in deep learning (DL) models' predictions, i.e., predictive uncertainty, is of critical importance in a wide range of applications from diagnosis to treatment planning, e.g., [3]. One generic formulation for predictive uncertainty, i.e., \(p(y|x)\), using DL is through the following probabilistic model
\[p(y|x)\triangleq\int_{\mathcal{M}}\int_{\mathcal{D}}\int_{\theta}p(y|x,\theta, \mathcal{D},\mathcal{M})dp(\theta|\mathcal{D},\mathcal{M})dp(\mathcal{D})dp( \mathcal{M}), \tag{1}\]
where \(x,y,\theta,\mathcal{M}\) and \(\mathcal{D}\) represent input, output, model parameters, model specification, and training set, respectively. The joint distribution \(p(y,\mathcal{D},\mathcal{M},\theta|x)\) is modeled with the factorization \(p(y|x,\theta,\mathcal{D},\mathcal{M})p(\theta|\mathcal{D},\mathcal{M})p( \mathcal{D})p(\mathcal{M})\) and \(p(y|x)\) is defined through marginalization of \(\theta,\mathcal{D}\) and \(\mathcal{M}\).
Different components contribute to \(p(y|x)\) in the marginalization and represent different sources of uncertainty. Borrowing terminology from [36], \(p(y|x,\theta,\mathcal{D},\mathcal{M})\) is often considered as the _aleatoric_ component that models input-output ambiguities, i.e., when the input may not uniquely identify an output [40]. \(p(\theta|\mathcal{D},\mathcal{M})\) on the other hand is considered the _epistemic_ component stemming from parameter uncertainty or model bias [8], which can be alleviated by training on more data or using a more appropriate model. Modeling \(p(\theta)\triangleq\int_{\mathcal{M}}\int_{\mathcal{D}}p(\theta|\mathcal{M}, \mathcal{D})dp(\mathcal{D})dp(\mathcal{M})\) as the epistemic component is also possible, however, this is prohibitively costly in DL models, and therefore mostly not done. Here, we focus on the aleatoric component \(p(y|x,\theta,\mathcal{D},\mathcal{M})\) and propose a model agnostic method.
The common approach to model \(p(y|x,\theta,\mathcal{D},\mathcal{M})\) in the recent DL literature is to build models that predict probability distributions instead of point-wise predictions. Pioneering this effort, Tanno et al. [35, 36] and Kendall and Gal [20] simultaneously proposed models that output pixel-wise factorized distributions for pixel-wise prediction problems, such as segmentation. Going beyond this simplified factorization, more recent models predict distributions modeling dependencies between multiple outputs, notably for pixel-wise prediction models, such as [23, 2, 39, 38]. These later models are more apt for real world applications as they are capable of producing realistic samples from the modelled \(p(y|x,\theta,\mathcal{D},\mathcal{M})\). On the downside, (i) these models require special structures in their architectures, and therefore the principles cannot be easily applied to any top performing architecture without modification, and (ii) they rely on the assumption that input-output ambiguities present in a test sample will be similarly present in the training set, so that models can be trained to predict posterior distributions.
In this work, we propose a novel model for \(p(y|x,\theta,\mathcal{D},\mathcal{M})\) and the corresponding Metropolis-Hasting (MH) [16] scheme for sampling of network outputs for a given input, which can be used during inference. We restrict ourselves to problems where a prior model for the outputs, i.e., \(p(y)\), can be estimated. While this may seem limiting, on the contrary this restriction is often _exploited_ in medical image analysis to improve model robustness and generalization, e.g., [38, 29, 26, 19]. Inspired by the ideas of network inversion via gradient descent [22] and neural style transfer [12], our main contribution is a new definition of a _likelihood_ function that can be used to evaluate the MH acceptance criterion. The new likelihood function uses _input backpropagation_ and distances in the input space, which (i) makes it architecture agnostic, (ii) does not require access to training or ground truth data, (iii) avoids explicit formulation of analytically described energy functionals, or (iv) implementation of dedicated neural networks (NNs) and training procedures.
We present experiments with two regression problems and compare our method with state-of-the-art methods [11, 39, 30], as well as MC Dropout [11] for completeness, even though it is a method for quantifying epistemic uncertainty. Our experimental evaluation focuses on regression problems, however, the proposed technique can be easily applied to classification problems as well.
## 2 Related work
Post-hoc uncertainty quantification of trained networks has been previously addressed by Qiu et al. [30]. In their work (RIO), the authors augment a pre-trained deterministic network with a Gaussian process (GP) with a kernel built around the network's inputs and outputs. The GP can be used for a post-hoc improvement of the network's outputs and to assess the predictive uncertainty. This model can be applied to any standard NN without modifications. While mathematically elegant and computationally inexpensive, this approach requires access to training data and imposes that the posteriors be normally distributed.
One of the dedicated DL models addressing aleatoric uncertainty prediction under the assumption that a prior over outputs \(p(y)\) is available is probPCA [39, 38]. Developed for parametric surface reconstruction from images using principal component analysis (PCA) to define \(p(y)\), the method predicts multivariate Gaussian distributions over output mesh vertices, by first predicting posterior distributions over the PC representation for a given sample. While the model takes the covariance structure between vertices into account, it also requires a specific architecture and makes a Gaussian assumption for the posterior.
Sampling techniques built on Monte Carlo integration [15] provide a powerful alternative. Traditional Markov Chain Monte Carlo (MCMC) techniques [27] can construct chains of proposed samples \(y^{\prime}\) with strong theoretical guarantees on their convergence to the posterior distribution [13]. In MH [16], this is ensured by evaluation of acceptance probabilities. This involves calculation of a prior \(p(y^{\prime})\) and a likelihood \(L(y^{\prime};x)=p(x|y^{\prime},\theta,\mathcal{D},\mathcal{M})\). In its most explicit form, the evaluation of \(L(y^{\prime};x)\) would translate to the generation of plausible inputs for every given \(y^{\prime}\) and then calculation of the relative density at the input \(x\).
In some applications, the likelihood function can be defined analytically with energy functionals [21, 17, 25]. General DL models, however, do not have convenient closed form formulations that allow analytical likelihood definitions. Defining a tractable likelihood with invertible neural networks [34] or reversible transition kernels [24, 33] is possible, but these approaches require specialized architectures and training procedures. To the best of our knowledge, a solution that can be applied to any pre-trained network has not yet been proposed. Sampling methods without likelihood evaluation, i.e., Approximate Bayesian Computation (ABC) methods [31], can step up to the task, however, they rely on sampling from a likelihood, hence the definition of an appropriate likelihood remains open.
## 3 Method
We let \(f(x|\theta)\) be a deep neural network that is trained on a training set of \((x,y)\) pairs and \(\theta\) representing its parameters. The network is trained to predict targets from inputs, i.e., \(y\approx f(x|\theta)\). We would like to asses the aleatoric uncertainty \(p(y|x,\theta,\mathcal{D},\mathcal{M})\) associated with \(f(x|\theta)\). Note that while \(f(x|\theta)\) is a deterministic mapping, if there is an input-output ambiguity around an \(x\), then a trained model will likely show high sensitivity around that \(x\), i.e., predictions will vary greatly
with small changes in \(x\). We exploit this sensitivity to model \(p(y|x,\theta,\mathcal{D},\mathcal{M})\) using \(f(x|\theta)\). Such modeling goes beyond a simple sensitivity analysis [32] by allowing drawing realistic samples of \(y\) from the modeled distribution.
### Metropolis-Hastings for Target Sampling
Our motivation comes from the well established MH MCMC methods for sampling from a posterior distribution [16]. For a given input \(x\) and an initial state \(y^{0}\), the MH algorithm generates Markov chains (MC) of states \(\{y^{t},t=1,2,\ldots,n\}\). At each step \(t\) a new proposal is generated \(y^{\prime}\sim g(y^{\prime}|y^{t})\) according to a symmetric proposal distribution \(g\). The sample is then accepted with the probability
\[A(y^{\prime},y^{t}|x)=\min\left(1,\underbrace{\frac{g(y^{t}|y^{\prime},x)}{g(y ^{\prime}|y^{t},x)}}_{\text{transitions}}\underbrace{\frac{p(x|y^{\prime}, \theta,\mathcal{D},\mathcal{M})}{p(x|y^{t},\theta,\mathcal{D},\mathcal{M})}}_{ \text{likelihoods}}\underbrace{\frac{p(y^{\prime})}{p(y^{t})}}_{\text{ priors}}\right) \tag{2}\]
and the next state is set as \(y^{t+1}=y^{\prime}\) if the proposal is accepted, and \(y^{t+1}=y^{t}\) otherwise. The sufficient condition for asymptotic convergence of the MH MC to the posterior \(p(y|x,\theta,\mathcal{D},\mathcal{M})\) is satisfied thanks to the reversibility of transitions \(y^{t}\to y^{t+1}\)[5]. The asymptotic convergence also means that for arbitrary initialization, the initial samples are not guaranteed to come from \(p(y|x)\), hence a burn-in period, where initial chain samples are removed, is often implemented [5]. The goal of the acceptance criterion is to reject the unfeasible target proposals \(y^{\prime}\) according to prior and likelihood probabilities. The critical part here is that for every target proposal \(y^{\prime}\), the prior probability \(p(y^{\prime})\) and the likelihood \(p(x|y^{\prime})\) needs to be evaluated. While the former is feasible for the problems we focus on, the latter is not trivial to evaluate.
### Likelihood Evaluation
In order to model \(p(y|x,\theta,\mathcal{D},\mathcal{M})\), unlike prior work that used a dedicated network architecture, we define a likelihood function \(p(x|y,\theta,\mathcal{D},\mathcal{M})\) that can be used with any pre-trained network. Given \(f\) and a proposed target \(y^{\prime}\), like [7, 6, 10, 28], we define likelihood with an energy function as
\[p(x|y^{\prime},f(\cdot,\theta))\propto\exp(-\beta\,E(x,y^{\prime})), \tag{3}\]
where \(\beta\) is a "temperature" parameter and \(E(x,y^{\prime})\) is evaluated through a process we call gradient descent _input backpropagation_, inspired by the neural network inversion approximation in [22] and neural style transfer work [12]. To evaluate \(E(x,y^{\prime})\), we generate an input sample \(x^{\prime}_{y^{\prime}}\) that is as close as possible to \(x\) and leads to the proposed shape \(y^{\prime}=f(x^{\prime}_{y^{\prime}}|\theta)\). This can be formulated as an optimisation problem
\[x^{\prime}_{y^{\prime}}=\underset{x^{\prime}}{\text{argmin}}\ \lambda\ \underbrace{\rho(x^{\prime},x)}_{\text{input loss}}+\underbrace{\mu(f(x^{ \prime}),y^{\prime})}_{\text{target loss}}, \tag{4}\]
where \(\rho:\mathcal{X}\times\mathcal{X}\rightarrow\mathbb{R}^{+}\), \(\mu:\mathcal{Y}\times\mathcal{Y}\rightarrow\mathbb{R}^{+}\) are distances and \(\lambda\in\mathbb{R}\) is a scaling constant ensuring proper optimisation of both loss elements. \(\mu\) can be defined as the original distance used for training \(f(x|\theta)\). We then define
\[E(x,y^{\prime})\coloneqq\rho(x^{\prime}_{y^{\prime}},x). \tag{5}\]
We set \(\rho(\cdot,x)\) as the squared \(L_{2}\) distance in this work, i.e., \(\rho(x^{\prime}_{y^{\prime}},x)=\|x^{\prime}_{y^{\prime}}-x\|_{2}^{2}\), but other options are possible, as long as (5) combined with (3) define a proper probability distribution over \(x\). Note that the \(L_{2}\) distance corresponds to a Gaussian distribution centered around \(x^{\prime}_{y^{\prime}}\) as \(p(x|y^{\prime},\theta,\mathcal{D},\mathcal{M})\). Even though \(p(x|y^{\prime},\theta,\mathcal{D},\mathcal{M})\) is a Gaussian, it is crucial to note that this does not correspond to \(p(y|x,\theta,\mathcal{D},\mathcal{M})\) being a Gaussian due to the non-linearity the optimization in Equation 4 introduces. Different \(y^{\prime}\)'s can lead to very different \(x^{\prime}_{y^{\prime}}\), and aggregating the samples through the MH MC process can lead to complex and multi-modal distributions as confirmed by the experiments in Section 4.
Within the MCMC context, the minimisation formulation ensures that we only generate \(x^{\prime}_{y^{\prime}}\) close to the test image \(x\) that also leads to the proposed \(y^{\prime}\) as the prediction. Provided the two terms in (4) are balanced correctly, a low likelihood value assigned to a proposal \(y^{\prime}\) indicates that \(y^{\prime}\) either lies outside the range of \(f\) (the trained network is incapable of producing \(y^{\prime}\) for any input), or outside of the image (understood as a subset of the codomain of \(f\)) of the inputs similar to \(x\) under \(f\). On the other hand, a high likelihood value means an \(x^{\prime}_{y^{\prime}}\) very close to \(x\) can produce \(y^{\prime}\), implying the sensitivity of the model around \(x\). We can then consider \(y^{\prime}\) a sample from \(p(y|x,\theta,\mathcal{D},\mathcal{M})\).
The choice of \(\beta\) parameter in (3) is important as it affects the acceptance rate in the MH sampling. Correctly setting \(\beta\) ensures the acceptance ratio is not dominated by either \(p(y)\) or \(p(x|y,\theta,\mathcal{D},\mathcal{M})\). The optimisation problem (4) needs to be solved for every MCMC proposal \(y^{\prime}\). We will refer to the full proposed MH MCMC sampling scheme as **Deep MH**.
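A minimal PyTorch sketch of this likelihood evaluation is given below. The optimizer choice, step count, and learning rate are illustrative; the target loss should match the distance \(\mu\) used to train \(f\), and \(f\)'s parameters are assumed frozen.

```python
import torch

def energy_and_likelihood(f, x, y_prop, beta, lam, steps=100, lr=1e-2):
    """Eqs. (3)-(5): gradient-descent input backpropagation. f is the
    frozen pre-trained network, x the test input, y_prop the MH proposal."""
    mse = torch.nn.functional.mse_loss
    x_prime = x.clone().detach().requires_grad_(True)
    opt = torch.optim.Adam([x_prime], lr=lr)   # optimize the input only
    for _ in range(steps):
        opt.zero_grad()
        loss = lam * mse(x_prime, x, reduction="sum") \
             + mse(f(x_prime), y_prop, reduction="sum")  # input + target loss, Eq. (4)
        loss.backward()
        opt.step()
    energy = ((x_prime.detach() - x) ** 2).sum()         # Eq. (5), squared L2
    return energy, torch.exp(-beta * energy)             # unnormalized likelihood, Eq. (3)
```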
### Shape Sampling Using Lower Dimensional Representations
As mentioned, we focus on applications where \(p(y)\) can be estimated. For low dimensional targets, this can be achieved by parametric or non-parametric models, e.g., KDEs [18]. For high dimensional targets, one can use lower dimensional representations. Here, we illustrate our approach on shape sampling and, as in [39], by using a probabilistic PCA prior for \(p(y)\)
\[y=US^{\frac{1}{2}}z+\mu+s, \tag{6}\]
where \(U\) is the matrix of principal vectors, \(S\) the diagonal principal component matrix and \(\mu\) the data mean. All three were precomputed using surfaces in the training set of \(f\). PCA coefficients \(z\) and global shift \(s\) then modulate and localise the shape in space, respectively. The posterior \(p(y|x,\theta,\mathcal{D},\mathcal{M})\) is then approximated by deploying the MH sampling process to estimate the joint
posterior of the parameterisation \(p(z,s|x,\theta,\mathcal{D},\mathcal{M})\). Proposal shapes \(y^{\prime}\) are constructed at every step of the MCMC from proposals \(z^{\prime}\) and \(s^{\prime}\) for the purposes of likelihood computation as defined by (3) and (5). PCA coefficients and shifts are assumed to be independently distributed. We set \(p(z)=\mathcal{N}(z;0,I)\) as in [4] and \(p(s)=\mathcal{U}([0,D]^{2})\) for an input image \(x\in\mathbb{R}^{D\times D}\). In practice, we restrict the prior on shift to a smaller area within the image to prevent the accepted shapes from crossing the image boundaries. Assuming both proposals are symmetric Gaussians, the MH acceptance criterion (2) then becomes \(A((z^{\prime},s^{\prime}),(z^{t},s^{t})|x)=\min\left(1,\frac{p(x|z^{\prime},s^{\prime},f(\cdot;\theta))}{p(x|z^{t},s^{t},f(\cdot;\theta))}\frac{p(z^{\prime})}{p(z^{t})}\frac{p(s^{\prime})}{p(s^{t})}\right)\).
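Putting the pieces together, a single Deep MH chain over \((z,s)\) might look as follows. The initialization, step sizes, and the `energy` wrapper (the input-backpropagation energy of Eq. (5) evaluated for a proposal shape) are illustrative; in practice several such chains are run in parallel and a burn-in prefix is discarded.

```python
import numpy as np

def deep_mh(energy, U, S_sqrt, mu, n_z, D, n_steps=5000,
            sigma_z=0.1, sigma_s=2.0, beta=1e4):
    """Sketch of one Deep MH chain over PCA coefficients z and shift s."""
    shape = lambda z, s: (U @ (S_sqrt * z) + mu).reshape(-1, 2) + s  # Eq. (6)
    log_prior_z = lambda z: -0.5 * z @ z               # N(0, I), up to a constant
    in_bounds = lambda s: np.all((s >= 0) & (s <= D))  # uniform shift prior

    z, s = np.zeros(n_z), np.full(2, D / 2.0)
    E = energy(shape(z, s))
    samples = []
    for _ in range(n_steps):
        z_p = z + sigma_z * np.random.randn(n_z)       # symmetric Gaussian proposals
        s_p = s + sigma_s * np.random.randn(2)
        if in_bounds(s_p):                             # zero prior outside the image
            E_p = energy(shape(z_p, s_p))
            log_A = -beta * (E_p - E) + log_prior_z(z_p) - log_prior_z(z)
            if np.log(np.random.rand()) < log_A:
                z, s, E = z_p, s_p, E_p
        samples.append(shape(z, s))
    return np.array(samples)                           # drop a burn-in prefix before use
```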
## 4 Results
The method was compared against three uncertainty quantification baselines: (1) a **post-hoc uncertainty estimation method**, RIO [30], using the code provided by the authors; and **dedicated probabilistic forward systems**: (2) probPCA [38] and (3) MC Dropout [11]. We compare with MC Dropout to provide a complete picture, even though it is a method for quantifying epistemic uncertainty.
We evaluate the uncertainty estimates on two regression problems, on imaging and non-imaging data. The first is left ventricle myocardium surface delineation in cardiac MR images from the UK BioBank data set [1], where a network \(f:\mathbb{R}^{D\times D}\rightarrow\mathbb{R}^{50\times 2}\) predicts coordinates of 50 vertices on the surface in small field of view (SFOV) and large FOV (LFOV) images for \(D=60,200\), respectively. The data sets are imbalanced with respect to location and orientation of the images, with a propensity for the myocardium to lie in the central area of the image (90% of the examples). We tested the method on a 10-layer CNN trained to predict a lower dimensional PCA representation from images, as described in Sec. 3.3 and proposed in [26], which is a deterministic version of probPCA. The CNN predicts 20 PCA components which are then transformed into a surface with 50 vertices. The second is a 1D problem of computed tomography (CT) slice localisation within the body, given input features describing the tissue content via histograms in polar space [14, 9], with \(f:\mathbb{R}^{384}\rightarrow[0,100]\). We use the same data set and training setup as in the original RIO paper [30] and tested the method on a fully connected network with 2 hidden layers.
The same base architectures were used across baselines, with method-pertinent modifications where needed, following closely the original articles [39, 11]. No data augmentation was used for forward training. Hyperparameter search for Deep MH was done on the validation set by hand tuning or grid search to get a good balance between proper optimisation of (4), good convergence times and optimal acceptance rates (averages of \(\sim 20\%\) for the high-dimensional and \(\sim 40\%\) for the one-dimensional tasks). This led to \(\beta=10000\) (SFOV), 30000 (LFOV), 60000 (CT) and Gaussian proposal distributions: \(z^{\prime}\sim\mathcal{N}(z^{t},\sigma_{z}^{2}I)\) and \(s^{\prime}\sim\mathcal{N}(s^{t},\sigma_{s}^{2}I)\), with \((\sigma_{z},\,\sigma_{s})=\) (0.1, 2) in SFOV, and (0.2, 8) in LFOV; in CT \(y^{\prime}\sim\mathcal{N}(y^{t},I)\). The choice of priors for the delineation task followed Sec. 3.3. In CT, we used an uninformative prior \(\mathcal{U}([0,100])\) to reflect that the testing slice can lie anywhere in the body. Parameters of the input backpropagation were set to \(\lambda=1\) and
\(\sigma_{n}=0.1\). We employed independent chain parallelization to speed up sampling and reduce auto-correlation within the sample sets. Chains were initialised randomly and run for a fixed number of steps with initial burn in period removed based on convergence of trace plots of the sampled target variables. The final posteriors were then approximated by aggregation of the samples across the chains. For comparisons, we quantify uncertainty in the system via dispersion of \(p(y|x,\theta,\mathcal{D},\mathcal{M})\): standard deviation \(\sigma_{y|x}\) in 1D problems; trace of the covariance matrix \(\Sigma_{y|x}\) in multi-dimensional tasks (sample covariance matrix for Deep MH and MC Dropout, and predictive covariance for probPCA).
**Predictive Posteriors:** Fig. 1 shows a qualitative comparison of target distributions produced by Deep MH, MC Dropout and probPCA in the SFOV surface reconstruction task. We include two images representative of the majority of the data set (well centered) and two from the 10% outlier group (images with large translation and/or reflection compared to the rest of the data set). The posteriors obtained by Deep MH exhibited greater variability, including multimodality, than the baseline methods, even for the test images with lower forward prediction RMSE. In contrast, probPCA produced unimodal Gaussians, expectedly. MC Dropout, at \(p=0.5\), produced tight distributions around the sample mean.
Figure 1: Comparison of estimated \(p(y|x,\theta,\mathcal{D},\mathcal{M})\) for selected test subjects: GT shape (red), forward prediction \(f(x_{i}|\theta)\) (blue), sample / predicted mean (yellow). Kernel density estimates were used for visualization. Only 5 vertices were visualised for clarity. Test subjects are ordered column-wise according to growing fixed network prediction error (RMSE\((f(x),y_{\text{GT}})=0.48;2.72;5.25;6.31\); test set average RMSE \(=1.46\)). Outlier test subjects (columns 3-4) are associated with higher uncertainty and lower accuracy for all methods.
Deep MH posteriors also showed a larger dispersion of vertex locations along the surface as can be seen in the first two columns. This uncertainty reflects the ambiguity in the target generation process. Boundaries seen in the images can identify the placement of a vertex in the orthogonal direction to the boundary, but placement along the boundary is much less obvious and variable. Deep MH posteriors captured this ambiguity well while others were not able to do so. Irrespective of the method, the high probability regions of the predicted \(p(y|x,\theta,\mathcal{D},\mathcal{M})\) might not cover the GT shapes when the prediction fails, as can be seen in the last two columns of Fig. 1.
**Correlation with accuracy:** We investigated the relationship between accuracy of a network and the predictive uncertainty. A good uncertainty estimation model should yield a non-decreasing relationship between uncertainty and prediction accuracy, and not show overconfidence, i.e., assign high confidence to imprecise predictions. Tab. 1 presents the correlation coefficients between quantified uncertainties and the accuracy (RMSE) of either the forward prediction of the pre-trained network in the case of Deep MH, or predictive mean for the baselines. We include results for the shape delineation task analysed in two settings: on the full imbalanced test sets, and on the homogeneous subsets of the test sets with two outliers, detected manually, removed. Deep MH yielded higher Spearman and Pearson's correlation for most cases than probPCA and RIO. Correlation of epistemic uncertainty, as quantified by MC Dropout, with prediction error was higher in some cases than that of aleatoric uncertainty quantified by Deep MH.
Further analysis together with additional in-depth tests can be found in [37].
\begin{table}
\begin{tabular}{l|c c|c c}
 & \multicolumn{2}{c|}{_full_} & \multicolumn{2}{c}{_homogeneous_} \\
**Method** & **Spearman** [r; p] & **Pearson** [r; p] & **Spearman** [r; p] & **Pearson** [r; p] \\ \hline
\multicolumn{5}{l}{_Shape delineation, SFOV_} \\ \hline
Deep MH & 0.30; \(3\times 10^{-3}\) & 0.62; \(1\times 10^{-11}\) & 0.26; \(2\times 10^{-2}\) & 0.38; \(3\times 10^{-4}\) \\
probPCA & 0.35; \(5\times 10^{-4}\) & 0.57; \(8\times 10^{-10}\) & 0.19; \(8\times 10^{-2}\) & 0.40; \(9\times 10^{-5}\) \\
MC Dropout & 0.34; \(5\times 10^{-4}\) & 0.61; \(3\times 10^{-11}\) & 0.21; \(5\times 10^{-2}\) & 0.29; \(7\times 10^{-3}\) \\ \hline
\multicolumn{5}{l}{_Shape delineation, LFOV_} \\ \hline
Deep MH & 0.33; \(3\times 10^{-3}\) & 0.38; \(7\times 10^{-4}\) & 0.20; \(9\times 10^{-2}\) & 0.26; \(3\times 10^{-2}\) \\
probPCA & 0.17; \(1\times 10^{-2}\) & 0.33; \(3\times 10^{-3}\) & 0.04; \(7\times 10^{-1}\) & 0.03; \(8\times 10^{-1}\) \\
MC Dropout & 0.52; \(9\times 10^{-7}\) & 0.55; \(2\times 10^{-7}\) & 0.42; \(2\times 10^{-4}\) & 0.42; \(2\times 10^{-4}\) \\ \hline
\multicolumn{5}{l}{_CT slice localisation_} \\ \hline
Deep MH & 0.55; \(1\times 10^{-7}\) & 0.64; \(2\times 10^{-9}\) & & \\
RIO & 0.07; \(6\times 10^{-1}\) & 0.13; \(3\times 10^{-1}\) & & \\
MC Dropout & 0.53; \(2\times 10^{-6}\) & 0.51; \(7\times 10^{-6}\) & & \\ \hline
\end{tabular}
\end{table}
Table 1: Correlation between uncertainty and accuracy (RMSE). For the shape delineation tasks, the column pairs give results on the full, imbalanced test sets and on the homogeneous subsets with outliers removed.
## 5 Conclusion
In this work we proposed a novel method--Deep MH--for uncertainty estimation of trained deterministic networks using MH MCMC sampling. The method is architecture agnostic and does not require training of any dedicated NNs or access to training or GT data. Experiments on regression tasks showed that Deep MH yields higher-quality uncertainty estimates, not only in comparison to a post-hoc baseline [30] but also to dedicated probabilistic forward models [39, 38].
The main limitation of Deep MH is its computational complexity. Some of it stems from its sampling nature. The rest is due to the proposed evaluation of the likelihood. A possible alternative to the proposed likelihood computation via optimisation is the simulation of an inversion process via a generative adversarial network (GAN), which would, however, involve a design of an additional dedicated network. Deep MH can be applied to problems where it is possible to define or estimate a prior distribution \(p(y)\). While the deployment to problems with higher dimensional targets is an open research question, it is straightforward for problems with low effective dimensions. These are commonly found across the spectrum of the deep learning applications.
#### Acknowledgements
This research has been conducted using the UK Biobank Resource under Application Number 17806.
|
2310.04403 | The impact of high-dimensional phase space correlations on the beam
dynamics in a linear accelerator | Hadron beams develop intensity-dependent transverse-longitudinal correlations
within radio-frequency quadrupole (RFQ) accelerating structures. These
correlations are only visible in six-dimensional phase space and are destroyed
by reconstructions from low-dimensional projections. In this work, we estimate
the effect of artificial decorrelation on the beam dynamics in the Spallation
Neutron Source (SNS) linac and Beam Test Facility (BTF). We show that the
evolution of a realistic initial distribution and its decorrelated twin
converge during the early acceleration stages; thus, low-dimensional
projections are probably sufficient for detailed predictions in high-power
linacs. | A. Hoover, K. Ruisard, A. Aleksandrov, A. Zhukov, S. Cousineau, A. Shishlo | 2023-10-06T17:53:13Z | http://arxiv.org/abs/2310.04403v1 | The Impact of High-Dimensional Phase Space Correlations on the Beam Dynamics in a Linear Accelerator
###### Abstract
Hadron beams develop intensity-dependent transverse-longitudinal correlations within radio-frequency quadrupole (RFQ) accelerating structures. These correlations are only visible in six-dimensional phase space and are destroyed by reconstructions from low-dimensional projections. In this work, we estimate the effect of artificial decorrelation on the beam dynamics in the Spallation Neutron Source (SNS) linac and Beam Test Facility (BTF). We show that the evolution of a realistic initial distribution and its decorrelated twin converge during the early acceleration stages; thus, low-dimensional projections are probably sufficient for detailed predictions in high-power linacs.
## 1 Introduction
Predicting halo formation in linear hadron accelerators is complicated by incomplete knowledge of the beam's initial distribution in six-dimensional phase space [1; 2; 3]. Denoting the positions by \(x\), \(y\), \(z\) and momenta by \(p_{x}\), \(p_{y}\), \(p_{z}\), the distribution function \(f(x,p_{x},y,p_{y},z,p_{z})\) is typically reconstructed from the set of orthogonal two-dimensional projections \(\{f(x,p_{x}),f(y,p_{y}),f(z,p_{z})\}\):
\[f(x,p_{x},y,p_{y},z,p_{z})=f(x,p_{x})f(y,p_{y})f(z,p_{z}). \tag{1}\]
Alternatively, if only the covariance matrix is known, it is common to assume an analytic, ellipsoidally symmetric distribution function. It is unclear whether such approximations are sufficient to predict the detailed beam evolution at the halo level in high-power linear accelerators.
The research program at the Spallation Neutron Source (SNS) Beam Test Facility (BTF) aims to predict halo formation over a short distance. The BTF contains a replica of the SNS linac front-end, including a negative hydrogen ion source, electrostatic low-energy beam transport (LEBT), and 402.5 MHz radio-frequency quadrupole (RFQ). The medium-energy beam transport (MEBT) is similar to the SNS but has no rebunching cavities and is followed by a 9.5-cell FODO transport line, where halo is expected to form. A suite of high-dimensional and high-dynamic-range phase space diagnostics are located at both ends of the MEBT [4].
Previous work at the BTF has focused on reconstructing and visualizing the phase space distribution at the MEBT entrance [5; 6; 7; 8; 9]. These studies have unveiled high-dimensional, intensity-dependent correlations between phase space coordinates. One way to investigate the origin of these features is to simulate the upstream beam evolution. The distribution in the LEBT cannot be measured in the BTF, but we have access to older measurements of \(\{f(x,p_{x}),f(y,p_{y})\}\) from a similar ion source [6]. By propagating samples from this uncorrelated, unbunched LEBT distribution through the RFQ and MEBT, we obtain a "model" distribution that can be compared to direct measurements.
We have evidence that the model distribution is realistic -- the RFQ and MEBT simulations capture the essential beam dynamics. It follows that we can use the model distribution to estimate the impact of artificial _decorrelation_ (see Eq. (1)) on the beam evolution. In this paper, we summarize our comparisons of the model and measured distributions. We then estimate the influence of the relevant high-dimensional features on the beam evolution in the BTF and as well as in the SNS linac. We conclude that reconstructions from two-dimensional measurements, as in Eq. (1), are probably sufficient for detailed predictions in high-power linacs.
## 2 Parmteq vs. Reality
We use transverse slits, a dipole-slit energy spectrometer, and a bunch shape monitor (BSM) to reconstruct the phase space distribution in the MEBT, 1.3 meters past the RFQ. Six-dimensional measurements still have quite low resolution (\(\approx 10^{6}\) points) and dynamic range (\(\approx 10^{2}\)) since their first demonstration (although these numbers are expected to improve in the near future [10]). Four- and five-dimensional measurements are high-resolution alternatives that capture most correlations of interest. We refer the reader to Refs. [6; 7; 9] for details. Ref. [6] also contains a detailed description of the PARMTEQ [11] model and LEBT distribution used in our RFQ simulations.
Fig. 1 compares the low-dimensional projections of the model distribution to a five-dimensional measurement. Note that the RFQ model predicts a 42 mA beam current, higher than the 25 mA measured current. Although the predictions are incorrect by a large margin, normalizing both data sets to unit covariance brings the predicted projections fairly close to the measured projections. From now on, we let \(x\), \(p_{x}\), \(y\), \(p_{y}\), \(z\), \(p_{z}\) represent these normalized coordinates.
A striking conclusion drawn from the BTF measurements is that various higher dimensional projections of the six-dimensional phase space distribution are hollow.1 It is, on one hand, unsurprising that inter-plane correlations develop in the RFQ, where a complex bunching process occurs over many focusing periods with strong space charge. On the other hand, some measured features are not entirely intuitive; for example, they contain unexpected asymmetries. Below, we highlight two such features and show that they are also present in the model distribution.
Footnote 1: It is well-known from the Kapchinskij-Vladimirskij distribution (a four-dimensional ellipsoidal shell) that a hollow core is easily hidden by low-dimensional projections.
First, the longitudinal momentum distribution is bimodal near the transverse core; i.e., the five-dimensional distribution \(f(x,p_{x},y,p_{y},p_{z})\) is hollow. We visualize this feature using radial/ellipsoidal shell slices [9] in Fig. 2. The model reproduces the transition from unimodal to bimodal \(p_{z}\) when moving from the periphery to the core of the four-dimensional transverse distribution. Ruisard et al. [6; 7] showed that longitudinal hollowing occurs during the free expansion of a six-dimensional Gaussian distribution, which relaxes to a uniform and eventually hollow spatial density. However, they also concluded that this mechanism is not excited in the MEBT after the RFQ. Additionally, a symmetric, freely expanding Gaussian does not reproduce the measured asymmetric sliced energy distribution. Thus, longitudinal hollowing must occur in the RFQ. Initial investigations suggest the hollowing occurs during the initial bunch formation; a more thorough investigation may be the subject of future work.
Second, the transverse charge distribution \(f(x,y)\) is bimodal when \(p_{z}=0\); i.e., the three-dimensional distribution \(f(x,y,p_{z})\) is hollow. Due to the strong \(z\)-\(p_{z}\) correlation, the charge distribution \(f(x,y,z)\) has a similar shape. Again, this feature is present in the model distribution -- see Fig. 3. The transverse hollowing is due to space-charge-driven relaxation of the peaked charge distribution after the RFQ to a uniform and eventually slightly hollow charge distribution at the measurement plane; the hollowing does not require any cross-plane correlations at the MEBT entrance [9].
We conclude that the RFQ model generates a realistic position-momentum distribution. Thus, we can use the model distribution to investigate the influence of high-dimensional phase space correlations on the beam dynamics.
Figure 1: One- and two-dimensional projections of the measured (black) and model (red) five-dimensional phase space distribution. In (b), the coordinates are normalized to unit covariance. The contours vary logarithmically from \(10^{-3.0}\) to \(10^{-0.5}\) as a fraction of the peak density in each frame.
Figure 2: Energy distribution of the measured (black) and model (red) bunch within ellipsoidal shells in transverse phase space. Each shell slice is defined by \(r_{min}\leq\mathbf{x}_{\perp}^{T}\Sigma_{\perp}^{-1}\mathbf{x}_{\perp}\leq r _{max}\) for covariance matrix \(\mathbf{\Sigma}_{\perp}=\langle\mathbf{x}_{\perp}\mathbf{x}_{\perp}^{T}\rangle\) and transverse phase space coordinate vector \(\mathbf{x}_{\perp}=[x,p_{x},y,p_{y}]^{T}\).
The next section describes two such studies in the SNS linac and the new BTF straight layout.2
Footnote 2: The BTF previously used a bent layout; a new straight layout design will be commissioned in late 2023.
## 3 Artificial Decorrelation
We decorrelate an \(N\)-particle bunch by permuting the particle indices in the \(x\)-\(p_{x}\), \(y\)-\(p_{y}\), and \(z\)-\(p_{z}\) planes:
\[\{x_{i},p_{x_{i}}\} \rightarrow\{x_{i},p_{x_{i}}\}, \tag{2}\] \[\{y_{i},p_{y_{i}}\} \rightarrow\{y_{j},p_{y_{j}}\},\] \[\{z_{i},p_{z_{i}}\} \rightarrow\{z_{k},p_{z_{k}}\},\]
where \(i\) is an integer running from 1 to \(N\), and \(j\), \(k\) are permutations of those integers. Decorrelation removes all relationships between planes without changing the projections \(\{f(x,p_{x}),f(y,p_{y}),f(z,p_{z})\}\). Note that decorrelation also creates artificial "corners", inflating the six-dimensional phase space volume.
After the bunch is decorrelated, it is straightforward to run two parallel simulations and monitor the divergence between the correlated and decorrelated distributions as they evolve. Here we consider the MEBT and Drift Tube Linac (DTL) sections of the SNS linac. We modelled the linac using PyORBIT [12] with a transit time factor rf gap model, hard-edged (non-overlapping) quadrupole fields, and FFT space charge solver on a \(128\times 128\times 128\) mesh. Results are summarized in Fig. 4. There are no apparent differences between the correlated and decorrelated beams at the rms level, and the low-dimensional projections are almost identical even in the low-density tails.3 Note that \(8\times 10^{6}\) simulation particles were tracked, while there are approximately \(5\times 10^{8}\) particles in each real bunch.
Footnote 3: Previous studies produced an intensity-dependent divergence between the correlated/decorrelated horizontal beam sizes by the end of the DTL [13], but these calculations were erroneous. A few particles escaped the stable region of longitudinal phase space, ending up far behind the bunch and eventually lost to a transverse aperture. The grid used in the space charge solver expanded to include these particles, placing almost the entire bunch in one longitudinal bin and generating incorrect space charge forces. The particles escaped at slightly different times in the correlated and decorrelated bunches, leading to an apparent intensity-dependent difference in the rms beam sizes. We fixed this problem by adding a dense array of longitudinal apertures in the MEBT and DTL.
Fig. 5 examines the longitudinal phase space distribution near the transverse core during the first 7 meters of transport in the linac. In the first half of the transport, the beam is compressed by four rebunching cavities in the MEBT. In the second half, the bunch begins to accelerate in the DTL. The two bunches converge to the same dynamics by the end of the MEBT.
If the lattice and physics models are correct, these results suggest that the measured two-dimensional projections \(\{f(x,p_{x}),f(y,p_{y}),f(z,p_{z})\}\) are probably sufficient to predict the detailed beam evolution in high-power linear accelerators.
We performed a similar numerical experiment in the BTF lattice, which does not host any rebunching or accelerating cavities. We used the straight-layout design planned for the next experimental run (late 2023). We upsampled the model bunch from \(8.5\times 10^{6}\) to \(5.0\times 10^{8}\) particles, which approaches the real number of particles in a single bunch.4 The optics were set to generate a slightly mismatched envelope in the FODO channel. The simulation ran for 10 hours on four MPI nodes. The \(x\)-\(x^{\prime}\) and \(y\)-\(y^{\prime}\) projections at the end of the beamline are displayed in Fig. 6.
Footnote 4: A variety of upsampling methods were explored, including generative diffusion and normalizing flow models, but in this study, we simply computed and sampled from a six-dimensional histogram with \(50^{6}\) bins. Even with nearly \(10^{7}\) particles, only \(0.03\%\) of the histogram bins had nonzero counts; however, adding Gaussian noise to each particle seemed to eliminate most of the unrealistic “clumping” in six-dimensional phase space, resulting in a reasonable upsampled bunch.
The differences between the correlated and decorrelated beams in Fig. 6 are small. We emphasize that the phase space structure at the \(10^{-6}\) density level is relevant to beam loss in high-power (multi-megawatt) linacs, which must operate below a one-watt-per-meter loss limit, and that this structure can be measured in the BTF [8].
Figure 4: Simulated beam evolution in the SNS linac. Top: horizontal projection \(f(x)\) at the beginning, middle, and end of the lattice. The horizontal coordinate is scaled to unit variance. Bottom: rms horizontal beam size as a function of position. The correlated/decorrelated bunch is represented by blue/red lines. (The lines overlap at almost all points.)
Figure 3: The \(x\)-\(y\) distribution within \(p_{z}\) slices in normalized coordinates. The contours vary linearly from \(10^{-2}\) to \(10^{0}\) as a fraction of the peak density in each frame.
## 5 Discussion
Our findings in this brief study imply that most of the uncertainty in the initial distribution in the SNS linac (and similar machines) can be eliminated by measuring three orthogonal two-dimensional projections: \(\{f(x,p_{x}),f(y,p_{y}),f(z,p_{z})\}\). Of course, our tentative conclusions are based on simulations and must be verified with direct measurements. This will be the focus of the upcoming experimental run at the BTF.
These findings also connect to the work in the LEDA experiment, where only the block-diagonal elements of the \(6\times 6\) covariance matrix were measured [2]. In addition to rms-equivalent Waterbag and Gaussian distributions, the authors generated a "LEBT/RFQ distribution" analogous to the model distribution defined in this paper. Like us, they found the model distribution's covariance matrix to be significantly different than the measured covariance matrix. Thus, they normalized the model distribution to match the measured covariance matrix. This normalized model distribution generated one-dimensional profiles closer to the measurements but still failed to reproduce the low-density tails, especially when the beam was mismatched. Interestingly, our normalized model distribution agrees quite well with direct high-dimensional phase space measurements. Therefore, it seems that the normalized model distribution in Ref. [2] should have been quite close to the true distribution.
A key difference between our studies is that our model distribution was defined by two-dimensional measurements before the RFQ, while the authors of [2] specify that their model distribution was defined at the ion source, perhaps
Figure 5: Simulated sliced longitudinal phase space distribution in the first 7 meters of the SNS linac (left to right). The slice selects particles within the rms ellipsoid in transverse phase space. Each distribution is normalized to unit covariance. The scatter plots are shown in logarithmic color scale. One-dimensional projections onto the \(z\) axis are overlayed on the bottom row for the initially correlated (blue) and decorrelated (red) bunches.
Figure 6: Normalized horizontal and vertical phase space distributions of a simulated bunch at the end of the BTF. The contours of the initially correlated (blue) and decorrelated (red) distributions vary logarithmically.
from an analytic distribution function or a simulation of the ion source. This gives motivation to study the sensitivity of the RFQ output bunch to the input distribution; see, e.g., Fig. 11 in Ref. [6].
|
2302.04612 | Sharp interface analysis of a diffuse interface model for cell blebbing
with linker dynamics | We investigate the convergence of solutions of a recently proposed diffuse
interface/phase field model for cell blebbing by means of matched asymptotic
expansions. It is a biological phenomenon that increasingly attracts attention
by both experimental and theoretical communities. Key to understanding the
process of cell blebbing mechanically are proteins that link the cell cortex
and the cell membrane. Another important model component is the bending energy
of the cell membrane and cell cortex which accounts for differential equations
up to sixth order. Both aspects pose interesting mathematical challenges that
will be addressed in this work like showing non-singularity formation for the
pressure at boundary layers, deriving equations for asymptotic series
coefficients of uncommonly high order, and dealing with a highly coupled system
of equations. | Philipp Nöldner, Martin Burger, Harald Garcke | 2023-02-09T13:00:53Z | http://arxiv.org/abs/2302.04612v1 | # Sharp interface analysis of a diffuse interface model for cell blebbing with linker dynamics
###### Abstract
We investigate the convergence of solutions of a recently proposed diffuse interface/phase field model for cell blebbing by means of matched asymptotic expansions. It is a biological phenomenon that increasingly attracts attention by both experimental and theoretical communities. Key to understanding the process of cell blebbing mechanically are proteins that link the cell cortex and the cell membrane. Another important model component is the bending energy of the cell membrane and cell cortex which accounts for differential equations up to sixth order. Both aspects pose interesting mathematical challenges that will be addressed in this work like showing non-singularity formation for the pressure at boundary layers, deriving equations for asymptotic series coefficients of uncommonly high order, and dealing with a highly coupled system of equations.
Cell Blebbing, Sharp Interface Analysis, Phase Fields, Fluid-Structure Interaction
## 1 Introduction
The phenomenon of cell blebbing is connected with various biological processes such as locomotion of primordial germ or cancer cells, the programmed cell death (apoptosis), or cell division. Its importance has been recognised and emphasized in the last decade [6, 17, 16], and attracts more and more interest. Cell blebbing results from chemical reactions that cause the selection of sites on the cell cortex, which lies underneath the cell membrane, where it contracts. This contraction causes the fluid inside the cell (the cytosol) to be pushed towards the cell membrane, which is then stretched out and moved away from the cell cortex. The cell membrane is pinned to the cell cortex via linker proteins. Only if a sufficient amount of protein bonds can be broken, the membrane can freely develop a protrusion that is called a bleb.
Besides experimental studies [13] there are also many endeavours to understand cell blebbing from a theoretical perspective, cf. [20, 14, 22, 21, 3, 2, 19, 26]. While all these modelling approaches concentrate on selected aspects of the whole process, a full 3D model that brings together the linker proteins, their surface diffusion, and the fluid-structure interaction has only recently been proposed in [25]: the authors derive a phase field model in which cell cortex and cell membrane are defined by two coupled phase fields, with phase field parameter \(\epsilon\), that interact with the cytosol. The coupling of the phase fields reflects the linker proteins connecting both surfaces and brings in new interesting mathematical challenges such as well-posedness of equations on evolving 'diffuse manifolds' (the linker protein densities on the cell cortex undergo changes due to surface diffusion and bond
breaking), developing numerical schemes for solving non-linear, sixth-order phase field equations, and answering the question what model is reached in the limit \(\epsilon\to 0\).
This article is aimed at investigating the last problem and showing that the phase field model of [25] formally approximates a sharp interface model that has also been derived by physical first principles [24]. For that we will use the method of formal asymptotic analysis. The techniques we employ are similar to those applied for the asymptotic analysis of related phase field models like [4], the Stokes-Allen-Cahn system in [1], or the Willmore \(L^{2}\)-flow [9]. Another related asymptotic analysis is that of [23] for minimisers of the Canham-Helfrich energy.
We start by briefly recalling the phase field model from [25] and show the sharp interface system that is expected in the limit. After we have introduced the notation and gained some understanding of the system of partial differential equations, we introduce foundations of the technique we use to pass to the limit \(\epsilon\to 0\). The major part of this paper follows, which is to plug in series expansions of the solutions of the phase field model in powers of \(\epsilon\). Via separation of scales, we are able to derive equations for the leading order summands of the series. Using these findings, we can finally pass to the limit in the equations of the phase field model and find the sharp interface system of equations that we initially reviewed.
### Preliminaries
We denote the \(n\)-dimensional Lebesgue measure by \(\mathfrak{R}^{n}\) and the Hausdorff measure of Hausdorff dimension \(m\) by \(\widehat{\mathfrak{H}}^{m}\). Recall that for a two-dimensional submanifold \(\Gamma\subset\mathbb{R}^{3}\) with a smooth global chart \(\varphi:\Gamma\to\mathbb{R}^{2}\), and for a summable function \(f:\Gamma\to\mathbb{R}\), it holds by definition
\[\int_{\Gamma}f\ \mathrm{d}\widehat{\mathfrak{H}}^{2}=\int_{\varphi(\Gamma)}f\circ\varphi^{-1}J\left[\varphi^{-1}\right]\ \mathrm{d}\mathfrak{R}^{2},\]
where \(J\left[u\right]=\sqrt{\det\left(\nabla u^{T}\nabla u\right)}\) is the Jacobian of \(u=\varphi^{-1}\).
Let \(\Gamma\subsetneq\mathbb{R}^{3}\) be a sufficiently smooth submanifold. We denote by \(N_{\delta}\left(\Gamma\right)=\left\{x\in\mathbb{R}^{3}\mid\mathrm{dist}_{\Gamma}(x)<\delta\right\}\) the tubular neighbourhood around \(\Gamma\), where \(\mathrm{dist}_{\Gamma}(x)\) is the distance of \(x\) to \(\Gamma\) defined via the orthogonal projection; by \(d_{\Gamma}(x)\) we denote the signed distance. If we partition \(N_{\delta}\left(\Gamma\right)=\bigcup_{r\in\left(-\delta,\delta\right)}\Gamma_{r}\), where \(\Gamma_{r}=\left\{x\in N_{\delta}\left(\Gamma\right)\mid d_{\Gamma}(x)=r\right\}\), we may define extensions of quantities defined on \(\Gamma\) into \(N_{\delta}\left(\Gamma\right)\) (cf. [10, Sec. 14.6]). The extended principal curvatures are defined as
\[\tilde{\kappa}_{i}:\ N_{\delta}\left(\Gamma\right) \to\mathbb{R},\] \[x \mapsto\kappa_{\Gamma_{d_{\Gamma}(x)},i}(x)\,,\]
where \(\kappa_{S,i}(x)\) is the \(i\)th principal curvature of the surface \(S\). Accordingly, the mean curvature is
\[\tilde{H}:\ N_{\delta}\left(\Gamma\right) \to\mathbb{R},\] \[x \mapsto H_{\Gamma_{d_{\Gamma}(x)}}(x)\,.\]
Another extension method we will encounter is the _normal extension_ of a quantity \(f:\Gamma\to V\), for a set \(V\), meaning that the quantity is extended constantly in normal direction. We denote those extensions by \(\tilde{f}^{\nu}\).
The surface gradient, \(\nabla_{\Gamma}f\big|_{p}\), of a function \(f:\Gamma\to\mathbb{R}\) in a point \(p\in\Gamma\) is the vector
\[\nabla_{\Gamma}f\big|_{p}=\mathbb{P}_{\Gamma}(p)\nabla\tilde{f}^{\nu}\big|_{p},\]
where \(\mathbb{P}_{\Gamma}(p)=I-\nu_{\Gamma}\left(p\right)\otimes\nu_{\Gamma}\left(p\right)\) is the tangential projection onto \(\Gamma\). Other surface differential operators such as the divergence or the Jacobi-Matrix can be derived analogously.
For a differentiable functional \(S:\ X\to[0,\infty)\) on a Banach space \(X\), the element \(\nabla^{Y}S(u),u\in X\), of a subspace \(Y\subseteq X\) that fulfills
\[\left(\nabla^{Y}S(u),v\right)_{Y}=S^{\prime}(u)v\qquad\forall v\in Y,\]
where \(S^{\prime}(u)\) is the Gateaux-derivative of \(S\) in \(u\), is called the \(Y\)-gradient of \(S\). Consider, e.g., the functional \(S(u)=\frac{1}{2}\int_{B_{1}(0)}\left|\nabla u(x)\right|^{2}\ \mathrm{d}\mathfrak{R}^{3}(x)\); its \(L^{2}\)-gradient is \(\nabla^{L^{2}}S(u)=-\Delta u\), whereas the \(H^{1}_{0}\)-gradient takes the form \(\nabla^{H^{1}_{0}}S(u)=u\), and the \(H^{-1}\)-gradient is \(\nabla^{H^{-1}}S(u)=\Delta^{2}u\).
## 2 Modelling
Besides the numerical advantage of making topological changes such as pinch-offs (like when vesicles form out of the membrane) easy to handle, a phase field approach for modelling cell blebbing is also apt for bio-physical reasons: cell membranes are bilayers
of lipid molecules which can be subject to undulations, and so the membrane is not strictly demarcated from the surrounding fluid. Depending on the scale at which we look at these membranes, the diameter of the lipid molecules involved, and the spacing between them, it may be desirable to model uncertainty in the lipid molecules' position and thus take them to be diffuse layers of some thickness \(\epsilon\). Another peculiarity when considering cell blebbing is experimental evidence [13] that at sites where blebbing occurs, the cell membrane is folded multiple times providing enough material to be unfolded, and is thus thicker than a typical biological membrane.
Let us assume we observe the process of cell blebbing for a certain time \(T\in(0,\infty)\) in a domain \(\Omega\subseteq\mathbb{R}^{3}\). We consider two evolving diffuse interfaces--the cell membrane and the cell cortex--that can be defined as those subsets of \(\Omega\), on which phase fields \(\phi_{\epsilon}\) (modelling the membrane) and \(\psi_{\epsilon}\) (modelling the cell cortex) are close to zero, respectively. Additionally, there is a surrounding fluid with density \(\rho\), velocity \(v_{\epsilon}\), and pressure \(p_{\epsilon}\). Also in the domain, but concentrated on the cell cortex, are linker proteins with mass volume density \(\rho_{\alpha,\epsilon}\). They connect the cell membrane and the cell cortex. The linker proteins behave like springs, but may break if overstretched, so we introduce another density \(\rho_{i,\epsilon}\) which gives the mass of linkers per volume that are broken. This is important because'repairing mechanisms' of the cell take care of reconnecting those broken linkers back to the cell membrane. A scheme in which the aforementioned quantities are all depicted together is given in Figure 1.
For deriving the phase field model, Onsager's variational principle [15, 18] is combined with a reaction-diffusion-like surface evolution equation for the active and inactive linker proteins. To establish a basic understanding of how a PDE system for cell blebbing can be obtained, let us mention the principal steps in the derivation.
1. Definition of an energy functional \(U\left[v_{\epsilon},p_{\epsilon},\phi_{\epsilon},\psi_{\epsilon}\right]\) that is the sum of all kinds of energy of the cell: the ingredients are the kinetic energy of the fluid, the surface and bending energy of the cell cortex and cell membrane, and a potential energy that accounts for the coupling of both membrane and cortex via the linker proteins.
2. Definition of appropriate boundary conditions (see below).
3. Variation of \(U\) plus a dissipation functional. With regard to the linker proteins, our process is assumed to be quasi-static, i.e., we assume the linker proteins to be given parameters of \(U\) although their evolution is given by a reaction-diffusion-like surface equation.
4. Extending the stationarity condition derived by the previous variation step, the aforementioned surface evolution equations for the linker proteins are added.
Figure 1: Illustration of the relationship of the two diffuse layers. The dotted lines indicate the centers of the transition layers of \(\phi\) and \(\psi\). In the white region, both \(\phi_{\epsilon}\) and \(\psi_{\epsilon}\) take values close to \(1\). In the light orange region, \(\psi_{\epsilon}\) takes values close to \(1\), but \(\phi_{\epsilon}\) has values close to \(-1\). In the dark orange region, both \(\phi_{\epsilon}\) and \(\psi_{\epsilon}\) have values close to \(-1\).
### Phase field model
Several computations and formulae are the same for the phase field representing the cell membrane \(\phi_{\epsilon}\) and that representing the cell cortex \(\psi_{\epsilon}\). For those, we always use the symbols \(\varphi\in\left\{\phi,\psi\right\}\), and \(\Phi\in\left\{\Gamma,\Sigma\right\}\) to avoid copious repetition.
In the phase field approach, we approximate two important geometrical quantities known from the sharp interface perspective, namely the normal
\[\nu_{\Phi}=\nu_{\varphi_{\epsilon}}+O\left(\epsilon\right),\quad\nu_{\varphi_{ \epsilon}}=\frac{\nabla\varphi_{\epsilon}}{\left|\nabla\varphi_{\epsilon} \right|}\]
(everywhere where \(\varphi_{\epsilon}\neq 0\)), and the mean curvature
\[H_{\Phi}=H_{\varphi_{\epsilon}}+O\left(\epsilon\right),\quad H_{\varphi_{\epsilon}}=\frac{1}{\epsilon\left|\nabla\varphi_{\epsilon}\right|}\left(-\epsilon\Delta\varphi_{\epsilon}+\epsilon^{-1}W^{\prime}\left(\varphi_{\epsilon}\right)\right)\]
with \(W\left(\varphi_{\epsilon}\right)=\frac{1}{4}\left(\varphi_{\epsilon}^{2}-1 \right)^{2}\). Having the velocity \(v\) and density \(\rho\) of the fluid, we may express the kinetic energy as
\[\frac{1}{2}\int_{\Omega}\rho\left|v\right|^{2}\,\mathrm{d}\mathfrak{B}^{3}.\]
Let us consider the following energies at a particular point in time \(t\in[0,T]\), so we can ignore the time-dependency for now. The surface energy of the diffuse cell membrane with a surface tension proportional to \(\gamma_{\Gamma}\) is given by the Ginzburg-Landau energy
\[G_{\epsilon,\Gamma}\left[\phi\right]=\gamma_{\Gamma}\int_{\Omega}\frac{ \epsilon}{2}\left|\nabla\phi\right|^{2}+\frac{1}{\epsilon}W\left(\phi\right) \,\mathrm{d}\mathfrak{B}^{3}=\gamma_{\Gamma}\int_{\Omega}g_{\epsilon}\left[ \phi\right]\,\mathrm{d}\mathfrak{B}^{3}\]
with \(g_{\epsilon}\left[\phi\right]=\frac{\epsilon}{2}\left|\nabla\phi\right|^{2}+ \frac{1}{\epsilon}W\left(\phi\right)\). A well-established [5, 11, 7] model for the bending energy of a cell membrane with bending rigidity \(\beta_{\Gamma}\) and spontaneous mean curvature \(C_{0}^{\Gamma}\) is the phase field version of the Canham-Helfrich energy
\[\mathcal{W}_{\epsilon,\Gamma}\left[\phi\right]=\frac{\beta_{\Gamma}}{2 \epsilon}\int_{\Omega}\left(-\epsilon\Delta\phi+\left(\frac{1}{\epsilon} \phi+C_{0}^{\Gamma}\right)\left(\phi^{2}-1\right)\right)^{2}\,\mathrm{d} \mathfrak{B}^{3}.\]
The spontaneous mean curvature corresponds to an intrinsic bending of the membrane which is typical for biomembranes. The additional term in the energy introduced by that, however, does not introduce new theoretical challenges compared to using a Willmore functional, which is why we will omit it for the sake of a straightforward presentation, i.e., \(C_{0}^{\Gamma}=0\). In this configuration, \(\mathcal{W}_{\epsilon,\Gamma}\left[\phi\right]\) is the phase field version of the Willmore energy. We simplify the situation for the cell cortex in that we assume it to be just a stiffer membrane, thus employing the same types of energies just with different surface tension and bending rigidity. Both energies associated to membrane and cortex are summarised in the energy functionals
\[S_{\Phi}^{\epsilon}\left[\varphi\right]=\mathcal{W}_{\epsilon,\Phi}\left[ \varphi\right]+G_{\epsilon,\Phi}\left[\varphi\right].\]
The coupling of cell membrane and cell cortex is accounted for by a generalised Hookean spring energy,
\[C_{\epsilon}\left[\phi_{\epsilon},\psi_{\epsilon},\rho_{a,\epsilon}\right]=\int_{\Omega}g_{\epsilon}\left[\phi\right]\left(y\right)\frac{\xi}{2}\int_{\Omega}g_{\epsilon}\left[\psi\right]\left(x\right)\left|x-y\right|^{2}\rho_{a,\epsilon}\left(t,x\right)\omega\left(x,y,\nu_{\psi_{\epsilon}}\right)\,\mathrm{d}\mathfrak{B}^{3}(x)\,\mathrm{d}\mathfrak{B}^{3}(y),\]
where \(\xi\) is a spring constant, and \(\omega\left(x,y,\nu_{\psi_{\epsilon}}\right)\) assigns to points \(x,y\in\Omega\) the particle-per-volume density of protein linkers connecting in direction \(x-y\). A possible choice is
\[\omega\left(x,y,\nu_{\psi_{\epsilon}}\right)=\tilde{\omega}\left(\frac{\left(x-y\right)\cdot\nu_{\psi_{\epsilon}}\left(x\right)}{\left|x-y\right|}\right),\quad\tilde{\omega}(r)=\hat{\omega}\exp\left(-\frac{\left(r-1\right)^{2}}{s^{2}}\right)\]
with \(s\) being a suitable standard deviation and \(\hat{\omega}\) an appropriate scaling factor. To outline the idea behind this modelling choice, we first point out that \(g_{\epsilon}\left[\phi_{\epsilon}\right]\) can be pictured as a 'smooth Dirac delta function' if \(\phi_{\epsilon}\) is the so-called optimal profile
\[x\mapsto\tanh\left(\frac{d_{\Gamma}\left(x\right)}{\epsilon\sqrt{2}}\right).\]
The same holds for \(g_{\epsilon}\left[\psi_{\epsilon}\right]\), so that
\[\int_{\Omega}g_{\epsilon}\left[\phi\right]\cdot\,\mathrm{d}\mathfrak{B}^{3}\approx\frac{2\sqrt{2}}{3}\int_{\Gamma\left(t\right)}\cdot\,\mathrm{d}\mathfrak{S}^{2}\]
and
\[\int_{\Omega}g_{\epsilon}\left[\psi\right]\cdot\,\mathrm{d}\mathfrak{B}^{3} \approx\frac{2\sqrt{2}}{3}\int_{\Sigma\left(t\right)}\cdot\mathrm{d}\mathfrak{ S}^{2}\]
approximate surface integrals for small \(\varepsilon\). Thus,
\[C_{\varepsilon}\left[\phi_{\varepsilon},\psi_{\varepsilon},\rho_{a,\varepsilon}\right] =\int_{\Omega}g_{\varepsilon}\left[\phi\right](y)\frac{\xi}{2}\int_{\Omega}g_{\varepsilon}\left[\psi\right](x)\left|x-y\right|^{2}\rho_{a,\varepsilon}\left(t,x\right)\omega\left(x,y,\nu_{\psi_{\varepsilon}}\right)\ \mathrm{d}\mathfrak{B}^{3}(x)\ \mathrm{d}\mathfrak{B}^{3}(y)\] \[\approx\int_{\Gamma(t)}\frac{\xi}{2}\int_{\Sigma(t)}\left|x-y\right|^{2}\rho_{a,\varepsilon}\left(t,x\right)\omega\left(x,y,\nu_{\Sigma(t)}\right)\ \mathrm{d}\mathfrak{S}^{2}(x)\ \mathrm{d}\mathfrak{S}^{2}(y).\]
Looking at the sharp interface equivalent of the coupling energy, we can identify
1. \(\frac{\xi}{2}\left|x-y\right|^{2}\) as a Hookean energy density, which is integrated over the membrane and cortex, and weighted additionally by
2. \(\omega\left(x,y,\nu_{\Sigma(t)}\right)\) to incorporate the likelihood of the two spatial points \(x\in\Sigma\), \(y\in\Gamma\) being connected, and
3. the volume-density of linker particles \(\rho_{a,\varepsilon}\left(t,x\right)\) actually linking.
The Hookean energy ansatz accounts for the earlier mentioned assumption that the linker proteins behave like springs. Additionally, since linkers might not be distributed homogeneously, we should scale the coupling force by their actual amount, which explains 3. The necessity to consider a weight \(\omega\) might not be so obvious: it has not yet been agreed upon in the biological literature how to identify the pairs of points \((x,y)\in\Sigma\times\Gamma\) that are connected by protein linkers. That is why we allow the weight \(\omega\) to model a certain probability for this state. An easy way to describe such a probability is in terms of the angle between \(y-x\) and a gauge direction. As this gauge direction, we chose the cortex normal, which enters as the third argument of \(\omega\).
It shall be remarked that there are other choices for 'smooth Dirac delta functions' like \(\frac{1}{\varepsilon}W\left(\varphi\right)\), which is smoother and easier to handle analytically and numerically. It turns out, however, that for passing to the limit \(\varepsilon\to 0\), such alternative choices are not appropriate. The reason for that becomes clear when we compare the right hand side \(K\) of the momentum balance for the different choices of the integral weight: only for \(g_{\varepsilon}\left[\varphi\right]\), we have phase field counterparts in \(K\) for every term we expect in the sharp interface system as derived from physical first principles (cf. [25]).
Summing all potential energies, we obtain the Helmholtz free energy of the cell as
\[\mathcal{F}_{\varepsilon}\left[\phi_{\varepsilon},\psi_{\varepsilon},\rho_{ a,\varepsilon}\right]=\mathcal{S}_{\Gamma}^{\varepsilon}\left[\phi_{ \varepsilon}\right]+\mathcal{S}_{\Sigma}^{\varepsilon}\left[\psi_{\varepsilon }\right]+\mathcal{C}_{\varepsilon}\left[\phi_{\varepsilon},\psi_{\varepsilon },\rho_{a,\varepsilon}\right],\]
and the inner energy as
\[U\left[v_{\varepsilon},\phi_{\varepsilon},\psi_{\varepsilon},\rho_{a, \varepsilon}\right]=\frac{1}{2}\int_{\Omega}\rho\left|v_{\varepsilon}\right|^ {2}\ \mathrm{d}\mathfrak{B}^{3}+\mathcal{F}_{\varepsilon}\left[\phi_{ \varepsilon},\psi_{\varepsilon},\rho_{a,\varepsilon}\right].\]
Via Onsager's variational principle (cf. [25]), the following system of partial differential equations is then found as stationarity conditions
\[\rho(\partial_{t}v_{\varepsilon}+(v_{\varepsilon}\cdot\nabla)v_{\varepsilon})-\nabla\cdot\left(\eta\left(\nabla v_{\varepsilon}+\nabla v_{\varepsilon}^{\ T}\right)-p_{\varepsilon}\mathbb{I}\right)=K, \tag{1a}\] \[\nabla\cdot v_{\varepsilon}=0,\] (1b) \[\partial_{t}\phi_{\varepsilon}+v_{\varepsilon}\cdot\nabla\phi_{\varepsilon} =\nabla\cdot\left(m\left(\phi_{\varepsilon}\right)\left(\nabla\left(\nabla_{\phi}^{L^{2}}S_{\Gamma}^{\varepsilon}\left[\phi_{\varepsilon}\right]\right)+\nabla\left(\nabla_{\phi_{\varepsilon}}^{L^{2}}C_{\varepsilon}\left[\phi_{\varepsilon},\psi_{\varepsilon},\rho_{a,\varepsilon}\right]\right)\right)\right),\] (1c) \[\partial_{t}\psi_{\varepsilon}+v_{\varepsilon}\cdot\nabla\psi_{\varepsilon} =\nabla\cdot\left(m\left(\psi_{\varepsilon}\right)\left(\nabla\left(\nabla_{\psi}^{L^{2}}S_{\Sigma}^{\varepsilon}\left[\psi_{\varepsilon}\right]\right)+\nabla\left(\nabla_{\psi_{\varepsilon}}^{L^{2}}C_{\varepsilon}\left[\phi_{\varepsilon},\psi_{\varepsilon},\rho_{a,\varepsilon}\right]\right)\right)\right), \tag{1d}\]
where
\[K =\nabla_{\phi}^{L^{2}}S_{\Gamma}^{\varepsilon}\left[\phi_{\varepsilon}\right]\nabla\phi_{\varepsilon}+\nabla_{\psi}^{L^{2}}S_{\Sigma}^{\varepsilon}\left[\psi_{\varepsilon}\right]\nabla\psi_{\varepsilon}\] \[+\nabla_{\phi}^{L^{2}}C_{\varepsilon}\left[\phi_{\varepsilon},\psi_{\varepsilon},\rho_{a,\varepsilon}\right]\nabla\phi_{\varepsilon}+\nabla_{\psi}^{L^{2}}C_{\varepsilon}\left[\phi_{\varepsilon},\psi_{\varepsilon},\rho_{a,\varepsilon}\right]\nabla\psi_{\varepsilon}\] \[-\int_{\Omega}g_{\varepsilon}\left[\phi_{\varepsilon}\right](y)\partial_{\rho_{a,\varepsilon}}c(\cdot,y,\rho_{a,\varepsilon},\nu_{\psi})H_{\psi_{\varepsilon}}\rho_{a,\varepsilon}\nu_{\psi}\ \mathrm{d}\mathfrak{B}^{3}(y)\] \[-\int_{\Omega}g_{\varepsilon}\left[\phi_{\varepsilon}\right](y)\mathbb{P}_{\nu_{\psi}}\nabla\left(\partial_{\rho_{a,\varepsilon}}c(\cdot,y,\rho_{a,\varepsilon},\nu_{\psi})\right)g_{\varepsilon}\left[\psi_{\varepsilon}\right]\rho_{a,\varepsilon}\ \mathrm{d}\mathfrak{B}^{3}(y).\]
The imposed boundary conditions are
\[v_{\varepsilon}|_{\partial\Omega} =0, \tag{1e}\] \[\partial_{\nu}\phi_{\varepsilon}|_{\partial\Omega} =\partial_{\nu}\psi_{\varepsilon}|_{\partial\Omega} =0,\] (1f) \[J_{\phi_{\varepsilon}}|_{\partial\Omega}\cdot\nu =J_{\psi_{\varepsilon}}|_{\partial\Omega}\cdot\nu =0,\] (1g) \[\rho_{a,\varepsilon}|_{\partial\Omega} =\rho_{i,\varepsilon}|_{\partial\Omega} =0, \tag{1h}\]
where
\[\mathbf{J}_{\phi_{\varepsilon}}=\nabla\left(\nabla_{\phi}^{L^{2}}S_{\Gamma}^{\varepsilon}\left[\phi_{\varepsilon}\right]+\nabla_{\phi_{\varepsilon}}^{L^{2}}C_{\varepsilon}\left[\phi_{\varepsilon},\psi_{\varepsilon},\rho_{a,\varepsilon}\right]\right),\]
and
\[\mathbf{J}_{\psi_{\varepsilon}}=\nabla\left(\nabla_{\psi}^{L^{2}}S_{\Sigma}^{\varepsilon}\left[\psi_{\varepsilon}\right]+\nabla_{\psi_{\varepsilon}}^{L^{2}}C_{\varepsilon}\left[\phi_{\varepsilon},\psi_{\varepsilon},\rho_{a,\varepsilon}\right]\right).\]
In addition, we consider evolution equations for the active and inactive linkers on the diffuse surface of the cell cortex:
\[g_{e}\left[\psi_{e}\right]\partial_{t}\rho_{a,x}-\nu_{\psi_{u}}H_ {\psi_{e}}\rho_{a,x}-\nabla\cdot\left(g_{e}\left[\psi_{e}\right]\eta_{a}\nabla \rho_{a}\right)+\nabla\cdot\left(g_{e}\left[\psi_{e}\right]\upsilon_{e,x}\rho_ {a}\right)=\] \[g_{e}\left[\psi_{e}\right]\mathcal{R}\left[\rho_{a},\rho_{i}, \phi_{e},\nu_{\psi}\right], \tag{1i}\] \[g_{e}\left[\psi_{e}\right]\partial_{t}\rho_{i,x}-\nabla_{\psi_{u }}H_{\psi_{e}}\rho_{i,x}-\nabla\cdot\left(g_{e}\left[\psi_{e}\right]\eta_{i} \nabla\rho_{i}\right)+\nabla\cdot\left(g_{e}\left[\psi_{e}\right]\upsilon_{e,x} \rho_{i}\right)=\] \[-g_{e}\left[\psi_{e}\right]\mathcal{R}\left[\rho_{a},\rho_{i}, \phi_{e},\nu_{\psi}\right], \tag{1j}\]
where
\[\mathcal{R}\left[\rho_{a,\varepsilon},\rho_{i,\varepsilon},\phi_{\varepsilon},\nu_{\psi}\right]=k\rho_{i,\varepsilon}-\rho_{a,\varepsilon}r\left[\phi_{\varepsilon},\nu_{\psi}\right].\]
The term \(k\rho_{i,\varepsilon}\) is the effective reconnection rate, \(k\geq 0\), of the inactive linkers, and
\[\rho_{a,\varepsilon}r\left[\phi_{\varepsilon},\nu_{\psi}\right]\]
is the effective disconnection rate of the active linkers in relation to the membrane position in space and the orientation of the cortex given by its normal.
For a thorough discussion and further references, the reader may refer to [24]. In the following section, we describe steps one, two and four, but leave out the lengthy calculations involved for step three. For the following discussion, however, we need the concrete expressions for all the \(L^{2}\)-gradients of the energies, so we give them here without doing the calculations. Note that these calculations depend on the boundary conditions (1e), (1f), (1g), and (1h):
\[\nabla_{\varphi}^{L^{2}}S_{\Phi}^{\varepsilon} =\nabla_{\varphi}^{L^{2}}\mathcal{W}_{\varepsilon,\Phi}+\nabla_{\varphi}^{L^{2}}G_{\varepsilon,\Phi}, \tag{2a}\] \[\nabla_{\varphi}^{L^{2}}G_{\varepsilon,\Phi} =-\epsilon\Delta\varphi+\frac{1}{\epsilon}W^{\prime}\left(\varphi\right)=:\mu\left[\varphi\right],\] (2b) \[\nabla_{\varphi}^{L^{2}}\mathcal{W}_{\varepsilon,\Phi} =-\Delta\left(\mu\left[\varphi\right]\right)+\mu\left[\varphi\right]\frac{1}{\epsilon^{2}}W^{\prime\prime}\left(\varphi\right). \tag{2c}\]
For easier expression of the coupling energy gradients, we introduce
\[C_{\psi}(t,y) =\int_{\Omega}g_{\varepsilon}\left[\psi_{\varepsilon}\right](x)c\left(x,y,\rho_{a,\varepsilon}\left(t,x\right),\nu_{\psi_{\varepsilon}\left(t\right)}(x)\right)\ \mathrm{d}\mathfrak{B}^{3}(x),\] \[C_{\phi}(t,x) =\int_{\Omega}g_{\varepsilon}\left[\phi_{\varepsilon}\right](y)c\left(x,y,\rho_{a,\varepsilon}\left(t,x\right),\nu_{\psi_{\varepsilon}\left(t\right)}(x)\right)\ \mathrm{d}\mathfrak{B}^{3}(y).\]
Then,
\[\nabla_{\phi_{\varepsilon}}^{L^{2}}C_{\varepsilon} =\mu\left[\phi_{\varepsilon}\right](y)C_{\psi}(y)-\int_{\Omega}\epsilon g_{\varepsilon}\left[\psi_{\varepsilon}\right](x)\nabla_{y}\phi_{\varepsilon}\cdot\nabla_{y}\left(c\left(x,y,\rho_{a,\varepsilon},\nu_{\psi}\right)\right)\ \mathrm{d}\mathfrak{B}^{3}(x), \tag{2d}\] \[\nabla_{\psi_{\varepsilon}}^{L^{2}}C_{\varepsilon} =\mu\left[\psi_{\varepsilon}\right](x)C_{\phi}(x)-\int_{\Omega}\epsilon g_{\varepsilon}\left[\phi_{\varepsilon}\right](y)\nabla_{x}\left(c\left(x,y,\rho_{a,\varepsilon},\nu_{\psi}\right)\right)\cdot\nabla_{x}\psi_{\varepsilon}\ \mathrm{d}\mathfrak{B}^{3}(y)\] (2e) \[-\int_{\Omega}g_{\varepsilon}\left[\phi_{\varepsilon}\right](y)\nabla_{x}\cdot\left(g_{\varepsilon}\left[\psi_{\varepsilon}\right](x)\nabla_{\nu}c\left(x,y,\rho_{a,\varepsilon},\nu_{\psi}\right)^{T}\frac{1}{\left|\nabla\psi_{\varepsilon}\right|}\mathbb{P}_{\nu_{\psi}}\right)\ \mathrm{d}\mathfrak{B}^{3}(y).\]
Solutions of (1) fulfil an energy inequality, cf. [25]. This energy inequality reads
\[\begin{split}\frac{\mathrm{d}}{\mathrm{d}\,t}\mathcal{F}_{\varepsilon}\left[\phi_{\varepsilon},\psi_{\varepsilon},\rho_{a,\varepsilon}\right]\leq&-\left\|\nabla v_{\varepsilon}\right\|_{L^{2}(\Omega)}^{2}\\ &-m\left(\phi_{\varepsilon}\right)\left\|\nabla\left(\nabla_{\phi_{\varepsilon}}^{L^{2}}S_{\Gamma}^{\varepsilon}\left[\phi_{\varepsilon}\right]+\nabla_{\phi_{\varepsilon}}^{L^{2}}C_{\varepsilon}\left[\phi_{\varepsilon},\psi_{\varepsilon},\rho_{a,\varepsilon}\right]\right)\right\|_{L^{2}(\Omega)}^{2}\\ &-m\left(\psi_{\varepsilon}\right)\left\|\nabla\left(\nabla_{\psi_{\varepsilon}}^{L^{2}}S_{\Sigma}^{\varepsilon}\left[\psi_{\varepsilon}\right]+\nabla_{\psi_{\varepsilon}}^{L^{2}}C_{\varepsilon}\left[\phi_{\varepsilon},\psi_{\varepsilon},\rho_{a,\varepsilon}\right]\right)\right\|_{L^{2}(\Omega)}^{2}\\ &+\int_{\Omega}g_{\varepsilon}\left[\phi\right](y)\int_{\Omega}g_{\varepsilon}\left[\psi\right](x)\partial_{\rho_{a,\varepsilon}}c\,\partial_{t}\rho_{a,\varepsilon}\,\mathrm{d}\mathfrak{B}^{3}(x)\,\mathrm{d}\mathfrak{B}^{3}(y)\\ &-\int_{\Omega}H_{\psi}\left(t,x\right)\rho_{a,\varepsilon}\left(t,x\right)\mathrm{v}_{\psi_{\varepsilon}}(t,x)\int_{\Omega}g_{\varepsilon}\left[\phi\right](y)\partial_{\rho_{a,\varepsilon}}c\left(x,y,\rho_{a,\varepsilon},\nu_{\psi}\right)\,\mathrm{d}\mathfrak{B}^{3}(y)\,\mathrm{d}\mathfrak{B}^{3}(x)\\ &-\int_{\Omega}\int_{\Omega}g_{\varepsilon}\left[\phi_{\varepsilon}\right](y)\nabla\left(\partial_{\rho_{a,\varepsilon}}c(\cdot,y,\rho_{a,\varepsilon},\nu_{\psi})\right)\cdot v_{\varepsilon}\,g_{\varepsilon}\left[\psi\right]\rho_{a,\varepsilon}\,\mathrm{d}\mathfrak{B}^{3}(y)\,\mathrm{d}\mathfrak{B}^{3}(x).\end{split} \tag{3}\]
### Sharp interface model
We introduce two evolving, two-dimensional manifolds \(\Gamma_{T}=(\Gamma\left(t\right))_{t\in[0,T]}\) for the cell membrane, and \(\Sigma_{T}=(\Sigma\left(t\right))_{t\in[0,T]}\) for the cell cortex. These evolving manifolds can also be described as the level sets \(\Gamma\left(t\right)=\phi^{-1}\left(t,0\right)\) and \(\Sigma\left(t\right)=\psi^{-1}\left(t,0\right)\) of functions \(\phi:\,\Omega\times[0,T]\to\mathbb{R}\) and \(\psi:\,\Omega\times[0,T]\to\mathbb{R}\). The cell we consider is swimming in a fluid with pressure \(p\) and velocity \(v\). Additionally, we have the density \(\rho_{a}:\,\Sigma_{T}\to\mathbb{R}\) of linker proteins connecting cell membrane and cell cortex, which we call active linkers. Another density \(\rho_{i}:\,\Sigma_{T}\to\mathbb{R}\) is introduced to model the density of the disconnected or broken proteins, called inactive linkers; these no longer couple cell membrane and cell cortex, but may be reconnected due to healing mechanisms inside the cell. With \(\overset{\circ}{\Omega}=\Omega\setminus(\Gamma\left(t\right)\cup\Sigma\left(t\right))\), the sharp interface model reads:
\[\rho(\partial_{t}v+\left(v\cdot\nabla\right)v)-\nabla\cdot\mathbb{T} =0 \text{ in }\overset{\circ}{\Omega}, \tag{4a}\] \[\nabla\cdot v =0 \text{ in }\overset{\circ}{\Omega},\] (4b) \[v\left(t,\cdot\right) =0 \text{ on }\partial\Omega,\] (4c) \[\left[v\right]_{\Gamma\left(t\right)} =0 \text{ on }\Gamma\left(t\right),\] (4d) \[\left[v\right]_{\Sigma\left(t\right)} =0 \text{ on }\Sigma\left(t\right),\] (4e) \[-\left[\mathbb{T}\nu\right] =\nabla_{\phi}^{L^{2}}S_{\Gamma}\nabla\phi-\left(\nabla_{\Gamma}C_{\Sigma}^{0}\cdot\nu_{\Gamma}\right)\nu_{\Gamma}+H_{\Gamma}C_{\Sigma}^{0}\nu_{\Gamma} \text{ on }\Gamma\left(t\right),\] (4f) \[-\left[\mathbb{T}\nu\right] =\nabla_{\psi}^{L^{2}}S_{\Sigma}\nabla\psi-\left(\nabla_{\Sigma}C_{\Gamma}^{0}\cdot\nu_{\Sigma}\right)\nu_{\Sigma}+H_{\Sigma}C_{\Gamma}^{0}\nu_{\Sigma}\] \[-\partial_{\rho_{a}}C_{\Gamma}^{0}H_{\Sigma}\rho_{a}\nu_{\Sigma}-\nabla_{\Sigma}\left(\partial_{\rho_{a}}C_{\Gamma}^{0}\right)\rho_{a}-\nabla_{\Sigma}\cdot\left(\nabla_{\nu}C_{\Gamma}^{0}\right)\nu_{\Sigma}\] \[-H_{\Sigma}\left(\nabla_{\nu}C_{\Gamma}^{0}\cdot\nu_{\Sigma}\right)\nu_{\Sigma} \text{ on }\Sigma\left(t\right),\] (4g) \[\partial_{t}\phi+v\cdot\nabla\phi =0 \text{ in }\Omega,\] (4h) \[\partial_{t}\psi+v\cdot\nabla\psi =0 \text{ in }\Omega,\] (4i) \[\partial_{t}\rho_{a}-H_{\Sigma}\mathrm{v}_{\nu_{\Sigma}}\rho_{a}-\nabla_{\Sigma\left(t\right)}\cdot\left(\eta_{a}\nabla\rho_{a}\right)+\nabla_{\Sigma\left(t\right)}\cdot\left(\rho_{a}v\right) =\mathcal{R}\left[\rho_{a},\rho_{i},\phi,\nu_{\Sigma}\right] \text{ on }\Sigma\left(t\right),\] (4j) \[\partial_{t}\rho_{i}-H_{\Sigma}\mathrm{v}_{\nu_{\Sigma}}\rho_{i}-\nabla_{\Sigma\left(t\right)}\cdot\left(\eta_{i}\nabla\rho_{i}\right)+\nabla_{\Sigma\left(t\right)}\cdot\left(\rho_{i}v\right) =-\mathcal{R}\left[\rho_{a},\rho_{i},\phi,\nu_{\Sigma}\right] \text{ on }\Sigma\left(t\right). \tag{4k}\]
## 3 Formal asymptotic analysis
Having outlined the physical principles, we are going to analyse the sharp interface limit of the phase field model. Let us now turn to the main result of this paper: we will demonstrate, using the method of formal asymptotic expansions, that classical solutions of the system (1) converge, for \(\varepsilon\to 0\), to solutions of (4). For a thorough theoretical introduction into the subject of formal asymptotic expansions, we refer to [8], whereas a more application-oriented perspective is taken in [12].
### Interfacial coordinates
For the following analysis, we will need a coordinate transformation typical for asymptotic analysis of phase field equations for which boundary layers are expected in the regions where the phase fields are close to zero.
Let us denote a tubular neighbourhood of a smooth, orientable hypersurface \(S\subseteq\mathbb{R}^{3}\) by \(N_{\delta}\left(S\right)\). We require that \(\delta\in\left(0,\infty\right)\) is small enough such that \(N_{\delta}\left(\Gamma\left(t\right)\right)\cap N_{\delta}\left(\Sigma\left(t \right)\right)=\emptyset\) for all \(t\in\left[0,T\right]\). The local boundary layer coordinates, or _interfacial coordinates_ (as they are most often termed in this context), with respect to \(S\) are defined by the map
\[\iota_{S,\varepsilon} :\,N_{\delta}\left(S\right)\to S\times\mathbb{R},\] \[x \mapsto\left(\pi_{S}\left(x\right),\frac{d_{S}\left(x\right)}{\varepsilon}\right).\]
For two evolving manifolds \(\Gamma_{T}\), \(\Sigma_{T}\), we extent this definition to
\[\iota_{\varepsilon} :\,\bigcup_{t\in\left[0,T\right]}\left\{t\right\}\times\left(N_{\delta}\left(\Gamma\left(t\right)\right)\cup N_{\delta}\left(\Sigma\left(t\right)\right)\right)\rightarrow\bigcup_{t\in\left[0,T\right]}\left\{t\right\}\times\left(\Gamma\left(t\right)\cup\Sigma\left(t\right)\right)\times\mathbb{R},\] \[\left(t,x\right) \mapsto\left\{\begin{aligned} \left(t,\iota_{\Gamma\left(t\right),\varepsilon}\left(x\right)\right)& x\in N_{\delta}\left(\Gamma\left(t\right)\right)\\ \left(t,\iota_{\Sigma\left(t\right),\varepsilon}\left(x\right)\right)& x\in N_{\delta}\left(\Sigma\left(t\right)\right)\end{aligned}\right.,\]
and then set
\[\iota_{S_{T},x}=\iota_{e}\big{|}_{\bigcup_{t\in\left[0,T\right]}\left\{t\right\}\times N_{\delta}\left(S\left(t\right)\right)}\]
for \(S\left(t\right)\in\left\{\Gamma\left(t\right),\Sigma\left(t\right)\right\}\). We always consider \(\delta\) small enough such that the interfacial coordinate transformations are well-defined. Generally, for a function \(f\) on \(\bigcup_{t\in\left[0,T\right]}\left\{t\right\}\times\left(N_{\delta}\left( \Gamma\left(t\right)\right)\cup N_{\delta}\left(\Sigma\left(t\right)\right)\right)\), we define
\[\hat{f}\circ\iota_{e}\left(t,x\right)=f(t,x).\]
The function \(\hat{f}\) depends on three arguments: the first is time, the second a point on one of the manifolds \(\Gamma\left(t\right)\) or \(\Sigma\left(t\right)\), and the third a real number from \(\left(-\frac{\delta}{\varepsilon},\frac{\delta}{\varepsilon}\right)\). The latter is occasionally referred to as the 'fast variable', and derivatives with respect to it are denoted by \(\left(\cdot\right)^{\prime}\); derivatives with respect to the first variable are denoted by \(\partial_{t}\left(\cdot\right)\).
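For concreteness, the following minimal Python sketch (ours, not part of the analysis) evaluates the interfacial coordinates for the simplest closed hypersurface, a sphere of radius \(R\), where the projection \(\pi_{S}\) and the signed distance \(d_{S}\) are available in closed form; the names `pi_S`, `d_S`, and `iota` are illustrative choices.

```python
import numpy as np

R, eps, delta = 1.0, 0.05, 0.3   # sphere radius, interface width, tube radius

def d_S(x):
    """Signed distance to the sphere S = {|x| = R} (positive outside)."""
    return np.linalg.norm(x) - R

def pi_S(x):
    """Closest-point projection onto S."""
    return R * x / np.linalg.norm(x)

def iota(x):
    """Interfacial coordinates: (projection onto S, stretched normal distance)."""
    assert abs(d_S(x)) < delta, "x must lie in the tubular neighbourhood"
    return pi_S(x), d_S(x) / eps

x = np.array([0.0, 0.6, 0.85])    # a point in N_delta(S)
s, z = iota(x)
print(s, z)                       # s on S, z = d_S(x)/eps is the fast variable
```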
The following (standard) formulae will be important later.
**Lemma 3.1** (cf. [10, Sec. 14.6]).: Let \(S\subseteq\mathbb{R}^{3}\) be an orientable and sufficiently smooth submanifold and \(N_{\delta}\left(S\right)\), \(\delta\in\mathbb{R}_{>0}\), a tubular neighbourhood on which all of the following extended functions are defined. For all \(x\in N_{\delta}\left(S\right)\), it holds
\[\begin{split}\bar{H}\left(x\right)&=\sum_{i=1}^{2}\frac{\bar{\kappa}_{S,i}^{\nu}\left(x\right)}{1-d_{S}\left(x\right)\bar{\kappa}_{S,i}^{\nu}\left(x\right)}\\&=\sum_{i=1}^{2}\bar{\kappa}_{S,i}^{\nu}\left(x\right)+d_{S}\left(x\right)\bar{\kappa}_{S,i}^{\nu}\left(x\right)^{2}+O\left(d_{S}\left(x\right)^{2}\right)\\&=\left(\sum_{i=1}^{2}\hat{\kappa}_{S,i}+\varepsilon z\hat{\kappa}_{S,i}^{2}+O\left(\varepsilon^{2}\right)\right)\circ\iota_{S,x}\left(x\right),\end{split}\tag{5a}\]
\[\begin{split}\nabla\bar{H}\big{|}_{x}\cdot\bar{\nu}\left(x\right)&=\sum_{i=1}^{2}\frac{\bar{\kappa}_{S,i}^{\nu}\left(x\right)^{2}}{\left(1-d_{S}\left(x\right)\bar{\kappa}_{S,i}^{\nu}\left(x\right)\right)^{2}}\\&=\sum_{i=1}^{2}\bar{\kappa}_{S,i}^{\nu}\left(x\right)^{2}+2d_{S}\left(x\right)\bar{\kappa}_{S,i}^{\nu}\left(x\right)^{3}+O\left(d_{S}\left(x\right)^{2}\right)\\&=\left(\sum_{i=1}^{2}\hat{\kappa}_{S,i}^{2}+2\varepsilon z\hat{\kappa}_{S,i}^{3}+O\left(\varepsilon^{2}\right)\right)\circ\iota_{S,x}\left(x\right),\end{split}\tag{5b}\]
\[\nabla\bar{H}\big{|}_{x}\cdot\bar{\nu}\left(x\right)=\bar{H}\left(x\right)^{2}-2\bar{K}\left(x\right),\tag{5c}\]
\[\nabla^{2}\bar{H}\big{|}_{x}\cdot\bar{\nu}\left(x\right)\otimes\bar{\nu}\left(x\right)=2\bar{H}\left(x\right)\left(\bar{H}\left(x\right)^{2}-3\bar{K}\left(x\right)\right).\tag{5d}\]
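Identities (5b)–(5d) can be checked symbolically: writing the principal curvatures of the parallel surface at signed distance \(d\) as \(\kappa_{i}/\left(1-d\kappa_{i}\right)\), cf. (5a), and noting \(\nabla\bar{H}\cdot\bar{\nu}=\partial_{d}\bar{H}\), the following sympy sketch (our verification, reading \(\bar{K}\) as the Gauss curvature of the same parallel surface) confirms all three relations.

```python
import sympy as sp

d, k1, k2 = sp.symbols("d kappa_1 kappa_2")
a = k1 / (1 - d * k1)          # principal curvatures of the parallel surface
b = k2 / (1 - d * k2)          # at signed distance d, cf. (5a)
H = a + b                      # extended mean curvature
K = a * b                      # extended Gauss curvature (our reading)

print(sp.simplify(sp.diff(H, d) - (a**2 + b**2)))              # 0, cf. (5b)
print(sp.simplify(sp.diff(H, d) - (H**2 - 2 * K)))             # 0, cf. (5c)
print(sp.simplify(sp.diff(H, d, 2) - 2 * H * (H**2 - 3 * K)))  # 0, cf. (5d)
```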
### Assumptions on the solution
Typically, formal asymptotic theories rely on non-trivial properties of the solution of the system under investigation, (1) in our case. A rigorous justification requires a treatment of its own and is not within the scope of this work. We restrict ourselves to clearly formulating the properties we need in the form of assumptions, and focus instead on the relations between the quantities of a solution of (1) that ensure sensible behaviour in the limit. These assumptions can serve as a hint as to what needs to be investigated when a mathematical proof is to be given.
1. For every \(\varepsilon>0\) the system (1) with boundary conditions (1e), (1f), (1g), (1h), and initial data \(\phi_{\varepsilon}\left(0,\cdot\right)\), \(\psi_{\varepsilon}\left(0,\cdot\right)\), \(\rho_{a,\varepsilon}\left(0,\cdot\right)\), which converge in \(H^{2}\left(\Omega\right)\times H^{2}\left(\Omega\right)\times H^{1}\left(\Omega\right)\) for \(\varepsilon\searrow 0\) and form a recovery sequence of \(\mathcal{F}\), has a classical solution \[\left(v_{\varepsilon},p_{\varepsilon},\phi_{\varepsilon},\psi_{\varepsilon},\rho_{a,\varepsilon},\rho_{i,\varepsilon}\right)\] on \(\Omega_{T}=\left[0,T\right]\times\Omega\) for some time \(T>0\) independent of \(\varepsilon\). Throughout this work, we choose the mobilities of the phase fields to be a power of \(\varepsilon\): \(m\left(\phi\right)=m\left(\psi\right)=\varepsilon^{\alpha}\) for some \(\alpha\in\mathbb{R}_{>0}\).
2. Additionally, there shall be two-dimensional, orientable, smoothly evolving manifolds \[\Gamma\left(t\right)=\left\{x\in\Omega\ \middle|\ \phi_{\varepsilon}\left(t,x\right)=0\right\}, \Sigma\left(t\right)=\left\{x\in\Omega\ \middle|\ \psi_{\varepsilon}\left(t,x\right)=0\right\},\] which both enclose open sets \(\Omega_{\Gamma\left(t\right)}^{-}\) and \(\Omega_{\Sigma\left(t\right)}^{-}\). The corresponding outer domains are defined such that \(\Omega_{\Gamma\left(t\right)}^{+}=\Omega\setminus\Omega_{\Gamma\left(t \right)}^{-}\setminus\Gamma\left(t\right)\) and \(\Omega_{\Sigma\left(t\right)}^{+}=\Omega\setminus\Omega_{\Sigma\left(t \right)}^{-}\setminus\Sigma\left(t\right)\). It shall hold, \(\lim_{\varepsilon\searrow 0}\phi_{\varepsilon}\left(0,\cdot\right)=-1\) pointwise on \(\Omega_{\Gamma}^{-}\) and \(\lim_{\varepsilon\searrow 0}\phi_{\varepsilon}\left(0,\cdot\right)=1\) pointwise on \(\Omega_{\Gamma}^{+}\), and analogously for \(\psi\) and \(\Sigma\).
3. For sufficiently small \(T\), it shall hold \(\Gamma\left(t\right)\cap\Sigma\left(t\right)=\emptyset\) for all \(t\in\left[0,T\right]\).
4. The components of every classical solution to (1) shall have a regular asymptotic expansion in every compact subset \(U\) of \(\Omega_{0}=\Omega\setminus\Gamma\left(t\right)\setminus\Sigma\left(t\right)\), i.e., for every \(q\in\left\{\phi_{\varepsilon},\psi_{\varepsilon},v_{\varepsilon},p_{ \varepsilon},\rho_{a,\varepsilon},\rho_{i,\varepsilon}\right\}\), it holds \[q\big{|}_{U}(t,x)=\sum_{i=0}^{n}q_{i}^{o}(t,x)\varepsilon^{i}+o\left( \varepsilon^{n}\right),\] (6) for some \(n\in\mathbb{N}_{0}\). All \(q_{i}^{o}\) shall be as smooth as \(q\). We call these series _outer expansions of \(q\)_. This implies that a boundary layer is to be expected at most at \(\Gamma\left(t\right)\cup\Sigma\left(t\right)\).
5. If \(q=\phi\), (6) shall even hold for all \(U\Subset\Omega_{0}\cup\Sigma\) and if \(q=\psi\), for all \(U\Subset\Omega_{0}\cup\Gamma\). Thus, every phase field is expected to have only one boundary layer.
6. The species densities' evolution is irrelevant outside the diffuse layers around \(\Gamma_{T}\), \(\Sigma_{T}\). We thus consider them to be asymptotically constant in time away from the diffuse layers: For every \(U\Subset\Omega_{0}\), it holds \(\partial_{t}\left(\rho_{a,\varepsilon}\big{|}_{U}\right)\in O\left(\varepsilon^{2}\right)\), which is equivalent to claiming \(\partial_{t}\rho_{a,0}^{o}=0=\partial_{t}\rho_{a,1}^{o}\).
7. The components of every classical solution to (1) shall have a regular asymptotic expansion in \(N_{\delta}\left(S\right)\), \(S\in\left\{\Gamma\left(t\right),\Sigma\left(t\right)\right\}\), after transformation into local coordinates: For all \(q\in\left\{\phi_{\varepsilon},\psi_{\varepsilon},v_{\varepsilon},p_{\varepsilon},\rho_{a,\varepsilon},\rho_{i,\varepsilon}\right\}\), it holds \(q\big{|}_{N_{\delta}\left(\Gamma\left(t\right)\right)\cup N_{\delta}\left(\Sigma\left(t\right)\right)}=\hat{q}\circ\iota_{e}\) such that \[\hat{q}(t,s,z)=\sum_{k=-N}^{n}\varepsilon^{k}\hat{q}_{k}^{i}(t,s,z)+o\left(\varepsilon^{n}\right)\] for \(N,n\in\mathbb{N}_{0}\), where all \(\hat{q}_{k}^{i}\) shall be integrable in \(z\) and as smooth as \(q\). We call these series _inner expansions of \(q\)_.
8. Physically, the phase fields model the volume fraction of phases. Thus, they should always take values between \(-1\) and \(1\), independent of how small \(\varepsilon\) may be. Hence, for \(q\in\left\{\phi,\psi\right\}\), we assume \(\hat{q}_{\ell}^{i}=0\) for all \(\ell\in\left\{-N,\ldots,-1\right\}\).
9. For the species density \(\rho_{a,\varepsilon}\), we additionally require that blow-ups are of order at most \(-1\), i.e., \(\hat{\rho}_{a,\ell}^{i}=0\) for all \(\ell\in\left\{-N,\ldots,-2\right\}\). The reason why we cannot naturally expect boundedness here is that \(\rho_{a,\varepsilon}\) does not give a volume fraction, but the number of particles per volume of the active linkers.
We will often have to compute differential operators of functions that are expressed in interfacial coordinates:
**Remark 3.2**.: For a sufficiently smooth function \(q:\ S_{T}\times\mathbb{R}\rightarrow\mathbb{R}\) on an evolving manifold \(S_{T}=\bigcup_{t\in[0,T]}\{t\}\times S\left(t\right)\), and \(t^{*}\in[0,T]\), \(x^{*}\in N_{\delta}\left(S\left(t^{*}\right)\right)\), it holds
\[\nabla_{x}\left(q\circ\iota_{S_{T},x}\right)(t^{*},x^{*})=\varepsilon^{-1}q^{\prime}(\iota_{S_{T},x}(t^{*},x^{*}))\,\bar{\nu}\left(t^{*},x^{*}\right)+\nabla_{S(t^{*})_{d_{S}(x^{*})}}q(\iota_{S_{T},x}\left(t^{*},x^{*}\right)),\tag{7}\]
\[\begin{split}\Delta_{x}\left(q\circ\iota_{S_{T},x}\right)(t^{*},x^{*})&=\varepsilon^{-2}q^{\prime\prime}(\iota_{S_{T},x}(t^{*},x^{*}))-\varepsilon^{-1}q^{\prime}(\iota_{S_{T},x}(t^{*},x^{*}))\,\bar{H}\left(t^{*},x^{*}\right)\\&\quad+\Delta_{S(t^{*})_{d_{S}(x^{*})}}q(\iota_{S_{T},x}(t^{*},x^{*})),\end{split}\tag{8}\]
\[\partial_{t}\left(q\circ\iota_{S_{T},x}\right)(t^{*},x^{*})=-\varepsilon^{-1}V_{\nu}^{S}(t^{*},x^{*})\,q^{\prime}(\iota_{S_{T},x}(t^{*},x^{*}))+\partial_{t}q(\iota_{S_{T},x}(t^{*},x^{*})).\tag{9}\]
For \(\vec{q}:\ S_{T}\times\mathbb{R}\rightarrow\mathbb{R}^{3}\), it holds

\[\nabla_{x}\cdot\left(\vec{q}\circ\iota_{S_{T},x}\right)(t^{*},x^{*})=\varepsilon^{-1}\vec{q}^{\,\prime}(\iota_{S_{T},x}(t^{*},x^{*}))\cdot\bar{\nu}\left(t^{*},x^{*}\right)+\nabla_{S(t^{*})_{d_{S}(x^{*})}}\cdot\vec{q}(\iota_{S_{T},x}\left(t^{*},x^{*}\right)),\tag{10}\]
\[\nabla_{x}\left(\vec{q}\circ\iota_{S_{T},x}\right)(t^{*},x^{*})=\varepsilon^{-1}\vec{q}^{\,\prime}(\iota_{S_{T},x}(t^{*},x^{*}))\otimes\bar{\nu}\left(t^{*},x^{*}\right)+\nabla_{S(t^{*})_{d_{S}(x^{*})}}\vec{q}(\iota_{S_{T},x}\left(t^{*},x^{*}\right)),\tag{11}\]
\[\begin{split}\Delta_{x}\left(\vec{q}\circ\iota_{S_{T},x}\right)(t^{*},x^{*})&=\varepsilon^{-2}\vec{q}^{\,\prime\prime}(\iota_{S_{T},x}(t^{*},x^{*}))-\varepsilon^{-1}\vec{q}^{\,\prime}(\iota_{S_{T},x}(t^{*},x^{*}))\,\bar{H}\left(t^{*},x^{*}\right)\\&\quad+\Delta_{S(t^{*})_{d_{S}(x^{*})}}\vec{q}(\iota_{S_{T},x}(t^{*},x^{*})).\end{split}\tag{12}\]

For \(Q:\ S_{T}\times\mathbb{R}\rightarrow\mathbb{R}^{3\times 3}\), it holds

\[\nabla_{x}\cdot\left(Q\circ\iota_{S_{T},x}\right)(t^{*},x^{*})=\varepsilon^{-1}Q^{\prime}(\iota_{S_{T},x}(t^{*},x^{*}))\,\bar{\nu}\left(t^{*},x^{*}\right)+\nabla_{S(t^{*})_{d_{S}(x^{*})}}\cdot Q(\iota_{S_{T},x}\left(t^{*},x^{*}\right)).\tag{13}\]
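As a sanity check of (7) (ours), consider a hatted function depending only on the fast variable; then the surface-gradient term drops out and \(\nabla_{x}(q\circ\iota)=\varepsilon^{-1}q^{\prime}\bar{\nu}\) can be verified numerically, here for a sphere, where \(\bar{\nu}(x)=x/\left|x\right|=\nabla d_{S}(x)\).

```python
import numpy as np

R, eps = 1.0, 0.05

def d_S(x):                  # signed distance to the sphere |x| = R
    return np.linalg.norm(x) - R

def nu(x):                   # extended unit normal, nu = grad d_S
    return x / np.linalg.norm(x)

q_hat, q_hat_prime = np.sin, np.cos   # profile depending only on z

def q(x):                    # q composed with the interfacial coordinates
    return q_hat(d_S(x) / eps)

def grad_fd(f, x, h=1e-6):   # central finite-difference gradient
    e = np.eye(3)
    return np.array([(f(x + h * e[i]) - f(x - h * e[i])) / (2 * h)
                     for i in range(3)])

x = np.array([0.3, 0.5, 0.83])
lhs = grad_fd(q, x)
rhs = q_hat_prime(d_S(x) / eps) / eps * nu(x)   # the eps^{-1} q' nu term of (7)
print(np.max(np.abs(lhs - rhs)))   # tiny: only finite-difference error remains
```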
Let us further carry out some smaller expansions.
**Lemma 3.3**.: For \(\varphi\in\{\phi,\psi\}\) the following expansions hold
\[\left|\nabla\varphi\right|=\nu\cdot\nabla\varphi=\left(\varepsilon^{-1}\hat{\varphi}_{0}^{\prime}+\hat{\varphi}_{1}^{\prime}+\varepsilon\hat{\varphi}_{2}^{\prime}\right)\circ\iota_{e}+O\left(\varepsilon^{2}\right),\tag{14}\]
\[W\left(\varphi\right)=W\left(\varphi_{0}\right)+\varepsilon W^{\prime}\left(\varphi_{0}\right)\varphi_{1}+\varepsilon^{2}\left(W^{\prime}\left(\varphi_{0}\right)\varphi_{2}+\tfrac{1}{2}W^{\prime\prime}\left(\varphi_{0}\right)\varphi_{1}^{2}\right)+O\left(\varepsilon^{3}\right),\tag{15}\]
\[g_{\varepsilon}\left[\varphi\right]=\varepsilon^{-1}\left(\frac{1}{2}\left(\hat{\varphi}_{0}^{\prime}\right)^{2}\circ\iota_{e}+W\left(\varphi_{0}\right)\right)+(\hat{\varphi}_{0}^{\prime}\hat{\varphi}_{1}^{\prime})\circ\iota_{e}+W^{\prime}\left(\varphi_{0}\right)\varphi_{1}+O\left(\varepsilon\right),\tag{16}\]
\[\left|\nabla\varphi\right|^{-1}=\varepsilon(\hat{\varphi}_{0}^{\prime})^{-1}\circ\iota_{e}-\varepsilon^{2}\frac{\hat{\varphi}_{1}^{\prime}\circ\iota_{e}}{\left(\hat{\varphi}_{0}^{\prime}\right)^{2}\circ\iota_{e}}+O\left(\varepsilon^{3}\right).\tag{17}\]

If \(\varphi\) is the optimal profile at leading order, i.e., \(\hat{\varphi}_{0}^{\prime\prime}\circ\iota_{e}-W^{\prime}\left(\varphi_{0}\right)=0\), we further have

\[H_{\varphi}=\varepsilon^{-1}\hat{\varphi}_{0}^{\prime}\circ\iota_{e}\left(\hat{\varphi}_{0}^{\prime}\circ\iota_{e}\,\bar{H}-\hat{\varphi}_{1}^{\prime\prime}\circ\iota_{e}+W^{\prime\prime}\left(\varphi_{0}\right)\varphi_{1}\right)+O\left(1\right).\tag{18}\]
Proof.: Ad (16): We use (14) to compute
\[g_{\varepsilon}\left[\rho\right]=\varepsilon^{-1}\left(\frac{1}{2}\left(\hat{\rho}_{0}^{\prime}\right)^{2}\circ\iota_{e}+W\left(\rho_{0}\right)\right)+(\hat{\rho}_{0}^{\prime}\hat{\rho}_{1}^{\prime})\circ\iota_{e}+W^{\prime}\left(\rho_{0}\right)\rho_{1}+O\left(\varepsilon\right).\]
Ad (17): Observe,
\[\left|\nabla\left(\rho_{0}+\varepsilon r_{1}\right)\right|^{-1}=\left|\nabla \rho_{0}\right|^{-1}-\varepsilon\frac{\nabla\rho_{0}\cdot\nabla r_{1}}{\left| \nabla\rho_{0}\right|^{3}}+O\left(\varepsilon^{3}\right). \tag{19}\]
Note further that \(\left|\nabla\rho_{0}\right|^{-1}=(\nabla\rho_{0}\cdot\nu)^{-1}=(\varepsilon^{-1}\hat{\rho}_{0}^{\prime}\circ\iota_{e})^{-1}=\varepsilon(\hat{\rho}_{0}^{\prime}\circ\iota_{e})^{-1}\) and \(\nabla r_{1}=\varepsilon^{-1}\hat{\rho}_{1}^{\prime}\circ\iota_{e}\,\nu+O\left(1\right)\), so that

\[\varepsilon\frac{\nabla\rho_{0}\cdot\nabla r_{1}}{\left|\nabla\rho_{0}\right|^{3}}=\frac{\varepsilon^{-1}\hat{\rho}_{0}^{\prime}\circ\iota_{e}\,\hat{\rho}_{1}^{\prime}\circ\iota_{e}+O\left(1\right)}{\left|\nabla\rho_{0}\right|^{3}}\in O\left(\varepsilon^{2}\right).\]
Ad (18): Expand
\[\begin{split}H_{\rho}&=\left|\nabla\rho\right|\left(-\varepsilon\Delta\rho+\varepsilon^{-1}W^{\prime}\left(\rho\right)\right)\\&\overset{(1)}{=}\left(\varepsilon^{-1}\hat{\rho}_{0}^{\prime}\circ\iota_{e}+\hat{\rho}_{1}^{\prime}\circ\iota_{e}+O\left(\varepsilon\right)\right)\left(-\varepsilon^{-1}\hat{\rho}_{0}^{\prime\prime}\circ\iota_{e}+\hat{\rho}_{0}^{\prime}\circ\iota_{e}\,\bar{H}-\hat{\rho}_{1}^{\prime\prime}\circ\iota_{e}+\varepsilon^{-1}W^{\prime}\left(\rho_{0}\right)+W^{\prime\prime}\left(\rho_{0}\right)\rho_{1}+O\left(\varepsilon\right)\right)\\&\overset{(2)}{=}\varepsilon^{-1}\hat{\rho}_{0}^{\prime}\circ\iota_{e}\left(\hat{\rho}_{0}^{\prime}\circ\iota_{e}\,\bar{H}-\hat{\rho}_{1}^{\prime\prime}\circ\iota_{e}+W^{\prime\prime}\left(\rho_{0}\right)\rho_{1}\right)+O\left(1\right),\end{split}\]
where for (1) we employ (8), and for (2) the optimal profile equation.
A common principle, which we will make use of multiple times in the following, is summarised in the next lemma.
**Lemma 3.4**.: Let \(\Gamma\subseteq\Omega\) be a smooth hypersurface. Let \(p\in L^{1}\left(\mathbb{R}\right)\) with
\[\sup_{|t|>s}|p(t)t|\leq\frac{C}{s^{m}}\]
for some \(C\in\left[0,\infty\right)\) and \(m\in\left(0,\infty\right)\), \(f_{\varepsilon}\in C\left(\Omega\right)\), and for all sequences \(x_{\varepsilon}\overset{\epsilon\to 0}{\rightarrow}x\), it holds \(f_{\varepsilon}(x_{\varepsilon})\overset{\epsilon\to 0}{\rightarrow}f(x)\) with \(\left\|f_{\varepsilon}\right\|_{L^{\infty}\left(\Omega\right)}<M\) for some \(M\in\left(0,\infty\right)\) being independent of \(\varepsilon\). Then,
\[\varepsilon^{-1}\int_{\Omega}p\left(\frac{d_{\Gamma}\left(x\right)}{ \varepsilon}\right)f_{\varepsilon}(x)\ \mathrm{d}\mathfrak{B}^{3}(x)\overset{\epsilon\to 0}{ \rightarrow}\int_{-\infty}^{\infty}p(s)\ \mathrm{d}\mathfrak{B}^{1}(s)\int_{\Gamma}f(x)\ \mathrm{d}\mathfrak{S}^{2}(x).\]
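Lemma 3.4 can be illustrated numerically (our sketch, under simplifying assumptions): take \(\Omega=(-1,1)^{3}\), the flat interface \(\Gamma=\{x_{3}=0\}\), \(p(t)=\operatorname{sech}^{2}(t)\) with \(\int_{\mathbb{R}}p=2\), and \(f(x)=x_{1}^{2}+x_{2}^{2}+1+x_{3}\); by linearity the volume integral splits into a trace part and an odd-in-\(x_{3}\) part that vanishes in the limit.

```python
import numpy as np

eps = 0.05
p = lambda t: 1.0 / np.cosh(t) ** 2           # integrable profile, int_R p = 2

h3 = 0.0005                                    # fine 1d grid resolving the layer
g3 = np.arange(-1.0 + h3 / 2, 1.0, h3)         # midpoint-rule nodes
w0 = np.sum(p(g3 / eps)) * h3 / eps            # eps^{-1} int p(x3/eps) dx3    -> 2
w1 = np.sum(p(g3 / eps) * g3) * h3 / eps       # eps^{-1} int p(x3/eps) x3 dx3 -> 0

h = 0.01
g = np.arange(-1.0 + h / 2, 1.0, h)
X1, X2 = np.meshgrid(g, g, indexing="ij")
surf = np.sum(X1**2 + X2**2 + 1.0) * h * h     # int_Gamma f dS = 20/3

lhs = w0 * surf + w1 * 4.0                     # eps^{-1} int_Omega p(x3/eps) f
rhs = 2.0 * surf                               # (int_R p)(int_Gamma f)
print(lhs, rhs)                                # both approach 40/3 as eps -> 0
```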
After the preliminaries are fixed, we shall proceed by analysing the asymptotic behaviour of the solution of (1).
### Outer expansion
We start by investigating the solutions' behaviour away from the boundary layers, i.e., on a set \(\Omega_{\delta}=\Omega\setminus\left(N_{\delta}\left(\Gamma\right)\cup N_{\delta}\left(\Sigma\right)\right)\) for some \(\delta>0\). Let \(\varphi\in\left\{\phi_{\varepsilon},\psi_{\varepsilon}\right\}\) for the following considerations.
Due to the recovery sequence property of the initial data postulated in Assumption 1,
\[\mathcal{F}_{\varepsilon}\left[v_{\varepsilon}\left(0,\cdot\right),\phi_{ \varepsilon}\left(0,\cdot\right),\psi_{\varepsilon}\left(0,\cdot\right),\rho_ {a,\varepsilon}\left(0,\cdot\right)\right]\in O\left(1\right).\]
Further, the sufficiently fast decay of the species densities' time derivative, see Assumption 6, implies

\[\int_{\Omega_{\delta}}\int_{\Omega_{\delta}}g_{\varepsilon}\left[\phi\right](y)\,g_{\varepsilon}\left[\psi\right](x)\,\partial_{\rho_{a}}c\left(x,y,\rho_{a,\varepsilon},\nu_{\psi}\right)\partial_{t}\rho_{a,\varepsilon}\ \mathrm{d}\mathfrak{B}^{3}(x)\ \mathrm{d}\mathfrak{B}^{3}(y)\in O\left(1\right).\]
From (1c) and (1d), we also obtain
\[\varepsilon^{\alpha}\Delta\left(\nabla_{\varphi}^{L^{2}}S_{\Gamma}^{\varepsilon}\left[\varphi\right]+\nabla_{\varphi}^{L^{2}}C_{\varepsilon}\right)\in O\left(1\right),\]

so for \(\alpha<1\), \(\nabla_{\varphi}^{L^{2}}S_{\Gamma}^{\varepsilon}\left[\varphi\right]+\nabla_{\varphi}^{L^{2}}C_{\varepsilon}\in O\left(1\right)\). (Bringing \(\varepsilon^{\alpha}\) to the right, all leading order terms of \(\Delta\left(\nabla_{\varphi}^{L^{2}}S_{\Gamma}^{\varepsilon}\left[\varphi\right]+\nabla_{\varphi}^{L^{2}}C_{\varepsilon}\right)\) from order \(-3\) to \(-1\) have no match on the right hand side and thus have to be zero, following the separation of scales argument. Using that the Neumann boundary conditions (1g) do not depend on \(\varepsilon\), we can thus conclude that all these terms are of order zero.) Comparing the right hand side of (1a) with its left hand side, we conclude
\[\begin{split}&\int_{\Omega_{\delta}}H_{\psi}\left(x\right)\rho_{a,\varepsilon}\left(t,x\right)\mathrm{v}_{\nu_{\psi}}(t,x)\int_{\Omega_{\delta}}g_{\varepsilon}\left[\phi\right](y)\,\partial_{\rho_{a}}c\left(x,y,\rho_{a,\varepsilon},\nu_{\psi}\right)\ \mathrm{d}\mathfrak{B}^{3}(y)\ \mathrm{d}\mathfrak{B}^{3}(x)\\&\quad+\int_{\Omega_{\delta}}\int_{\Omega_{\delta}}g_{\varepsilon}\left[\phi_{\varepsilon}\right](y)\,\nabla\left(\partial_{\rho_{a}}c(\cdot,y,\rho_{a,\varepsilon},\nu_{\psi})\right)\cdot v_{\varepsilon}\,g_{\varepsilon}\left[\psi\right]\rho_{a,\varepsilon}\ \mathrm{d}\mathfrak{B}^{3}(y)\ \mathrm{d}\mathfrak{B}^{3}(x)\in O\left(1\right).\end{split}\]
It thus follows from (3) that
\[\operatorname*{ess\,sup}_{t\in\left[0,T\right]}F_{\varepsilon}\left[\phi_{\varepsilon},\psi_{\varepsilon},\rho_{a}\right]\in O\left(1\right).\]
Therefore, \(\int_{\Omega_{\delta}}\frac{\varepsilon}{2}\left|\nabla\varphi\right|^{2}+\varepsilon^{-1}W\left(\varphi\right)\ \mathrm{d}\mathfrak{B}^{3}\in O\left(1\right)\) for all \(t\in\left[0,T\right]\), and we conclude \(W\left(\varphi\right)\in O\left(\varepsilon\right)\). Inserting the outer expansion of \(\varphi\) into \(W\left(\varphi\right)\) brings
\[W\left(\varphi\right)=\left(\left(\varphi_{0}^{o}\right)^{2}-1\right)^{2}+O \left(\varepsilon\right).\]
Hence, it must hold
\[\left(\left(\varphi_{0}^{o}\right)^{2}-1\right)^{2}|_{\Omega_{\delta}}=0\]
for any \(\delta>0\). This further implies \(\varphi_{0}^{o}|_{\Omega_{\delta}}(t,\cdot)\in\left\{-1,1\right\}\) for all \(t\in\left[0,T\right]\). For the initial data of \(\varphi\), we have (cf. Assumption 2) \(\varphi_{0}^{o}(0,\cdot)=-1\) in \(\Omega_{S}^{-}\) and \(1\) in \(\Omega_{S}^{+}\), \(S\in\left\{\Gamma,\Sigma\right\}\), so that we can argue by continuity in time that
\[\varphi_{0}^{o}|_{\Omega_{S}^{-}\setminus N_{\delta}(S)}(t,\cdot)=-1\text{ and }\varphi_{0}^{o}|_{\Omega_{S}^{+}\setminus N_{\delta}(S)}(t,\cdot)=1\text{ for all }t\in\left[0,T\right], \tag{20}\]
which is the essential result of this paragraph.
### Inner expansion
As there is no danger of confusion, we drop the subscript \(\varepsilon\) on the physical quantities. Let us first note that the result of the previous paragraph can be combined with the principle of asymptotic matching for the phase fields, so that we obtain

\[\left(\lim_{z\nearrow\infty}\hat{\phi}_{0}^{i}(\cdot,\cdot,z)\right)\circ\iota_{e}\left(t,x\right)=\lim_{\begin{subarray}{c}x\to\Gamma\\ x\in\Omega_{\Gamma}^{+}\end{subarray}}\phi_{0}^{o}(t,x)=1\tag{21}\]
for \(x\in N_{\delta}\left(\Gamma\right)\cap\Omega_{\Gamma}^{+}\), i.e., \(d_{\Gamma}\left(x\right)>0\). Analogously,
\[\left(\lim_{z\searrow-\infty}\hat{\phi}_{0}^{i}(\cdot,\cdot,z)\right)\circ\iota_{e}\left(t,x\right)=\lim_{\begin{subarray}{c}x\to\Gamma\\ x\in\Omega_{\Gamma}^{-}\end{subarray}}\phi_{0}^{o}(t,x)=-1\tag{22}\]
for \(x\in N_{\delta}\left(\Gamma\right)\cap\Omega_{\Gamma}^{-}\) and mutatis mutandis for \(\psi\).
An immediate consequence of the matching principle and the fact that the outer expansion (6) contains no coefficients \(q_{\ell}^{o}\) for \(\ell\in\left\{-N,\ldots,-1\right\}\) is

\[\lim_{\left|z\right|\to\infty}\hat{q}_{\ell}^{i}=0\qquad\text{for all }\ell\in\left\{-N,\ldots,-1\right\}\tag{23}\]
of the inner expansion. This also holds for all derivatives as long as they exist.
### Properties of \(\hat{v}\) and \(\hat{p}\) to leading order
Let \(S\in\left\{\Gamma,\Sigma\right\}\). To obtain insight on the higher order coefficients in the expansion of the velocity and the pressure, we exploit the structure of the Navier-Stokes equations (1a), (1b) following [1, p. 486, Section A.1.2].
Due to Assumption 8, \(\nabla_{\psi}^{L^{2}}\mathcal{W}\in\mathcal{O}\left(\epsilon^{-3}\right)\). Thus, for \(N\geq 3\), we have from (1a), at order \(\epsilon^{-N-2}\),
\[-\eta\hat{v}_{-N}^{\prime\prime}\circ\iota_{e}=0.\]
With (23), it further follows \(\hat{v}_{-N}^{\prime}=0\). From \(\hat{v}_{-N}^{\prime}=0\) with (23), we conclude analogously \(\hat{v}_{-N}=0\).
At order \(\epsilon^{-N-1}\), the equation is
\[-\eta\hat{v}_{-N+1}^{\prime\prime}\circ\iota_{e}+\hat{p}_{-N}^{\prime}\circ\iota_{e}\,\bar{\nu}=0.\tag{24}\]
From (1b) we have, using Remark 3.2(10), to leading order \(\epsilon^{-N}\):
\[\hat{v}_{-N+1}^{\prime}\circ\iota_{e}\cdot\bar{\nu}=0.\tag{25}\]
Multiplying (24) by \(\bar{\nu}\), we find with (25)
\[\hat{p}_{-N}^{\prime}\circ\iota_{e}=0.\tag{26}\]
In turn, inserting (26) back into (24), we obtain \(\hat{v}_{-N+1}^{\prime\prime}=0\), and with (23) further \(\hat{v}_{-N+1}^{\prime}=0\). From \(\hat{v}_{-N+1}^{\prime}=0\) with (23), we conclude analogously \(\hat{v}_{-N+1}=0\). Arguing verbatim with (23), (26) implies \(\hat{p}_{-N}=0\).
Repeating the arguments of the previous paragraph, we may from now on assume w.l.o.g. \(\hat{v}_{\ell}=0\) for all \(\ell\leq-3\) and \(\hat{p}_{\ell}=0\) for all \(\ell\leq-4\).
At order \(\epsilon^{-4}\), we have an additional right hand side term
\[\varepsilon^{-4}\left(-\eta\hat{v}_{-2}^{\prime\prime}\circ\iota_{e}+\hat{p}_{-3}^{\prime}\circ\iota_{e}\,\bar{\nu}\right)=\varepsilon^{-1}\left(\nabla_{\phi}^{L^{2}}\mathcal{F}\,\hat{\phi}_{0}^{\prime}\circ\iota_{e}\,\bar{\nu}+\nabla_{\psi}^{L^{2}}\mathcal{F}\,\hat{\psi}_{0}^{\prime}\circ\iota_{e}\,\bar{\nu}\right)-\partial_{\rho_{a}}C_{\phi}H_{\psi}\,\nu_{\psi}\,\hat{\rho}_{a,-1}^{i}.\]

Multiplying again by \(\bar{\nu}\) and noting that due to the previous considerations \(\hat{v}_{-2}^{\prime}\circ\iota_{e}\cdot\bar{\nu}=0\) (25), we have

\[\varepsilon^{-4}\hat{p}_{-3}^{\prime}\circ\iota_{e}=\varepsilon^{-1}\left(\nabla_{\phi}^{L^{2}}\mathcal{F}\,\hat{\phi}_{0}^{\prime}\circ\iota_{e}+\nabla_{\psi}^{L^{2}}\mathcal{F}\,\hat{\psi}_{0}^{\prime}\circ\iota_{e}\right)-\partial_{\rho_{a}}C_{\phi}H_{\psi}\,\nu_{\psi}\cdot\bar{\nu}\,\hat{\rho}_{a,-1}^{i};\]
hence, \(\hat{v}_{-2}^{\prime\prime}=0\) and we may conclude \(\hat{v}_{-2}=0\) as before.
We cannot go further now. However, in Section 3.6 we show that actually \(\nabla_{\varphi}^{L^{2}}\mathcal{W}\in O\left(\varepsilon^{-2}\right)\), and in Section 3.7 that \(\hat{\rho}_{a}^{i}\in O\left(1\right)\) (using only the results on velocity and pressure derived here), which gives \(\hat{p}_{-3}^{\prime}\circ\iota_{e}=0\), and further

\[\varepsilon^{-3}\hat{p}_{-2}^{\prime}\circ\iota_{e}=\varepsilon^{-1}\left(\nabla_{\phi}^{L^{2}}\mathcal{F}\,\hat{\phi}_{0}^{\prime}\circ\iota_{e}+\nabla_{\psi}^{L^{2}}\mathcal{F}\,\hat{\psi}_{0}^{\prime}\circ\iota_{e}\right)-\partial_{\rho_{a}}C_{\phi}H_{\psi}\,\nu_{\psi}\cdot\bar{\nu}\,\hat{\rho}_{a,0}^{i}\]
resulting in \(\hat{v}_{-1}^{\prime\prime}=0\) and \(\hat{v}_{-1}=0\). All together, we can thus state that
\[\hat{v}_{\ell}=0\text{ for all }\ell\in\left\{-N,\ldots,-1\right\},\text{ and }\hat{p}_{\ell}=0\text{ for all }\ell\in\left\{-N,\ldots,-3\right\}. \tag{27}\]
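The mechanism behind these cancellations is always the same (our illustration): an inner coefficient solving \(f^{\prime\prime}=0\) in the fast variable is affine, and the matching conditions (23) first kill the slope and then the constant.

```python
import sympy as sp

z = sp.symbols("z", real=True)
f = sp.Function("f")

# An inner coefficient solving f'' = 0 in the fast variable is affine ...
sol = sp.dsolve(sp.Eq(f(z).diff(z, 2), 0), f(z))
print(sol)   # Eq(f(z), C1 + C2*z)

# ... so boundedness (from matching with the outer expansion) forces C2 = 0,
# and the decay condition (23) then forces C1 = 0 as well: f vanishes.
```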
### Optimal profiles of \(\hat{\psi}\) and \(\hat{\phi}\) to leading order
The leading order of \(\nabla_{\psi}^{L^{2}}\mathcal{G}\) and \(\nabla_{\psi}^{L^{2}}\mathcal{C}\) is at most \(\varepsilon^{-2}\). We consider the evolution law (1d):
\[\partial_{t}\psi+\upsilon\cdot\nabla\psi=\epsilon^{\alpha}\Delta\left(\nabla_{ \psi}^{L^{2}}\mathcal{W}+\nabla_{\psi}^{L^{2}}\mathcal{G}+\nabla_{\psi}^{L^{2}} \mathcal{C}\right).\]
The left hand side is at most of order \(\varepsilon^{-2}\) (since the velocity is at most of order \(\varepsilon^{-1}\), see the previous Section 3.5). So requiring \(\alpha\leq 2\), the leading order terms of \(\varepsilon^{\alpha}\Delta\left(\nabla_{\psi}^{L^{2}}\mathcal{W}\right)\) are of order \(\varepsilon^{-3}\) and must be zero, which is equivalent to the equation
\[\left(\left(\hat{\psi}_{0}^{\prime\prime}-W^{\prime}\left(\hat{\psi}_{0}\right) \right)^{\prime\prime}-\left(\hat{\psi}_{0}^{\prime\prime}-W^{\prime}\left( \hat{\psi}_{0}\right)\right)W^{\prime\prime}\left(\hat{\psi}_{0}\right)\right)^ {\prime\prime}=0.\]
We pose the additional condition \(\hat{\psi}_{0}\left(t,s,0\right)=0\) (otherwise we would have infinitely many solutions, obtained by shifting along the abscissa). Further, we set \(g:=\left(\hat{\psi}_{0}^{\prime\prime}-W^{\prime}\left(\hat{\psi}_{0}\right)\right)^{\prime\prime}-\left(\hat{\psi}_{0}^{\prime\prime}-W^{\prime}\left(\hat{\psi}_{0}\right)\right)W^{\prime\prime}\left(\hat{\psi}_{0}\right)\) and observe that, thanks to the counterparts of (21), (22) for \(\psi\), \(\lim_{\left|z\right|\rightarrow\infty}g=0\). By integration, we obtain
\[0=g^{\prime}(z)-g^{\prime}(0),\]
and sending \(\left|z\right|\rightarrow\infty\) gives \(g^{\prime}(0)=0\). Conclusively, \(g^{\prime}(z)=0\) for all \(z\in\mathbb{R}\). Repeating the argument, we obtain
\[0=g(z)-g(0),\]
send \(\left|z\right|\rightarrow\infty\), conclude \(g(0)=0\) and thus have \(g(z)=0\) for all \(z\in\mathbb{R}\). Setting \(f:=\left(\hat{\psi}_{0}^{\prime\prime}\circ\iota_{\epsilon}-W^{\prime}\left( \psi_{0}\right)\right)\), a solution to
\[\left(\hat{\psi}_{0}^{\prime\prime}-W^{\prime}\left(\hat{\psi}_{0}\right) \right)^{\prime\prime}-\left(\hat{\psi}_{0}^{\prime\prime}-W^{\prime}\left( \hat{\psi}_{0}\right)\right)W^{\prime\prime}\left(\hat{\psi}_{0}\right)=g=0\]
is obviously given by \(f=0\). From
\[f=\hat{\psi}_{0}^{\prime\prime}\circ\iota_{\epsilon}-W^{\prime}\left(\psi_{0} \right)=0, \tag{28}\]
we further conclude with the counterparts of (21), (22) for \(\psi\) that \(\hat{\psi}_{0}\left(z\right)=\tanh\left(\frac{z}{\sqrt{2}}\right).\)
The very same argument applies verbatim for \(\phi\).
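This can be verified symbolically (our check; we assume the normalisation \(W(s)=\frac{1}{4}(1-s^{2})^{2}\), which is the convention consistent with the stated profile):

```python
import sympy as sp

z, s = sp.symbols("z s", real=True)
# Assumed normalisation (ours): W(s) = (1/4)*(1 - s**2)**2; this is the
# convention under which the stated profile tanh(z/sqrt(2)) solves (28).
W = sp.Rational(1, 4) * (1 - s**2) ** 2
Wp = sp.diff(W, s)                                   # W'(s) = s**3 - s

psi0 = sp.tanh(z / sp.sqrt(2))
residual = sp.diff(psi0, z, 2) - Wp.subs(s, psi0)    # optimal profile equation
print(sp.simplify(residual))                         # 0
print(sp.limit(psi0, z, sp.oo), sp.limit(psi0, z, -sp.oo))  # matching: 1, -1
```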
### Properties of \(\hat{\rho}_{a}\) and \(\hat{\rho}_{i}\) to leading order
The following analysis is carried out for \(\hat{\rho}_{a}\); the arguments are the same for \(\hat{\rho}_{i}\). We consider equation (1i) on \(N_{\delta}\left(\Sigma\right)\):
\[g_{\varepsilon}\left[\psi\right]\partial_{t}\rho_{a}-\mathrm{v}_{\nu_{\psi}}H_{\psi}\rho_{a}-\nabla\cdot\left(g_{\varepsilon}\left[\psi\right]\eta_{a}\nabla\rho_{a}\right)+\nabla\cdot\left(g_{\varepsilon}\left[\psi\right]\rho_{a}v_{\varepsilon}\right)=g_{\varepsilon}\left[\psi\right]\mathcal{R}\left[\rho_{a},\rho_{i};\phi,\nu_{\psi}\right].\]
Using (16), the results from Section 3.5, (27), the optimal profile found for \(\psi\) in Section 3.6 together with (18), we have
\[g_{\varepsilon}\left[\psi\right]\partial_{t}\rho_{a},\ \mathrm{v}_{\nu_{\psi}}H_{\psi}\rho_{a},\ \nabla\cdot\left(g_{\varepsilon}\left[\psi\right]\rho_{a}v_{\varepsilon}\right),\ g_{\varepsilon}\left[\psi\right]\mathcal{R}\left[\rho_{a},\rho_{i};\phi,\nu_{\psi}\right]\in O\left(\varepsilon^{-N-2}\right),\]
so to leading order only the terms at \(\varepsilon^{-N-3}\) of the diffusion term matter:
\[\nabla_{x}\cdot\left(\varepsilon^{-N-2}\Big(\frac{1}{2}\big(\hat{\psi}_{0}^{\prime}\big)^{2}+W\big(\hat{\psi}_{0}\big)\Big)\circ\iota_{e}\,\eta_{a}\,\hat{\rho}_{a,-N}^{\prime}\circ\iota_{e}\,\bar{\nu}\right)=\varepsilon^{-N-3}\Big(\Big(\frac{1}{2}\big(\hat{\psi}_{0}^{\prime}\big)^{2}+W\big(\hat{\psi}_{0}\big)\Big)\eta_{a}\hat{\rho}_{a,-N}^{\prime}\Big)^{\prime}+O\left(\varepsilon^{-N-2}\right)=0.\]

Thus, \(\Big(\frac{1}{2}\big(\hat{\psi}_{0}^{\prime}\big)^{2}+W\big(\hat{\psi}_{0}\big)\Big)\eta_{a}\hat{\rho}_{a,-N}^{\prime}\) has to be constant in \(z\). However, \(\frac{1}{2}\big(\hat{\psi}_{0}^{\prime}\big)^{2}+W\big(\hat{\psi}_{0}\big)\) decays due to the counterparts of (21), (22) for \(\psi\). Simultaneously, \(\hat{\rho}_{a,-N}^{\prime}\) decays as \(\left|z\right|\rightarrow\infty\), see (23). Thus, it must even hold

\[\Big(\frac{1}{2}\big(\hat{\psi}_{0}^{\prime}\big)^{2}+W\big(\hat{\psi}_{0}\big)\Big)\,\eta_{a}\hat{\rho}_{a,-N}^{\prime}=0,\]

and so \(\hat{\rho}_{a,-N}^{\prime}(s,z)=0\) for all \(s,z\). Consequently, \(\hat{\rho}_{a,-N}\) is constant in \(z\). Leveraging (23) again, it follows \(\hat{\rho}_{a,-N}=0\). We may repeat this argument and find

\[\hat{\rho}_{a,\ell}=0\qquad\text{for all }\ell\in\{-N,\ldots,-1\}.\tag{29}\]
Finally, we have to leading order:
\[\Big(\Big(\frac{1}{2}\big(\hat{\psi}_{0}^{\prime}\big)^{2}+W\big(\hat{\psi}_{0}\big)\Big)\,\eta_{a}\hat{\rho}_{a,0}^{\prime}\,\bar{\nu}\Big)^{\prime}\cdot\bar{\nu}=0,\]
and conclude
\[\hat{\rho}_{a,0}^{\prime}(s,z)=0\qquad\text{for all }s,z.\tag{30}\]
### Further properties of \(\hat{\phi}\) and \(\hat{\psi}\)
The expansion of the Willmore energy gradient in interfacial coordinates shall be
\[\widehat{\nabla_{\varphi}^{L^{2}}\mathcal{W}}=\sum_{k=-3}^{\infty}\varepsilon^{k }\hat{e}_{k}(s,z),\]
and in original coordinates
\[\nabla_{\varphi}^{L^{2}}\mathcal{W}=\sum_{k=-3}^{\infty}\varepsilon^{k}e_{k} \left(\pi_{\Phi}\left(x\right),\frac{d_{\Phi}\left(x\right)}{\varepsilon} \right). \tag{31}\]
We are going to show that \(\hat{e}_{-3}=\hat{e}_{-2}=\hat{e}_{-1}=0\) by dint of the energy inequality. Afterwards, we will see that important properties of \(\hat{\varphi}_{0}\), \(\hat{\varphi}_{1}\), and \(\hat{\varphi}_{2}\) follow from these equations, which we will use when passing to the limit in Section 4. Before going on, let us calculate
\[\nabla_{\varphi}^{L^{2}}\mathcal{W}\left[\varphi\right]=-\Delta\left(\mu \left[\varphi\right]\right)+\varepsilon^{-2}\mu\left[\varphi\right]W^{\prime \prime}\left(\varphi\right).\]
Thanks to the optimal profiles at leading order for both phase fields, cf. Section 3.6, we have \(\nabla_{\varphi}^{L^{2}}\mathcal{W}\left[\varphi\right]\in O\left(\varepsilon ^{-2}\right)\), and also \(\nabla_{\varphi}^{L^{2}}\mathcal{G}\left[\varphi\right]\in O\left(1\right)\). We note further
\[\nabla_{\phi}^{L^{2}}C(y)=\mu\left[\phi\right](y)\,C_{\psi}(y)-\varepsilon\int_{\Omega}g_{\varepsilon}\left[\psi\right](x)\,\nabla_{y}\phi\cdot\nabla_{y}c\left(x,y,\rho_{a},\nu_{\psi}\right)\ \mathrm{d}\mathfrak{B}^{3}(x)\in O\left(1\right),\]
which follows from the optimal profile of \(\phi\) and \(\psi\) to leading order combined with Lemma 3.4. (The optimal profiles allow for showing the decaying condition that is the main prerequisite of Lemma 3.4.) For
\[\begin{split}\nabla_{\psi}^{L^{2}}C(x)&=\mu\left[\psi\right](x)\,C_{\phi}(x)-\int_{\Omega}\varepsilon g_{\varepsilon}\left[\phi\right](y)\,\nabla_{x}\left(c\left(x,y,\rho_{a},\nu_{\psi}\right)\right)\cdot\nabla_{x}\psi\ \mathrm{d}\mathfrak{B}^{3}(y)\\&\quad-\int_{\Omega}g_{\varepsilon}\left[\phi\right](y)\,\nabla_{x}\cdot\left(g_{\varepsilon}\left[\psi\right](x)\nabla_{y}c\left(x,y,\rho_{a},\nu_{\psi}\right)^{T}\frac{1}{\left|\nabla\psi\right|}\mathbb{P}_{\nu_{\psi}}\right)\ \mathrm{d}\mathfrak{B}^{3}(y)\in O\left(1\right),\end{split}\]

we have to additionally consider (57), and note (58), as well as \(\frac{1}{\left|\nabla\psi\right|}\mathbb{P}_{\nu_{\psi}}\in O\left(\varepsilon\right)\). We conclude \(\frac{1}{\left|N_{\delta}(S)\right|}\int_{N_{\delta}(S)}\nabla_{\varphi}F\ \mathrm{d}\mathfrak{B}^{3}\in O\left(\varepsilon^{-1}\right)\) (again leveraging Lemma 3.4 and using the optimal profiles of \(\phi\) and \(\psi\) to verify the prerequisites). The energy inequality (3) additionally gives us

\[\int_{0}^{T}\int_{N_{\delta}(S)}\varepsilon^{\alpha}\left|\nabla\nabla_{\varphi}F\right|^{2}\ \mathrm{d}\mathfrak{B}^{3}\ \mathrm{d}\mathfrak{B}^{1}\in O\left(1\right).\]

Note that we can restrict to \(N_{\delta}\left(S\right)\) since the energies and their \(L^{2}\)-gradients are zero outside to leading order. Applying the Poincaré–Wirtinger inequality, we deduce

\[\left(\int_{0}^{T}\int_{N_{\delta}(S)}\varepsilon^{\alpha}\left(\nabla_{\varphi}F-\frac{1}{\left|N_{\delta}(S)\right|}\int_{N_{\delta}(S)}\nabla_{\varphi}F\ \mathrm{d}\mathfrak{B}^{3}\right)^{2}\ \mathrm{d}\mathfrak{B}^{3}\ \mathrm{d}\mathfrak{B}^{1}\right)^{\frac{1}{2}}\leq C\left(\int_{0}^{T}\int_{N_{\delta}(S)}\varepsilon^{\alpha}\left|\nabla\nabla_{\varphi}F\right|^{2}\ \mathrm{d}\mathfrak{B}^{3}\ \mathrm{d}\mathfrak{B}^{1}\right)^{\frac{1}{2}},\]

which implies, using the reversed triangle inequality for \(\left\|\cdot\right\|_{L^{2}\left(N_{\delta}(S)\right)}\),

\[\left(\int_{0}^{T}\int_{N_{\delta}(S)}\varepsilon^{\alpha}\left|\nabla_{\varphi}F\right|^{2}\ \mathrm{d}\mathfrak{B}^{3}\ \mathrm{d}\mathfrak{B}^{1}\right)^{\frac{1}{2}}-\sqrt{\left|N_{\delta}\left(S\right)\right|}\,\varepsilon^{\frac{\alpha}{2}}\,\frac{1}{\left|N_{\delta}(S)\right|}\int_{N_{\delta}(S)}\nabla_{\varphi}F\ \mathrm{d}\mathfrak{B}^{3}\in O\left(1\right),\tag{32}\]

thus

\[\left(\int_{0}^{T}\int_{N_{\delta}(S)}\left|\nabla_{\varphi}F\right|^{2}\ \mathrm{d}\mathfrak{B}^{3}\ \mathrm{d}\mathfrak{B}^{1}\right)^{\frac{1}{2}}-\sqrt{\left|N_{\delta}\left(S\right)\right|}\,\frac{1}{\left|N_{\delta}(S)\right|}\int_{N_{\delta}(S)}\nabla_{\varphi}F\ \mathrm{d}\mathfrak{B}^{3}\in O\left(\varepsilon^{-\frac{\alpha}{2}}\right),\tag{33}\]

so

\[\int_{0}^{T}\int_{N_{\delta}(S)}\left|\nabla_{\varphi}F\right|^{2}\ \mathrm{d}\mathfrak{B}^{3}\ \mathrm{d}\mathfrak{B}^{1}\in O\left(\varepsilon^{-2}\right)\]

for \(\alpha\leq 2\). By applying Young's inequality, we can deduce further

\[\int_{N_{\delta}(S)}\left|\nabla_{\varphi}F\right|^{2}\ \mathrm{d}\mathfrak{B}^{3}=\int_{N_{\delta}(S)}\left|\nabla_{\varphi}^{L^{2}}C+\nabla_{\varphi}^{L^{2}}\mathcal{G}\right|^{2}+2\nabla_{\varphi}^{L^{2}}\mathcal{W}\cdot\left(\nabla_{\varphi}^{L^{2}}C+\nabla_{\varphi}^{L^{2}}\mathcal{G}\right)+\left|\nabla_{\varphi}^{L^{2}}\mathcal{W}\right|^{2}\ \mathrm{d}\mathfrak{B}^{3},\]

which in turn implies

\[\frac{1}{2}\int_{N_{\delta}(S)}\left|\nabla_{\varphi}^{L^{2}}\mathcal{W}\right|^{2}\ \mathrm{d}\mathfrak{B}^{3}\leq\int_{N_{\delta}(S)}\left|\nabla_{\varphi}F\right|^{2}+3\left|\nabla_{\varphi}^{L^{2}}C+\nabla_{\varphi}^{L^{2}}\mathcal{G}\right|^{2}\ \mathrm{d}\mathfrak{B}^{3},\]

so with the co-area formula, it follows

\[\varepsilon\int_{0}^{T}\int_{-\frac{\delta}{\varepsilon}}^{\frac{\delta}{\varepsilon}}\int_{\Phi_{\varepsilon z}}\left|\widehat{\nabla_{\varphi}^{L^{2}}\mathcal{W}}\right|^{2}\ \mathrm{d}\mathfrak{G}^{2}\ \mathrm{d}\mathfrak{B}^{1}(z)\ \mathrm{d}\mathfrak{B}^{1}\in O\left(\varepsilon^{-2}\right).\tag{34}\]
From (31), the expansion
\[\begin{split}\left|\widehat{\nabla_{\varphi}^{L^{2}}\mathcal{W}}\right|^{2}&=\varepsilon^{-6}\hat{e}_{-3}^{2}+\varepsilon^{-5}\,2\hat{e}_{-3}\hat{e}_{-2}+\varepsilon^{-4}\left(\hat{e}_{-2}^{2}+2\hat{e}_{-3}\hat{e}_{-1}\right)+\varepsilon^{-3}\left(2\hat{e}_{-3}\hat{e}_{0}+2\hat{e}_{-2}\hat{e}_{-1}\right)\\&\quad+\varepsilon^{-2}\left(\hat{e}_{-1}^{2}+2\hat{e}_{-2}\hat{e}_{0}+2\hat{e}_{-3}\hat{e}_{1}\right)+O\left(\varepsilon^{-1}\right)\\&=\sum_{k=-6}^{-2}\varepsilon^{k}f_{k}(s,z)+O\left(\varepsilon^{-1}\right)\end{split}\]
of the integrand follows directly. Equation (34) then requires
\[\int_{0}^{T}\int_{-\infty}^{\infty}\int_{\Phi}f_{k}\,\,\mathrm{d}\mathfrak{G} ^{2}\,\,\mathrm{d}\mathfrak{B}^{1}\,\,\mathrm{d}\mathfrak{B}^{1}=0\]
up to \(k\leq-4\), so
\[\hat{e}_{-3}^{2}=f_{-6}=0\text{ a.e.}\ \Rightarrow\ \hat{e}_{-3}=0\text{ a.e.}\ \Rightarrow\ \hat{e}_{-2}^{2}=f_{-4}=0\text{ a.e.}\ \Rightarrow\ \hat{e}_{-2}=0\text{ a.e.}\]
This in turn gives \(\nabla_{\varphi}F\in O\left(\varepsilon^{-1}\right)\), so \(\frac{1}{\left|N_{\delta}(S)\right|}\int_{N_{\delta}(S)}\nabla_{\varphi}F\ \mathrm{d}\mathfrak{B}^{3}\in O\left(1\right)\), which we insert into (32), and choose \(\alpha<1\) to obtain

\[\int_{N_{\delta}(S)}\left|\nabla_{\varphi}F\right|^{2}\ \mathrm{d}\mathfrak{B}^{3}\in o\left(\varepsilon^{-1}\right);\]
hence,
\[\varepsilon\int_{0}^{T}\int_{-\frac{\delta}{\varepsilon}}^{\frac{\delta}{\varepsilon}}\int_{\Phi_{\varepsilon z}}\left|\widehat{\nabla_{\varphi}^{L^{2}}\mathcal{W}}\right|^{2}\ \mathrm{d}\mathfrak{G}^{2}\ \mathrm{d}\mathfrak{B}^{1}(z)\ \mathrm{d}\mathfrak{B}^{1}\in o\left(\varepsilon^{-2}\right),\]
so that \(\hat{e}_{-1}=0\).
Now that we have found equations \(\hat{e}_{-1}=0\), \(\hat{e}_{-2}=0\), and \(\hat{e}_{-3}=0\), we may derive information on \(\hat{\varphi}\) from them.
#### 3.8.1 Expansion of the \(L^{2}\)-gradient of the Willmore energy
We recall that
\[\nabla^{L^{2}}_{\varphi}\mathcal{W}=-\Delta\left(\mu\left[\varphi\right] \right)+\epsilon^{-2}\mu\left[\varphi\right]W^{\prime\prime}\left(\varphi \right). \tag{35}\]
First, we expand the chemical potential,
\[\mu\left[\varphi\right]=-\varepsilon\Delta\varphi+\varepsilon^{-1}W^{\prime}\left(\varphi\right),\qquad\varphi=\varphi_{0}+\varepsilon\varphi_{1}+\varepsilon^{2}\varphi_{2}+\varepsilon^{3}\varphi_{3}+O\left(\varepsilon^{4}\right),\]
by expanding the Laplacian term:
\[\begin{split}\varepsilon\Delta\varphi&=\varepsilon^{-1}\hat{\varphi}_{0}^{\prime\prime}\circ\iota_{e}-\hat{\varphi}_{0}^{\prime}\circ\iota_{e}\,\bar{H}+\varepsilon\Delta_{\Phi_{\varepsilon z}}\varphi_{0}\\&\quad+\hat{\varphi}_{1}^{\prime\prime}\circ\iota_{e}-\varepsilon\hat{\varphi}_{1}^{\prime}\circ\iota_{e}\,\bar{H}+\varepsilon^{2}\Delta_{\Phi_{\varepsilon z}}\varphi_{1}\\&\quad+\varepsilon\hat{\varphi}_{2}^{\prime\prime}\circ\iota_{e}-\varepsilon^{2}\hat{\varphi}_{2}^{\prime}\circ\iota_{e}\,\bar{H}\\&\quad+\varepsilon^{2}\hat{\varphi}_{3}^{\prime\prime}\circ\iota_{e}+O\left(\varepsilon^{3}\right)\end{split}\]
and the double well potential's first derivative:
\[\begin{split}\varepsilon^{-1}W^{\prime}\left(\varphi\right)&=\varepsilon^{-1}W^{\prime}\left(\varphi_{0}\right)+W^{\prime\prime}\left(\varphi_{0}\right)\left(\varphi_{1}+\varepsilon\varphi_{2}+\varepsilon^{2}\varphi_{3}\right)+\varepsilon\,\tfrac{1}{2}W^{\left(3\right)}\left(\varphi_{0}\right)\left(\varphi_{1}^{2}+2\varepsilon\varphi_{1}\varphi_{2}\right)+\varepsilon^{2}\,\tfrac{1}{6}W^{\left(4\right)}\left(\varphi_{0}\right)\varphi_{1}^{3}+O\left(\varepsilon^{3}\right)\\&=\varepsilon^{-1}W^{\prime}\left(\varphi_{0}\right)+W^{\prime\prime}\left(\varphi_{0}\right)\varphi_{1}+\varepsilon\left(W^{\prime\prime}\left(\varphi_{0}\right)\varphi_{2}+\tfrac{1}{2}W^{\left(3\right)}\left(\varphi_{0}\right)\varphi_{1}^{2}\right)\\&\quad+\varepsilon^{2}\left(W^{\prime\prime}\left(\varphi_{0}\right)\varphi_{3}+W^{\left(3\right)}\left(\varphi_{0}\right)\varphi_{1}\varphi_{2}+\tfrac{1}{6}W^{\left(4\right)}\left(\varphi_{0}\right)\varphi_{1}^{3}\right)+O\left(\varepsilon^{3}\right).\end{split}\]
The expansion of the chemical potential then reads
\[\mu\left[\varphi\right]=\varepsilon^{-1}\mu_{-1}\left[\varphi\right]+\mu_{0}\left[\varphi\right]+\varepsilon\mu_{1}\left[\varphi\right]+\varepsilon^{2}\mu_{2}\left[\varphi\right]+O\left(\varepsilon^{3}\right)\tag{36}\]
with
\[\begin{split}\mu_{-1}\left[\varphi\right]&=-\hat{\varphi}_{0}^{\prime\prime}\circ\iota_{e}+W^{\prime}\left(\varphi_{0}\right),\\ \mu_{0}\left[\varphi\right]&=\hat{\varphi}_{0}^{\prime}\circ\iota_{e}\,\bar{H}-\hat{\varphi}_{1}^{\prime\prime}\circ\iota_{e}+W^{\prime\prime}\left(\varphi_{0}\right)\varphi_{1},\\ \mu_{1}\left[\varphi\right]&=-\Delta_{\Phi_{\varepsilon z}}\varphi_{0}+\hat{\varphi}_{1}^{\prime}\circ\iota_{e}\,\bar{H}-\hat{\varphi}_{2}^{\prime\prime}\circ\iota_{e}+W^{\prime\prime}\left(\varphi_{0}\right)\varphi_{2}+\tfrac{1}{2}W^{\left(3\right)}\left(\varphi_{0}\right)\varphi_{1}^{2},\\ \mu_{2}\left[\varphi\right]&=-\Delta_{\Phi_{\varepsilon z}}\varphi_{1}+\hat{\varphi}_{2}^{\prime}\circ\iota_{e}\,\bar{H}-\hat{\varphi}_{3}^{\prime\prime}\circ\iota_{e}+W^{\prime\prime}\left(\varphi_{0}\right)\varphi_{3}+W^{\left(3\right)}\left(\varphi_{0}\right)\varphi_{1}\varphi_{2}+\tfrac{1}{6}W^{\left(4\right)}\left(\varphi_{0}\right)\varphi_{1}^{3}.\end{split}\tag{37}\]
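The coefficients \(\mu_{-1}\) and \(\mu_{0}\) can be reproduced with a small symbolic computation (ours), feeding the inner-coordinate Laplacian (8), truncated to the displayed orders and with surface terms omitted, into \(\mu\left[\varphi\right]=-\varepsilon\Delta\varphi+\varepsilon^{-1}W^{\prime}\left(\varphi\right)\):

```python
import sympy as sp

eps, z = sp.symbols("epsilon z")
H = sp.Symbol("Hbar")                      # extended mean curvature, kept symbolic
phi0 = sp.Function("phi0")(z)              # leading inner coefficient
phi1 = sp.Function("phi1")(z)              # first-order inner coefficient
Wp = sp.Function("Wp")                     # W'

phi = phi0 + eps * phi1
lap = phi.diff(z, 2) / eps**2 - H * phi.diff(z) / eps   # cf. (8)
mu = -eps * lap + Wp(phi) / eps

ser = sp.expand(mu.series(eps, 0, 1).removeO())
print(sp.simplify(ser.coeff(eps, -1)))   # -phi0'' + Wp(phi0)                = mu_{-1}
print(sp.simplify(ser.coeff(eps, 0)))    # H*phi0' - phi1'' + Wp'(phi0)*phi1 = mu_0
```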
Expansion of \(\Delta\left(\mu\left[\varphi\right]\right)\)
We may rewrite \(\mu_{i}\left[\varphi\right]=\hat{\mu}_{i}\left[\hat{\varphi}\right]\circ\iota _{e}\) and treat the Laplacian terms \(\Delta\left(\mu_{i}\left[\varphi\right]\right)\) with (8):
\[\Delta\left(\mu_{i}\left[\varphi\right]\right)=\varepsilon^{-2}\hat{\mu}_{i}\left[\hat{\varphi}\right]^{\prime\prime}\circ\iota_{e}-\varepsilon^{-1}\hat{\mu}_{i}\left[\hat{\varphi}\right]^{\prime}\circ\iota_{e}\,\bar{H}+\Delta_{\Phi_{\varepsilon z}}\left(\mu_{i}\left[\varphi\right]\right)\]
giving
\[\begin{split}\Delta\left(\mu\left[\varphi\right]\right)&=\varepsilon^{-3}\hat{\mu}_{-1}\left[\hat{\varphi}\right]^{\prime\prime}\circ\iota_{e}\\&\quad+\varepsilon^{-2}\left(-\hat{\mu}_{-1}\left[\hat{\varphi}\right]^{\prime}\circ\iota_{e}\,\bar{H}+\hat{\mu}_{0}\left[\hat{\varphi}\right]^{\prime\prime}\circ\iota_{e}\right)\\&\quad+\varepsilon^{-1}\left(\Delta_{\Phi_{\varepsilon z}}\left(\mu_{-1}\left[\varphi\right]\right)-\hat{\mu}_{0}\left[\hat{\varphi}\right]^{\prime}\circ\iota_{e}\,\bar{H}+\hat{\mu}_{1}\left[\hat{\varphi}\right]^{\prime\prime}\circ\iota_{e}\right)\\&\quad+\Delta_{\Phi_{\varepsilon z}}\left(\mu_{0}\left[\varphi\right]\right)-\hat{\mu}_{1}\left[\hat{\varphi}\right]^{\prime}\circ\iota_{e}\,\bar{H}+\hat{\mu}_{2}\left[\hat{\varphi}\right]^{\prime\prime}\circ\iota_{e}\\&\quad+O\left(\varepsilon\right).\end{split}\]
Expansion of \(\varepsilon^{-2}W^{\prime\prime}\left(\varphi\right)\mu\left[\varphi\right]\)
For obtaining the expansion of \(\varepsilon^{-2}W^{\prime\prime}\left(\varphi\right)\mu\left[\varphi\right]\), the remaining ingredient is an expansion of \(W^{\prime\prime}\left(\varphi\right)\):
\[\begin{split}\varepsilon^{-2}W^{\prime\prime}\left(\varphi\right)&=\varepsilon^{-2}W^{\prime\prime}\left(\varphi_{0}\right)+\varepsilon^{-1}W^{\left(3\right)}\left(\varphi_{0}\right)\varphi_{1}+W^{\left(3\right)}\left(\varphi_{0}\right)\varphi_{2}+\tfrac{1}{2}W^{\left(4\right)}\left(\varphi_{0}\right)\varphi_{1}^{2}\\&\quad+\varepsilon\left(W^{\left(3\right)}\left(\varphi_{0}\right)\varphi_{3}+W^{\left(4\right)}\left(\varphi_{0}\right)\varphi_{1}\varphi_{2}+\tfrac{1}{6}W^{\left(5\right)}\left(\varphi_{0}\right)\varphi_{1}^{3}\right)+O\left(\varepsilon^{2}\right).\end{split}\tag{38}\]
Multiplication of (36) and (38) gives
\[\begin{split}\varepsilon^{-2}W^{\prime\prime}\left(\varphi\right)\mu\left[\varphi\right]&=\varepsilon^{-3}\,\mu_{-1}\left[\varphi\right]W^{\prime\prime}\left(\varphi_{0}\right)\\&\quad+\varepsilon^{-2}\left(\mu_{0}\left[\varphi\right]W^{\prime\prime}\left(\varphi_{0}\right)+\mu_{-1}\left[\varphi\right]W^{\left(3\right)}\left(\varphi_{0}\right)\varphi_{1}\right)\\&\quad+\varepsilon^{-1}\left(\mu_{1}\left[\varphi\right]W^{\prime\prime}\left(\varphi_{0}\right)+\mu_{0}\left[\varphi\right]W^{\left(3\right)}\left(\varphi_{0}\right)\varphi_{1}+\mu_{-1}\left[\varphi\right]\left(W^{\left(3\right)}\left(\varphi_{0}\right)\varphi_{2}+\tfrac{1}{2}W^{\left(4\right)}\left(\varphi_{0}\right)\varphi_{1}^{2}\right)\right)\\&\quad+\mu_{2}\left[\varphi\right]W^{\prime\prime}\left(\varphi_{0}\right)+\mu_{1}\left[\varphi\right]W^{\left(3\right)}\left(\varphi_{0}\right)\varphi_{1}+\mu_{0}\left[\varphi\right]\left(W^{\left(3\right)}\left(\varphi_{0}\right)\varphi_{2}+\tfrac{1}{2}W^{\left(4\right)}\left(\varphi_{0}\right)\varphi_{1}^{2}\right)\\&\quad+\mu_{-1}\left[\varphi\right]\left(W^{\left(3\right)}\left(\varphi_{0}\right)\varphi_{3}+W^{\left(4\right)}\left(\varphi_{0}\right)\varphi_{1}\varphi_{2}+\tfrac{1}{6}W^{\left(5\right)}\left(\varphi_{0}\right)\varphi_{1}^{3}\right)\\&\quad+O\left(\varepsilon\right).\end{split}\]
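The collection of orders in this product is plain series bookkeeping; the following sympy sketch (ours) multiplies the two truncated series with symbolic placeholders \(\mu_{k}\) and \(w_{k}\) (the latter abbreviating the coefficients of (38)) and prints the coefficients from \(\varepsilon^{-3}\) to \(\varepsilon^{0}\):

```python
import sympy as sp

eps = sp.symbols("epsilon")
m = {k: sp.Symbol(f"mu_{k}") for k in range(-1, 3)}   # mu_{-1}, ..., mu_2
w = {k: sp.Symbol(f"w_{k}") for k in range(0, 4)}     # coefficients of (38)

mu_series = sum(eps**k * m[k] for k in range(-1, 3))
w_series = eps**-2 * sum(eps**k * w[k] for k in range(0, 4))

prod = sp.expand(mu_series * w_series)
for k in range(-3, 1):
    # e.g. k = -3 prints mu_{-1}*w_0, matching the display above
    print(k, prod.coeff(eps, k))
```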
Finally, we draw the following conclusions for \(\hat{\varphi}_{0}\), \(\hat{\varphi}_{1}\), and \(\hat{\varphi}_{2}\) by evaluating the equations \(\hat{e}_{i}=0\), for \(i\in\{-1,-2,-3\}\):
* \(\hat{e}_{-3}=0\): This is an equation we have already encountered in Section 3.6; it confirms the optimal profile \(\hat{\varphi}_{0}(s,z)=\tanh\left(\frac{z}{\sqrt{2}}\right)\).
* \(\hat{e}_{-2}=0\): We use \[0=-\hat{\mu}_{0}\left[\hat{\varphi}\right]^{\prime\prime}\circ\iota_{e}+\mu_{0}\left[\varphi\right]W^{\prime\prime}\left(\varphi_{0}\right),\tag{39}\] and compute \[-\hat{\mu}_{0}\left[\hat{\varphi}\right]^{\prime\prime}=-\left(\hat{\varphi}_{0}^{\prime}\hat{H}\right)^{\prime\prime}+\left(\hat{\varphi}_{1}^{\prime\prime}-W^{\prime\prime}\left(\hat{\varphi}_{0}\right)\hat{\varphi}_{1}\right)^{\prime\prime}=-\hat{\varphi}_{0}^{\left(3\right)}\hat{H}-2\hat{\varphi}_{0}^{\prime\prime}\hat{H}^{\prime}-\hat{\varphi}_{0}^{\prime}\hat{H}^{\prime\prime}+\left(\hat{\varphi}_{1}^{\prime\prime}-W^{\prime\prime}\left(\hat{\varphi}_{0}\right)\hat{\varphi}_{1}\right)^{\prime\prime}\tag{40}\] with \[\hat{H}^{\prime}(s,z)=\partial_{z}\left(\bar{H}\left(s+\varepsilon z\nu_{S}\left(s\right)\right)\right)=\varepsilon\nabla\bar{H}\left(s+\varepsilon z\nu_{S}\left(s\right)\right)\cdot\nu_{S}\left(s\right)\tag{41}\] and \[\hat{H}^{\prime\prime}(s,z)=\partial_{z}^{2}\left(\bar{H}\left(s+\varepsilon z\nu_{S}\left(s\right)\right)\right)=\varepsilon^{2}\nabla^{2}\bar{H}\left(s+\varepsilon z\nu_{S}\left(s\right)\right)\cdot\nu_{S}\left(s\right)\otimes\nu_{S}\left(s\right).\tag{42}\]
Passing the last two terms to lower scales, we obtain from (39), \[-\hat{H}\hat{\varphi}_{0}^{\left(3\right)}\circ\iota_{e}+\left(\hat{\varphi}_{1}^{\prime\prime}-W^{\prime\prime}\left(\hat{\varphi}_{0}\right)\hat{\varphi}_{1}\right)^{\prime\prime}\circ\iota_{e}-\left(-\hat{H}\hat{\varphi}_{0}^{\prime}\circ\iota_{e}+\hat{\varphi}_{1}^{\prime\prime}\circ\iota_{e}-W^{\prime\prime}\left(\varphi_{0}\right)\varphi_{1}\right)W^{\prime\prime}\left(\varphi_{0}\right)=0.\] We note that from the optimal profile property \(\hat{\varphi}_{0}^{\prime\prime}\circ\iota_{e}-W^{\prime}\left(\varphi_{0}\right)=0\), the relation \(\hat{\varphi}_{0}^{\left(3\right)}=\hat{\varphi}_{0}^{\prime}W^{\prime\prime}\left(\hat{\varphi}_{0}\right)\) directly follows, so we may further simplify: \[\left(\hat{\varphi}_{1}^{\prime\prime}-W^{\prime\prime}\left(\hat{\varphi}_{0}\right)\hat{\varphi}_{1}\right)^{\prime\prime}\circ\iota_{e}-\left(\hat{\varphi}_{1}^{\prime\prime}\circ\iota_{e}-W^{\prime\prime}\left(\varphi_{0}\right)\varphi_{1}\right)W^{\prime\prime}\left(\varphi_{0}\right)=0,\] which is solved by \[\hat{\varphi}_{1}=0.\tag{43}\]
* \(\hat{e}_{-1}=0\): The equation \[0=\hat{H}\hat{\mu}_{0}\left[\hat{\varphi}\right]^{\prime}\circ\iota_{e}-\hat{\mu}_{1}\left[\hat{\varphi}\right]^{\prime\prime}\circ\iota_{e}+\mu_{1}\left[\varphi\right]W^{\prime\prime}\left(\varphi_{0}\right)-2\hat{\varphi}_{0}^{\prime\prime}\circ\iota_{e}\,\nabla\bar{H}\cdot\bar{\nu}\left(x\right)\] is equivalent to \[0=\hat{H}^{2}\hat{\varphi}_{0}^{\prime\prime}\circ\iota_{e}-\hat{\mu}_{1}\left[\hat{\varphi}\right]^{\prime\prime}\circ\iota_{e}+\mu_{1}\left[\varphi\right]W^{\prime\prime}\left(\varphi_{0}\right)-2\hat{\varphi}_{0}^{\prime\prime}\circ\iota_{e}\,\nabla\bar{H}\cdot\bar{\nu}\left(x\right)+\varepsilon\hat{H}\hat{\varphi}_{0}^{\prime}\circ\iota_{e}\,\nabla\bar{H}\cdot\bar{\nu}\tag{44}\] using \(\hat{\varphi}_{1}=0\), so that \(\mu_{0}\left[\varphi\right]=\hat{\varphi}_{0}^{\prime}\circ\iota_{e}\,\hat{H}\). We use Lemma 3.1, abbreviating \(\hat{H}\left(s,0\right)=:\hat{H}\big{|}_{S}\left(s\right)\): \[\nabla\bar{H}\big{|}_{x}\cdot\bar{\nu}\left(x\right)=\left(\sum_{i=1}^{2}\hat{\kappa}_{i}^{2}+2\varepsilon z\hat{\kappa}_{i}^{3}+O\left(\varepsilon^{2}\right)\right)\circ\iota_{S,x}\left(x\right)=\left(\hat{H}\big{|}_{S}^{2}-2\hat{K}\big{|}_{S}+2\varepsilon z\left(\hat{H}\big{|}_{S}^{3}-3\hat{H}\big{|}_{S}\hat{K}\big{|}_{S}\right)+O\left(\varepsilon^{2}\right)\right)\circ\iota_{S,x}\left(x\right)\tag{45}\] and \[\bar{H}\left(x\right)^{2}=\left(\hat{H}\big{|}_{S}+\varepsilon z\left(\hat{H}\big{|}_{S}^{2}-2\hat{K}\big{|}_{S}\right)+O\left(\varepsilon^{2}\right)\right)^{2}\circ\iota_{S,x}\left(x\right)=\left(\hat{H}\big{|}_{S}^{2}+2\varepsilon z\hat{H}\big{|}_{S}\left(\hat{H}\big{|}_{S}^{2}-2\hat{K}\big{|}_{S}\right)+O\left(\varepsilon^{2}\right)\right)\circ\iota_{S,x}\left(x\right).\tag{46}\] Passing all terms of lower order to the lower scales, we obtain \[0=\hat{H}\big{|}_{S}^{2}\hat{\varphi}_{0}^{\prime\prime}-\hat{\mu}_{1}\left[\hat{\varphi}\right]^{\prime\prime}+\mu_{1}\left[\hat{\varphi}\right]W^{\prime\prime}\left(\hat{\varphi}_{0}\right)-2\hat{\varphi}_{0}^{\prime\prime}\left(\hat{H}\big{|}_{S}^{2}-2\hat{K}\big{|}_{S}\right)=-\hat{\mu}_{1}\left[\hat{\varphi}\right]^{\prime\prime}+\mu_{1}\left[\hat{\varphi}\right]W^{\prime\prime}\left(\hat{\varphi}_{0}\right)-\hat{\varphi}_{0}^{\prime\prime}\left(\hat{H}\big{|}_{S}^{2}-4\hat{K}\big{|}_{S}\right).\tag{47}\] We make the ansatz \(\hat{\mu}_{1}\left[\hat{\varphi}\right]\left(s,z\right)=-\left(\hat{H}\big{|}_{S}^{2}-4\hat{K}\big{|}_{S}\right)\left(s\right)s_{1}(z)\). Then (47) becomes \[0=\left(\hat{H}\big{|}_{S}^{2}-4\hat{K}\big{|}_{S}\right)\left(s_{1}^{\prime\prime}-s_{1}W^{\prime\prime}\left(\hat{\varphi}_{0}\right)-\hat{\varphi}_{0}^{\prime\prime}\right).\] Substituting \(\hat{\varphi}_{0}^{\prime\prime}=W^{\prime}\left(\hat{\varphi}_{0}\right)\) and solving \[0=s_{1}^{\prime\prime}-s_{1}W^{\prime\prime}\left(\hat{\varphi}_{0}\right)-\hat{\varphi}_{0}^{\prime\prime}\] gives \(s_{1}(z)=\frac{1}{2}\hat{\varphi}_{0}^{\prime}(z)z\) as in [23, Theorem 2.13, Equation (2.29)], so \[\hat{\mu}_{1}\left[\hat{\varphi}\right]\left(s,z\right)=-\frac{1}{2}\left(\hat{H}\big{|}_{S}^{2}-4\hat{K}\big{|}_{S}\right)\left(s\right)\hat{\varphi}_{0}^{\prime}(z)z;\tag{48}\] a symbolic check of this ansatz follows after this list.
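As announced in the last item, the ansatz can be checked symbolically (our verification, again with the assumed normalisation \(W(s)=\frac{1}{4}(1-s^{2})^{2}\)):

```python
import sympy as sp

z, s = sp.symbols("z s", real=True)
W = sp.Rational(1, 4) * (1 - s**2) ** 2        # assumed normalisation, as before
Wpp = sp.diff(W, s, 2)                          # W''(s) = 3*s**2 - 1

phi0 = sp.tanh(z / sp.sqrt(2))                  # leading-order optimal profile
s1 = sp.Rational(1, 2) * z * sp.diff(phi0, z)   # the ansatz s1(z) = z*phi0'(z)/2

ode = sp.diff(s1, z, 2) - s1 * Wpp.subs(s, phi0) - sp.diff(phi0, z, 2)
print(sp.simplify(ode))                         # 0: s1 solves the stated ODE
```

The cancellation rests on differentiating the optimal profile equation, \(\hat{\varphi}_{0}^{\left(3\right)}=W^{\prime\prime}\left(\hat{\varphi}_{0}\right)\hat{\varphi}_{0}^{\prime}\), which makes the residual \(\frac{1}{2}z\left(\hat{\varphi}_{0}^{\left(3\right)}-W^{\prime\prime}\left(\hat{\varphi}_{0}\right)\hat{\varphi}_{0}^{\prime}\right)\) vanish identically.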
### Revisiting \(\hat{v}\) and \(\hat{p}\) at leading order
The incompressibility (1b) gives with (10) to leading order \(\epsilon^{-1}\)
\[\left(\hat{v}_{0}\cdot\bar{\nu}\right)^{\prime}=0.\tag{49}\]
We have shown in Section 3.8 that \(\nabla_{\varphi}^{L^{2}}\mathcal{F}\in O\left(1\right)\). This gives, by repeating the arguments used in Section 3.5, \(\hat{p}_{-2}=0\). Using (9), (11), and (12), we compute for the inner expansion on \(N_{\delta}\left(\Gamma\right)\cup N_{\delta}\left(\Sigma\right)\), \(S\in\left\{\Gamma,\Sigma\right\}\),
\[\begin{split}\partial_{t}v&=\partial_{t}\hat{v}_{0}\circ\iota_{e}-\varepsilon^{-1}\hat{v}_{0}^{\prime}\circ\iota_{e}\,V_{\nu}^{S}-\hat{v}_{1}^{\prime}\circ\iota_{e}\,V_{\nu}^{S}+O\left(\varepsilon\right),\\ \nabla v&=\varepsilon^{-1}\hat{v}_{0}^{\prime}\circ\iota_{e}\otimes\bar{\nu}+\nabla_{S_{\delta}}\hat{v}_{0}+\hat{v}_{1}^{\prime}\circ\iota_{e}\otimes\bar{\nu}+O\left(\varepsilon\right)\end{split}\]
resulting in
\[\left(v\cdot\nabla\right)v=\varepsilon^{-1}\hat{v}_{0}^{\prime}\circ\iota_{e}\left(\hat{v}_{0}\cdot\bar{\nu}\right)+\hat{v}_{0}^{\prime}\circ\iota_{e}\left(\hat{v}_{1}\cdot\bar{\nu}\right)+\left(\nabla_{S_{\delta}}\hat{v}_{0}+\hat{v}_{1}^{\prime}\circ\iota_{e}\otimes\bar{\nu}\right)\hat{v}_{0}+O\left(\varepsilon\right),\]
\[\Delta v=\varepsilon^{-2}\hat{v}_{0}^{\prime\prime}\circ\iota_{e}+\varepsilon^{-1}\left(-\hat{v}_{0}^{\prime}\,\bar{H}+\hat{v}_{1}^{\prime\prime}\right)\circ\iota_{e}+\Delta_{\Phi_{\varepsilon z}}\hat{v}_{0}-\hat{v}_{1}^{\prime}\circ\iota_{e}\,\bar{H}+\hat{v}_{2}^{\prime\prime}\circ\iota_{e}+O\left(\varepsilon\right).\]
At leading order \(\epsilon^{-2}\) of (1a), we thus find
\[-\eta\hat{v}_{0}^{\prime\prime}\circ\iota_{e}+\hat{p}_{-1}^{\prime}\circ\iota_{e}\,\bar{\nu}=0.\]

Multiplication by \(\bar{\nu}\) and using (49) gives

\[\hat{p}_{-1}^{\prime}=0.\tag{50}\]

By matching (23), \(\hat{p}_{-1}=0\). Inserting back again, we obtain \(\hat{v}_{0}^{\prime\prime}=0\); consequently, \(\hat{v}_{0}\) is affine in \(z\). Matching with the outer expansion
\[\left(\lim_{z\nearrow\infty}\hat{v}_{0}\right)\circ\iota_{e}\left(x\right)=\lim_{\delta\searrow 0}v_{0}^{o}(\pi_{S}\left(x\right)+\delta\nu_{S}\left(\pi_{S}\left(x\right)\right))\]
and
\[\left(\lim_{z\searrow-\infty}\hat{v}_{0}\right)\circ\iota_{e}\left(x\right)=\lim_{\delta\nearrow 0}v_{0}^{o}(\pi_{S}\left(x\right)+\delta\nu_{S}\left(\pi_{S}\left(x\right)\right))\]
indicates that \(\hat{v}_{0}\) is bounded. Thus, it must hold

\[\hat{v}_{0}^{\prime}=0.\tag{51}\]
## 4 Sharp interface limit
By inserting the expansions in interfacial coordinates of the components of the solution of (1) into the systems' equations, we have managed to
* eliminate the velocity expansion's summands up to (and including) order \(\epsilon^{-1}\),
* eliminate the pressure expansion's summands up to (and including) order \(\epsilon^{-3}\),
* show that both phase fields assume the optimal profile at leading order,
* show that \(\varphi_{1}=0\),
* and derive (48).
Before we can make use of these findings and pass to the limit \(\epsilon\to 0\), we compute the expansions of the remaining terms in \(K\) (see the right hand side of (1a)).
### Expansion of \(\nabla^{L^{2}}C\) and remaining force terms
We compute the asymptotic expansions of \(\nabla_{\phi}^{L^{2}}C\), \(\nabla_{\psi}^{L^{2}}C\),
\[G_{\varepsilon}:=-\partial_{\rho_{a}}C_{\phi}H_{\psi}\rho_{a}\nu_{\psi}=-\rho_{a}H_{\psi}\int_{\Omega}g_{\varepsilon}\left[\phi\right](y)\,\partial_{\rho_{a}}c(x,y,\rho_{a},\nu_{\psi})\ \mathrm{d}\mathfrak{B}^{3}(y)\,\nu_{\psi},\]
and
\[H_{\varepsilon}:=-g_{\varepsilon}\left[\psi\right]\mathbb{P}_{\nu_{\psi}}\nabla\partial_{\rho_{a}}C_{\phi}\,\rho_{a}=-\int_{\Omega}g_{\varepsilon}\left[\phi\right](y)\,\mathbb{P}_{\nu_{\psi}}\nabla_{x}\left(\partial_{\rho_{a}}c(\cdot,y,\rho_{a},\nu_{\psi})\right)g_{\varepsilon}\left[\psi\right]\rho_{a}\ \mathrm{d}\mathfrak{B}^{3}(y).\]
Expansion of \(\nabla_{\phi}^{L^{2}}C\)
We recall from (2d) that
\[\nabla_{\phi}^{L^{2}}C=A_{\varepsilon}+B_{\varepsilon}\tag{52a}\]

with

\[\begin{split}A_{\varepsilon}(y)&=\left(-\varepsilon\Delta_{y}\phi+\varepsilon^{-1}W^{\prime}\left(\phi\right)\right)(y)\int_{N_{\delta}\left(\Sigma\right)}g_{\varepsilon}\left[\psi\right](x)\,c\left(x,y,\rho_{a},\nu_{\psi}\right)\ \mathrm{d}\mathfrak{B}^{3}(x)+O\left(\varepsilon\right),\\ B_{\varepsilon}(y)&=-\varepsilon\nabla_{y}\phi\cdot\int_{N_{\delta}\left(\Sigma\right)}g_{\varepsilon}\left[\psi\right](x)\,\nabla_{y}\left(c\left(x,y,\rho_{a},\nu_{\psi}\right)\right)\ \mathrm{d}\mathfrak{B}^{3}(x)+O\left(\varepsilon\right).\end{split}\tag{52b}\]
We further expand
\[c\left(x,y,\rho_{a0}+\varepsilon r_{1},\nu_{\varphi_{0}+\varepsilon s_{1}}\right)=c\left(x,y,\rho_{a0},\nu_{\varphi_{0}}\right)+\varepsilon\nabla_{\rho_{a},\nu}c\cdot\left(r_{1},\ \frac{\mathrm{d}}{\mathrm{d}\varepsilon}\left(\nu_{\varphi_{0}+\varepsilon s_{1}}\right)\Big{|}_{0}\right)^{T}+O\left(\varepsilon^{2}\right),\]
and note
\[\frac{\mathrm{d}}{\mathrm{d}\,\varepsilon}\left(\nu_{\varphi_{0}+\varepsilon s _{1}}\right)\bigg{|}_{0}=\frac{1}{\left|\nabla\varphi_{0}\right|}\mathbb{P}_{ \nu_{\varphi_{0}}}\nabla s_{1}\in O\left(1\right), \tag{53}\]
and thus
\[\nabla_{\rho_{a},\nu}c\cdot\left(r_{1},\ \frac{\mathrm{d}}{\mathrm{d}\varepsilon}\left(\nu_{\varphi_{0}+\varepsilon s_{1}}\right)\Big{|}_{0}\right)^{T}\in O\left(1\right).\]
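Formula (53) for the linearisation of the unit normal is a pointwise identity that can be confirmed with sympy (our check, using arbitrary concrete test functions in two variables):

```python
import sympy as sp

x, y, eps = sp.symbols("x y epsilon")
u = x**2 + 3 * y**2              # concrete test functions (our choice)
v = x * y + sp.sin(x)

def grad(f):
    return sp.Matrix([sp.diff(f, x), sp.diff(f, y)])

gu, gv = grad(u), grad(v)
w = gu + eps * gv
nu = w / sp.sqrt((w.T * w)[0])             # unit normal of u + eps*v
lhs = nu.diff(eps).subs(eps, 0)            # d/d eps at eps = 0

n0 = gu / sp.sqrt((gu.T * gu)[0])
P = sp.eye(2) - n0 * n0.T                  # projection onto the tangent space
rhs = (P * gv) / sp.sqrt((gu.T * gu)[0])   # right hand side of (53)

print(sp.simplify(lhs - rhs))              # zero vector
```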
By employing (16), we expand \(A_{\varepsilon}\):
\[\int_{N_{\delta}(\Sigma)}g_{\varepsilon}\left[\psi\right]\left(x\right)c\left(x,y,\rho_{a},\nu_{\psi}\right)\ \mathrm{d}\mathfrak{B}^{3}(x)=\int_{N_{\delta}(\Sigma)}\varepsilon^{-1}\left(\frac{1}{2}\left(\hat{\psi}_{0}^{\prime}\right)^{2}\circ\iota_{e}\left(x\right)+W\left(\psi_{0}\left(x\right)\right)\right)c\left(x,y,\rho_{a0},\nu_{\psi_{0}}\right)+O\left(1\right)\ \mathrm{d}\mathfrak{B}^{3}(x).\]
Multiplication with \(\left(-\varepsilon\Delta_{\varphi}\phi+\varepsilon^{-1}W^{\prime}\left(\phi \right)\right)\) yields
\[A_{\varepsilon}(y)=\hat{\phi}_{0}^{\prime}\circ\iota_{e}\left(y\right)\bar{H}\left(y\right)\int_{N_{\delta}(\Sigma)}\varepsilon^{-1}\left(\frac{1}{2}\left(\hat{\psi}_{0}^{\prime}\right)^{2}\circ\iota_{e}\left(x\right)+W\left(\psi_{0}\left(x\right)\right)\right)c\left(x,y,\rho_{a0},\nu_{\psi_{0}}\right)+O\left(1\right)\ \mathrm{d}\mathfrak{B}^{3}(x)\]

thanks to \(\hat{\phi}_{0}\) being the optimal profile, and (43).
In order to expand \(B_{\varepsilon}\), we first compute
\[\nabla_{y}c\left(x,y,\rho_{a0}+\varepsilon r_{1},\nu_{\varphi_{0}+\varepsilon s_{1}}\right)=\nabla_{y}c\left(x,y,\rho_{a0},\nu_{\varphi_{0}}\right)+\varepsilon\nabla_{\rho_{a},\nu}\nabla_{y}c\begin{pmatrix}r_{1}\\ \frac{1}{\left|\nabla\varphi_{0}\right|}\mathbb{P}_{\nu_{\varphi_{0}}}\nabla s_{1}\end{pmatrix}+O\left(\varepsilon^{2}\right).\]
Therefore, using (16), we find
\[\int_{N_{\delta}(\Sigma)}g_{\varepsilon}\left[\psi\right]\nabla_{y}c\left(x,y,\rho_{a},\nu_{\psi}\right)\,\mathrm{d}\mathfrak{B}^{3}(x)=\] \[\int_{N_{\delta}(\Sigma)}\varepsilon^{-1}\left(\frac{1}{2}\left(\hat{\psi}_{0}^{\prime}\right)^{2}\circ\iota_{\varepsilon}\left(x\right)+W\left(\psi_{0}\left(x\right)\right)\right)\nabla_{y}c\left(x,y,\rho_{a0},\nu_{\psi_{0}}\right)+O\left(1\right)\,\mathrm{d}\mathfrak{B}^{3}(x).\]
Multiplication with \(-\varepsilon\nabla\phi\) gives
\[B_{\varepsilon}(y)=-\hat{\phi}_{0}^{\prime}\circ\iota_{\varepsilon}\left(y\right)\int_{N_{\delta}(\Sigma)}\varepsilon^{-1}\left(\frac{1}{2}\left(\hat{\psi}_{0}^{\prime}\right)^{2}\circ\iota_{\varepsilon}\left(x\right)+W\left(\psi_{0}\left(x\right)\right)\right)\bar{v}\left(y\right)\cdot\nabla_{y}c\left(x,y,\rho_{a0},\nu_{\psi_{0}}\right)+O\left(1\right)\,\mathrm{d}\mathfrak{B}^{3}(x).\]
Expansion of \(\nabla_{\psi}^{L^{2}}C\)
We recall (2e),
\[\nabla_{\psi}^{L^{2}}C=C_{\varepsilon}+D_{\varepsilon}+E_{\varepsilon}\] (54a) with \[C_{\varepsilon}(x)=\left(-\varepsilon\Delta_{x}\psi+\varepsilon^{-1}W^{\prime}\left(\psi\right)\right)(x)\int_{N_{\delta}(\Gamma)}g_{\varepsilon}\left[\phi\right](y)c\left(x,y,\rho_{a},\nu_{\psi}\right)\,\mathrm{d}\mathfrak{B}^{3}(y),\] \[D_{\varepsilon}(x)=-\varepsilon\nabla_{x}\psi\cdot\int_{N_{\delta}(\Gamma)}g_{\varepsilon}\left[\phi\right](y)\nabla_{x}\left(c\left(x,y,\rho_{a},\nu_{\psi}\right)\right)\,\mathrm{d}\mathfrak{B}^{3}(y), \tag{54b}\] \[E_{\varepsilon}(x)=-\int_{N_{\delta}(\Gamma)}g_{\varepsilon}\left[\phi\right](y)\nabla_{x}\cdot\left(g_{\varepsilon}\left[\psi\right]\nabla_{v}c^{T}\frac{1}{\left|\nabla\psi\right|}\mathbb{P}_{\nu_{\psi}}\right)\,\mathrm{d}\mathfrak{B}^{3}(y).\]
Before we start expanding these terms, we prove the following formulae:
**Lemma 4.1**.: It holds,
\[\frac{\nabla\psi}{\left|\nabla\psi\right|}=\bar{v}+O\left(\varepsilon^{2} \right), \tag{55}\]
\[\nabla^{2}\psi=\varepsilon^{-2}\hat{\psi}_{0}^{\prime\prime}\circ\iota_{ \varepsilon}\bar{v}\otimes\bar{v}+\varepsilon^{-1}\hat{\psi}_{0}^{\prime}\circ \iota_{\varepsilon}\nabla\bar{v}+\hat{\psi}_{2}^{\prime\prime}\circ\iota_{ \varepsilon}\bar{v}\otimes\bar{v}+O\left(\varepsilon\right). \tag{56}\]
Proof.: Ad (55): Since \(\hat{\psi}_{0}\) is the optimal profile, it is independent of the tangential variable \(s\); together with (43), we thus have
\[\nabla\psi=\epsilon^{-1}\hat{\psi}_{0}^{\prime}\circ\iota_{\epsilon}\bar{v}+ \nabla_{\Sigma_{\epsilon}}\hat{\psi}_{0}+\hat{\psi}_{1}^{\prime}\circ\iota_{ \epsilon}\bar{v}+O\left(\epsilon\right)=\epsilon^{-1}\hat{\psi}_{0}^{\prime} \circ\iota_{\epsilon}\bar{v}+O\left(\epsilon\right).\]
Dividing by \(\left|\nabla\psi\right|\) and invoking (19) yields the claimed expansion of \(\nabla\psi\left|\nabla\psi\right|^{-1}\).
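Concretely: from \(\nabla\psi=\epsilon^{-1}\hat{\psi}_{0}^{\prime}\circ\iota_{\epsilon}\,\bar{v}+O\left(\epsilon\right)\) one obtains \(\left|\nabla\psi\right|=\epsilon^{-1}\hat{\psi}_{0}^{\prime}\circ\iota_{\epsilon}\left(1+O\left(\epsilon^{2}\right)\right)\), so that
\[\frac{\nabla\psi}{\left|\nabla\psi\right|}=\frac{\epsilon^{-1}\hat{\psi}_{0}^{\prime}\circ\iota_{\epsilon}\,\bar{v}+O\left(\epsilon\right)}{\epsilon^{-1}\hat{\psi}_{0}^{\prime}\circ\iota_{\epsilon}\left(1+O\left(\epsilon^{2}\right)\right)}=\bar{v}+O\left(\epsilon^{2}\right).\]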
Ad (56):
\[\nabla\left(\partial_{i}\psi\right) =\epsilon^{-2}\hat{\psi}_{0}^{\prime\prime}\circ\iota_{\epsilon} \bar{v}_{i}+\epsilon^{-1}\nabla_{\Sigma_{\epsilon}}\left(\hat{\psi}_{0}^{ \prime}\circ\iota_{\epsilon}\right)\bar{v}_{i}+\epsilon^{-1}\hat{\psi}_{0}^{ \prime}\circ\iota_{\epsilon}\nabla\bar{v}_{i}\] \[+\nabla_{\Sigma_{\epsilon}}\left(\epsilon^{-1}\hat{\psi}_{0}^{ \prime}\circ\iota_{\epsilon}\bar{v}_{i}+\nabla_{\Sigma_{\epsilon}}\left(\psi_ {0}\right)\left(i\right)\right)+\nabla\left(\hat{\psi}_{1}^{\prime}\circ\iota_ {\epsilon}\nabla v_{i}+\epsilon\nabla_{\Sigma_{\epsilon}}\hat{\psi}_{1}(i)\right)\] \[+\hat{\psi}_{2}^{\prime\prime}\circ\iota_{\epsilon}\bar{v}_{i}+O \left(\epsilon\right).\]
We again use the optimal profile and (43) to conclude
\[\epsilon^{-1}\nabla_{\Sigma_{\epsilon}}\left(\hat{\psi}_{0}^{\prime}\circ \iota_{\epsilon}\right)v_{i}=\nabla_{\Sigma_{\epsilon}}\left(\epsilon^{-1} \hat{\psi}_{0}^{\prime}\circ\iota_{\epsilon}v_{i}+\nabla_{\Sigma_{\epsilon}} \left(\psi_{0}\right)\left(i\right)\right)=\nabla\left(\hat{\psi}_{1}^{\prime} \circ\iota_{\epsilon}\nabla v_{i}+\epsilon\nabla_{\Sigma_{\epsilon}}\hat{\psi}_ {1}(i)\right)=0,\]
and the claim follows.
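A direct consequence of (56), used below in the computation of \(\nabla\left(1/\left|\nabla\psi\right|\right)\), is the action of \(\nabla^{2}\psi\) on the normal: since \(\nabla\bar{v}\,\bar{v}=\frac{1}{2}\nabla\left|\bar{v}\right|^{2}=0\),
\[\nabla^{2}\psi\,\bar{v}=\epsilon^{-2}\hat{\psi}_{0}^{\prime\prime}\circ\iota_{\epsilon}\,\bar{v}+\hat{\psi}_{2}^{\prime\prime}\circ\iota_{\epsilon}\,\bar{v}+O\left(\epsilon\right).\]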
\(C_{\epsilon}\) is expanded just like \(A_{\epsilon}\):
\[C_{\epsilon}(x)=\hat{\psi}_{0}^{\prime}\circ\iota_{\epsilon}\left(x\right)\bar{H}\left(x\right)\int_{N_{\delta}(\Gamma)}\epsilon^{-1}\left(\frac{1}{2}\left(\hat{\phi}_{0}^{\prime}\right)^{2}\circ\iota_{\epsilon}\left(y\right)+W\left(\phi_{0}\left(y\right)\right)\right)c\left(x,y,\rho_{a0},\nu_{\psi_{0}}\right)+O\left(1\right)\;\mathrm{d}\mathfrak{B}^{3}(y).\]
We continue by expanding \(D_{\epsilon}\): First note
\[\epsilon\nabla_{x}\psi\cdot\nabla_{x}\left(c\left(\cdot,y,\rho_{a},\nu_{\psi}\right)\right)=\epsilon\nabla_{x}\psi\cdot\left(\nabla_{x}c+\partial_{\rho_{a}}c\,\nabla_{x}\rho_{a}+\nabla\nu_{\psi}^{T}\nabla_{v}c\right).\]
Then we observe \(\epsilon\nabla_{x}\psi^{T}\nabla\nu_{\psi}{}^{T}=O\left(\epsilon\right),\) so
\[\epsilon\nabla_{x}\psi\cdot\left(\nabla_{x}c+\partial_{\rho_{a}}c\nabla\rho_{a}\right)=\left(\hat{\psi}_{0}^{\prime}\circ\iota_{\epsilon}\,\bar{v}+O\left(\epsilon\right)\right)\cdot\left(\nabla_{x}c+\partial_{\rho_{a}}c\nabla\rho_{a}\right)=\hat{\psi}_{0}^{\prime}\circ\iota_{\epsilon}\,\bar{v}\cdot\left(\nabla_{x}c+\partial_{\rho_{a}}c\nabla\rho_{a}\right)+O\left(\epsilon\right),\]
where the last equality is justified by (30). Then,
\[D_{\epsilon}(x)=-\int_{N_{\delta}(\Gamma)}\epsilon^{-1}\left(\frac{1}{2}\left(\hat{\phi}_{0}^{\prime}\right)^{2}\circ\iota_{\epsilon}\left(y\right)+W\left(\phi_{0}\left(y\right)\right)\right)\left(\hat{\psi}_{0}^{\prime}\circ\iota_{\epsilon}\,\bar{v}\cdot\left(\nabla_{x}c+\partial_{\rho_{a}}c\nabla\rho_{a}\right)+O\left(\epsilon\right)\right)+O\left(\epsilon\right)\;\mathrm{d}\mathfrak{B}^{3}(y).\]
At last, we turn to \(E_{\epsilon}\) and compute
\[\nabla_{x}\cdot\left(g_{\epsilon}\left[\psi\right]\nabla_{v}c^{T} \frac{1}{\left|\nabla\psi\right|}\mathbb{P}_{\nu_{\psi}}\right) =\nabla_{x}\left(g_{\epsilon}\left[\psi\right]\right)\cdot\frac{1 }{\left|\nabla\psi\right|}\mathbb{P}_{\nu_{\psi}}{}^{T}\nabla_{v}c+g_{ \epsilon}\left[\psi\right]\nabla\cdot\left(\frac{1}{\left|\nabla\psi\right|} \mathbb{P}_{\nu_{\psi}}{}^{T}\nabla_{v}c\right) \tag{57}\] \[=\nabla_{v}c^{T}\frac{1}{\left|\nabla\psi\right|}\mathbb{P}_{\nu_{ \psi}}\nabla_{x}\left(g_{\epsilon}\left[\psi\right]\right)\] \[+g_{\epsilon}\left[\psi\right]\left(\frac{1}{\left|\nabla\psi\right| }\nabla\left(\nabla_{v}c\right):\mathbb{P}_{\nu_{\psi}}+\nabla_{v}c\cdot \nabla\cdot\left(\frac{1}{\left|\nabla\psi\right|}\mathbb{P}_{\nu_{\psi}} \right)\right).\]
On the first term, we use (56) and (43) (for the expansion of the double well potential) to obtain
\[\nabla\left(g_{\epsilon}\left[\psi\right]\right) =\left(\epsilon^{-1}\hat{\psi}_{0}^{\prime\prime}\circ\iota_{ \epsilon}\bar{v}\otimes\bar{v}+\hat{\psi}_{0}^{\prime}\circ\iota_{\epsilon} \nabla\bar{v}+\epsilon^{-1}W^{\prime}\left(\psi_{0}\right)+O\left(\epsilon \right)\right)\left(\epsilon^{-1}\hat{\psi}_{0}^{\prime}\circ\iota_{\epsilon} \bar{v}+O\left(\epsilon\right)\right)\] \[=\epsilon^{-2}\left(\hat{\psi}_{0}^{\prime\prime}\circ\iota_{ \epsilon}\hat{\psi}_{0}^{\prime}\circ\iota_{\epsilon}\bar{v}+W^{\prime}\left( \psi_{0}\right)\hat{\psi}_{0}^{\prime}\circ\iota_{\epsilon}\bar{v}\right)+O \left(1\right).\]
Since \(\mathbb{P}_{\nu_{\psi}}\bar{v}\in O\left(\epsilon^{2}\right)\) thanks to (55), we have
\[\nabla_{v}c^{T}\frac{1}{\left|\nabla\psi\right|}\mathbb{P}_{\nu_{\psi}}\nabla \left(g_{\epsilon}\left[\psi\right]\right)\in O\left(\epsilon\right). \tag{58}\]
Second, we calculate
\[g_{\epsilon}\left[\psi\right]\frac{1}{\left|\nabla\psi\right|}\nabla\left(\nabla_{v}c\right):\mathbb{P}_{\nu_{\psi}} =\left(\epsilon^{-1}\left(\frac{1}{2}\left(\hat{\psi}_{0}^{\prime}\right)^{2}\circ\iota_{\epsilon}+W\left(\psi_{0}\right)\right)+O\left(\epsilon\right)\right)\frac{1}{\left|\nabla\psi\right|}\left(\nabla\left(\nabla_{v}c\right):\mathbb{P}_{\bar{v}}+O\left(\epsilon^{2}\right)\right)\] \[=\epsilon^{-1}\frac{1}{\left|\nabla\psi\right|}\left(\frac{1}{2}\left(\hat{\psi}_{0}^{\prime}\right)^{2}\circ\iota_{\epsilon}+W\left(\psi_{0}\right)\right)\nabla_{\Sigma_{d(x)}}\cdot\left(\nabla_{v}c\right)+O\left(\epsilon^{2}\right).\]
Third,
\[\nabla\cdot\left(\frac{1}{\left|\nabla\psi\right|}\mathbb{P}_{\nu_{\psi}}\right)=\mathbb{P}_{\nu_{\psi}}\nabla\left(\frac{1}{\left|\nabla\psi\right|}\right)+\frac{1}{\left|\nabla\psi\right|}\nabla\cdot\mathbb{P}_{\nu_{\psi}}.\]
We further compute
\[\nabla\left(\frac{1}{\left|\nabla\psi\right|}\right)=-\frac{1}{\left|\nabla \psi\right|^{3}}\nabla^{2}\psi\nabla\psi=-\frac{1}{\left|\nabla\psi\right|^{2} }\nabla^{2}\psi\bar{\nu},\]
and using (56), we find
\[\nabla_{\nu}c\cdot\nabla\cdot\left(\frac{1}{\left|\nabla\psi\right|}\mathbb{P}_{\nu_{\psi}}\right)=\nabla_{\nu}c\cdot\mathbb{P}_{\nu_{\psi}}\nabla\left(\frac{1}{\left|\nabla\psi\right|}\right)+\nabla_{\nu}c\cdot\frac{1}{\left|\nabla\psi\right|}\nabla\cdot\mathbb{P}_{\nu_{\psi}}=\nabla_{\nu}c\cdot\frac{1}{\left|\nabla\psi\right|}\nabla\cdot\mathbb{P}_{\nu_{\psi}}+O\left(\varepsilon^{3}\right).\]
Finally, since \(\nabla\bar{\nu}\,\bar{\nu}=\frac{1}{2}\nabla\left|\bar{\nu}\right|^{2}=0\) and, with the sign convention used here, \(\nabla\cdot\bar{\nu}=-\bar{H}\),
\[\nabla\cdot\mathbb{P}_{\nu_{\psi}}=-\left(\nabla\bar{\nu}\,\bar{\nu}+\bar{\nu}\,\nabla\cdot\bar{\nu}\right)+O\left(\varepsilon\right)=\bar{H}\bar{\nu}+O\left(\varepsilon\right),\]
so
\[E_{\varepsilon}(x) =-\frac{\varepsilon^{-1}}{\left|\nabla_{x}\psi\right|}\left(\frac{1}{2}\left(\hat{\psi}_{0}^{\prime}\right)^{2}\circ\iota_{\varepsilon}(x)+W\left(\psi_{0}\left(x\right)\right)\right)\cdot\] \[\cdot\int_{N_{\delta}(\Gamma)}\varepsilon^{-1}\left(\frac{1}{2}\left(\hat{\phi}_{0}^{\prime}\right)^{2}\circ\iota_{\varepsilon}(y)+W\left(\phi_{0}\left(y\right)\right)\right)\left(\nabla_{\Sigma_{d(x)}}\cdot\left(\nabla_{v}c\right)+\nabla_{v}c\cdot\bar{H}\left(x\right)\bar{\nu}\left(x\right)\right)\,\mathrm{d}\mathfrak{B}^{3}(y)+O\left(1\right).\]
Expansion of \(G_{\varepsilon}\)
We use (16), (18), and (55) in connection with (43) to obtain
\[G_{\varepsilon}= -\int_{\Omega}g_{\varepsilon}\left[\phi\right](y)\rho_{a}H_{\psi}\partial_{\rho_{a}}c\left(\cdot,y,\rho_{a},\nu_{\psi}\right)\nu_{\psi}\,\mathrm{d}\mathfrak{B}^{3}(y)=\] \[-\varepsilon^{-1}\rho_{a0}\left(\hat{\psi}_{0}^{\prime}\right)^{2}\circ\iota_{\varepsilon}\,\bar{H}\bar{\nu}\int_{N_{\delta}(\Gamma)}\varepsilon^{-1}\left(\frac{1}{2}\left(\hat{\phi}_{0}^{\prime}\right)^{2}\circ\iota_{\varepsilon}(y)+W\left(\phi_{0}\left(y\right)\right)\right)\partial_{\rho_{a}}c\left(\cdot,y,\rho_{a0},\nu_{\psi_{0}}\right)\,\mathrm{d}\mathfrak{B}^{3}(y)+O\left(1\right).\]
Expansion of \(H_{\varepsilon}\)
As before, we employ (16), (55) to obtain
\[H_{\varepsilon}= -\int_{\Omega}g_{\varepsilon}\left[\phi\right](y)\mathbb{P}_{\nu_{\psi}}\nabla_{x}\left(\partial_{\rho_{a}}c(\cdot,y,\rho_{a},\nu_{\psi})\right)g_{\varepsilon}\left[\psi\right]\rho_{a}\,\mathrm{d}\mathfrak{B}^{3}(y)=-\varepsilon^{-1}\left(\frac{1}{2}\left(\hat{\psi}_{0}^{\prime}\right)^{2}\circ\iota_{\varepsilon}+W\left(\psi_{0}\right)\right)\cdot\] \[\cdot\int_{\Omega}\varepsilon^{-1}\left(\frac{1}{2}\left(\hat{\phi}_{0}^{\prime}\right)^{2}\circ\iota_{\varepsilon}(y)+W\left(\phi_{0}\left(y\right)\right)\right)\nabla_{\Sigma_{d(x)}}\left(\partial_{\rho_{a}}c(\cdot,y,\rho_{a0},\nu_{\psi_{0}})\right)\rho_{a0}\,\mathrm{d}\mathfrak{B}^{3}(y)+O\left(1\right).\]
We now show, using the results of the analysis in the previous sections, that classical solutions of (1) converge formally to solutions of (4) for \(\varepsilon\searrow 0\). In the following, we will often use that
\[\left(\hat{\phi}_{0}^{\prime}(z)\right)^{2}=\left(\hat{\psi}_{0}^{\prime}(z)\right)^{2}=\left(\tanh\left(\frac{z}{\sqrt{2}}\right)^{\prime}\right)^{2}=\frac{1}{2}\left(1-\tanh\left(\frac{z}{\sqrt{2}}\right)^{2}\right)^{2}\]
is integrable, and we will abbreviate
\[Z\,:=\frac{1}{2}\int_{-\infty}^{\infty}\left(1-\tanh\left(\frac{z}{\sqrt{2}} \right)^{2}\right)^{2}\,\,\mathrm{d}\mathfrak{B}^{1}(z).\]
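As a side remark, \(Z\) can be evaluated in closed form: with \(1-\tanh^{2}=\operatorname{sech}^{2}\), the substitution \(u=z/\sqrt{2}\), and \(\int_{-\infty}^{\infty}\operatorname{sech}^{4}(u)\,\mathrm{d}u=\frac{4}{3}\),
\[Z=\frac{1}{2}\int_{-\infty}^{\infty}\operatorname{sech}^{4}\left(\frac{z}{\sqrt{2}}\right)\mathrm{d}\mathfrak{B}^{1}(z)=\frac{\sqrt{2}}{2}\cdot\frac{4}{3}=\frac{2\sqrt{2}}{3},\]
although only the finiteness of \(Z\) is used in what follows.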
We also partition all integrals over \(\Omega\) into an integral over \(N_{\delta}\left(S\right)\) and one over \(\Omega_{\delta}=\Omega\setminus N_{\delta}\left(S\right)\). In the latter region the outer expansions hold, so the integrands vanish in the limit and we can neglect them.
### Momentum balance and mass conservation
#### 4.2.1 Outer region
At order \(\varepsilon^{0}\), we find with the results of Section 3.3 (causing all the energy gradient terms on the right to vanish)
\[\rho\left(\partial_{t}v_{0}^{\circ}+\left(v_{0}^{\circ}\cdot\nabla\right)v_{0}^{ \circ}\right)-\nabla\cdot\left(\eta\left(\nabla v_{0}^{\circ}+\nabla v_{0}^{ \circ T}\right)\right)+\nabla p_{0}^{\circ}=0,\]
and for the incompressibility condition
\[\nabla\cdot v_{0}^{\circ}=0.\]
This gives (4a) and (4b).
#### 4.2.2 Inner region
Let \(S\in\{\Gamma,\Sigma\}\). We note that the matching conditions for the velocity state the no-jump conditions (4d), (4e).
Plugging further the inner expansion into (1a) and using (50), (51), we find
\[-\epsilon^{-1}\left(\left(\hat{\eta}\hat{v}_{1}^{\prime}\right)^{\prime}+\left(\widehat{\nabla_{\Phi_{\epsilon,z}}v_{0}^{T}}\hat{\eta}^{T}\right)^{\prime}\bar{v}+\bar{v}\otimes\hat{v}_{1}^{\prime\prime}\bar{v}\right)\circ\iota_{\epsilon}+\epsilon^{-1}\hat{p}_{0}^{\prime}\circ\iota_{\epsilon}\,\bar{v}+r=\] \[\qquad\qquad\qquad\epsilon^{-1}\left(\nabla_{\phi}^{L^{2}}\mathcal{F}\hat{\phi}_{0}^{\prime}\circ\iota_{\epsilon}+\nabla_{\psi}^{L^{2}}\mathcal{F}\hat{\psi}_{0}^{\prime}\circ\iota_{\epsilon}\right)\bar{v}-\partial_{\rho_{a}}C_{\phi}H_{\psi_{0}}\nu_{\psi_{0}}\rho_{a0},\]
where \(r=\hat{r}\circ\iota_{\epsilon}\) with \(\hat{r}\in O\left(1\right)\). To understand the limit of this equation, let us consider its variational formulation with test functions \(w\in[H^{1}\left(\Omega\right)]^{3}\). The left hand side then reads
\[\begin{split}&\epsilon^{-1}\int_{N_{\delta}(\Gamma)\cup N_{\delta}(\Sigma)}\left(-\left(\hat{\eta}\hat{v}_{1}^{\prime}\right)^{\prime}\circ\iota_{\epsilon}+\left(\widehat{\nabla_{\Phi_{\epsilon,z}}v_{0}^{T}}\hat{\eta}^{T}\right)^{\prime}\circ\iota_{\epsilon}\,\bar{v}+\bar{v}\otimes\hat{v}_{1}^{\prime\prime}\circ\iota_{\epsilon}\,\bar{v}+\hat{p}_{0}^{\prime}\circ\iota_{\epsilon}\,\bar{v}\right)\cdot w\ \mathrm{d}\mathfrak{B}^{3}=\\ &\int_{-\frac{\delta}{\epsilon}}^{\frac{\delta}{\epsilon}}\int_{\Gamma_{\epsilon z}\cup\Sigma_{\epsilon z}}\left(\left(-\left(\hat{\eta}\hat{v}_{1}^{\prime}\right)^{\prime}+\left(\widehat{\nabla_{\Phi_{\epsilon,z}}v_{0}^{T}}\hat{\eta}^{T}\right)^{\prime}\bar{v}+\bar{v}\otimes\hat{v}_{1}^{\prime\prime}\bar{v}\right)\left(\pi_{\Sigma\cup\Gamma}\left(\sigma\right),z\right)+\hat{p}_{0}^{\prime}(\pi_{\Sigma\cup\Gamma}\left(\sigma\right),z)\,\bar{v}(\sigma)\right)\cdot w\ \mathrm{d}\mathfrak{H}^{2}(\sigma)\ \mathrm{d}z=\\ &\int_{\Gamma\cup\Sigma}\left(\left[-\hat{\eta}\hat{v}_{1}^{\prime}+\left(\widehat{\nabla_{\Phi_{\epsilon,z}}v_{0}^{T}}\hat{\eta}^{T}\right)\bar{v}+\bar{v}\otimes\hat{v}_{1}^{\prime}\bar{v}\right]_{-\frac{\delta}{\epsilon}}^{\frac{\delta}{\epsilon}}\left(\pi_{\Sigma\cup\Gamma}\left(\sigma\right)\right)+\left[\hat{p}_{0}\right]_{-\frac{\delta}{\epsilon}}^{\frac{\delta}{\epsilon}}\left(\pi_{\Sigma\cup\Gamma}\left(\sigma\right)\right)\bar{v}\left(\sigma\right)\right)\cdot w\ \mathrm{d}\mathfrak{H}^{2}(\sigma).\end{split} \tag{59}\]
We can rewrite the jump of \(-\hat{\eta}\hat{v}_{1}^{\prime}+\left(\widehat{\nabla_{\Phi_{\epsilon,z}}v_{0}^{T}}\hat{\eta}^{T}\right)\bar{v}+\bar{v}\otimes\hat{v}_{1}^{\prime}\bar{v}\) in \(z\) by looking at the expansions of \(\nabla v\,\bar{v}\) and of \(\nabla v^{T}\bar{v}\) in interfacial coordinates:
\[\begin{split}\nabla\left(\hat{v}\circ\iota_{\epsilon}\right)\bar{v}&=\epsilon^{-1}\hat{v}_{0}^{\prime}\circ\iota_{\epsilon}+\hat{v}_{1}^{\prime}\circ\iota_{\epsilon}+O\left(\epsilon\right)\\ &=\hat{v}_{1}^{\prime}\circ\iota_{\epsilon}+O\left(\epsilon\right),\end{split}\]
and
\[\begin{split}\nabla\left(\hat{v}\circ\iota_{\epsilon}\right)^{T}\bar{v}&=\epsilon^{-1}\bar{v}\otimes\hat{v}_{0}^{\prime}\circ\iota_{\epsilon}\,\bar{v}+\widehat{\nabla_{\Phi_{\epsilon,z}}v_{0}^{T}}\bar{v}+\bar{v}\otimes\hat{v}_{1}^{\prime}\circ\iota_{\epsilon}\,\bar{v}+O\left(\epsilon\right)\\ &=\widehat{\nabla_{\Phi_{\epsilon,z}}v_{0}^{T}}\bar{v}+\bar{v}\otimes\hat{v}_{1}^{\prime}\circ\iota_{\epsilon}\,\bar{v}+O\left(\epsilon\right),\end{split}\]
respectively (using (11) and (51)). With the matching conditions for \(\nabla v\,\bar{v}\) and \(\nabla v^{T}\bar{v}\), we further obtain
\[\begin{split}\lim_{\alpha\searrow 0}(\nabla v_{0}\,\bar{v})(\pi_{S}\left(x\right)+\alpha\nu_{S}\left(\pi_{S}\left(x\right)\right))&=\left(\lim_{z\nearrow\infty}\hat{v}_{1}^{\prime}\right)\circ\iota_{\epsilon}\left(x\right),\\ \lim_{\alpha\searrow 0}(\nabla v_{0}^{T}\bar{v})(\pi_{S}\left(x\right)+\alpha\nu_{S}\left(\pi_{S}\left(x\right)\right))&=\lim_{z\nearrow\infty}(\widehat{\nabla_{\Phi_{\epsilon,z}}v_{0}^{T}})\circ\iota_{\epsilon}\,\bar{v}+\bar{v}\otimes\lim_{z\nearrow\infty}(\hat{v}_{1}^{\prime})\circ\iota_{\epsilon}\,\bar{v}.\end{split} \tag{60}\]
The computations go analogously for the limits \(\alpha\nearrow 0\) and \(z\searrow-\infty\). Together, both limits form a jump \(\llbracket\cdot\rrbracket\). Now we pass \(\epsilon\searrow 0\) in (59) and insert the matching condition for the pressure and (60). This reveals
\[\begin{split}\epsilon^{-1}\int_{N_{\delta}(\Gamma)\cup N_{\delta}(\Sigma)}\left(-\left(\hat{\eta}\hat{v}_{1}^{\prime}\right)^{\prime}\circ\iota_{\epsilon}+\left(\widehat{\nabla_{\Phi_{\epsilon,z}}v_{0}^{T}}\hat{\eta}^{T}\right)^{\prime}\circ\iota_{\epsilon}\,\bar{v}+\bar{v}\otimes\hat{v}_{1}^{\prime\prime}\circ\iota_{\epsilon}\,\bar{v}+\hat{p}_{0}^{\prime}\circ\iota_{\epsilon}\,\bar{v}\right)\cdot w\ \mathrm{d}\mathfrak{B}^{3}\overset{\epsilon\searrow 0}{\rightarrow}\\ \int_{\Gamma\cup\Sigma}-\left[\eta(\nabla v_{0}+\nabla v_{0}^{T})-p_{0}\right]\nu\cdot w\ \mathrm{d}\mathfrak{H}^{2},\end{split} \tag{61}\]
where \(\nu\in\{\nu_{\Gamma},\nu_{\Sigma}\}\), depending on which of the two surfaces the integrand is understood on.
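Note that the right hand side of (61) is the jump of the normal traction of the Newtonian stress \(\eta\left(\nabla v_{0}+\nabla v_{0}^{T}\right)-p_{0}\operatorname{Id}\) across \(\Gamma\) respectively \(\Sigma\); this is the quantity that will be balanced by the interfacial forces derived below.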
The right hand side of (1a) in variational form is
\[\begin{split} f(w)&=\left(\nabla_{\phi}^{L^{2}}\mathcal{W}\left[\phi\right]+\nabla_{\phi}^{L^{2}}\mathcal{G}[\phi]+\nabla_{\phi}^{L^{2}}\mathcal{C}\left[\phi,\psi,\rho_{a}\right],\nabla\phi\cdot w\right)_{L^{2}(N_{\delta}\left(\Gamma\right))}\\ &+\left(\nabla_{\psi}^{L^{2}}\mathcal{W}\left[\psi\right]+\nabla_{\psi}^{L^{2}}\mathcal{G}[\psi]+\nabla_{\psi}^{L^{2}}\mathcal{C}\left[\phi,\psi,\rho_{a}\right],\nabla\psi\cdot w\right)_{L^{2}(N_{\delta}\left(\Sigma\right))}\\ &-\left(\int_{\Omega}g_{\varepsilon}\left[\phi\right](y)\mathbb{P}_{\nu_{\psi}}\nabla\left(\partial_{\rho_{a}}c(\cdot,y,\rho_{a},\nu_{\psi})\right)g_{\varepsilon}\left[\psi\right]\rho_{a}\,\mathrm{d}\mathfrak{B}^{3}(y),w\right)_{\left[L^{2}\left(N_{\delta}\left(\Sigma\right)\right)\right]^{3}}\\ &-\left(\int_{N_{\delta}\left(\Gamma\right)}g_{\varepsilon}\left[\phi\right](y)\rho_{a}H_{\psi}\partial_{\rho_{a}}c\,\mathrm{d}\mathfrak{B}^{3}(y),w\cdot\nu_{\psi}\right)_{L^{2}(N_{\delta}\left(\Sigma\right))}.\end{split} \tag{62}\]
Note that we can rewrite
\[-\left(\int_{N_{\delta}\left(\Gamma\right)}g_{\varepsilon}\left[\phi\right](y)\rho_{a}H_{\psi}\partial_{\rho_{a}}c\,\mathrm{d}\mathfrak{B}^{3}(y),w\cdot\nu_{\psi}\right)_{L^{2}(N_{\delta}\left(\Sigma\right))}=-\left(\partial_{\rho_{a}}C_{\phi}H_{\psi}\rho_{a},w\cdot\nu_{\psi}\right)_{L^{2}(N_{\delta}\left(\Sigma\right))}\]
and
\[-\left(\int_{\Omega}g_{\varepsilon}\left[\phi\right](y)\mathbb{P}_{\nu_{\psi}}\nabla\left(\partial_{\rho_{a}}c(\cdot,y,\rho_{a},\nu_{\psi})\right)g_{\varepsilon}\left[\psi\right]\rho_{a}\,\mathrm{d}\mathfrak{B}^{3}(y),w\right)_{\left[L^{2}\left(N_{\delta}\left(\Sigma\right)\right)\right]^{3}}=\] \[-\left(g_{\varepsilon}\left[\psi\right]\mathbb{P}_{\nu_{\psi}}\nabla\partial_{\rho_{a}}C_{\phi}\rho_{a},w\right)_{\left[L^{2}\left(N_{\delta}\left(\Sigma\right)\right)\right]^{3}},\]
which we use in the following as abbreviation.
To pass (62) to the limit, we treat the gradients of the energies separately. The gradients of \(\mathcal{W}\) and \(\mathcal{G}\) have the same structure for both \(\phi\) and \(\psi\), and can therefore be treated verbatim. For \(\mathcal{C}\) we distinguish the derivatives w.r.t. \(\phi\) and \(\psi\). Let us start our analysis with \(\nabla_{\varphi}^{L^{2}}\mathcal{W}\).
#### 4.2.3 Force terms of \(\nabla_{\varphi}^{L^{2}}\mathcal{W}\)
In this section we show the following lemma:
**Lemma 4.2**.: Let \(\Phi\in\left\{\Gamma,\Sigma\right\}\) and \(\varphi\in\left\{\phi,\psi\right\}\) be such that \(\Phi\) is the boundary layer for \(\varphi\). The following limit holds true:
\[\beta\int_{N_{\delta}\left(\Phi\right)}\left(-\Delta\left(\mu\left[\varphi\right]\right)+\mu\left[\varphi\right]\varepsilon^{-2}W^{\prime\prime}\left(\varphi\right)\right)\nabla\varphi\cdot w\,\mathrm{d}\mathfrak{B}^{3}\overset{\varepsilon\to 0}{\rightarrow}-C\int_{\Phi}\left(2\Delta_{\Phi}H_{\Phi}+H_{\Phi}\left(H_{\Phi}^{2}-4K_{\Phi}\right)\right)\nu_{\Phi}\cdot w\,\mathrm{d}\mathfrak{H}^{2}\]
for a constant \(C\).
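For orientation (this is not needed in the proof): with \(H_{\Phi}\) the sum of the principal curvatures, the limiting integrand is, up to sign and normalisation conventions, the first variation of the Willmore functional,
\[\nabla^{L^{2}}\left(\int_{\Phi}H_{\Phi}^{2}\,\mathrm{d}\mathfrak{H}^{2}\right)=\left(2\Delta_{\Phi}H_{\Phi}+H_{\Phi}\left(H_{\Phi}^{2}-4K_{\Phi}\right)\right)\nu_{\Phi},\]
so Lemma 4.2 identifies the limit of the \(\mathcal{W}\)-force as a bending (Willmore-flow-type) force on \(\Phi\).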
The strategy of the proof is as follows: In Section 3.8, we have expanded the gradient \(\nabla_{\varphi}^{L^{2}}\mathcal{W}\) and concluded from the energy inequality that all its terms up to order \(\varepsilon^{-1}\) must equal zero. The terms remaining on order \(\varepsilon^{0}\) are shifted to order \(\varepsilon^{-1}\) by multiplication with \(\nabla\varphi\cdot w\), which is just the right scaling for obtaining the claimed limit using Lemma 3.4.
Proof of Lemma 4.2.: We collect all the terms of \(\left(\nabla_{\varphi}^{L^{2}}\mathcal{W},\nabla\varphi\cdot w\right)_{L^{2}(N_{\delta}\left(\Phi\right))}\) at order \(\varepsilon^{-1}\):
\[\left(\nabla_{\varphi}^{L^{2}}\mathcal{W},\nabla\varphi\cdot w\right)_{L^{2}(N_{\delta}\left(\Phi\right))} =\varepsilon^{-1}\int_{N_{\delta}\left(\Phi\right)}\bigg{(}-\Delta_{\Phi_{\varepsilon z}}\left(\mu_{0}\left[\varphi\right]\right)+\hat{\mu}_{1}\left[\varphi\right]^{\prime}\circ\iota_{\varepsilon}\,\bar{H}-\hat{\mu}_{2}\left[\varphi\right]^{\prime\prime}\circ\iota_{\varepsilon}\] \[+\mu_{2}\left[\varphi\right]W^{\prime\prime}\left(\varphi_{0}\right)+\mu_{0}\left[\varphi\right]W^{(3)}\left(\varphi_{0}\right)\varphi_{2}\] \[+\hat{\varphi}_{0}^{\prime}\circ\iota_{\varepsilon}\,\nabla\bar{H}\cdot\bar{v}\,\bar{H}+2\varepsilon^{-1}d\left(x\right)\hat{\varphi}_{0}^{\prime\prime}\circ\iota_{\varepsilon}\,\bar{H}|_{\Phi}\left(\bar{H}|_{\Phi}^{2}-2\bar{K}|_{\Phi}\right)\] \[-\hat{\varphi}_{0}^{\prime}\circ\iota_{\varepsilon}\,\nabla^{2}\bar{H}\,:\,\bar{v}\otimes\bar{v}-4\varepsilon^{-1}d\left(x\right)\hat{\varphi}_{0}^{\prime\prime}\circ\iota_{\varepsilon}\left(\bar{H}|_{\Phi}^{3}-3\bar{H}|_{\Phi}\bar{K}|_{\Phi}\right)\,\bigg{)}\hat{\varphi}_{0}^{\prime}\circ\iota_{\varepsilon}\,\nu_{\varphi}\cdot w\,\mathrm{d}\mathfrak{B}^{3}(x)\] \[+O\left(\varepsilon\right).\]
The terms on the first and second line are from the expansion of the chemical potential and the double well potential. The left summand on line three stems from (44), the right one from (46). The left summand on line four is taken from (40) (note (42));
the right summand is from (44) (note (45)). First, we substitute some expressions using Lemma 3.1 and (43):
\[\left(\nabla_{\varphi}^{L^{2}}\mathcal{W},\nabla\varphi\cdot w\right)_{L^{2}\left(N_{\delta}\left(\Phi\right)\right)} =\epsilon^{-1}\int_{N_{\delta}\left(\Phi\right)}\bigg{(}-\hat{\varphi}_{0}^{\prime}\circ\iota_{\epsilon}\,\Delta_{\Phi_{\epsilon z}}\tilde{H}+\hat{\mu}_{1}\left[\varphi\right]^{\prime}\circ\iota_{\epsilon}\,\tilde{H}-\hat{\mu}_{2}\left[\varphi\right]^{\prime\prime}\circ\iota_{\epsilon}\] \[+\mu_{2}\left[\varphi\right]W^{\prime\prime}\left(\varphi_{0}\right)+\mu_{0}\left[\varphi\right]W^{(3)}\left(\varphi_{0}\right)\varphi_{2}\] \[+\hat{\varphi}_{0}^{\prime}\circ\iota_{\epsilon}\,\tilde{H}\left(\tilde{H}^{2}-2\tilde{K}\right)+2\epsilon^{-1}d\left(x\right)\hat{\varphi}_{0}^{\prime\prime}\circ\iota_{\epsilon}\,\tilde{H}|_{\Phi}\left(\tilde{H}|_{\Phi}^{2}-2\tilde{K}|_{\Phi}\right)\] \[-2\hat{\varphi}_{0}^{\prime}\circ\iota_{\epsilon}\left(\tilde{H}^{3}-3\tilde{H}\tilde{K}\right)-4\epsilon^{-1}d\left(x\right)\hat{\varphi}_{0}^{\prime\prime}\circ\iota_{\epsilon}\left(\tilde{H}|_{\Phi}^{3}-3\tilde{H}|_{\Phi}\tilde{K}|_{\Phi}\right)\bigg{)}\hat{\varphi}_{0}^{\prime}\circ\iota_{\epsilon}\,\nu_{\varphi}\cdot w\,\mathrm{d}\mathfrak{B}^{3}(x)\] \[+O\left(\epsilon\right).\]
Second, we transform the integral using the co-area formula. We denote \(j(\sigma,z)=\left(\pi_{\Phi}\left(\sigma\right),z\right)\).
\[\left(\nabla_{\varphi}^{L^{2}}\mathcal{W},\nabla\varphi\cdot w\right)_{L^{2}\left(N_{\delta}\left(\Phi\right)\right)} =\int_{-\frac{\delta}{\epsilon}}^{\frac{\delta}{\epsilon}}\int_{\Phi_{\epsilon z}}\bigg{(}-\hat{\varphi}_{0}^{\prime}\circ j\,\Delta_{\Phi_{\epsilon z}}\tilde{H}+\hat{\mu}_{1}\left[\varphi\right]^{\prime}\circ j\,\tilde{H}-\hat{\mu}_{2}\left[\varphi\right]^{\prime\prime}\circ j\] \[+\mu_{2}\left[\varphi\right]W^{\prime\prime}\left(\varphi_{0}\right)+\mu_{0}\left[\varphi\right]W^{(3)}\left(\varphi_{0}\right)\varphi_{2}\] \[+\hat{\varphi}_{0}^{\prime}\circ j\,\tilde{H}\left(\tilde{H}^{2}-2\tilde{K}\right)+2z\hat{\varphi}_{0}^{\prime\prime}\circ j\,\tilde{H}|_{\Phi}\left(\tilde{H}|_{\Phi}^{2}-2\tilde{K}|_{\Phi}\right)\] \[-2\hat{\varphi}_{0}^{\prime}\circ j\left(\tilde{H}^{3}-3\tilde{H}\tilde{K}\right)-4z\hat{\varphi}_{0}^{\prime\prime}\circ j\left(\tilde{H}|_{\Phi}^{3}-3\tilde{H}|_{\Phi}\tilde{K}|_{\Phi}\right)\quad\bigg{)}\hat{\varphi}_{0}^{\prime}\circ j\,\nu_{\varphi}\cdot w\,\mathrm{d}\mathfrak{H}^{2}(\sigma)\,\mathrm{d}\mathfrak{B}^{1}(z)\] \[+O\left(\epsilon\right).\]
With integration by parts, it follows directly that \(\int_{-\infty}^{\infty}z\hat{\varphi}_{0}^{\prime\prime}\hat{\varphi}_{0}^{ \prime}\,\,\,\mathrm{d}z=-\frac{1}{2}\int_{-\infty}^{\infty}\left(\hat{ \varphi}_{0}^{\prime}\right)^{2}\,\,\,\mathrm{d}z\). Exploiting this property and the independence of \(\hat{\varphi}_{0}\) of the first argument of \(j\), we obtain
\[\int_{-\frac{\delta}{\epsilon}}^{\frac{\delta}{\epsilon}}\int_{\Phi_{\epsilon z}}\left(\hat{\varphi}_{0}^{\prime}\circ j\,\tilde{H}\left(\tilde{H}^{2}-2\tilde{K}\right)+2z\hat{\varphi}_{0}^{\prime\prime}\circ j\,\tilde{H}|_{\Phi}\left(\tilde{H}|_{\Phi}^{2}-2\tilde{K}|_{\Phi}\right)\right)\hat{\varphi}_{0}^{\prime}\circ j\;\mathrm{d}\mathfrak{H}^{2}(\sigma)\,\mathrm{d}\mathfrak{B}^{1}(z)=\] \[\int_{-\frac{\delta}{\epsilon}}^{\frac{\delta}{\epsilon}}\left(\hat{\varphi}_{0}^{\prime}\right)^{2}\int_{\Phi_{\epsilon z}}\tilde{H}\left(\tilde{H}^{2}-2\tilde{K}\right)\;\mathrm{d}\mathfrak{H}^{2}(\sigma)\,\mathrm{d}\mathfrak{B}^{1}(z)-\int_{-\frac{\delta}{\epsilon}}^{\frac{\delta}{\epsilon}}\left(\hat{\varphi}_{0}^{\prime}\right)^{2}\;\mathrm{d}\mathfrak{B}^{1}(z)\int_{\Phi}\tilde{H}|_{\Phi}\left(\tilde{H}|_{\Phi}^{2}-2\tilde{K}|_{\Phi}\right)\;\mathrm{d}\mathfrak{H}^{2}(\sigma).\]
Thanks to Lemma 3.4, we know that this difference converges to zero as \(\epsilon\searrow 0\). The same reasoning applies for
\[\int_{-\frac{\delta}{\epsilon}}^{\frac{\delta}{\epsilon}}\int_{\Phi_{\epsilon z}}\left(-2\hat{\varphi}_{0}^{\prime}\circ j\left(\tilde{H}^{3}-3\tilde{H}\tilde{K}\right)-4z\hat{\varphi}_{0}^{\prime\prime}\circ j\left(\tilde{H}|_{\Phi}^{3}-3\tilde{H}|_{\Phi}\tilde{K}|_{\Phi}\right)\right)\hat{\varphi}_{0}^{\prime}\circ j\;\mathrm{d}\mathfrak{H}^{2}\;\mathrm{d}\mathfrak{B}^{1}(z).\]
The remaining terms are treated as follows: We recall \(\hat{H}^{\prime}\in O\left(\epsilon\right)\) and \(\hat{H}^{\prime\prime}\in O\left(\epsilon^{2}\right)\) (cf. (41), (42)) and
\[\hat{\mu}_{2}\left[\phi\right]=\hat{\varphi}_{2}^{\prime}\hat{H}-\hat{\varphi}_{ 3}^{\prime\prime}+\hat{\varphi}_{3}W^{\prime\prime}\left(\hat{\varphi}_{0}\right)\]
(cf. (37) with \(\varphi_{1}=0\)). Thus, the following expansion holds:
\[\hat{\mu}_{2}\left[\hat{\varphi}\right]^{\prime\prime}=\hat{\varphi}_{2}^{(3)} \hat{H}-\hat{q}^{\prime\prime}+O\left(\epsilon\right),\]
where
\[\hat{q}=\hat{\varphi}_{3}^{\prime\prime}-\hat{\varphi}_{3}W^{\prime\prime}\left( \hat{\varphi}_{0}\right).\]
This way we see with \(\mu_{0}\left[\varphi\right]=\tilde{H}\hat{\varphi}_{0}^{\prime}\circ\iota_{\epsilon}\) and \(\hat{\mu}_{1}\left[\varphi\right]=-\hat{\varphi}_{2}^{\prime\prime}+W^{\prime\prime}\left(\hat{\varphi}_{0}\right)\hat{\varphi}_{2}\) (cf. (37) with \(\varphi_{1}=0\)) that
\[-\hat{\mu}_{2}\left[\hat{\varphi}\right]^{\prime\prime}\circ j+\mu_{2} \left[\varphi\right]W^{\prime\prime}\left(\varphi_{0}\right)+\mu_{0}\left[ \varphi\right]W^{(3)}\left(\varphi_{0}\right)\varphi_{2} =-\tilde{H}\hat{\varphi}_{2}^{(3)}\circ j+\tilde{H}W^{\prime\prime} \left(\varphi_{0}\right)\hat{\varphi}_{2}^{\prime}\circ j+\mu_{0}\left[\varphi \right]W^{(3)}\left(\varphi_{0}\right)\varphi_{2}\] \[+\hat{q}^{\prime\prime}\circ j-qW^{\prime\prime}\left(\varphi_{0} \right)+O\left(\epsilon\right)\] \[=\tilde{H}\left(-\hat{\varphi}_{2}^{\prime\prime}+W^{\prime\prime }\left(\hat{\varphi}_{0}\right)\hat{\varphi}_{2}\right)^{\prime}\circ j+\hat{q}^{ \prime\prime}\circ j-qW^{\prime\prime}\left(\hat{\varphi}_{0}\right)\circ j+O \left(\epsilon\right)\] \[=\tilde{H}\hat{\mu}_{1}\left[\phi\right]^{\prime}\circ j+\hat{q}^{ \prime\prime}\circ j-qW^{\prime\prime}\left(\hat{\varphi}_{0}\right)\circ j+O \left(\epsilon\right),\]
where we used \(\mu_{0}\left[\varphi\right]W^{\left(3\right)}\left(\varphi_{0}\right)\varphi_{2}= \varphi_{0}{}^{\prime}\circ jW^{\left(3\right)}\left(\varphi_{0}\right)\varphi_{ 2}\tilde{H}=\left(W^{\prime\prime\prime}\left(\hat{\varphi}_{0}\right)\right)^ {\prime}\circ j\varphi_{2}\tilde{H}\). We therefore obtain,
\[\int_{-\frac{\delta}{\epsilon}}^{\frac{\delta}{\epsilon}}\int_{\Phi_{\epsilon z}}\left(-\hat{\varphi}_{0}^{\prime}\circ j\,\Delta_{\Phi_{\epsilon z}}\tilde{H}+\tilde{H}\hat{\mu}_{1}\left[\varphi\right]^{\prime}\circ j-\hat{\mu}_{2}\left[\varphi\right]^{\prime\prime}\circ j\right.\] \[\left.+\mu_{2}\left[\varphi\right]W^{\prime\prime}\left(\varphi_{0}\right)+\mu_{0}\left[\varphi\right]W^{\left(3\right)}\left(\varphi_{0}\right)\varphi_{2}\right)\hat{\varphi}_{0}^{\prime}\circ j\,\nu_{\varphi}\cdot w\,\mathrm{d}\mathfrak{H}^{2}\,\mathrm{d}\mathfrak{B}^{1}(z)=\] \[\int_{-\frac{\delta}{\epsilon}}^{\frac{\delta}{\epsilon}}\int_{\Phi_{\epsilon z}}\left(-\hat{\varphi}_{0}^{\prime}\circ j\,\Delta_{\Phi_{\epsilon z}}\tilde{H}+2\tilde{H}\hat{\mu}_{1}\left[\varphi\right]^{\prime}\circ j+\hat{q}^{\prime\prime}\circ j-\hat{q}\circ j\,W^{\prime\prime}\left(\hat{\varphi}_{0}\right)\circ j\right)\hat{\varphi}_{0}^{\prime}\circ j\,\nu_{\varphi}\cdot w\,\mathrm{d}\mathfrak{H}^{2}(\sigma)\,\mathrm{d}\mathfrak{B}^{1}(z)+O\left(\varepsilon\right).\]
To treat \(\int_{-\frac{\delta}{\epsilon}}^{\frac{\delta}{\epsilon}}\hat{\varphi}_{0}^{\prime}\int_{\Phi_{\epsilon z}}\hat{q}^{\prime\prime}\circ j\,\nu_{\varphi}\cdot w\,\mathrm{d}\mathfrak{H}^{2}(\sigma)\,\mathrm{d}\mathfrak{H}^{1}(z)\), we take a global parametrisation \(\gamma:\,\mathbb{R}^{2}\rightarrow\Phi\) (this is w.l.o.g.: if no global parametrisation exists, we carry out the following calculations locally and patch the integrals together afterwards), and define \(\gamma_{\epsilon z}(s)=\gamma(s)+\epsilon z\nu_{\Phi}\left(\gamma(s)\right)\). With the area formula, we obtain
\[\int_{-\frac{\delta}{\epsilon}}^{\frac{\delta}{\epsilon}}\hat{\varphi}_{0}^{\prime}(z)\int_{\Phi_{\epsilon z}}\hat{q}^{\prime\prime}\circ j\,\nu_{\varphi}\cdot w\,\mathrm{d}\mathfrak{H}^{2}(\sigma)\,\mathrm{d}\mathfrak{H}^{1}(z)=\] \[\int_{-\frac{\delta}{\epsilon}}^{\frac{\delta}{\epsilon}}\hat{\varphi}_{0}^{\prime}(z)\int_{\mathbb{R}^{2}}\hat{q}^{\prime\prime}(\gamma(s),z)\,\nu_{\varphi}\circ\gamma_{\epsilon z}\cdot w\circ\gamma_{\epsilon z}\,\boldsymbol{J}\left[\gamma_{\epsilon z}\right]\,\mathrm{d}\mathfrak{H}^{2}(s)\,\mathrm{d}\mathfrak{H}^{1}(z).\]
Note that \(j(\gamma_{ez}(s),z)=(\gamma(s),z)\). We further integrate by parts
\[\int_{-\frac{\delta}{\epsilon}}^{\frac{\delta}{\epsilon}}\hat{\varphi}_{0}^{\prime}(z)\int_{\mathbb{R}^{2}}\hat{q}^{\prime\prime}(\gamma(s),z)\boldsymbol{J}\left[\gamma_{\epsilon z}\right]\nu_{\varphi}\circ\gamma_{\epsilon z}\cdot w\circ\gamma_{\epsilon z}\,\mathrm{d}\mathfrak{H}^{2}(s)\,\mathrm{d}\mathfrak{H}^{1}(z)=\] \[-\int_{-\frac{\delta}{\epsilon}}^{\frac{\delta}{\epsilon}}\hat{\varphi}_{0}^{\prime\prime}(z)\int_{\mathbb{R}^{2}}\hat{q}^{\prime}(\gamma(s),z)\boldsymbol{J}\left[\gamma_{\epsilon z}\right]\nu_{\varphi}\circ\gamma_{\epsilon z}\cdot w\circ\gamma_{\epsilon z}\,\mathrm{d}\mathfrak{H}^{2}(s)\,\mathrm{d}\mathfrak{H}^{1}(z)\] \[+\int_{\mathbb{R}^{2}}\left[\hat{q}^{\prime}(\gamma(s),\cdot)\,\hat{\varphi}_{0}^{\prime}\,\boldsymbol{J}\left[\gamma_{\epsilon z}\right]\nu_{\varphi}\circ\gamma_{\epsilon z}\cdot w\circ\gamma_{\epsilon z}\right]_{-\delta/\varepsilon}^{\delta/\varepsilon}\,\mathrm{d}\mathfrak{H}^{2}(s)\] \[-\int_{-\frac{\delta}{\epsilon}}^{\frac{\delta}{\epsilon}}\hat{\varphi}_{0}^{\prime}(z)\int_{\mathbb{R}^{2}}\hat{q}^{\prime}(\gamma(s),z)\boldsymbol{J}\left[\gamma_{\epsilon z}\right]\left(\nu_{\varphi}\circ\gamma_{\epsilon z}\cdot w\circ\gamma_{\epsilon z}\right)^{\prime}\mathrm{d}\mathfrak{H}^{2}(s)\,\mathrm{d}\mathfrak{H}^{1}(z)+O\left(\varepsilon\right).\]
We observe
\[\left(\nu_{\varphi}\circ\gamma_{\epsilon z}\cdot w\circ\gamma_{\epsilon z}\right)^{\prime} =\epsilon\,\nabla\nu_{\varphi}^{T}\circ\gamma_{\epsilon z}\,\nu_{\Phi}\circ\gamma\cdot w\circ\gamma_{\epsilon z}+\epsilon\,\nu_{\varphi}\circ\gamma_{\epsilon z}\cdot\nabla w^{T}\circ\gamma_{\epsilon z}\,\nu_{\Phi}\circ\gamma\] \[=\epsilon\left(\frac{1}{\left|\nabla\varphi\right|}\nabla^{2}\varphi\,\mathbb{P}_{\nu_{\varphi}}\right)\circ\gamma_{\epsilon z}\,\nu_{\Phi}\circ\gamma\cdot w\circ\gamma_{\epsilon z}+O\left(\epsilon\right),\]
and we have \(\mathbb{P}_{\nu_{\varphi}}\circ\gamma_{\epsilon z}\,\nu_{\Phi}\circ\gamma\in O\left(\epsilon^{2}\right)\) (see (55)), so
\[\int_{-\frac{\delta}{\epsilon}}^{\frac{\delta}{\epsilon}}\hat{\varphi}_{0}^{\prime}\int_{\mathbb{R}^{2}}\hat{q}^{\prime}\,\boldsymbol{J}\left[\gamma_{\epsilon z}\right]\left(\nu_{\varphi}\circ\gamma_{\epsilon z}\cdot w\circ\gamma_{\epsilon z}\right)^{\prime}\,\mathrm{d}\mathfrak{H}^{2}(s)\,\mathrm{d}\mathfrak{H}^{1}(z)\in O\left(\varepsilon\right).\]
With Jacobi's formula for derivatives of determinants, it can be seen that the derivative of the Jacobian w.r.t \(z\) is also in \(O\left(\varepsilon\right)\). Integrating by parts one more time leads us therefore to
\[\int_{-\frac{\delta}{\epsilon}}^{\frac{\delta}{\epsilon}}\hat{\varphi}_{0}^{\prime}(z)\int_{\mathbb{R}^{2}}\hat{q}^{\prime\prime}(\gamma(s),z)\boldsymbol{J}\left[\gamma_{\epsilon z}\right]\nu_{\varphi}\circ\gamma_{\epsilon z}\cdot w\circ\gamma_{\epsilon z}\,\mathrm{d}\mathfrak{H}^{2}(s)\,\mathrm{d}\mathfrak{H}^{1}(z)=\] \[\int_{-\frac{\delta}{\epsilon}}^{\frac{\delta}{\epsilon}}\hat{\varphi}_{0}^{(3)}(z)\int_{\mathbb{R}^{2}}\hat{q}(\gamma(s),z)\nu_{\varphi}\circ\gamma_{\epsilon z}\cdot w\circ\gamma_{\epsilon z}\boldsymbol{J}\left[\gamma_{\epsilon z}\right]\,\mathrm{d}\mathfrak{H}^{2}(s)\,\mathrm{d}\mathfrak{H}^{1}(z)\] \[+\int_{\mathbb{R}^{2}}\left[\hat{q}^{\prime}(\gamma(s),\cdot)\hat{\varphi}_{0}^{\prime}\,\nu_{\varphi}\circ\gamma_{\epsilon z}\cdot w\circ\gamma_{\epsilon z}\boldsymbol{J}\left[\gamma_{\epsilon z}\right]\right]_{-\delta/\varepsilon}^{\delta/\varepsilon}\,\mathrm{d}\mathfrak{H}^{2}(s)\] \[-\int_{\mathbb{R}^{2}}\left[\hat{q}(\gamma(s),\cdot)\hat{\varphi}_{0}^{\prime\prime}\,\nu_{\varphi}\circ\gamma_{\epsilon z}\cdot w\circ\gamma_{\epsilon z}\boldsymbol{J}\left[\gamma_{\epsilon z}\right]\right]_{-\delta/\varepsilon}^{\delta/\varepsilon}\,\mathrm{d}\mathfrak{H}^{2}(s)+O\left(\varepsilon\right).\]
The last integrals vanish for \(\epsilon\searrow 0\) since \(\hat{\phi}^{\prime}_{0}\) and \(\hat{\phi}^{\prime\prime}_{0}\) vanish for \(z\to\pm\infty\) (see (21), (22)). Hence,
\[\int_{-\frac{\delta}{\epsilon}}^{\frac{\delta}{\epsilon}}\int_{\Phi_{\epsilon z}}\left(\hat{q}^{\prime\prime}-\hat{q}W^{\prime\prime}\left(\hat{\varphi}_{0}\right)\right)\circ j\,\hat{\varphi}_{0}^{\prime}\,\nu_{\varphi}\cdot w\;\mathrm{d}\mathfrak{H}^{2}(\sigma)\;\mathrm{d}\mathfrak{B}^{1}(z)\stackrel{{\epsilon\searrow 0}}{{\to}}\int_{-\infty}^{\infty}\hat{\varphi}_{0}^{(3)}-\hat{\varphi}_{0}^{\prime}W^{\prime\prime}\left(\hat{\varphi}_{0}\right)\;\mathrm{d}\mathfrak{B}^{1}(z)\int_{\Phi}q\,\nu_{\varphi}\cdot w\;\mathrm{d}\mathfrak{H}^{2}(s).\]
Note that \(\hat{\varphi}_{0}^{(3)}-\hat{\varphi}_{0}^{\prime}W^{\prime\prime}\left(\hat{\varphi}_{0}\right)=0\), which follows from differentiating the optimal profile equation \(\hat{\varphi}_{0}^{\prime\prime}=W^{\prime}\left(\hat{\varphi}_{0}\right)\) once, so the whole integral vanishes in the limit.
The last term we need to investigate is \(2\hat{H}\hat{\mu}_{1}\left[\varphi\right]^{\prime}\), and we already know (see (48))
\[2\hat{H}\hat{\mu}_{1}\left[\varphi\right]^{\prime}=-\hat{H}\left(\hat{H}|_{\Phi}^{2}-4\hat{K}|_{\Phi}\right)\left(\hat{\varphi}_{0}^{\prime}z\right)^{\prime}=-\hat{H}\left(\hat{H}|_{\Phi}^{2}-4\hat{K}|_{\Phi}\right)\left(\hat{\varphi}_{0}^{\prime\prime}z+\hat{\varphi}_{0}^{\prime}\right).\]
Now we observe
\[\int_{-\frac{\delta}{\epsilon}}^{\frac{\delta}{\epsilon}}\int_{\Phi_{\epsilon z}}-\hat{H}\left(\hat{H}|_{\Phi}^{2}-4\hat{K}|_{\Phi}\right)\left(\hat{\varphi}_{0}^{\prime\prime}\circ j\,z+\hat{\varphi}_{0}^{\prime}\circ j\right)\hat{\varphi}_{0}^{\prime}\circ j\,\nu_{\varphi}\cdot w\;\mathrm{d}\mathfrak{H}^{2}(\sigma)\;\mathrm{d}\mathfrak{B}^{1}(z)\stackrel{{\epsilon\searrow 0}}{{\to}}\] \[-\int_{-\infty}^{\infty}\hat{\varphi}_{0}^{\prime\prime}\hat{\varphi}_{0}^{\prime}z+\left(\hat{\varphi}_{0}^{\prime}\right)^{2}\;\mathrm{d}\mathfrak{B}^{1}(z)\int_{\Phi}\hat{H}\left(\hat{H}|_{\Phi}^{2}-4\hat{K}|_{\Phi}\right)\nu_{\varphi}\cdot w\;\mathrm{d}\mathfrak{H}^{2}(\sigma),\]
and finally use \(\int_{-\infty}^{\infty}\hat{\varphi}_{0}^{\prime\prime}\hat{\varphi}_{0}^{\prime}z+\left(\hat{\varphi}_{0}^{\prime}\right)^{2}\;\mathrm{d}\mathfrak{B}^{1}(z)=\frac{1}{2}\int_{-\infty}^{\infty}\left(\hat{\varphi}_{0}^{\prime}\right)^{2}\;\mathrm{d}\mathfrak{B}^{1}(z)\).
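The last identity is the same integration by parts as before: since \(\hat{\varphi}_{0}^{\prime\prime}\hat{\varphi}_{0}^{\prime}=\frac{1}{2}\left(\left(\hat{\varphi}_{0}^{\prime}\right)^{2}\right)^{\prime}\) and \(\hat{\varphi}_{0}^{\prime}\) decays at \(\pm\infty\),
\[\int_{-\infty}^{\infty}\hat{\varphi}_{0}^{\prime\prime}\hat{\varphi}_{0}^{\prime}z\;\mathrm{d}\mathfrak{B}^{1}(z)=-\frac{1}{2}\int_{-\infty}^{\infty}\left(\hat{\varphi}_{0}^{\prime}\right)^{2}\mathrm{d}\mathfrak{B}^{1}(z),\]
so, recalling \(\int_{-\infty}^{\infty}\left(\hat{\varphi}_{0}^{\prime}\right)^{2}\mathrm{d}\mathfrak{B}^{1}(z)=Z\), the prefactor evaluates to \(-\frac{1}{2}Z+Z=\frac{1}{2}Z\). This completes the proof of Lemma 4.2.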
#### 4.2.4 Force terms of \(\nabla_{\varphi}^{L^{2}}\mathcal{G}\)
A small calculation reveals
\[\nabla_{\varphi}^{L^{2}}\mathcal{G}[\varphi]=\gamma\mu\left[\varphi\right],\]
and from (36) we have with \(\hat{\phi}^{\prime\prime}_{0}\circ\iota_{\epsilon}-W^{\prime}\left(\varphi_{ 0}\right)=0\) and (43)
\[\mu\left[\varphi\right]=\hat{\varphi}_{0}^{\prime}\circ\iota_{\epsilon}\,\tilde{H}+O\left(\epsilon\right).\]
So convergence under the integral follows by Lemma 3.4:
\[\gamma\int_{N_{\delta}(\Phi)}\left(\hat{\varphi}_{0}^{\prime}\right)^{2}\circ\iota_{\epsilon}\,\tilde{H}\,\nu_{\varphi}\cdot w\;\mathrm{d}\mathfrak{B}^{3}+O\left(\epsilon\right)\stackrel{{\epsilon\searrow 0}}{{\to}}\gamma Z\int_{\Phi}H_{\Phi}\,\nu_{\Phi}\cdot w\;\mathrm{d}\mathfrak{H}^{2}.\]
#### 4.2.5 Coupling energy force terms
We now come to the limit of the terms
\[\left(\nabla_{\phi}^{L^{2}}C,\nabla\phi\cdot w\right)_{L^{2}\left(N_{\delta}(\Gamma)\right)},\qquad\left(\nabla_{\psi}^{L^{2}}C,\nabla\psi\cdot w\right)_{L^{2}\left(N_{\delta}(\Sigma)\right)},\] \[\tilde{G}_{\epsilon}:=-\left(\partial_{\rho_{a}}C_{\phi}H_{\psi}\rho_{a},\,w\cdot\nu_{\psi}\right)_{L^{2}\left(N_{\delta}(\Sigma)\right)},\text{ and}\] \[\tilde{H}_{\epsilon}:=-\left(g_{\epsilon}\left[\psi\right]\mathbb{P}_{\nu_{\psi}}\nabla\partial_{\rho_{a}}C_{\phi}\rho_{a},\,w\right)_{\left[L^{2}\left(N_{\delta}(\Sigma)\right)\right]^{3}}\]
in (62).
Limit of \(\left(\nabla_{\phi}^{L^{2}}C,\nabla\phi\cdot w\right)_{L^{2}\left(N_{\delta}(\Gamma)\right)}\)
Recall (52) with the term abbreviations introduced therein. Then
\[\left(\nabla_{\phi}^{L^{2}}C,\nabla\phi\cdot w\right)_{L^{2}\left(N_{\delta}(\Gamma)\right)}=\left(\left|\nabla\phi\right|\left(A_{\epsilon}+B_{\epsilon}\right),\nu_{\phi}\cdot w\right)_{L^{2}\left(N_{\delta}(\Gamma)\right)}=:\tilde{A}_{\epsilon}+\tilde{B}_{\epsilon}. \tag{63}\]
We further define
\[\bar{A}_{\epsilon}=\int_{N_{\delta}(\Sigma)}\epsilon^{-1}\left(\frac{1}{2}\left(\hat{\psi}_{0}^{\prime}(\iota_{\epsilon}\left(x\right))\right)^{2}+W\left(\psi_{0}\left(x\right)\right)\right)c\left(x,\cdot,\rho_{a0},\nu_{\psi_{0}}\right)+O\left(1\right)\;\mathrm{d}\mathfrak{B}^{3}(x),\]
so that
\[A_{\epsilon}=\hat{\phi}_{0}^{\prime}\circ\iota_{\epsilon}\,\bar{H}\,\bar{A}_{\epsilon}. \tag{64}\]
Due to \(\psi_{0}\) being the optimal profile, it holds \(W\left(\psi_{0}\right)=\left(\hat{\psi}_{0}^{\prime}\right)^{2}\circ\iota_{\epsilon}\), and applying Lemma 3.4, we have as \(\epsilon\searrow 0\),
\[\bar{A}_{\epsilon} =\int_{N_{\delta}(\Sigma)}\epsilon^{-1}\frac{3}{2}\left(\hat{\psi}_{0}^{\prime}(\iota_{\epsilon}\left(x\right))\right)^{2}c\left(x,\cdot,\rho_{a0},\nu_{\psi_{0}}\right)+O\left(1\right)\ \mathrm{d}\mathfrak{B}^{3}(x)\] \[\to\frac{3Z}{2}\int_{\Sigma}c\left(x,\cdot,\rho_{a0},\nu_{\Sigma}\right)\ \mathrm{d}\mathfrak{S}^{2}(x)=:\bar{A}_{0}.\]
In order to apply Lemma 3.4 to \(\bar{A}_{\epsilon}\), we shall show that for \(y_{\epsilon}\) converging to \(y\), \(\bar{A}_{\epsilon}(y_{\epsilon})\) converges. To this purpose, we write \(\bar{A}_{\epsilon}(y)=F_{\epsilon}[c(\cdot,y,\rho_{a0},\nu_{\psi_{0}})]\) for all \(y\in\Omega\), where \(F_{\epsilon}:\ L^{2}\left(\Omega\right)\to\mathbb{R}\) is linear and continuous. \(\bar{A}_{\epsilon}(y_{\epsilon})\) converging is then equivalent to \(F_{\epsilon}[c_{\epsilon}]=\bar{A}_{\epsilon}(y_{\epsilon})\) converging for \(c_{\epsilon}=c(\cdot,y_{\epsilon},\rho_{a0},\nu_{\psi_{0}})\). We calculate
\[\left|F_{\epsilon}[c_{\epsilon}]-F_{0}[c_{0}]\right|\leq\left|F_{\epsilon}[c_ {\epsilon}-c_{0}]\right|+\left|F_{\epsilon}[c_{0}]-F_{0}[c_{0}]\right|. \tag{65}\]
The previous calculations directly show \(\left|F_{\epsilon}[c_{0}]-F_{0}[c_{0}]\right|\to 0\). For the other summand, it holds \(\left|F_{\epsilon}[c_{\epsilon}-c_{0}]\right|\leq\left\|F_{\epsilon}\right\|\left\|c_{\epsilon}-c_{0}\right\|_{L^{2}\left(\Omega\right)}\). Since \(F_{\epsilon}[f]\) converges for every \(f\in L^{2}\left(\Omega\right)\), the Banach–Steinhaus theorem implies \(\sup_{\epsilon}\left\|F_{\epsilon}\right\|<\infty\). Moreover, \(\left\|c_{\epsilon}-c_{0}\right\|_{L^{2}(\Omega)}\) converges to zero (\(c\) is continuous in \(y\)), and so the left hand side of (65) converges to zero. Then, as \(\epsilon\searrow 0\),
\[\tilde{A}_{\epsilon} =\int_{N_{\delta}(\Gamma)}\left|\nabla_{y}\phi\left(y\right)\right|A_{\epsilon}(y)\,\nu_{\phi}\left(y\right)\cdot w\left(y\right)\ \mathrm{d}\mathfrak{B}^{3}(y)\] \[=\epsilon^{-1}\int_{N_{\delta}(\Gamma)}\left(\hat{\phi}_{0}^{\prime}(\iota_{\epsilon}\left(y\right))\right)^{2}\bar{H}\left(y\right)\bar{A}_{\epsilon}(y)\,\nu_{\phi}(y)\cdot w\left(y\right)\ \mathrm{d}\mathfrak{B}^{3}(y)+O\left(\epsilon\right)\] \[\to Z\int_{\Gamma}H_{\Gamma}\left(y\right)\bar{A}_{0}(y)\,\nu_{\Gamma}\left(y\right)\cdot w\left(y\right)\ \mathrm{d}\mathfrak{S}^{2}(y)\] \[=\frac{3Z^{2}}{2}\int_{\Gamma}H_{\Gamma}\left(y\right)\int_{\Sigma}c\left(x,y,\rho_{a0},\nu_{\Sigma}\right)\ \mathrm{d}\mathfrak{S}^{2}(x)\,\nu_{\Gamma}\left(y\right)\cdot w\left(y\right)\ \mathrm{d}\mathfrak{S}^{2}(y).\]
Concerning \(\tilde{B}_{\epsilon}\) one argues analogously, as \(\epsilon\searrow 0\),
\[\tilde{B}_{\epsilon} =\int_{N_{\delta}(\Gamma)}\left|\nabla_{y}\phi\left(y\right)\right|B_{\epsilon}(y)\,\nu_{\phi}\left(y\right)\cdot w\left(y\right)\ \mathrm{d}\mathfrak{B}^{3}(y)\] \[=-\epsilon^{-1}\int_{N_{\delta}(\Gamma)}\left(\hat{\phi}_{0}^{\prime}\circ\iota_{\epsilon}\right)^{2}\int_{N_{\delta}(\Sigma)}\epsilon^{-1}\frac{3}{2}\left(\hat{\psi}_{0}^{\prime}(\iota_{\epsilon}\left(x\right))\right)^{2}\bar{v}\left(y\right)\cdot\nabla_{y}c\left(x,y,\rho_{a0},\nu_{\psi_{0}}\right)+O\left(1\right)\ \mathrm{d}\mathfrak{B}^{3}(x)\,\nu_{\phi}\cdot w\ \mathrm{d}\mathfrak{B}^{3}(y)\] \[\to-\frac{3Z^{2}}{2}\int_{\Gamma}\nu_{\Gamma}\cdot\int_{\Sigma}\nabla_{y}c\left(x,y,\rho_{a0},\nu_{\Sigma}\right)\ \mathrm{d}\mathfrak{S}^{2}(x)\,\nu_{\Gamma}\cdot w\ \mathrm{d}\mathfrak{S}^{2}(y).\]
Limit of \(\left(\nabla_{\psi}^{L^{2}}C,\nabla\psi\cdot w\right)_{L^{2}\left(N_{\delta}(\Sigma)\right)}\)
We recall (54) and write
\[\left(\nabla_{\psi}^{L^{2}}C,\nabla\psi\cdot w\right)_{L^{2}\left(N_{\delta}(\Sigma)\right)}=\left(\left|\nabla\psi\right|\left(C_{\epsilon}+D_{\epsilon}+E_{\epsilon}\right),\nu_{\psi}\cdot w\right)_{L^{2}\left(N_{\delta}(\Sigma)\right)}+O\left(\epsilon\right)=\tilde{C}_{\epsilon}+\tilde{D}_{\epsilon}+\tilde{E}_{\epsilon}+O\left(\epsilon\right). \tag{66}\]
The term \(\tilde{C}_{\epsilon}\) is treated just like \(\tilde{A}_{\epsilon}\), so we obtain in the limit \(\epsilon\searrow 0\)
\[\tilde{C}_{\epsilon} =\int_{N_{\delta}(\Sigma)}\left(-\epsilon\Delta\psi+\epsilon^{-1}W^{\prime}\left(\psi\right)\right)\int_{N_{\delta}(\Gamma)}g_{\epsilon}\left[\phi\right](y)c\left(x,y,\rho_{a},\nu_{\psi}\right)\ \mathrm{d}\mathfrak{B}^{3}(y)\,\nu_{\psi}\left(x\right)\cdot w\left(x\right)\ \mathrm{d}\mathfrak{B}^{3}(x)\] \[\to\frac{3Z^{2}}{2}\int_{\Sigma}H_{\Sigma}\left(x\right)\int_{\Gamma}c\left(x,y,\rho_{a0},\nu_{\Sigma}\right)\ \mathrm{d}\mathfrak{S}^{2}(y)\,\nu_{\Sigma}\left(x\right)\cdot w\left(x\right)\ \mathrm{d}\mathfrak{S}^{2}(x).\]
Using the expansion of \(D_{\epsilon}\), we compute further
\[\begin{split}\tilde{D}_{\epsilon} &=-\int_{N_{\delta}(\Sigma)}\left|\nabla\psi\right|\epsilon\nabla\psi\cdot\int_{N_{\delta}(\Gamma)}g_{\epsilon}\left[\phi\right](y)\nabla_{x}\left(c\left(x,y,\rho_{a},\nu_{\psi}\right)\right)\ \mathrm{d}\mathfrak{B}^{3}(y)\,\nu_{\psi}\cdot w\ \mathrm{d}\mathfrak{B}^{3}(x)\\ &=-\int_{N_{\delta}(\Sigma)}\left|\nabla\psi\right|\hat{\psi}_{0}^{\prime}\circ\iota_{\epsilon}\,\bar{v}\cdot\int_{N_{\delta}(\Gamma)}g_{\epsilon}\left[\phi\right](y)\left(\nabla_{x}c\left(x,y,\rho_{a},\nu_{\psi}\right)+\partial_{\rho_{a}}c\left(x,y,\rho_{a},\nu_{\psi}\right)\nabla_{x}\rho_{a0}\right)\mathrm{d}\mathfrak{B}^{3}(y)\,\nu_{\psi}\cdot w\ \mathrm{d}\mathfrak{B}^{3}(x)+O\left(\epsilon\right)\\ &=-\int_{N_{\delta}(\Sigma)}\epsilon^{-1}\left(\hat{\psi}_{0}^{\prime}\right)^{2}\circ\iota_{\epsilon}\,\bar{v}\cdot\int_{N_{\delta}(\Gamma)}\epsilon^{-1}\frac{3}{2}\left(\hat{\phi}_{0}^{\prime}(\iota_{\epsilon}\left(y\right))\right)^{2}\nabla_{x}c\left(x,y,\rho_{a0},\nu_{\psi_{0}}\right)\ \mathrm{d}\mathfrak{B}^{3}(y)\,\nu_{\psi}\cdot w\ \mathrm{d}\mathfrak{B}^{3}(x)+O\left(\epsilon\right),\end{split}\]
where in the last step \(\nabla_{x}c\) denotes the total derivative of \(x\mapsto c(x,y,\rho_{a0}(x),\nu_{\psi_{0}})\), absorbing the \(\partial_{\rho_{a}}c\,\nabla_{x}\rho_{a0}\) contribution. Hence, as \(\epsilon\searrow 0\),
\[\tilde{D}_{\epsilon}\to-\frac{3Z^{2}}{2}\int_{\Sigma}\nu_{\Sigma}\cdot\int_{\Gamma}\nabla_{x}c\left(x,y,\rho_{a0},\nu_{\Sigma}\right)\ \mathrm{d}\mathfrak{S}^{2}(y)\,\nu_{\Sigma}\cdot w\ \mathrm{d}\mathfrak{S}^{2}(x).\]
The terms \(\tilde{E}_{\epsilon}\), \(\tilde{G}_{\epsilon}\), and \(\tilde{H}_{\epsilon}\) pass to the limit in exactly the same manner, using the expansions of \(E_{\epsilon}\), \(G_{\epsilon}\), and \(H_{\epsilon}\) derived above together with Lemma 3.4. To compare the resulting limits with the force terms appearing in (4f) and (4g), we
start by labelling them (here in variational form):
\[\tilde{I}_{0} =\int_{\Gamma}\left(-\nabla_{y}C_{\Sigma}^{0}\cdot\nu_{\Gamma}+H_{\Gamma}C_{\Sigma}^{0}\right)w\cdot\nu_{\Gamma}\;\mathrm{d}\mathfrak{H}^{2},\] \[\tilde{J}_{0} =\int_{\Sigma}\left(-\nabla_{x}C_{\Gamma}^{0}\cdot\nu_{\Sigma}+H_{\Sigma}C_{\Gamma}^{0}\right)w\cdot\nu_{\Sigma}\;\mathrm{d}\mathfrak{H}^{2},\] \[\tilde{K}_{0} =-\int_{\Sigma}\partial_{\rho_{a}}C_{\Gamma}^{0}H_{\Sigma}\rho_{a}\,w\cdot\nu_{\Sigma}\;\mathrm{d}\mathfrak{H}^{2},\] \[\tilde{L}_{0} =-\int_{\Sigma}\rho_{a}\nabla_{\Sigma}\left(\partial_{\rho_{a}}C_{\Gamma}^{0}\right)\cdot w\;\mathrm{d}\mathfrak{H}^{2},\] \[\tilde{M}_{0} =-\int_{\Sigma}\left(\nabla_{\Sigma}\cdot\left(\nabla_{v}C_{\Gamma}^{0}\right)+H_{\Sigma}\left(\nabla_{v}C_{\Gamma}^{0}\cdot\nu_{\Sigma}\right)\right)w\cdot\nu_{\Sigma}\;\mathrm{d}\mathfrak{H}^{2}.\]
We see immediately that \(\tilde{B}_{0}+\tilde{A}_{0}=\frac{3Z^{2}}{2}\tilde{I}_{0}\). Further,
\[\tilde{D}_{0}+\tilde{C}_{0}=-\frac{3Z^{2}}{2}\int_{\Sigma}\int_{\Gamma}\nu_{\Sigma}\cdot\nabla_{x}c\ \mathrm{d}\mathfrak{H}^{2}\,\nu_{\Sigma}\cdot w\ \mathrm{d}\mathfrak{H}^{2}+\frac{3Z^{2}}{2}\int_{\Sigma}H_{\Sigma}\int_{\Gamma}c\ \mathrm{d}\mathfrak{H}^{2}\,\nu_{\Sigma}\cdot w\ \mathrm{d}\mathfrak{H}^{2}=\frac{3Z^{2}}{2}\tilde{J}_{0}.\]
Clearly, \(\tilde{G}_{0}=\frac{3Z^{2}}{2}\tilde{K}_{0}\), \(\tilde{H}_{0}=\left(\frac{3Z}{2}\right)^{2}\tilde{L}_{0}\), and \(\tilde{E}_{0}=\left(\frac{3Z}{2}\right)^{2}\tilde{M}_{0}\).
Equating (61) on the left and the limit of Lemma 4.2, as well as the terms for the Ginzburg-Landau energy gradient and \(\tilde{A}_{0}+\tilde{B}_{0}+\tilde{C}_{0}+\tilde{D}_{0}+\tilde{E}_{0}+\tilde{G }_{0}+\tilde{H}_{0}\) on the right, we find (4f) and (4g).
### Phase field evolution equations
The equations (1c) and (1d) are just the tautologies \(0=0\) in the outer region, so we are only concerned with them close to the boundary layers. On their left hand sides, we find at leading order \(\varepsilon^{-1}\)
\[-V_{\nu}^{\Phi}\hat{\varphi}_{0}^{\prime}\circ\iota_{\varepsilon}+v_{0}\cdot\bar{v}\,\hat{\varphi}_{0}^{\prime}\circ\iota_{\varepsilon} \tag{67}\]
for \(\varphi\in\{\phi,\psi\}\) with corresponding boundary layer \(\Phi\in\{\Gamma,\Sigma\}\).
From the energy inequality, the following bound holds,
\[\varepsilon^{\alpha}\int_{0}^{T}\int_{N_{\varepsilon}(\Phi)}\left|\nabla\nabla _{\varphi}^{L^{2}}\mathcal{F}\right|^{2}\;\mathrm{d}\mathfrak{H}^{3}\; \mathrm{d}t\in O\left(1\right). \tag{68}\]
We have found in Section 3.8 that \(\nabla_{\varphi}^{L^{2}}\mathcal{F}\left[\varphi\right]\in O\left(1\right)\), so
\[\nabla\nabla_{\varphi}^{L^{2}}\mathcal{F}=\sum_{i=-1}^{2}\varepsilon^{i} \hat{f}_{i}\circ\iota_{\varepsilon}+O\left(\varepsilon^{3}\right).\]
Thus
\[\left|\widehat{\nabla\nabla_{\varphi}^{L^{2}}\mathcal{F}}\right|^{2}=\varepsilon^{-2}\hat{f}_{-1}^{2}+2\varepsilon^{-1}\hat{f}_{-1}\hat{f}_{0}+\left(\hat{f}_{0}^{2}+2\hat{f}_{-1}\hat{f}_{1}\right)+\varepsilon\left(2\hat{f}_{-1}\hat{f}_{2}+2\hat{f}_{0}\hat{f}_{1}\right)+\varepsilon^{2}\left(\hat{f}_{1}^{2}+2\hat{f}_{-1}\hat{f}_{3}+2\hat{f}_{0}\hat{f}_{2}\right)+O\left(\varepsilon^{3}\right).\]
Equation (68) directly implies
\[\int_{0}^{T}\int_{-\frac{\varepsilon}{\varepsilon}}^{\frac{\varepsilon}{ \varepsilon}}\int_{\Phi_{\varepsilon,z}}\left|\widehat{\nabla\nabla_{\varphi}^{L ^{2}}\mathcal{F}}\right|^{2}\left(\pi_{\Phi}\left(\sigma\right),z\right)\mathrm{ d}\mathfrak{H}^{2}(\sigma)\;\mathrm{d}z\;\mathrm{d}t\in O\left(\varepsilon^{- \alpha-1}\right).\]
Since \(\alpha<1\) (required in Section 3.8), \(\hat{f}_{-1}=0\), so \(\widehat{\nabla\nabla_{\varphi}^{L^{2}}\mathcal{F}}\in O\left(1\right)\). Further, \(\nabla\cdot\left(\varepsilon^{\alpha}\nabla\left(\nabla_{\varphi}^{L^{2}}\mathcal{F}\right)\right)\in O\left(\varepsilon^{-1+\alpha}\right)\), which for \(\alpha>0\) is of higher order in \(\varepsilon\) than the left hand side (67) and therefore negligible, so we obtain from the phase field evolution at leading order
\[v_{0}\cdot\bar{v}=V_{\nu}^{\Phi} \tag{69}\]
meaning that the interface \(\Phi\) is driven purely by the fluid velocity in the normal direction; this is equivalent to the Hamilton–Jacobi equations (4h) and (4i).
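In level set terms: if \(\Phi(t)\) is represented as the zero set of a function \(\vartheta\) with \(\nabla\vartheta=\left|\nabla\vartheta\right|\nu_{\Phi}\) there (an assumption on the representation, not on the model), then (69) is equivalent to pure transport,
\[\partial_{t}\vartheta+v_{0}\cdot\nabla\vartheta=0\quad\text{on }\Phi(t).\]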
### Species subsystem
Like the phase field equations, the species subsystem (1i), (1j) carries no information in the outer region. For reasons of symmetry, it suffices to conduct the asymptotic analysis for the equation of \(\rho_{a}\): it carries over to \(\rho_{i}\) verbatim.
We start our analysis by expanding the term \(\nabla\cdot\left(g_{\varepsilon}\left[\psi\right]\eta_{a}\nabla\rho_{a}\right)\). W.l.o.g., we assume that \(\hat{\eta_{a}}^{\prime}=0\). First we derive from (7) and using (30)
\[\eta_{a}\nabla\rho_{a} =\epsilon^{-1}\eta_{a}\hat{\rho}_{a}^{\prime}\circ\iota_{ \varepsilon}\tilde{v}+\eta_{a}\nabla_{\Sigma_{\varepsilon}}\rho_{a0}+\eta_{a} \hat{\rho}_{a1}^{\prime}\circ\iota_{\varepsilon}\tilde{v}+\epsilon\eta_{a} \nabla_{\Sigma_{\varepsilon}}\rho_{a1}+O\left(\epsilon^{2}\right)\] \[=\eta_{a}\nabla_{\Sigma_{\varepsilon}}\rho_{a0}+\eta_{a}\hat{ \rho}_{a1}^{\prime}\circ\iota_{\varepsilon}\tilde{v}+\epsilon\eta_{a}\nabla_{ \Sigma_{\varepsilon}}\rho_{a1}+O\left(\epsilon^{2}\right).\]
With (16) and thanks to (43) and \(\hat{\psi}_{0}\) being the optimal profile, we have also
\[g_{\varepsilon}\left[\psi\right]=\epsilon^{-1}\frac{3}{2}\left(\hat{\psi}_{0 }^{\prime}\right)^{2}\circ\iota_{\varepsilon}+O\left(\epsilon\right).\]
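The prefactor \(\frac{3}{2}\) is the equipartition identity once more: by (16) and, for the optimal profile, \(W\left(\psi_{0}\right)=\left(\hat{\psi}_{0}^{\prime}\right)^{2}\circ\iota_{\varepsilon}\) (as already used in Section 4.2.5),
\[g_{\varepsilon}\left[\psi\right]=\varepsilon^{-1}\left(\frac{1}{2}\left(\hat{\psi}_{0}^{\prime}\right)^{2}+\left(\hat{\psi}_{0}^{\prime}\right)^{2}\right)\circ\iota_{\varepsilon}+O\left(\varepsilon\right)=\varepsilon^{-1}\frac{3}{2}\left(\hat{\psi}_{0}^{\prime}\right)^{2}\circ\iota_{\varepsilon}+O\left(\varepsilon\right).\]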
Multiplying both equations gives
\[g_{\varepsilon}\left[\psi\right]\eta_{a}\nabla\rho_{a}=\epsilon^{-1}\frac{3}{ 2}\left(\hat{\psi}_{0}^{\prime}\right)^{2}\circ\iota_{\varepsilon}\eta_{a} \left(\nabla_{\Sigma_{\varepsilon}}\rho_{a0}+\hat{\rho}_{a1}^{\prime}\circ \iota_{\varepsilon}\tilde{v}\right)+\frac{3}{2}\left(\hat{\psi}_{0}^{\prime} \right)^{2}\circ\iota_{\varepsilon}\eta_{a}\nabla_{\Sigma_{\varepsilon}}\rho_ {a1}+O\left(\epsilon\right). \tag{70}\]
We also expand
\[\nabla\cdot\left(g_{\varepsilon}\left[\psi\right]\rho_{a}v_{\tau}\right) =\nabla\cdot\left(\epsilon^{-1}\frac{3}{2}\left(\hat{\psi}_{0}^{ \prime}\right)^{2}\circ\iota_{\varepsilon}\rho_{a0}(v_{0})_{\tau}+\frac{3}{2} \left(\hat{\psi}_{0}^{\prime}\right)^{2}\circ\iota_{\varepsilon}\left(\rho_{a 1}(v_{0})_{\tau}+\rho_{a0}(v_{1})_{\tau}\right)+O\left(\epsilon\right)\right) \tag{71}\] \[=\epsilon^{-2}\left(\frac{3}{2}\left(\hat{\psi}_{0}^{\prime} \right)^{2}\hat{\rho}_{a0}(\hat{v}_{0})_{\tau}\right)^{\prime}\circ\iota_{ \varepsilon}\cdot\tilde{v}+\epsilon^{-1}\left(\frac{3}{2}\left(\hat{\psi}_{0 }^{\prime}\right)^{2}\left(\hat{\rho}_{a1}(\hat{v}_{0})_{\tau}+\hat{\rho}_{a0 }(\hat{v}_{1})_{\tau}\right)\right)^{\prime}\circ\iota_{\varepsilon}\cdot \tilde{v}\] \[+\epsilon^{-1}\nabla_{\Sigma_{\varepsilon}}\cdot\left(\frac{3}{2 }\left(\hat{\psi}_{0}^{\prime}\right)^{2}\circ\iota_{\varepsilon}\rho_{a0}(v_ {0})_{\tau}\right)+O\left(1\right).\]
The first and second summands vanish since \(\left(\hat{v}_{i}\right)_{\tau}\cdot\tilde{v}=\hat{v}_{i}^{T}\mathbb{P}_{v_{ \tilde{v}}}\hat{v}=O\left(\epsilon^{2}\right)\), \(i\in\left\{1,2\right\}\), thanks to (55).
Second, we compute (making again use of (30))
\[\nabla\cdot\left(g_{\varepsilon}\left[\psi\right]\eta_{a}\nabla\rho_{a}\right) =\epsilon^{-2}\frac{3}{2}\left(\left(\hat{\psi}_{0}^{\prime}\right)^{2}\right)^{\prime}\circ\iota_{\varepsilon}\,\eta_{a}\left(\nabla_{\Sigma_{\varepsilon}}\rho_{a0}+\hat{\rho}_{a1}^{\prime}\circ\iota_{\varepsilon}\tilde{v}\right)\cdot\tilde{v}+\epsilon^{-2}\frac{3}{2}\left(\hat{\psi}_{0}^{\prime}\right)^{2}\circ\iota_{\varepsilon}\,\eta_{a}\,\hat{\rho}_{a1}^{\prime\prime}\circ\iota_{\varepsilon}+O\left(\epsilon^{-1}\right)\] \[=\epsilon^{-2}\frac{3}{2}\eta_{a}\left(\left(\left(\hat{\psi}_{0}^{\prime}\right)^{2}\right)^{\prime}\circ\iota_{\varepsilon}\,\hat{\rho}_{a1}^{\prime}\circ\iota_{\varepsilon}+\left(\hat{\psi}_{0}^{\prime}\right)^{2}\circ\iota_{\varepsilon}\,\hat{\rho}_{a1}^{\prime\prime}\circ\iota_{\varepsilon}\right)+O\left(\epsilon^{-1}\right)\] \[=\epsilon^{-2}\frac{3}{2}\eta_{a}\left(\left(\hat{\psi}_{0}^{\prime}\right)^{2}\hat{\rho}_{a1}^{\prime}\right)^{\prime}\circ\iota_{\varepsilon}+O\left(\epsilon^{-1}\right).\]
All other terms in (1i) are in \(O\left(\epsilon^{-1}\right)\). Thus, \(\left(\hat{\psi}_{0}^{\prime}\right)^{2}\hat{\rho}_{a1}^{\prime}\) is constant in \(z\). We observe that \(\left(\hat{\psi}_{0}^{\prime}\right)^{2}\) decays for large \(\left|z\right|\), and for the expression to remain constant, \(\hat{\rho}_{a1}^{\prime}\) must either blow up or be zero constantly. We can exclude the former case by matching, so
\[\hat{\rho}_{a1}^{\prime}=0.\]
Then, (70) simplifies and we obtain
\[\nabla\cdot\left(g_{\varepsilon}\left[\psi\right]\eta_{a}\nabla\rho_{a}\right) =\epsilon^{-1}\frac{3}{2}\Bigg{(}\left(\left(\hat{\psi}_{0}^{\prime}\right)^{2}\right)^{\prime}\circ\iota_{\varepsilon}\eta_{a}\nabla_{\Sigma_{\varepsilon}}\rho_{a1}\cdot\tilde{v}\] \[+\left(\hat{\psi}_{0}^{\prime}\right)^{2}\circ\iota_{\varepsilon}\left(\nabla_{\Sigma_{\varepsilon}}\cdot\left(\eta_{a}\nabla_{\Sigma_{\varepsilon}}\rho_{a0}\right)+\eta_{a}\left(\widehat{\nabla_{\Sigma_{\varepsilon}}\rho_{a1}}\right)^{\prime}\circ\iota_{\varepsilon}\cdot\tilde{v}\right)\Bigg{)}+O\left(1\right)\] \[=\epsilon^{-1}\frac{3}{2}\left(\hat{\psi}_{0}^{\prime}\right)^{2}\circ\iota_{\varepsilon}\nabla_{\Sigma_{\varepsilon}}\cdot\left(\eta_{a}\nabla_{\Sigma_{\varepsilon}}\rho_{a0}\right)+O\left(1\right).\]
Finally, we find in (1i) (note (71), (18), and the independence of \(\hat{\psi}_{0}\) from the tangential variable due to its being the optimal profile) to leading order \(\epsilon^{-1}\)
\[\frac{3}{2}\left(\hat{\psi}_{0}^{\prime}\right)^{2}\circ\iota_{\varepsilon}\left(\partial_{t}\rho_{a0}-\nabla_{\Sigma_{\varepsilon}}\cdot\left(\eta_{a}\nabla_{\Sigma_{\varepsilon}}\rho_{a0}\right)\right)-\left(\hat{\psi}_{0}^{\prime}\right)^{2}\circ\iota_{\varepsilon}\rho_{a0}\tilde{H}v_{0}\cdot\tilde{v}+\frac{3}{2}\left(\hat{\psi}_{0}^{\prime}\right)^{2}\circ\iota_{\varepsilon}\nabla_{\Sigma_{\varepsilon}}\cdot\left(\rho_{a0}(v_{0})_{\tau}\right)=\] \[\frac{3}{2}\left(\hat{\psi}_{0}^{\prime}\right)^{2}\circ\iota_{\varepsilon}\mathcal{R}\left[\rho_{a0},\rho_{i0};\phi_{0},\nu_{\varphi_{0}}\right],\]
which is (4j) up to constants. On the right hand side, we used the expansion
\[\mathcal{R}\left[\rho_{a},\rho_{i};\phi,\nu_{\varphi}\right]=\mathcal{R}\left[ \rho_{a0},\rho_{i0};\phi_{0},\nu_{\varphi_{0}}\right]+O\left(\epsilon \right),\]
which holds with (53) and
\[\nabla_{\rho_{\epsilon}}^{L^{2}}\mathcal{R},\nabla_{\mu}^{L^{2}}\mathcal{R}, \nabla_{\phi}^{L^{2}}\mathcal{R},\nabla_{\nu}^{L^{2}}\mathcal{R}\in O\left(1 \right).\]
So altogether, we could argue with the help of formally matched asymptotic expansions that solutions of the PDE system (1), under suitable Assumptions 1-8, converge to solutions of (4) as \(\epsilon\searrow 0\).
## 5 Conclusion
We have made plausible that both the diffuse and sharp interface modelling approaches are compatible in the sense that their solutions are approximations of each other. To arrive at this conclusion, we leveraged the method of formal asymptotic analysis.
From a mathematical perspective, it is desirable to prove this result rigorously like [1] or [9] did for related PDE systems. The main problems to deal with will likely be analysing the leading order terms in the expansion of the Canham-Helfrich energy and controlling the pressure, showing it does not blow up near the diffuse layers. An excellent stock of techniques for analysing the Canham-Helfrich energy is already provided in [9]. However, they analyse the pure Willmore flow problem, and so there is no coupling with a fluid, nor with a species subsystem like in the PDE system (1) investigated here, which poses additional problems like possible pressure blow-ups. Controlling the pressure for a phase-field-Navier-Stokes coupling is investigated in [1]. A step towards a rigorous analysis of (1) might therefore be possible by uniting the results of both works and leaving the species subsystem aside.
From a modelling perspective, our results increase the confidence that, qualitatively, both abstractions, the sharp interface and the diffuse layer abstraction, are equivalent, and the focus can now shift to other aspects like numerical feasibility.
### Acknowledgements
The authors gratefully acknowledge the support by the Graduiertenkolleg 2339 IntComSin of the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - Project-ID 321821685. We thank Helmut Abels for insightful discussions.
|
2308.11522 | ReLiCADA -- Reservoir Computing using Linear Cellular Automata Design
Algorithm | In this paper, we present a novel algorithm to optimize the design of
Reservoir Computing using Cellular Automata models for time series
applications. Besides selecting the models' hyperparameters, the proposed
algorithm particularly solves the open problem of linear Cellular Automaton
rule selection. The selection method pre-selects only a few promising candidate
rules out of an exponentially growing rule space. When applied to relevant
benchmark datasets, the selected rules achieve low errors, with the best rules
being among the top 5% of the overall rule space. The algorithm was developed
based on mathematical analysis of linear Cellular Automaton properties and is
backed by almost one million experiments, adding up to a computational runtime
of nearly one year. Comparisons to other state-of-the-art time series models
show that the proposed Reservoir Computing using Cellular Automata models have
lower computational complexity, at the same time, achieve lower errors. Hence,
our approach reduces the time needed for training and hyperparameter
optimization by up to several orders of magnitude. | Jonas Kantic, Fabian C. Legl, Walter Stechele, Jakob Hermann | 2023-08-22T15:52:37Z | http://arxiv.org/abs/2308.11522v1 | # ReLiCADA - Reservoir Computing using Linear Cellular Automata Design Algorithm
###### Abstract
In this paper, we present a novel algorithm to optimize the design of Reservoir Computing using Cellular Automata models for time series applications. Besides selecting the models' hyperparameters, the proposed algorithm particularly solves the open problem of linear Cellular Automaton rule selection. The selection method pre-selects only a few promising candidate rules out of an exponentially growing rule space. When applied to relevant benchmark datasets, the selected rules achieve low errors, with the best rules being among the top 5 % of the overall rule space. The algorithm was developed based on a mathematical analysis of linear Cellular Automaton properties and is backed by almost one million experiments, adding up to a computational runtime of nearly one year. Comparisons to other state-of-the-art time series models show that the proposed Reservoir Computing using Cellular Automata models have lower computational complexity and, at the same time, achieve lower errors. Hence, our approach reduces the time needed for training and hyperparameter optimization by up to several orders of magnitude.
Cellular Automata, Dynamical System, Edge of Chaos, Field-Programmable Gate Array, Reservoir Computing, Time Series Prediction.
## I Introduction
Real-time sensor signal processing is a growing demand in our everyday life. High-frequency sensor data is available in a wide range of embedded applications, including, for example, speech recognition, battery protection in electric cars, and monitoring of production facilities. Hence, many application domains could benefit from intelligent and real-time sensor signal analysis on low-cost and low-power devices. To be able to adapt to changing environmental and operational conditions of the target system, machine learning-based approaches have to be employed since classical signal processing techniques often reach their limits under changing external influences. However, at the same time, efficient hardware implementation and acceleration of such intelligent methods are needed in order to fulfill real-time constraints for applications requiring inference times in the µs to ns range.
Currently, neural network-based models, including Recurrent Neural Networks (RNNs) and Long Short-Term Memories (LSTMs), form the state of the art for numerous time series processing tasks. However, these models typically consist of several different layers stacked in a deep architecture, have millions of trainable parameters, and include hard-to-implement nonlinear mappings. Such deep architectures are unfavorable for hardware implementations and induce long inference times, which is a disadvantage for real-time applications.
During the last two decades, Reservoir Computing (RC) has emerged as a promising alternative to deep and recurrent neural networks. In contrast to the latter, RC models have a shallow architecture and trainable parameters in only a single layer. This makes them generally much easier to design and train. Despite their comparatively simple architecture, RC models have proven their capabilities in many application domains, like biomedical, audio, and social applications [1]. While they require relatively few computational resources compared to deep neural networks, they still have to be optimized for the unique requirements of, e.g., Field-Programmable Gate Array (FPGA)-based implementations.
Due to the discrete nature of their reservoirs, Reservoir Computing using Cellular Automata (ReCA) models form a subset within the RC framework that is suitable for the implementation on FPGAs. Like for other RC models, the training of ReCA models is easy and fast. Nevertheless, they require extensive hyperparameter tuning.
One major challenge that we address in this paper is that for most ReCA models, the hyperparameter search space is too big for current heuristic search and optimization algorithms. This is especially true for the selection of suitable Cellular Automaton (CA) rules in the reservoir.
Because of this, we conducted the first mathematical analysis of the influence of linear CA rules on the model performance in the ReCA framework and identified common analytical properties of suitable linear rules to be used in the reservoir. We backed our mathematical analysis with the results of almost one million experiments with a sequential runtime of nearly one year using an NVIDIA RTX A4000 GPU (using this GPU, we were able to run three parallel runs of our experiment). In the research community, the ReCA framework has been tested almost solely on pathological datasets that do not allow conclusions about the generality of the conducted studies and generalization capabilities of the ReCA models
themselves. In the context of this study, we performed an extensive analysis using several benchmark datasets. The result of our research is the Reservoir Computing using Linear Cellular Automata Design Algorithm (ReLiCADA), which specifies Reservoir Computing using Linear Cellular Automata (ReLiCA) models with fixed hyperparameters, and thus immensely simplifies the overall design process. The selected ReLiCA models achieve lower errors than comparable state-of-the-art time series models while maintaining low computational complexity.
The rest of this paper is structured as follows. We first start with an introduction to RC and a review of related work in section II-A, followed by CAs in section II-B, and finally, an overview of the ReCA framework in section II-C. In section II-D, we define the mathematical parameters used in our analysis. After that, we introduce our implementation and refinement of the ReCA model architecture and describe all parts of it in section III. This is followed by an explanation of our novel Reservoir Computing using Linear Cellular Automata Design Algorithm in section IV. The datasets and models that we use to compare and validate our algorithm with are listed in section V, before we analyze the experiments in section VI. The paper is completed by a conclusion in section VII.
## II Background and Related Work
The in-depth analysis of ReCA models comprises concepts and methods from different research fields, ranging from abstract algebra through automaton theory and the properties of dynamical systems to machine learning. In the following sections, we summarize the required background knowledge and related work about RC, CA, and ReCA. Furthermore, we define the mathematical parameters that we use to characterize the ReCA models.
### _Reservoir Computing_
The main idea of RC is to transform the input \(\mathbf{x}\) into a higher dimensional space \(\mathbf{s}\) in order to make the input linearly separable. This transformation is performed by a dynamic system which is called the reservoir (center part of Fig. 1).
The readout layer (right part of Fig. 1) is then used to linearly transform the reservoir state into the desired output \(\mathbf{y}\)[2]. Generally, RC models can be described using
\[\mathbf{s}^{(t)} =g(\mathbf{V}\mathbf{x}^{(t)},\mathbf{W}\mathbf{s}^{(t-1)}) \tag{1}\] \[\mathbf{y}^{(t)} =h(\mathbf{U}\mathbf{s}^{(t)})\]
with the reservoir state \(\mathbf{s}\), the input \(\mathbf{x}\), and the output \(\mathbf{y}\) at the discrete time \(t\). The function \(g\) depends on the reservoir type, while the function \(h\) describes the used readout layer and is typically a linear mapping. During model training, only the output weights \(\mathbf{U}\) are trained, while the input weights \(\mathbf{V}\) and reservoir weights \(\mathbf{W}\) are fixed and usually generated using some model-specific constraints. In Fig. 1, we depict an Echo State Network (ESN) [3] using a single-layer RNN [4] as the reservoir. Further simplifications to the reservoir were proposed by Rodan et al. [5], resulting in, e.g., the Delay Line Reservoir (DLR) or the Simple Cycle Reservoir (SCR). These types of reservoirs require fewer computations during the inference step compared to general ESNs. Nevertheless, they are still not suited for implementation in, e.g., FPGAs due to the required floating point calculations. To eliminate the floating point operations in the reservoir, stochastic bitstream neurons can be used [6]. However, stochastic bitstream neurons trade inference speed for simplicity of implementation on FPGAs and are thus not suited for our use case [7]. In this paper, we are focusing on a class of RC models that use CAs as the reservoir, which was first proposed by Yilmaz [8] and has been termed ReCA [9]. One of the main advantages of ReCA models compared to other RC models is that the reservoir only uses integer operations on a finite set of possible values. Because of that, they are easy and fast to compute on digital systems like FPGAs.1
Footnote 1: There are many more models within the RC framework, like Liquid State Machine (LSM) [10], Extreme Learning Machine (ELM) [11], Backpropagation-Decorrelation (BPDC) [12], and physical reservoirs [1]. Nonetheless, we will not go into detail about them since they are not suitable for our target implementation.
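To make the structure of (1) concrete, the following minimal Python/NumPy sketch implements one generic RC update step; the tanh nonlinearity, the uniform weight initialization, and the chosen dimensions are illustrative assumptions, not prescribed by the RC framework:

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_res, n_out = 1, 100, 1

V = rng.uniform(-0.5, 0.5, (n_res, n_in))   # fixed input weights
W = rng.uniform(-0.5, 0.5, (n_res, n_res))  # fixed reservoir weights
U = np.zeros((n_out, n_res))                # trainable readout weights

def rc_step(x, s):
    """One update per (1): s^(t) = g(V x^(t), W s^(t-1)), y^(t) = h(U s^(t))."""
    s_new = np.tanh(V @ x + W @ s)  # illustrative choice of g
    y = U @ s_new                   # linear readout h
    return s_new, y

s = np.zeros(n_res)
s, y = rc_step(np.array([0.3]), s)
```

Only \(U\) would be fitted during training; \(V\) and \(W\) stay fixed after initialization.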
### _Cellular Automata_
CAs represent one of the simplest types of time-, space-, and value-discrete dynamical systems and were initially introduced by von Neumann [13], [14]. Following this idea, CAs have been analyzed with respect to several different properties, including structural [15], [16], algebraic [17]-[21], dynamical [22]-[26], and behavioral [27]-[31] properties.
The CAs considered in this paper consist of a finite, regular, and one-dimensional lattice of \(N\) cells (see Fig. 2), for reasons discussed later. Each of the cells can be in one of \(m\) discrete states. The lattice is assumed to be circularly closed, resulting in periodic boundary conditions. In this sense, the right neighbor of the rightmost cell (\(s_{N-1}\)) is the leftmost cell (\(s_{0}\)), and vice versa. A configuration of a CA at a discrete iteration\({}^{2}\) \(i\) consists of the states of all its
Fig. 1: Echo State Network as an example for Reservoir Computing.
cells at that iteration and can thus be written as a state vector \(\mathbf{s}^{(i)}\in\mathbb{Z}_{m}^{N}\) according to
\[\mathbf{s}^{(i)}=(s_{0}^{(i)},\ldots,s_{N-1}^{(i)}),\qquad\text{with }s_{k}\in \mathbb{Z}_{m}, \tag{2}\]
where \(\mathbb{Z}_{m}=\mathbb{Z}/m\mathbb{Z}\) denotes the ring of integers modulo \(m\), and \((i)\) denotes the iteration index. The states of each cell change over the iterations according to a predefined rule. At iteration \((i)\), the cell state \(s_{k}^{(i)}\) is defined in dependency of the states of the cells in its neighborhood of fixed size \(n\) at iteration \((i-1)\) (see Fig. 2). The neighborhood of a cell contains the cell itself, as well as a range of \(r\) neighboring cells to the left and right, respectively, leading to
\[n=2r+1,\qquad\text{with }r\in\mathbb{N}^{+}. \tag{3}\]
We introduce a restriction to the neighborhood \(n\) to define the true neighborhood \(\hat{n}\). For \(\hat{n}\) we require that \(w_{-r}\neq 0\) or \(w_{r}\neq 0\).
The iterative update of the cell states can be described in terms of a local rule \(f\colon\mathbb{Z}_{m}^{n}\to\mathbb{Z}_{m}\), which defines the dynamic behavior of the CA according to
\[s_{k}^{(i)}=f(s_{k-r}^{(i-1)},\ldots,s_{k+r}^{(i-1)}). \tag{4}\]
Since we use periodic boundary conditions, the indices \(k-r,\ldots,k+r\) of the states in (4) have to be taken mod \(N\).
Linear CAs form a subset of general CAs [21]. The local rule of a linear CA is a linear combination of the cell states in the considered neighborhood. Hence, for linear CAs, \(f\) (see (4)) can be defined as
\[f(s_{k-r}^{(i)},\ldots,s_{k+r}^{(i)})=\sum_{j=-r}^{r}w_{j}s_{k+j}^{(i)} \tag{5}\]
with rule coefficients \(w_{j}\in\mathbb{Z}_{m}\). A linear rule can thus be identified by its rule coefficients \(\mathbf{w}=(w_{-r},\ldots,w_{r})\). Unless otherwise noted, we will restrict the CA rule to linear rules in this paper. A prominent example is the elementary rule 90 CA, which is defined by \(m=2\), \(n=3\) and \((w_{-1},w_{0},w_{1})=(1,0,1)\)[18].
For each linear rule \(f\), there exists a mirrored rule \(\hat{f}\) with \(\hat{\mathbf{w}}=(w_{r},\ldots,w_{-r})\). If the rule coefficients are symmetric with respect to the central coefficient \(w_{0}\), it holds that \(\hat{f}=f\). In total, there exist \(m^{n}\) different linear CA rules, which directly follows from (5). We denote the set of all linear rules for given \(m\) and \(n\) by
\[\mathcal{R}(m,n)=\{(w_{-r},\ldots,w_{r}):w_{i}\in\mathbb{Z}_{m},n=2r+1\}. \tag{6}\]
The local rule \(f\) is applied simultaneously to every cell of the lattice, such that the configuration \(\mathbf{s}^{(i-1)}\) updates to the next iteration \(\mathbf{s}^{(i)}\), and therefore it induces a global rule \(F\colon\mathbb{Z}_{m}^{N}\to\mathbb{Z}_{m}^{N}\). For linear CAs, this mapping of configurations can be described by multiplication with a circulant matrix \(\mathbf{W}\in\mathbb{Z}_{m}^{N\times N}\), which is given by
\[\mathbf{W}=\text{circ}(w_{0},\ldots,w_{r},0,\ldots,0,w_{-r},\ldots,w_{-1}), \tag{7}\]
with circ as defined in [21] (_note_: if \(N=n\), the circulant matrix has no additional zero entries that are not rule coefficients). Thus, the global rule for a linear CA can be written as
\[\mathbf{s}^{(i)}=F(\mathbf{s}^{(i-1)})=\mathbf{W}\mathbf{s}^{(i-1)}. \tag{8}\]
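As an illustration of (5), (7), and (8), a small NumPy sketch that builds the circulant rule matrix and advances a finite CA by one iteration; the lattice size and the example rule are arbitrary choices for demonstration:

```python
import numpy as np

def global_rule_matrix(w, N, m):
    """Circulant matrix per (7) for rule coefficients w = (w_-r, ..., w_r)."""
    r = len(w) // 2
    row = np.zeros(N, dtype=int)
    row[0] = w[r]                      # w_0 on the diagonal
    for j in range(1, r + 1):
        row[j % N] = w[r + j]          # w_j
        row[-j % N] = w[r - j]         # w_-j (periodic boundary)
    return np.stack([np.roll(row, k) for k in range(N)]) % m

def ca_step(s, W, m):
    """One global update s^(i) = W s^(i-1) mod m, per (8)."""
    return (W @ s) % m

# elementary rule 90: m = 2, (w_-1, w_0, w_1) = (1, 0, 1)
W90 = global_rule_matrix([1, 0, 1], N=8, m=2)
s = np.array([0, 0, 0, 1, 0, 0, 0, 0])
print(ca_step(s, W90, 2))  # impulse spreads to both neighbors
```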
It has been shown that several key properties typically used to characterize dynamical systems are not computable for general CAs. Even for general one-dimensional CAs, nilpotency is undecidable, and the topological entropy cannot even be approximated [23], [25], [32]. Furthermore, injectivity and surjectivity can be computed only for one- and two-dimensional CAs [24], [25]. However, when restricting the analysis to one-dimensional linear CAs, all the mentioned properties are computable. This is the reason why we focus on one-dimensional linear CAs in this paper.
### _ReCA Framework_
CAs have been employed as the reservoir in the RC framework first by Yilmaz [8], replacing the recurrently connected neurons typically used in ESNs. The original architecture of a ReCA model is depicted in Fig. 3. Input to the model is a time series \(\mathbf{x}\), which is fed sample by sample into an encoding stage. The encoding stage, as proposed in [8], serves several purposes. First, the input is preprocessed depending on the type of data, which may include feature expansion, weighted summation, scaling, and binarization. Second, the processed data is mapped to the cells of the CA in the reservoir. Third, the processed data is encoded into the mapped cell states [8]. With the input encoded into its cell states, the global rule of the CA is executed iteratively for a fixed number of iterations.
Fig. 3: ReCA architecture as initially proposed by Yilmaz [8]
Fig. 2: Lattice of a one-dimensional Cellular Automaton with periodic boundary conditions. Using only the orange weights results in \(n=3\); using the orange and green weights results in \(n=5\). The state of the cell \(s_{0}\) in the \((i)^{\text{th}}\) iteration is the weighted sum of the cell states in its neighborhood in the \((i-1)^{\text{th}}\) iteration.
The output of the CA is then passed to the readout layer, which produces the final model output \(\mathbf{y}\).
The ReCA framework has been analyzed and developed further based on the initially proposed architecture. In Nichele et al. [33], the authors use hybrid CA-based reservoirs split into two halves, each half running with a different rule to enrich the dynamics within the reservoir. However, this increases the search space for suitable rule combinations in the reservoir, and it remains unclear how to design the reservoir effectively. Deep reservoir computing using the ReCA approach is investigated by Nichele et al. [34] by stacking two ReCA models one after the other, resulting in decreased error rates in most of the analyzed cases. This design principle, however, is to some extent contradictory to the original intention of RC, which is to reduce the complexity of supervised training of Neural Networks (NNs) [35]. The analysis of suitable CA rules has been extended from elementary CAs (\(m=2,\hat{n}=3\)) to complex CAs (\(m\geq 3\) and/or \(\hat{n}\geq 5\)) in [36]. In their work, the authors use a Genetic Algorithm (GA) to perform a heuristic optimization within the super-exponentially growing rule space (\(m^{m^{n}}\), since they do not restrict to linear CAs) to find suitable rules for use in the reservoir. One of the biggest challenges with this approach is that the rule space quickly becomes unmanageable for heuristic optimization methods, including genetic algorithms. Even when the number of possible states is merely doubled from \(m=2\) to \(m=4\), the number of possible rules with a three-neighborhood grows from \(2^{2^{3}}=256\) to \(4^{4^{3}}\approx 3.4\times 10^{38}\). This example impressively shows that even small increases in the complexity of the CA reservoirs make applying heuristic search and optimization methods practically impossible.
Most of the research mentioned above has been based mainly on the synthetic 5-bit and 20-bit memory tasks [8]. However, as the authors in [37] point out, especially the 5-bit memory task is not sufficient to draw conclusions about the generalization capability of a model since this task consists of only 32 examples. Furthermore, the model is trained and tested on the whole dataset, which contradicts the common practice of separating training and test sets. Therefore, they adapt the 5-bit memory task by splitting the 32 examples into a training and a test set. This, however, shrinks the number of available training and test examples further. The authors also investigate the effect of different feature extraction techniques on the reservoir output, with the result that simply overwriting CA cells in the reservoir works well in less complex CAs.
A rule 90-based FPGA implementation of a ReCA model for the application of handwritten digit recognition based on the MNIST dataset is presented in [38]. Even though their implementation does not reach the classification accuracy of current state-of-the-art Convolutional Neural Network (CNN)-based implementations, the authors show that ReCA is a promising alternative to traditional neural network-based machine learning approaches. This is especially underlined by the fact that the energy efficiency of their implementation is improved by a factor of 15 compared to CNN implementations [38].
An analysis of the influence of several hyperparameters in the ReCA framework has been conducted in [39], with the result that for general CAs, the overall performance of the model is dependent on and sensitive to the concrete choice of hyperparameters.
### _Mathematical Parameters_
This section introduces the mathematical parameters we use to analyze linear CA rules. Depending on \(m\), \(\mathbb{Z}_{m}\) is a finite field if \(m\) is prime or otherwise a finite ring. This has several mathematical effects, e.g., the existence of unique multiplicative inverses. Unless otherwise noted, we assume the more general case where \(m\) is not prime (\(\mathbb{Z}_{m}\) is a ring).
We define the prime factor decomposition of \(m\) as
\[m=p_{1}^{k_{1}}\cdots p_{h}^{k_{h}} \tag{9}\]
with the set of prime factors as
\[\mathscr{P}=\{p_{1},\ldots,p_{h}\} \tag{10}\]
and their multiplicities
\[\mathcal{K}=\{k_{1},\ldots,k_{h}\}. \tag{11}\]
The set of prime weights can be generated using
\[\mathscr{P}_{w}=\{s:\gcd(s,m)=1\}\qquad\forall s\in\mathbb{Z}_{m}\backslash\{0\} \tag{12}\]
and the set of non-prime weights by using
\[\bar{\mathscr{P}}_{w}=\{s:\gcd(s,m)\neq 1\}\qquad\forall s\in\mathbb{Z}_{m}\backslash\{0\} \tag{13}\]
where gcd denotes the greatest common divisor.
#### II-D1 Transient and Cycle Lengths
The behavior of a CA over time can be separated into a transient phase of length \(k\) and a cyclic phase of length \(c\). For linear CAs, this can be expressed as
\[\mathbf{W}^{k}\mathbf{s}^{(0)}=\mathbf{W}^{k+c}\mathbf{s}^{(0)} \tag{14}\]
with the circulant rule matrix \(\mathbf{W}\) and the initial configuration \(\mathbf{s}^{(0)}\)[19], [40]. The decomposition of the state space of a CA into transients and cycles gives further information about its dynamic behavior. A linear CA with no transient phase has no Garden-of-Eden states. Garden-of-Eden states have no predecessors and can thus only appear as initial states, if the CA has a transient phase. On the computation of transient and cycle lengths, we refer the interested reader to [40]-[46]3.
Footnote 3: We would like to thank C. Qureshi for his valuable input on the computation of cycle lengths of linear mappings over finite fields.
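For small lattices, \(k\) and \(c\) from (14) can also be determined by brute force, iterating the global rule until a configuration repeats; a sketch reusing the `global_rule_matrix`/`ca_step` helpers from the earlier code block (feasible only for small \(N\) and \(m\), since the state space has \(m^{N}\) elements):

```python
import numpy as np

def transient_and_cycle(s0, W, m):
    """Find k and c from (14) by iterating until a configuration repeats."""
    seen = {}                                  # configuration -> first iteration
    s, i = tuple(int(v) for v in s0), 0
    while s not in seen:
        seen[s] = i
        s = tuple(int(v) for v in (W @ np.array(s)) % m)
        i += 1
    k = seen[s]                                # transient length
    return k, i - k                            # (k, cycle length c)
```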
#### II-D2 Cyclic Subgroup Generation
A cyclic subgroup is generated by a generator element \(g\). This generator can be used to generate the multiplicative
\[\mathscr{S}^{\times}(g)=\{g^{0},g^{1},g^{2},\ldots,g^{(m-1)}\} \tag{15}\]
and additive
\[\mathscr{S}^{+}(g)=\{0,g,2g,\ldots,(m-1)g\} \tag{16}\]
cyclic subgroups [47]-[49].
The order of the cyclic additive subgroup \(|\mathscr{S}^{+}(g)|\) can be calculated by
\[|\mathscr{S}^{+}(g)|=\frac{m}{\gcd(m,g)} \tag{17}\]
[48], [49]. We use the order of cyclic subgroups to analyze whether the set of possible states during an iteration of a linear CA shrinks or not.
#### II-D3 Topological Properties
For a mathematical analysis, it is often convenient to consider infinite linear CAs whose lattices consist of infinitely many cells [50]. Hence, further properties of infinite one-dimensional CAs can be defined that characterize the behavior of the CA as a dynamic system. For some properties, we only give informal and intuitive descriptions. Formal definitions can be found in [25], [51]-[53]. In the following, the symbol \(\exists_{n}\) denotes "there exist exactly \(n\)".
State Space and Orbit: Intuitively, the set of all possible lattice configurations for infinite CAs can be thought of as forming a state space. Furthermore, the notion of distance that induces a metric topology on the state space can be integrated. For a detailed definition, we refer the interested reader to [51]. An individual element in this set is a specific state configuration of the lattice. The series of points in the state space during operation of an infinite linear CA, i.e., the path \((\mathbf{s}^{(0)},\ldots,\mathbf{s}^{(I)})\) along the visited lattice configurations under iteration of \(F\) for \(I\) iterations with initial configuration \(\mathbf{s}^{(0)}\), is called orbit. Based on this topological framework, further properties of the dynamic behavior of linear one-dimensional CAs can be computed that characterize it for the asymptotic case of \(N\rightarrow\infty\). However, only finite lattices can be realized in practical implementations and simulations of CAs, whereby periodic boundary conditions have only limited influence on the behavior of the CA compared to static boundary conditions [54].
Topological Entropy: The topological entropy is a measure of uncertainty of a dynamical system under repeated application of its mapping function (global rule \(F\) for infinite linear CAs) starting with a partially defined initial state [25]. It can be used to characterize the asymptotic behavior of the system with respect to its operation. Since discrete and finite dynamical systems fall into periodic state patterns, the topological entropy gives an idea of the complexity of the orbit structure and can be used to distinguish ordered and chaotic dynamical systems. For example, two runs of the same (infinite) linear CA with different initial configurations that are close in the state space can be considered. If the linear CA has a low entropy, the final states of the two runs are also likely to be close in the state space [53]. However, suppose the CA has a high topological entropy. In that case, it shows chaotic behavior and the CA is likely to produce diverging orbits during the two runs even though the initial states were close. Hence, a high entropy leads to increased uncertainty in the dynamical system's behavior. This behavior can also be seen in Fig. 4, where the orbits of the rule with smaller entropy (Fig. 4a and 4b) show a less chaotic behavior compared to the orbits of the rule with higher entropy (Fig. 4c and 4d).
The topological entropy (probabilistic approach) is closely related to the Lyapunov exponents (geometric approach) and can be computed based thereon. Assuming a CA over \(\mathbb{Z}_{m}\), with the prime factor decomposition (9), we define for \(i=1,\ldots,h\)
\[\begin{split}\mathcal{P}_{i}&=\left\{0\right\}\cup \left\{j:\gcd\left(w_{j},p_{i}\right)=1\right\}\\ L_{i}&=\min\mathcal{P}_{i}\\ R_{i}&=\max\mathcal{P}_{i}\end{split} \tag{18}\]
with \(w_{j}\) as defined in (5). Then the left \(\lambda^{-}\) and right \(\lambda^{+}\) Lyapunov exponents are [25]
\[\begin{split}\lambda^{-}&=\max_{1\leq i\leq h}\left\{ R_{i}\right\}\\ \lambda^{+}&=-\min_{1\leq i\leq h}\left\{L_{i} \right\}.\end{split} \tag{19}\]
The topological entropy can be calculated using [25]
\[\mathscr{H}=\sum_{i=1}^{h}k_{i}\left(R_{i}-L_{i}\right)\log_{2}\left(p_{i} \right). \tag{20}\]
To be able to compare the topological entropy of a CA acting on different-sized finite rings, we introduce the normalized topological entropy
\[\widetilde{\mathscr{H}}=\frac{\mathscr{H}}{\sum_{i=1}^{h}k_{i}\log_{2}\left(p_ {i}\right)}=\frac{\mathscr{H}}{\log_{2}\left(m\right)} \tag{21}\]
with \(m\) as defined in (9). For prime power rings, \(\widetilde{\mathscr{H}}\) will only take integer values, where \(\widetilde{\mathscr{H}}=1\) is the smallest nonzero entropy, \(\widetilde{\mathscr{H}}=2\) the second smallest, etc.
Fig. 4: Iteration diagram of linear CA with \(m=4\), \(\hat{n}=3\), \(N=12\), \(\mathbf{w}=(0,2,1)\) (resulting in \(\mathscr{H}=2\)) and (a) a single cell initialized with state \(1\) (impulse) or (b) random initial configuration for \(I=9\) iterations. Figures (c) and (d) have the same setup, but with \(\mathbf{w}=(1,2,1)\) (resulting in \(\mathscr{H}=4\)). The colors indicate different cell states in \(\mathbb{Z}_{m}\).
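A sketch computing (18)-(21) directly from the rule coefficients; it reproduces the entropies stated in the caption of Fig. 4 (the trial-division factorization is a simple illustrative helper, not the authors' implementation):

```python
from math import gcd, log2

def prime_factors(m):
    """Trial-division factorization: returns {p: k} with m = prod p^k, per (9)."""
    f, p = {}, 2
    while p * p <= m:
        while m % p == 0:
            f[p] = f.get(p, 0) + 1
            m //= p
        p += 1
    if m > 1:
        f[m] = f.get(m, 0) + 1
    return f

def entropy(w, m):
    """Topological entropy (20) and normalized entropy (21) from the
    rule coefficients w = (w_-r, ..., w_r), via (18)."""
    r = len(w) // 2
    H = 0.0
    for p, k in prime_factors(m).items():
        P = {0} | {j for j in range(-r, r + 1) if gcd(w[j + r], p) == 1}
        L, R = min(P), max(P)
        H += k * (R - L) * log2(p)
    return H, H / log2(m)

# examples from Fig. 4: m = 4, w = (0,2,1) gives H = 2; w = (1,2,1) gives H = 4
print(entropy([0, 2, 1], 4), entropy([1, 2, 1], 4))
```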
Equicontinuity: A linear CA is said to be _equicontinuous_ (or _stable_) if any two states within a fixed size neighborhood in the state space diverge by at most some upper bound distance under iteration of \(F\)[51]. _Equicontinuity_ is given if the linear CA fulfills the condition [51]
\[(\forall p\in\mathscr{P}):p\mid\gcd(m,w_{-r},\ldots,w_{-1},w_{1},\ldots,w_{r}). \tag{22}\]
Sensitivity: On the other hand, a linear CA is _sensitive_ to initial conditions if, for any initial state \(\mathbf{s}^{(0)}\), there exists another distinct initial state in any arbitrarily small neighborhood of \(\mathbf{s}^{(0)}\), such that both orbits diverge by at least some lower bound distance [51]. If the condition
\[(\exists p\in\mathscr{P}):p\nmid\gcd(m,w_{-r},\ldots,w_{-1},w_{1},\ldots,w_{r}) \tag{23}\]
is fulfilled, the corresponding CA is _sensitive_[51].
Expansivity: Suppose the orbits of any two different states in the state space diverge by at least some lower bound distance under forward iteration of \(F\). In that case, the corresponding CA is called _positively expansive_[51]. Compared to _sensitivity_, _positive expansivity_ is a stronger property. _Positive expansivity_ is given for a linear CA if [51]
\[\gcd(m,w_{-r},\ldots,w_{-1})=\gcd(m,w_{1},\ldots,w_{r})=1. \tag{24}\]
For invertible infinite linear CAs, this concept can be generalized by additionally considering backward iteration of \(F\) and calling such CAs _expansive_[51]. The condition for _expansivity_ is the same as (25) for linear CAs.
Transitivity: _Transitivity_ is given for a linear CA if it has states that eventually move under iteration of \(F\) from one arbitrarily small neighborhood to any other [52]. In other words, the linear CA cannot be divided into independent subsystems. Codenotti and Margara [55] showed that, for CAs, _transitivity_ implies _sensitivity_. The condition for _transitivity_ of a linear CA is [52]
\[\gcd(m,w_{-r},\ldots,w_{-1},w_{1},\ldots,w_{r})=1. \tag{25}\]
In addition, _strong transitivity_ is given if a CA has orbits that include every state of its state space. For _strong transitivity_, a linear CA must fulfill the condition [51]
\[(\forall p\in\mathscr{P})(\exists w_{i},w_{j}):p\nmid w_{i}\wedge p\nmid w_{j}. \tag{26}\]
Ergodicity: In contrast to _transitivity_, _ergodicity_ concerns the statistical properties of the orbits of a dynamical system. While _transitivity_ indicates that the state space of infinite linear CAs cannot be separated, _ergodicity_, intuitively, denotes the fact that typical orbits of almost all initial states (except for a set of points with measure zero) in any subspace under iteration of \(F\) eventually revisit the entire set with respect to the normalized Haar measure [56, 57]. Cattaneo et al. [56] show that, for infinite linear CAs, _ergodicity_ and _transitivity_ are equivalent. The condition for a linear CA to be _ergodic_ is the same as (25).
Regularity: If cyclic orbits are dense in the state space for an infinite linear CA, then it is denoted as _regular_[52]. _Regularity_ is defined for linear CA by condition [52]
\[\gcd(m,w_{-r},\ldots,w_{r})=1. \tag{27}\]
Surjectivity and Injectivity: The global rule \(F\) of a linear CA is _surjective_ if every state configuration has a predecessor. Thus, _surjective_ CAs have no Garden-of-Eden states and no transient phase [21]. Cattaneo et al. [56] showed that transitive CAs are surjective. For one-dimensional CAs, _surjectivity_ is equivalent to _regularity_ of the global rule \(F\)[52]. _Surjectivity_ for \(F\) is given if condition (27) is fulfilled [17].
Injectivity of \(F\) denotes the fact that every state has at most one predecessor. Every _injective_ CA is also _surjective_[51]. If \(F\) is _surjective_ and _injective_, the CA is called _bijective_, which is equivalent to reversibility [21]. The condition for _injectivity_ of a linear CA is given by [17]
\[(\forall p\in\mathscr{P})(\exists_{1}w_{i}):p\nmid w_{i}. \tag{28}\]
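The gcd conditions above are directly checkable in code; a sketch reusing `prime_factors` from the entropy code block, and reading (26) as requiring at least two distinct coefficients not divisible by \(p\) (an interpretive assumption on our part):

```python
from math import gcd

def topological_properties(w, m):
    """Evaluate conditions (22)-(28) for rule coefficients w = (w_-r, ..., w_r)."""
    r = len(w) // 2
    off = list(w[:r]) + list(w[r + 1:])      # all coefficients except w_0
    primes = prime_factors(m)                # helper from the entropy sketch
    g_off = gcd(m, *off)
    return {
        "equicontinuous": all(g_off % p == 0 for p in primes),           # (22)
        "sensitive": any(g_off % p != 0 for p in primes),                # (23)
        "pos_expansive": gcd(m, *w[:r]) == 1 and gcd(m, *w[r + 1:]) == 1,  # (24)
        "transitive": g_off == 1,            # (25); also ergodic and expansive
        "strongly_transitive": all(sum(1 for wi in w if wi % p != 0) >= 2
                                   for p in primes),                     # (26)
        "surjective": gcd(m, *w) == 1,       # (27); equivalent to regular
        "injective": all(sum(1 for wi in w if wi % p != 0) == 1
                         for p in primes),                               # (28)
    }
```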
Chaos: The behavior of dynamical systems can range from ordered to chaotic. The framework of dynamical systems lacks a precise and universal definition of chaos. However, there is widespread agreement that chaotic behavior is based on _sensitivity_, _transitivity_ and _regularity_[58]. Manzini and Margara [51] identified five classes of increasing degree of chaos for linear CAs: _equicontinuous_ CAs, _sensitive_ but not _transitive_ CAs, _transitive_ but not _strongly transitive_ CAs, _strongly transitive_ but not _positively expansive_ CAs, and _positively expansive_ CAs. Since for linear CAs, _transitivity_ implies _sensitivity_ and _surjectivity_, whereby the latter is in turn equivalent to _regularity_, _transitive_ linear CAs can be classified as topologically chaotic [56].
#### II-D4 Error Metric
To be able to compare different models, we use the Mean Squared Error (MSE)
\[\mathit{MSE}(\mathbf{y},\bar{\mathbf{y}})=\frac{1}{n}\sum_{i=1}^{n}\left(\bar {y}_{i}-y_{i}\right)^{2} \tag{29}\]
and the Normalized Mean Squared Error (NMSE)
\[\mathit{NMSE}(\mathbf{y},\bar{\mathbf{y}})=\frac{\mathit{MSE}(\mathbf{y}, \bar{\mathbf{y}})}{\mathrm{Var}(\bar{\mathbf{y}})} \tag{30}\]
with the ground truth \(\bar{\mathbf{y}}\) and the prediction of the model \(\mathbf{y}\).
## III Refined ReCA Architecture
Based on the ReCA architecture initially published by Yilmaz [8], we refine our view on the architecture to be able to more precisely define the different computational steps within the ReCA model. In the rest of this paper, without loss of generality, we only consider the
Fig. 5: Refined ReCA architecture
case of one-dimensional time series \(\mathbf{x}=(x^{(0)},\ldots,x^{(T-1)})\) with \(x^{(t)}\in[-1,1]\). If \(n\)-dimensional data is to be used, the transformation, quantization, mapping, and encoding layers are adjusted to the input dimension. For data \(x^{(t)}\notin[-1,1]\), the transformation and quantization need to be adapted.
We split the encoding layer (Fig. 3) into different parts since it fulfills several different and independent tasks. The refined ReCA architecture is depicted in Fig. 5. The input data is fed into the transformation layer (section III-A), which prepares the data for the following quantization. The transformation layer can also be used to run any transformation functions, e.g., tangens hyperbolicus, on the input data. After the input is transformed, we need to quantize it to the allowed states \(x_{q}\in\mathbb{Z}_{m}\). This is done by the quantization layer (section III-B). Note that the transformation and quantization layers often work together to achieve the desired \(x_{q}\). The quantized input \(x_{q}\) is then passed to the mapping layer (section III-C) and then the encoding layer (section III-D). The mapping layer selects the CA cells into which the quantized input should be encoded. The encoding layer then executes the encoding. After the CA in the reservoir has updated the cells for a fixed number of iterations, the states of the CA are used by the readout layer (section III-E) to calculate the ReCA model output \(\mathbf{y}^{(t)}\).
In section III-F, we will combine the aforementioned layers to the ReCA model and describe a complete iteration of the model for an inference time step.
To improve the readability of the definitions, the superscript \((t)\) is removed in the rest of this section when the time context is clear.
### _Transformation_
We separate the transformation layer into two steps. First, we apply a transformation function \(\tilde{\mathbf{x}}_{\tau}=\tau(\mathbf{x})\) to the input. Second, we scale the transformed input to the range \(x_{\tau}\in[0,m-1]\) since we require this input range in the subsequent quantization layer. For our setup, we analyzed the following transformation methods:
* _complement_ \[\tilde{x}_{\tau}=\begin{cases}x,&\text{if }x\in[0,1],\\ 2+x,&\text{otherwise.}\end{cases} \tag{31}\]
* _gray_ and _scale_offset_ \[\tilde{x}_{\tau}=x+1 \tag{32}\]
* _sign_value_ \[\tilde{x}_{\tau}=\begin{cases}x,&\text{if }x\in[0,1],\\ -x+1,&\text{otherwise.}\end{cases} \tag{33}\]
Rescaling is then done using
\[x_{\tau}=\frac{m-1}{2}\tilde{x}_{\tau}. \tag{34}\]
The idea of the different transformation approaches is to mimic different floating-point to fixed-point conversion methods. Using the _complement_ transformation will represent the numbers similar to a two's complement, while _sign_value_ uses a binary sign and value representation. The _scale_offset_ approach will shift the input range to only positive numbers and then use the default binary representation. The _gray_ transformation uses the same shift but will encode the values using gray code. The conversion to gray code is only correct if \(m\) is a power of two; otherwise, neighboring values might not differ in only one bit.
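A sketch of (31)-(34) for a single scalar sample (the gray method shares the transformation of _scale_offset_; its re-coding happens after quantization, see the next subsection):

```python
def transform(x, m, method):
    """Map x in [-1, 1] to x_tau in [0, m-1] per (31)-(34)."""
    if method == "complement":
        xt = x if x >= 0 else 2 + x        # (31)
    elif method in ("gray", "scale_offset"):
        xt = x + 1                         # (32)
    elif method == "sign_value":
        xt = x if x >= 0 else -x + 1       # (33)
    else:
        raise ValueError(f"unknown transformation: {method}")
    return (m - 1) / 2 * xt                # rescaling (34)
```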
### _Quantization_
To quantize the input values, we use the typical rounding approach
\[\tilde{x}_{q}=\begin{cases}0,&\text{if }x_{\tau}\in[0,0.5),\\ 1,&\text{if }x_{\tau}\in[0.5,1.5),\\ 2,&\text{if }x_{\tau}\in[1.5,2.5),\\ &\vdots\\ m-1,&\text{if }x_{\tau}\in[m-1.5,m-1].\end{cases} \tag{35}\]
In the case of the _gray_ transformation, the quantized input \(\tilde{x}_{q}\) is transformed once more, leading to the final quantized value4
Footnote 4: This could also be achieved by changing the quantization function.
\[x_{q}=\begin{cases}\tilde{x}_{q}\oplus(\tilde{x}_{q}>>1)\mod m,&\text{if } \textit{gray},\\ \tilde{x}_{q},&\text{else}\end{cases} \tag{36}\]
with \(\oplus\) representing the binary bitwise _exclusive-or_ and \(>>\) the binary right-shift operation. The mod \(m\) operation is only needed if \(m\) is not a power of two.
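Correspondingly, (35) and (36) in code form, as a minimal sketch:

```python
def quantize(x_tau, m, method):
    """Round x_tau in [0, m-1] to Z_m per (35), with gray re-coding (36)."""
    xq = min(int(x_tau + 0.5), m - 1)      # standard rounding of (35)
    if method == "gray":
        xq = (xq ^ (xq >> 1)) % m          # binary-reflected gray code, (36)
    return xq
```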
### _Mapping_
Yilmaz [8] mentions that multiple random projections of the input into the reservoir are necessary to achieve low errors. However, instead of implementing multiple separate CA reservoirs as in [8], we follow the design as described in [36] and subdivide a single CA lattice into multiple parts. Therefore, we divide the lattice of the CA in the reservoir into \(N_{r}\) compartments. Each compartment has the same number of \(N_{c}\) cells. For example, a lattice of size \(N=512\) divided into \(N_{r}=16\) compartments with \(N_{c}=32\) cells each is described by the tuple \((N_{r},N_{c})=(16,32)\) with \(N=N_{r}N_{c}\). The mapping layer selects the cells of the CA that should receive the input value. One cell is randomly selected out of each compartment, into which the input is encoded in the next step. This random mapping is fixed once and does not change. It can be modeled as a vector \(\mathbf{p}\in\mathbb{Z}_{m}^{N}\) with the entries representing the cells of the CA that shall receive the input set to \(x_{q}\), and all other entries set to zero (see Fig. 6 part I).
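A sketch of the fixed random mapping; the compartment layout follows the \((N_{r},N_{c})\) tuple notation above, and the seed handling is an illustrative assumption:

```python
import numpy as np

def make_mapping(N_r, N_c, seed=0):
    """Select one random cell per compartment; fixed once for the model."""
    rng = np.random.default_rng(seed)
    return np.array([c * N_c + rng.integers(N_c) for c in range(N_r)])

def mapping_vector(x_q, cells, N):
    """Vector p in Z_m^N: x_q at the mapped cells, zero everywhere else."""
    p = np.zeros(N, dtype=int)
    p[cells] = x_q
    return p
```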
### _Encoding_
Since the mapping layer only defines into which cells the quantized input \(x_{q}\) should be encoded, we have to define how the encoding is actually done. For this, we use the following commonly used encoding functions. Let \(\bar{s}_{0}\) be the initial state of an individual cell in the reservoir that has been selected to receive \(x_{q}\) via random mapping (see section III-C). Then, the encoded cell state \(s_{0}\) is defined by:
* _replacement_ encoding [8] \[s_{0}=x_{q} \tag{37}\]
* bitwise _xor_ encoding [34, 37] \[s_{0}=x_{q}\oplus\bar{s}_{0}\mod m. \tag{38}\]
Note that if \(m\) is a power of two, then the mod \(m\) operation in (38) can be omitted.
Additionally, we analyzed the following new encoding functions:
* _additive_ encoding \[s_{0}=x_{q}+\bar{s}_{0}\mod m \tag{39}\]
* _subtractive_ encoding \[s_{0}=|x_{q}-\bar{s}_{0}| \tag{40}\]
The states of the cells not selected by the mapping layer do not change during the encoding process. The _replacement_ encoding overwrites the information stored in the affected cells of the CA with the new input. This is different for the _xor_ encoding, which combines the new input with the current cell states and is, next to _replacement_ encoding, commonly used in ReCA. In order to analyze the influence of small changes in the encoding, we use the _additive_ and _subtractive_ encoding schemes, which slightly differ from the _xor_ encoding.
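The four encoding functions for a single selected cell, as a sketch:

```python
def encode(x_q, s_bar, m, method):
    """Encode x_q into the prior cell state s_bar, per (37)-(40)."""
    if method == "replacement":
        return x_q                         # (37)
    if method == "xor":
        return (x_q ^ s_bar) % m           # (38); mod m redundant for power-of-two m
    if method == "additive":
        return (x_q + s_bar) % m           # (39)
    if method == "subtractive":
        return abs(x_q - s_bar)            # (40)
    raise ValueError(f"unknown encoding: {method}")
```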
### _Readout_
The readout layer is typically the weighted sum of the reservoir output
\[\mathbf{y}=\mathbf{U}\mathbf{r}+\mathbf{b} \tag{41}\]
with the weight matrix \(\mathbf{U}\) and bias \(\mathbf{b}\). The reservoir output \(\mathbf{r}\) can be a single state vector \(\mathbf{s}\) of the CA, but is usually chosen to be a concatenation of the CA state at multiple iteration steps [8]. Since \(\mathbf{U}\) and \(\mathbf{b}\) are the only trainable parameters in the ReCA model, a simple linear regression can be used. To simplify the notation, it will be assumed that the input to the readout layer \(\mathbf{r}\) has a \(1\) appended to also include the bias \(\mathbf{b}\) in the weight matrix \(\mathbf{U}\).
To train the ReCA model, the reservoir output \(\mathbf{r}^{(t)}\) is concatenated for each input \(\mathbf{x}^{(t)}\) into \(\mathbf{R}\). Furthermore, the ground truth solutions \(\tilde{\mathbf{y}}^{(t)}\) are concatenated in the same way to generate \(\tilde{\mathbf{Y}}\). When using ordinary least squares, the weight matrix \(\mathbf{U}\) can be calculated by
\[\mathbf{U}=\left(\mathbf{R}^{T}\mathbf{R}\right)^{-1}\mathbf{R}^{T}\tilde{ \mathbf{Y}}. \tag{42}\]
There are many different adaptations of the linear regression algorithm. For example, Tikhonov regularization [59], also called L2 regularization, can be added, resulting in Ridge Regression [60]. It is also possible to run linear regression in an online and sequential fashion [61]. We use Ridge Regression in our experiments.
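A closed-form Ridge Regression readout, i.e., (42) with an added Tikhonov term; the regularization strength is an illustrative assumption:

```python
import numpy as np

def train_readout(R, Y_bar, alpha=1e-6):
    """Solve (R^T R + alpha I) U = R^T Y_bar; rows of R are reservoir outputs
    r^(t) with an appended 1 for the bias, rows of Y_bar are the targets."""
    d = R.shape[1]
    return np.linalg.solve(R.T @ R + alpha * np.eye(d), R.T @ Y_bar)

# predictions for new reservoir outputs R_new: Y_hat = R_new @ U
```

Setting `alpha = 0` recovers the ordinary least squares solution of (42).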
### _ReCA Computations_
For each input sample \(x^{(t)}\), the ReCA model performs several steps. The whole process is shown for an elementary rule 90 CA [18] and \(I=4\) iterations in Fig. 6. First, the input is transformed and quantized according to sections III-A and III-B resulting in the quantized sample \(x^{(t)}_{q}\). Next, \(x^{(t)}_{q}\) is mapped to the reservoir cells by the mapping vector \(\mathbf{p}^{(t)}\) as described in section III-C. As an example, Fig. 6 (part I) shows the random mapping of a sample \(x^{(t)}_{q}=1\) on the elements of \(\mathbf{p}^{(t)}\), such that each compartment receives the input at one randomly selected cell. After the mapping, the quantized input has to be encoded into the initial state \(\mathbf{\tilde{s}}^{(t)}\) of the reservoir at time \(t\), resulting in the encoded initial state \(\mathbf{s}^{(t)}\) as described in section III-D. In Fig. 6 (part II), this is depicted for the _xor_ encoding. The encoded state \(\mathbf{s}^{(t)}\) forms the initial state \(\hat{\mathbf{s}}^{(0)}\) for the CA, which can then be executed for a number of iterations \(I\in\mathbb{N}^{+}\), such that \(\hat{\mathbf{s}}^{(0)}\) evolves under the repeated application of the linear CA rule to \(\hat{\mathbf{s}}^{(I)}\) (see Fig. 6 part III). After the execution of the CA finishes, the reservoir outputs the concatenated CA states \(\mathbf{r}^{(t)}=\left[\hat{\mathbf{s}}^{(0)},\ldots,\hat{\mathbf{s}}^{(I)}\right]\) (see Fig. 6 part IV) as mentioned in section III-E. The last state \(\hat{\mathbf{s}}^{(I)}\) will be used as the initial reservoir state \(\tilde{\mathbf{s}}^{(t+1)}\) for the next input sample \(x^{(t+1)}\).
Fig. 6: Example of ReCA computation for an input sample \(x^{(t)}_{q}=1\) with an elementary rule 90 CA, \((N_{r},N_{c})=(3,4)\), _xor_ encoding and \(I=4\) steps. The state of the CA after the \(i^{\text{th}}\) iteration is denoted by \(\hat{\mathbf{s}}^{(t)}\). The colors of the lattice indicate the three compartments.
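Combining the layer sketches from the previous subsections, one inference step of the ReCA model might look as follows; all helper functions are the illustrative sketches given above, not the authors' reference implementation:

```python
import numpy as np

def reca_step(x, s_res, W_rule, cells, m, I, tq_method, enc_method):
    """One ReCA step: transform/quantize x, encode it into the mapped cells,
    iterate the CA I times, and return (r^(t), s^(I))."""
    x_q = quantize(transform(x, m, tq_method), m, tq_method)
    s = s_res.copy()
    s[cells] = [encode(x_q, int(sb), m, enc_method) for sb in s[cells]]
    states = [s]
    for _ in range(I):
        s = (W_rule @ s) % m               # global rule (8)
        states.append(s)
    r = np.concatenate(states)             # r^(t) = [s^(0), ..., s^(I)]
    return r, s                            # s^(I) seeds the next time step
```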
### _Hyperparameters_
Since the trainable parameters in the readout layer can be optimized using simple linear optimization techniques, a crucial step in designing ReCA models is the choice of hyperparameters. In our analysis, we focus on the following general hyperparameters:
* Number of states \(m\) of the CA: This has an influence on the operation domain of the CA since \(\mathbb{Z}_{m}\) is either a field (if \(m\) is prime) or a ring (if \(m\) is non-prime). It significantly affects the mathematical properties and thus the dynamic behavior of the CA. Since \(m\) defines the number of possible states of each cell, it also influences the linear separability of the reservoir output in the readout layer.
* True neighborhood \(\hat{n}\): The size of the neighborhood influences the expansion rate of local information on the lattice and thus also affects the dynamic behavior.
* Lattice size \(N\): This impacts the size of the dynamical system and thus affects the complexity of the CA.
* Subdivision of the lattice into \(N_{r}\) compartments with \(N_{c}\) cells each: This influences the mapping of the input samples onto the reservoir cells.
* Number of iterations \(I\) of the CA per input sample: This influences the degree of interactions between the cells per input sample.
* Transformation and quantization: The choice of transformation and quantization functions defines how the input data is presented to the dynamical system.
* Mapping and encoding: The mapping and encoding methods define how the input is inserted into the state of the dynamical system.
Next to the general hyperparameters, the hyperparameter \(F\), i.e., the global rule of the CA, is of increased importance because it essentially defines the fundamental basis of the dynamics and topological properties of the CA. As the rule space of linear CAs grows exponentially with respect to \(m\) and \(\hat{n}\), it is vital to receive guidance when it comes to hyperparameter selection in the design process of ReCA models. Since we restrict our analysis to linear rules, we term the respective framework ReLiCA.
It is important to note that all of these hyperparameters have interdependent effects on the overall behavior of the CA and, in turn, on the performance of the ReLiCA model in time series processing tasks.
## IV Proposed ReLiCA Design Algorithm
Since guidance in the choice of hyperparameters would greatly speed up and assist the design of ReLiCA models, we propose the Reservoir Computing using Linear Cellular Automata Design Algorithm (ReLiCADA). We start with a short analysis of the influence of the CA rules and the transformation, quantization, mapping, and encoding layers on the ReLiCA model performance in section IV-A before introducing ReLiCADA in section IV-B.
### _CA Impact on ReLiCA Model Performance_
The choice of the CA rule and the choice of transformation, quantization, mapping, and encoding functions significantly impact the overall ReLiCA model performance. We depict the _NMSE_ for different ReLiCA models for the _MG_25_ dataset (see section V-A) in Fig. 7 (other datasets produce similar results). For this analysis, we ran all possible CA rules with all combinations of the transformation and quantization configurations (_complement_, _gray_, _scale_offset_, _sign_value_) and the encoding functions (_additive_, _replacement_, _subtractive_, _xor_). The empirical cumulative distribution function specifies the proportion of ReLiCA models with the same or lower _NMSE_. As the figure shows, only a tiny percentage of all ReLiCA models come close to the optimal performance for the chosen \(m\) and \(\hat{n}\) (lower left part of Fig. 7). To the best of our knowledge, up to date, there are no clear rules or guidelines on how to select the linear CA. Thus, obtaining a well-performing ReLiCA model remained challenging. Because of this, we developed an algorithm that pre-selects promising CA rules. We will introduce this algorithm in the following subsection.
### _ReLiCA Design Algorithm_
We propose the Reservoir Computing using Linear Cellular Automata Design Algorithm (ReLiCADA) to assist in the design of ReLiCA models. ReLiCADA selects combinations of CA rules, transformation, quantization, mapping, and encoding functions that will likely lead to well-performing ReLiCA models. The main idea is to limit the search space of linear CA rules from \(m^{n}\) (see section II-B) to a small number of promising rules and to select matching transformation, quantization, mapping, and encoding functions. Another purpose of ReLiCADA is to be able to identify ReLiCA models that produce low errors on a wide range of different datasets, and not only on a single pathological dataset like the 5-bit memory task.
ReLiCADA is based on the evaluation of thousands of train-test runs followed by a mathematical analysis of linear CA properties. Our approach was to exhaustively test the performance of ReLiCA models with almost all combinations of the abovementioned transformation, quantization, mapping, and encoding methods over the
Fig. 7: Empirical cumulative distribution functions for _MG_25_. The different configurations have the following number of data points: 960, 15360, 8064.
complete rule search space of several linear CA configurations (\(\hat{n}\), \(m\), \(N\) and \(I\)) on several different datasets. As depicted in Fig. 7, only a tiny percentage of all ReLiCA models achieve low errors, hindering random and heuristic search approaches, especially for complex CAs (larger \(m\) or \(\hat{n}\)). Our experiments indicate that specific conditions on the choice of the model's hyperparameters lead to an improvement in performance. For example, some transformation and quantization approaches are more robust against hyperparameter changes than others, and most of the generally well-performing linear CA rules share common mathematical properties (see section VI for a detailed discussion of the results of the experiments). We identified these common rule properties and described them in terms of the mathematical parameters as defined in section II-D. The result is ReLiCADA, a set of selection rules, which are applied to the hyperparameters of ReLiCA models. It limits the large number of all possible configurations to a small number of promising candidate models. A crucial part is the pre-selection of only very few linear CA rules that are among the top-performing rules in the overall rule space. In doing so, ReLiCADA enormously reduces the design time of ReLiCA models because it prevents the need to undergo an exhaustive search over the whole linear CA rule space, which is not feasible, especially for more complex CAs. Instead, ReLiCADA enables the targeted testing of a few promising models that are sharply defined by the following conditions.
We use the definitions stated in section II-D to describe ReLiCADA. We limited our analysis to \(\left|\mathscr{P}\right|\leq 2\), which will also be assumed in the description of the rule selection algorithm. This was done since we are primarily interested in \(\mathbb{Z}_{m}\) with a single prime factor. Some of the proposed rules might also work for the case \(\left|\mathscr{P}\right|>2\) or might be generalized, but no verification was done for that.
#### IV-B1 ReLiCADA Design Rules
ReLiCADA selects configurations only if all of the following conditions are fulfilled:
\[\text{transformation}=\textit{scale\_offset} \tag{43a}\] \[\text{quantization}=\textit{scale\_offset} \tag{43b}\] \[\text{mapping}=\textit{random} \tag{43c}\] \[\text{encoding}=\textit{replacement} \tag{43d}\] \[(\forall p\in\mathscr{P})(\exists_{1}w_{i}):p\nmid w_{i} \tag{43e}\] \[\widetilde{\mathscr{H}}=1 \tag{43f}\] \[\text{remove mirrored rules} \tag{43g}\]
The following selection rules will only be used based on the choice of \(m\):
* if \(m\) is not prime, i.e., \(\mathbb{Z}_{m}\) forms a ring: \[\exists_{2}i:w_{i}\neq 0 \tag{44a}\] \[\forall w_{i}:(w_{i}\notin\mathscr{P}_{w})\vee(w_{i}\in\{1,m-1\}) \tag{44b}\] \[\forall w_{i}:(w_{i}\notin\bar{\mathscr{P}}_{w})\vee(|\mathscr{S}^{+}(w_{i})|=4) \tag{44c}\]
* if \(m\) is prime, i.e., \(\mathbb{Z}_{m}\) forms a field: \[\forall w_{i}:(w_{i}=0)\vee(\mathscr{S}^{\times}(w_{i})=\mathbb{Z}_{m}\backslash\{0\}) \tag{45}\]
* if \(\left|\mathscr{P}\right|=2\): \[(\exists w_{i},w_{j}):(p_{1}\nmid w_{i})\wedge(p_{2}\nmid w_{j})\] (46)
Conditions (43) are always used independently of the choice of \(m\) and \(\hat{n}\), while conditions (44) to (46) are only used depending on the choice of \(\mathbb{Z}_{m}\). For \(\mathbb{Z}_{4}\), it is impossible to fulfill rule (44c), which is therefore ignored in this case.
The selection (43g) between the rule \(\mathbf{w}\) and its mirrored rule \(\hat{\mathbf{w}}\) is made using the following condition
\[\sum_{i=-r}^{-1}w_{i}\leq\sum_{i=1}^{r}w_{i}, \tag{47}\]
which evaluates to true for only one of the two rules if \(\mathbf{w}\neq\hat{\mathbf{w}}\). If \(\mathbf{w}=\hat{\mathbf{w}}\), the condition is always fulfilled. If the condition is true, we choose \(\mathbf{w}\) and otherwise \(\hat{\mathbf{w}}\). The selection between \(\mathbf{w}\) and \(\hat{\mathbf{w}}\) is not optimized to increase performance and is only used to further reduce the number of selected rules; selection methods other than (47) are therefore also possible.
For any given \(N,m\) and \(\hat{n}\), algorithm 1 implements the process of rule selection (see appendix F).
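To make the selection procedure concrete, the following Python sketch enumerates candidate rules for a given \(m\) and \(\hat{n}\). It is not the paper's algorithm 1: only conditions (43e), (44a), and the mirrored-rule deduplication (43g)/(47) are implemented, since conditions (43f), (44b), (44c), (45), and (46) rely on the section II-D machinery (\(\widetilde{\mathscr{R}}\), \(\mathscr{P}_{w}\), \(\mathscr{S}^{+}\), \(\mathscr{S}^{\times}\)) that is not reproduced here.

```python
from itertools import product

def prime_factors(m):
    """Distinct prime factors of m (the set P in the paper's notation)."""
    ps, d = [], 2
    while d * d <= m:
        if m % d == 0:
            ps.append(d)
            while m % d == 0:
                m //= d
        d += 1
    if m > 1:
        ps.append(m)
    return ps

def condition_43e(w, primes):
    # (43e): for every prime p | m there is exactly one w_i with p not dividing w_i
    return all(sum(wi % p != 0 for wi in w) == 1 for p in primes)

def condition_44a(w):
    # (44a), ring case: exactly two weights are nonzero
    return sum(wi != 0 for wi in w) == 2

def condition_47(w):
    # (47): keep w over its mirrored rule iff left weight sum <= right weight sum
    r = len(w) // 2
    return sum(w[:r]) <= sum(w[r + 1:])

def relicada_candidates(m, n_hat):
    """Partial rule pre-selection; the omitted conditions would shrink
    this set further."""
    primes = prime_factors(m)
    m_is_prime = len(primes) == 1 and primes[0] == m
    for w in product(range(m), repeat=n_hat):
        if not condition_43e(w, primes):
            continue
        if not m_is_prime and not condition_44a(w):
            continue
        if not condition_47(w):  # remove mirrored duplicates (43g)
            continue
        yield w

print(list(relicada_candidates(m=4, n_hat=3)))
```

For \(m=4\) and \(\hat{n}=3\), this partial filter already retains, among others, the rule \(\mathbf{w}=(0,2,1)\) used as an example later in this section.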
#### IV-B2 Reasoning Behind Design Rules
The conditions (43a) to (43d) fix the transformation, quantization, mapping, and encoding methods to an evidently well-performing combination (see section VI-A). The conditions (43e), (43f) and (44) to (46) belong to the rule selection for the linear CA reservoir. To limit the number of selected rules even further, we added condition (43g).
Our experiments showed that nearly all of the generally well-performing rules are _injective_. Because of this, we included (43e) (see (28)). With this condition, the CA is not only _injective_ but also _surjective_ and _regular_ (see (27)). Moreover, the CAs do not have a transient phase because of the _injectivity_[21]. The _injectivity_ of the CA results in them not being _strong transitive_ (see (26)) and not being _positive expansive_ (see (24)).
Furthermore, it was clear that \(\widetilde{\mathscr{R}}\) has a significant impact on the ReCA model performance. Using (43f) ensures that the CA is _sensitive_ (see (23)) as well as _transitive_, _ergodic_, and _expansive_ (see (25)). While this would also be the case for other \(\widetilde{\mathscr{R}}\) values, \(\widetilde{\mathscr{R}}=1\) resulted, in most cases, in the best performance and has the advantage of the smallest possible neighborhood \(\hat{n}\geq 3\), which reduces the complexity of hardware implementations5. Condition (43f) also implies that the CA is not _equicontinuous_ (see (22)).
Footnote 5: other \(\widetilde{\mathscr{R}}\) values may require \(\hat{n}\geq 5\)
Conditions (44) to (46) were chosen empirically to improve the ReCA model performance and to reduce the overall number of rules; they were not derived from mathematical characteristics.
#### IV-B3 Edge of Chaos
While there is, to the best of our knowledge, no analysis of the Edge of Chaos (EoC) done in the ReCA framework, it is broadly discussed for CAs [28, 29, 31, 58]. The EoC can be compared to the Edge of Lyapunov Stability (EoLS) in the ESN framework [62]. Verstraeten et al. [62] analyzed the connection of the Lyapunov exponents of a specific ESN model to its memory and non-linear capabilities. Through these analyses, it was shown that CAs have the highest computational power at the EoC and ESN models at the EoLS.
Using the five groups of CA rules with increasing degree of chaos, as defined by Manzini and Margara [51] (see paragraph II-D3j), we can see that the CAs selected by ReLiCADA all belong to the third group, implying that they exhibit a "medium" amount of chaos. By the definitions of Devaney and Knudsen, they are chaotic, but not expansive chaotic [63]. Since the CA rules selected by ReLiCADA are among the best-performing rules, we conjecture, without proof, that this might correlate with the edge of chaos.
As an example, for the configuration \(m=4\) and \(\hat{n}=3\), ReLiCADA selects, among others, the rule with \(\mathbf{w}=(0,2,1)\), which is depicted in Fig. 4a and Fig. 4b. The two iteration diagrams show that this linear CA, on the one hand, has memorization capabilities by shifting the initial state to the left. This left shift can also be interpreted as a transmission of local information along the lattice. On the other hand, it shows interactions of neighboring cells during iteration. These properties (storage, transmission, and interaction) constitute computational capabilities in dynamical systems [29, 31]. Generally, all selected ReLiCADA CA rules show similar behavior.
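For illustration, here is a minimal NumPy sketch of one synchronous update of such a circular linear CA. The rule vector, modulus, and left-shift behavior follow the \(\mathbf{w}=(0,2,1)\) example above; the lattice size and the single-seed initial state are arbitrary choices for the demonstration.

```python
import numpy as np

def linear_ca_step(state, w, m):
    """One synchronous update of a circular linear CA over Z_m:
    next[i] = sum_k w[k + r] * state[i + k]  (mod m),  k = -r..r,  r = len(w) // 2."""
    r = len(w) // 2
    nxt = np.zeros_like(state)
    for j, wj in enumerate(w):
        # tap at relative offset k = j - r; np.roll(state, r - j)[i] == state[i + k]
        nxt += wj * np.roll(state, r - j)
    return nxt % m

# Rule w = (0, 2, 1) over Z_4: next[i] = 2*s[i] + s[i+1] mod 4 (left shift + interaction).
state = np.array([0, 0, 0, 1, 0, 0, 0, 0])
for _ in range(5):
    print(state)
    state = linear_ca_step(state, w=(0, 2, 1), m=4)
```

Running this reproduces the qualitative behavior described above: the seed propagates to the left while neighboring cells interact.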
#### IV-B4 Number of Rules Selected by ReLiCADA
In Table I, the number of rules selected by ReLiCADA and the number of all linear rules are listed for different \(m\) and \(\hat{n}=3\). It is worth pointing out that the number of selected rules by ReLiCADA is independent of \(\hat{n}\), whereas the number of total rules depends on the chosen neighborhood \(\hat{n}\). From Table I, it is easy to see that ReLiCADA reduces the number of rules to analyze by several orders of magnitude. Hence, when designing a ReCA model for a specific application, one does not have to check all rules in the rule space, but only the few rules that are pre-selected by ReLiCADA.
## V Experiments
We now introduce the experimental setup used to verify and validate the performance of ReLiCADA. The datasets are introduced in section V-A, and the baseline models used for comparison in section V-B.
### _Datasets_
In order to test the performance of the different hyperparameter configurations of the ReLiCA models, we use datasets that have already been used in several other papers to compare different time series models, so they can be regarded as benchmark datasets. These datasets do not necessarily require fast inference, one of the main advantages of ReCA models, but they are suitable choices for broad comparability with other studies. All datasets are defined over discrete time steps with \(t\in\mathbb{N}\). We use \(x(t)\) to describe the input to the model, and \(y(t)\) represents the ground truth solution. The \(x\) and \(y\) values are rescaled to \([-1,1]\). Unless otherwise noted, the task is to do a one-step-ahead prediction, i.e., \(y(t)=x(t+1)\), using the inputs up to \(x(t)\). The abbreviations used to name the datasets throughout the paper are denoted by _(name)_.
#### V-A1 Henon Map
The Henon Map (_Henon_) was introduced in [64] and is defined as
\[y(t)=x(t+1)=1-1.4x(t)^{2}+0.3x(t-1). \tag{48}\]
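A possible NumPy generator for this dataset is sketched below; the initial conditions are our assumption, as they are not specified here.

```python
import numpy as np

def henon_series(n, x0=0.1, x1=0.1):
    """Generate n samples of the Henon map x(t+1) = 1 - 1.4 x(t)^2 + 0.3 x(t-1)."""
    x = np.empty(n)
    x[0], x[1] = x0, x1
    for t in range(1, n - 1):
        x[t + 1] = 1.0 - 1.4 * x[t] ** 2 + 0.3 * x[t - 1]
    return x

series = henon_series(1200)
inputs, targets = series[:-1], series[1:]  # one-step-ahead prediction pairs
```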
#### V-A2 Mackey-Glass
The Mackey-Glass time series uses the nonlinear time-delay differential equation introduced by [65]
\[\frac{dx}{dt}=\beta\frac{x(t-\tau)}{1+x(t-\tau)^{n}}-\gamma x(t) \tag{49}\]
with \(\beta=0.2\), \(\gamma=0.1\), \(\tau=17\), and \(n=10\). The task is to predict \(y(t)=x(t+1)\) using \(x(t)\) (\(MG\)). Furthermore, we use the prediction task \(y(t)=x(t+25)\) using \(x(t)\) (\(MG\_25\)).
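A sketch of how such a series can be generated follows; the explicit Euler discretization with unit step size and the constant initial history are assumptions, not details from the paper.

```python
import numpy as np

def mackey_glass(n, dt=1.0, beta=0.2, gamma=0.1, tau=17, exponent=10, x0=1.2):
    """Euler-discretized Mackey-Glass series (step size dt, delay tau in steps)."""
    delay = int(tau / dt)
    x = np.full(n + delay, x0)  # constant initial history (assumption)
    for t in range(delay, n + delay - 1):
        x_tau = x[t - delay]
        x[t + 1] = x[t] + dt * (beta * x_tau / (1.0 + x_tau ** exponent) - gamma * x[t])
    return x[delay:]

series = mackey_glass(2200)
x_in, y_mg = series[:-1], series[1:]      # MG: y(t) = x(t+1)
x25, y_mg25 = series[:-25], series[25:]   # MG_25: y(t) = x(t+25)
```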
#### V-A3 Multiple Superimposed Oscillator
The Multiple Superimposed Oscillator (MSO) is defined as
\[x(t)=\sum_{i=1}^{n}\sin(\varphi_{i}t)\text{, with }t\in\mathbb{N}. \tag{50}\]
The MSO12 dataset uses \(\varphi_{1}=0.2\), \(\varphi_{2}=0.331\), \(\varphi_{3}=0.42\), \(\varphi_{4}=0.51\), \(\varphi_{5}=0.63\), \(\varphi_{6}=0.74\), \(\varphi_{7}=0.85\), \(\varphi_{8}=0.97\), \(\varphi_{9}=1.08\), \(\varphi_{10}=1.19\), \(\varphi_{11}=1.27\), and \(\varphi_{12}=1.32\) as defined in [66]. We use the prediction tasks \(y(t)=x(t+1)\) (\(MSO\)) and \(y(t)=x(t+3)\) (\(MSO\_3\)) with \(x(t)\) as input.
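A direct NumPy implementation of (50) with the twelve frequencies listed above:

```python
import numpy as np

PHI = [0.2, 0.331, 0.42, 0.51, 0.63, 0.74,
       0.85, 0.97, 1.08, 1.19, 1.27, 1.32]

def mso12(n):
    """Multiple Superimposed Oscillator with the 12 frequencies above."""
    t = np.arange(n)
    return sum(np.sin(phi * t) for phi in PHI)

series = mso12(2200)
x_mso, y_mso = series[:-1], series[1:]     # MSO: y(t) = x(t+1)
x_mso3, y_mso3 = series[:-3], series[3:]   # MSO_3: y(t) = x(t+3)
```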
#### V-A4 Nonlinear Autoregressive-Moving Average
The Nonlinear Autoregressive-Moving Average was first introduced in [67] as a time series dataset. We use the 10th order (_NARMA_10_)
\[\begin{split} x(t+1)&=0.3x(t)+0.05x(t)\sum_{i=0}^{9}x(t-i)\\ &\quad+1.5u(t-9)u(t)+0.1,\end{split} \tag{51}\]
the 20th order (_NARMA_20_)
\[\begin{split} x(t+1)&=\tanh\Big[0.3x(t)+0.05x(t)\sum_{i=0}^{19}x(t-i)\\ &\quad+1.5u(t-19)u(t)+0.01\Big]+0.2,\end{split} \tag{52}\]
and the 30th order (_NARMA_30_)
\[\begin{split} x(t+1)&=0.2x(t)+0.004x(t)\sum_{i=0}^{29}x(t-i)\\ &\quad+1.5u(t-29)u(t)+0.201\end{split} \tag{53}\]
versions as defined in [68]. The input \(u(t)\) is generated by a uniform independent and identically distributed (i.i.d.) random variable in the interval \([0,0.5]\). The task is to predict \(x(t)\) using \(u(t)\).
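As an example, the 10th-order variant (51) can be generated as follows; the zero initialization of the first ten values is an assumption.

```python
import numpy as np

def narma10(n, seed=0):
    """10th-order NARMA as in (51); input u is i.i.d. uniform on [0, 0.5]."""
    rng = np.random.default_rng(seed)
    u = rng.uniform(0.0, 0.5, size=n)
    x = np.zeros(n)  # first ten values initialized to zero (assumption)
    for t in range(9, n - 1):
        x[t + 1] = (0.3 * x[t]
                    + 0.05 * x[t] * np.sum(x[t - 9:t + 1])
                    + 1.5 * u[t - 9] * u[t]
                    + 0.1)
    return u, x  # task: predict x(t) from u(t)

u, x = narma10(2200)
```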
#### V-A5 Nonlinear Communication Channel
This dataset emulates a nonlinear communication channel and was introduced in [69] as
\[\begin{split} q(t)&=0.08u(t+2)-0.12u(t+1)+u(t)+0.18u(t-1)\\ &\quad-0.1u(t-2)+0.09u(t-3)-0.05u(t-4)\\ &\quad+0.04u(t-5)+0.03u(t-6)+0.01u(t-7)\\ x(t)&=q(t)+0.036q(t)^{2}-0.011q(t)^{3}.\end{split} \tag{54}\]
The channel input \(u\) is a random i.i.d. sequence sampled from \(\{-3,-1,1,3\}\). The task is to predict \(x(t-2)\) using \(u(t)\) (_NCC_).
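A NumPy sketch of (54) is given below; boundary samples where the filter taps run out of range are simply left at zero here, which is our own choice.

```python
import numpy as np

def ncc(n, seed=0):
    """Nonlinear Communication Channel (54); input i.i.d. from {-3, -1, 1, 3}."""
    rng = np.random.default_rng(seed)
    u = rng.choice([-3, -1, 1, 3], size=n).astype(float)
    # FIR taps on shifted inputs u(t + k), matching (54)
    taps = {2: 0.08, 1: -0.12, 0: 1.0, -1: 0.18, -2: -0.1,
            -3: 0.09, -4: -0.05, -5: 0.04, -6: 0.03, -7: 0.01}
    q = np.zeros(n)
    for t in range(7, n - 2):
        q[t] = sum(c * u[t + k] for k, c in taps.items())
    x = q + 0.036 * q ** 2 - 0.011 * q ** 3
    return u, x  # task: predict x(t-2) from u(t)

u, x = ncc(2200)
```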
#### V-A6 Pseudo Periodic Synthetic Time Series
Originating from the UC Irvine repository [70], the dataset can be generated using
\[x(t)=\sum_{i=3}^{7}\frac{1}{2^{i}}\sin\left(2\pi\left(2^{2+i}+rand(2^{i})\right)\cdot\frac{t}{10000}\right) \tag{55}\]
as defined in [71] (_PPST_).
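A possible generator for (55) is sketched below; the interpretation of \(rand(2^{i})\) as a fresh uniform draw in \([0,2^{i})\) per term and time step is our assumption.

```python
import numpy as np

def ppst(n, seed=0):
    """Pseudo Periodic Synthetic Time Series per (55)."""
    rng = np.random.default_rng(seed)
    t = np.arange(n)
    x = np.zeros(n, dtype=float)
    for i in range(3, 8):
        r = rng.uniform(0, 2 ** i, size=n)  # rand(2^i), assumed uniform per step
        x += (1 / 2 ** i) * np.sin(2 * np.pi * (2 ** (2 + i) + r) * t / 10000)
    return x

series = ppst(2200)
```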
#### V-A7 Predictive Modeling Problem
First introduced by Xue et al. [72], the dataset (_PMP_) can be generated using
\[x(t)=\sin\big{(}t+\sin(t)\big{)},\quad\text{with }t\in\mathbb{N}. \tag{56}\]
### _Compared Models_
To provide a reference for the ReLiCA model performance, several state-of-the-art models were used as baselines. These models and their hyperparameters are described in this section. In the following, a parameter optimized during hyperparameter optimization is denoted by a range, e.g., \([a,b]\). The results for these models are listed in Table II.
#### V-B1 Neural Networks
These models were created using TensorFlow 2.8.0 [73] with the default settings unless otherwise noted. The models have an Input layer and use a Dense layer as output. The hidden layers were adapted to the respective model. We used Adam [74] as the optimizer, with a learning rate of \([10^{-10},1]\). As a loss function, _MSE_ is used.
The Recurrent Neural Network [4] (_RNN_) uses a SimpleRNN layer with 64 units with dropout \([0,1]\) and recurrent dropout \([0,1]\).
The Gated Recurrent Unit NN [75] uses a GRU layer with 32 units and dropout \([0,1]\).

The Long Short Term Memory NN [76] uses an LSTM layer with 32 units and dropout \([0,1]\).

The Neural Network [77] (_NN_) model uses \([1,4]\) Dense layers with \([1,64]\) neurons per layer as hidden layers. The input to this model consists of the last 20 values of \(x(t)\), i.e., the vector \(\mathbf{x}=[x(t-19),x(t-18),\ldots,x(t)]\).
#### V-B2 RC Models
We used an ESN, SCR, and DLR model. All models use the Scikit-learn 1.1.2 [78] Ridge optimizer with an alpha \([10^{-10},1]\).
The _ESN_ model [3] uses the Tensorflow Addons ESN cell implementation embedded into our code. We used 128 units with a connectivity of \(10\,\%\). The other parameters are input scale \([0,10]\), input offset \([-10,10]\), spectral radius \([0,1]\), and leaky integration rate \([0,1]\).
We implemented the _SCR_ and _DLR_ models according to [5]. Both use 256 units, a spectral radius of \([0,1]\), input scale \([0,10]\), and input offset \([-10,10]\).
#### V-B3 ReCA Models
We used our implementation of the modified ReCA architecture together with the non-linear CA rules found by a GA in [36]. The lattice has a size of \((16,32)\), and the CA performs four iterations per input sample. All rules found by Babson et al. [36] were analyzed using all combinations of the transformation and quantization configurations (_complement_, _gray_, _scale_offset_, _sign_value_) and the encoding functions (_additive_, _replacement_, _subtractive_, _xor_). A Ridge optimizer is used for training. We call this model _Babson_.
The ReCA models were trained using 100 parallel models with 1100 time samples each. For testing and validation, 1100 data points are used. In training, testing, and validation, the first 100 data points are treated as the initial transient and discarded.
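The readout training itself reduces to ridge regression on the collected reservoir states; the following is a minimal sketch with the transient removal described above, where the array shapes and the random placeholder data are assumptions for illustration.

```python
import numpy as np
from sklearn.linear_model import Ridge

def train_readout(states, targets, transient=100, alpha=1.0):
    """Fit the linear readout on reservoir state trajectories.

    states:  (T, N) flattened CA states collected per time step
    targets: (T,)   ground-truth outputs
    The first `transient` steps are discarded as initial washout."""
    reg = Ridge(alpha=alpha)
    reg.fit(states[transient:], targets[transient:])
    return reg

# placeholder data: 1100 time steps, 16 * 32 = 512 reservoir cells
states = np.random.rand(1100, 512)
targets = np.random.rand(1100)
readout = train_readout(states, targets)
predictions = readout.predict(states[100:])
```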
#### V-B4 Linear Model
A simple linear regression model using Scikit-learn was also evaluated. Like the _NN_, the linear model, denoted by _Linear_, has the last 20 values of \(x(t)\) as input.
#### V-B5 Hyperparameter Optimization
The hyperparameter optimization framework Optuna[79] is used to optimize the hyperparameters of the models. We use the TPE-Sampler algorithm with 100 runs per model. For models using epoch-based training, early stopping was used. It was configured to stop the training if the loss is not decreasing by at least \(10^{-5}\) with a patience of three epochs.
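A minimal sketch of this optimization loop follows; `build_and_train_model` and `evaluate_nmse` are hypothetical helpers standing in for the actual training and evaluation code, and the shown search space mirrors only some of the ranges listed above.

```python
import optuna

def objective(trial):
    # hypothetical search space mirroring some ranges from section V-B
    alpha = trial.suggest_float("alpha", 1e-10, 1.0, log=True)
    spectral_radius = trial.suggest_float("spectral_radius", 0.0, 1.0)
    input_scale = trial.suggest_float("input_scale", 0.0, 10.0)
    model = build_and_train_model(alpha, spectral_radius, input_scale)  # assumed helper
    return evaluate_nmse(model)  # assumed helper: validation NMSE to minimize

study = optuna.create_study(direction="minimize",
                            sampler=optuna.samplers.TPESampler())
study.optimize(objective, n_trials=100)
print(study.best_params)
```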
#### V-B6 Complexity
One of the main advantages of ReCA models is their low computational complexity. To compare the complexities of the different types of models, we approximated their computational complexities of the inference step. This analysis was optimized for implementations on FPGAs without the usage of specialized hardware like multiply-accumulate units. Nevertheless, it should be a good indication also for other types of implementations. Assuming two numbers \(a\), \(b\) that are represented by \(a^{\prime}\), \(b^{\prime}\) bits, we define the following complexities:
addition and subtraction have a complexity of \(\min(a^{\prime},b^{\prime})\), whereas multiplication and division have a complexity of \(a^{\prime}\times b^{\prime}\). For additions and subtractions, we assume that the hardware does not need to deal with the most significant bits (MSBs) of the larger number since these are zero in the smaller number. For multiplication and division, we assume a shift-and-add implementation. To approximate the complexity of the _tanh_ function, we use the seventh-order Lambert continued fraction [80]. We assume the same complexity for the _sigmoid_ function. The _ReLU_ function has a complexity of zero.
We assume that the input and output have 32 bits, and all models use 32 bits to represent their internal states. For the ReCA models, the CA uses the required number of bits to represent \(\mathbb{Z}_{m}\), and the readout layer also uses 32 bits.
The number of units in the different baseline models was chosen to make the overall model complexity similar to the tested ReLiCA models. Because of this, the number of units was not optimized for model performance.
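Under this cost model, layer complexities can be approximated mechanically; the dense-layer decomposition below is our own illustration, not a formula from the paper.

```python
def add_cost(a_bits, b_bits):
    # addition/subtraction: min(a', b') per the stated cost model
    return min(a_bits, b_bits)

def mul_cost(a_bits, b_bits):
    # multiplication/division: a' * b' (shift-and-add assumption)
    return a_bits * b_bits

def dense_layer_cost(n_in, n_out, bits=32):
    """Illustrative inference cost of a dense layer under this model:
    n_in multiplications and (n_in - 1) additions per output."""
    per_output = n_in * mul_cost(bits, bits) + (n_in - 1) * add_cost(bits, bits)
    return n_out * per_output

# e.g. a readout mapping a flattened (16, 32) lattice to one 32-bit output
print(dense_layer_cost(n_in=16 * 32, n_out=1))
```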
## VI Results
The results of our experiments can be divided into two phases. In the first phase, as described in section VI-A, we identified well-working choices for some of the general hyperparameters of the ReLiCA model that were fixed for the large number of experiments and exhaustive CA rule performance analysis in phase two, which is described in section VI-B. Detailed results of the experiments can be found in appendix C.
### _General Hyperparameters_
Since the main focus of our analysis lies in selecting suitable combinations of transformation, quantization, mapping, and encoding methods, and linear CA rules for the reservoir, we fixed some of the hyperparameters of the ReLiCA framework to reduce the parameter space. Therefore, we first analyzed the influence of the reservoir size (\(N\)) and the number of CA iterations on the overall ReLiCA performance. The datasets used for the following analysis are MG, MG_25, MSO, MSO_3, NARMA_10, NARMA_20, NARMA_30, NCC, PPST, PPST_10, PMP (see section V-A).
To test the influence of the reservoir size, we tested the following lattice sizes: \((16,32)\), \((16,33)\), and \((17,31)\). These were chosen since the total number of cells is similar, but their prime factor decomposition differs significantly. The results for the ReLiCA model using _scale_offset_, _replacement_, \(\mathbb{Z}_{4}\), \(\hat{n}=3\) and \(I=4\) are depicted in Fig. 8. Other ReLiCA models showed similar behavior. Since none of the lattice sizes is superior to the others, we used \((16,32)\) in the following experiments. This was done since, for most hardware implementations, a power-of-two number of cells would most likely be suitable.
To see the influence of the CA iterations we tested the ReLiCA model using _scale_offset_, _replacement_, \(\mathbb{Z}_{4}\), \(\hat{n}=3\), and \((N_{r},N_{c})=(16,32)\). The results are shown in Fig. 9, and other configurations resulted in similar results. Increasing the number of CA iterations to \(>2\) steps did not lead to a significant monotonic decrease in the overall _NMSE_. This is in line with the results by Babson et al. [36], where they achieved a success rate of 99 % in the 5-bit memory task for complex CA reservoirs and four iterations. In their study, elementary (\(m=2\), \(\hat{n}=3\)) CAs were found to require eight iterations. However, Nichele et al. [33] show that several single elementary CA rules also achieve a success rate of \(\geq\)95 % in the 5-bit memory task with only four iterations. Since higher numbers of CA iterations imply a higher computational complexity and longer training and testing times, we fixed the number of iterations to four in all subsequent experiments.
Another finding is that the _replacement_ encoding together with the _scale_offset_ transformation achieves low errors in most configurations and is thus the most stable encoding with respect to changing values of the other hyperparameters. This can be seen in Fig. 10. Therefore, we fixed the transformation method to _scale_offset_ and the encoding to random _replacement_.
The random mapping generator's seeds, the only random element in the ReLiCA model, were fixed to ensure reproducible results.

Fig. 8: Influence of the lattice size \((N_{r},N_{c})\) on ReLiCA with \(\mathbb{Z}_{4}\), \(\hat{n}=3\), \(I=4\), scale_offset, and replacement.

Fig. 9: Influence of the number of iterations \(I\) on ReLiCA with \(\mathbb{Z}_{4}\), \(\hat{n}=3\), \((16,32)\), scale_offset, and replacement.
### _Rule Selection_
To analyze the performance of the Reservoir Computing using Linear Cellular Automata Design Algorithm, we used the following time-series benchmark datasets: MG, MG_25, MSO, MSO_3, NARMA_10, NARMA_20, NARMA_30, NCC, PPST, PPST_10, PMP (see section V-A). To train the ReLiCA models, a Ridge optimizer is used with \(\alpha=1\), the default value for Scikit-learn. The number of states \(m\), neighborhood \(\hat{n}\), and local rule of the CA are varied throughout the experiments. We denote the models designed using ReLiCADA by ReLiCA* and the general class of ReLiCA models using the whole set of possible linear CA rules by ReLiCA. All individual performance values are listed in the appendices A and C. We used a train-test split for the datasets to conduct our experiments. Unless otherwise noted, the test performance values are used.
In Fig. 11, we compare the mean _NMSE_ of the overall best ReLiCA model, analyzing all possible linear rules, with the best and worst ReLiCA* models, whose rules were selected by ReLiCADA. Best and worst are determined per dataset, so different rules may be used for different datasets. It can be seen that the best ReLiCA* model is very close to the overall best ReLiCA model, especially considering that the overall worst rule has a mean _NMSE_ \(>1\). Not only does the best ReLiCA* model show nearly optimal performance, but so does the worst one. It is also evident that increasing \(\hat{n}\) from 3 to 5 did not improve the performance. This behavior was also verified for several other values of \(m\) (see appendix C).
Instead of using only the mean _NMSE_ for this analysis, we also checked how many ReLiCA models are worse than a selected ReLiCA* model. The results are depicted in Fig. 12 and clearly show that the best ReLiCA* model is at least better than 95 % of the total rule space. Even the worst ReLiCA* model is still better than 80 % of the overall ReLiCA models. This again verifies that the performance of all rules selected by ReLiCADA is far better than randomly choosing a linear rule.
Since it is feasible to test all configurations selected by ReLiCADA, the best performance shown in Figs. 11 and 12 can always be achieved in practice.
Since one goal was to achieve a computationally simple model with low complexity while maintaining good model performance, we compared these two parameters in Fig. 13. We used a train-test-validation split of the dataset for this analysis. The test performance values were used to select the best model, and the validation performance values are shown in Fig. 13. No large deviations between test and validation performance were evident during our experiments. The ReLiCA models have less complexity compared to the RC and NN models. Despite their computational simplicity, they still achieve similar or even better performance. Increasing \(m\) for the ReLiCA models increases not only the model complexity but also the model performance. However, it is apparent that the performance gain from increasing \(m\) declines. A neighborhood of \(\hat{n}=3\) was chosen for the ReLiCA models since increasing the neighborhood would not result in better performance.

Fig. 11: Comparison of the performance of the overall best linear rule with the best and worst rule selected by ReLiCADA.

Fig. 12: Rules selected by ReLiCADA are better than \(x\%\) of the overall linear rules.

Fig. 10: Comparison of the different transformation, quantization, mapping, and encoding functions using ReLiCA with \(\mathbb{Z}_{4}\), \(\hat{n}=3\), \(I=4\), \((N_{r},N_{c})=(16,32)\). Used abbreviations: _additive_, _replacement_, _subtractive_, _xor_; _complement_, _gray_, _scale_offset_, _sign_value_.
Despite the nonlinear, and thus more complex, CA of the _Babson_ models, their performance does not reach that of the ReLiCA* models. While the ReLiCA* \(\mathbb{Z}_{4}\) models achieve a mean _NMSE_ of \(0.12\), the _Babson_ models only achieve \(0.34\). As the nonlinear CA rules of the _Babson_ models have been optimized with a GA, this indicates that heuristic search and optimization algorithms cannot deal well with the structure and size of the general CA rule space.
To analyze the influence of the random mapping on the ReLiCA model performance, we tested several different seeds for the random mapping generator. While there is an influence on the performance, it is negligible for the models selected by ReLiCADA. In Fig. 14, the empirical cumulative distribution function for different seeds is visualized for \(\mathbb{Z}_{4}\), \(\hat{n}=3\) ReLiCA and ReLiCA* models using _scale_offset_ and _replacement_. The slight performance difference decreases even further with larger \(m\).
During our experiments, we mainly focused on the integer rings \(m=2^{a}\) with \(a\in\mathbb{N}^{+}\) since these are most suitable for implementations on FPGAs and other digital systems. Nevertheless, we verified ReLiCADA for several other values of \(m\) (see appendix C). These results showed that ReLiCADA can also be used for \(m\neq 2^{a}\). According to our experiments, CAs over \(\mathbb{Z}_{2}\) behave differently; for example, the best encoding for these CAs is the _xor_ encoding. Since this configuration was not of primary interest, we did not analyze it further. Furthermore, we verified ReLiCADA on lattice sizes other than \((16,32)\) and iteration counts other than \(4\). For these configurations as well, ReLiCADA showed great improvements in performance compared to the whole set of all possible ReLiCA models. The performance values are listed in appendix C.
We also ran tests where the quantized input \(x_{q}\) was fed directly into the readout layer, forming a quantized skip connection. When the _replacement_ encoding was used, this did not lead to any performance gain. Since ReLiCADA only uses _replacement_ encoding, quantized skip connections are not used in our models. However, a performance gain was observed when the readout layer was provided with the original input \(x\) directly. Since this imposes only a very small increase in complexity, we recommend using this skip connection if possible.
### _Nonlinear Capabilities_
During our experiments, we saw that ReLiCA models could not deal with highly nonlinear datasets, like _Henon_, very well. However, after using the hyperparameter optimizer Optuna to optimize the quantization thresholds (see (35)) and the regularization of the Ridge optimizer, the performance of the ReLiCA model increased drastically. The ReLiCA* model with \(\mathbb{Z}_{16}\), \(\hat{n}=3\) achieved an _NMSE_ of \(0.321\) before optimization and \(0.048\) after. Other transformation and quantization layers could likely improve the nonlinear capabilities of linear ReLiCA models. However, this was not further analyzed.
Further tests have shown that the ReLiCA model performance also improves on the other datasets when Optuna is used to optimize quantization thresholds. Since we wanted to create a fast and easy-to-train model, we refrained from using threshold optimization in our results.
## VII Conclusion
ReCA represents a particular form of the broader field of RC that is particularly suited to be implemented on FPGAs. However, the choice of hyperparameters and, primarily, the search for suitable CA rules are major challenges during the design phase of such models. When restricted to linear CAs, fundamental properties can be computed analytically. Based on the results of nearly a million experiments, we recognized that linear CA rules that achieve low errors on many relevant benchmark datasets have specific mathematical properties. Based on these insights, we developed the Reservoir Computing using Linear Cellular Automata Design Algorithm, which selects hyperparameters that have been shown to work well in the experiments. Most importantly, the proposed algorithm pre-selects a few rules out of the rule space that grows exponentially with increasing \(m\) and \(\hat{n}\). As has been shown, the best-performing selected rules are among the top \(5\,\%\) of the overall rule space. Moreover, the proposed models achieve, on average, a lower error than other state-of-the-art neural network models and, at the same time, exhibit less computational complexity, showing the strength of ReLiCADA. Furthermore, with the immensely reduced hyperparameter space, the time needed to design and implement ReCA models is drastically reduced. In conclusion, ReLiCADA is a promising approach for designing and implementing ReCA models for time series processing and analysis.

Fig. 14: Influence of the random mapping seed on the ReLiCA model performance. The used model configuration is: \(\mathbb{Z}_{4}\), \(\hat{n}=3\), _scale_offset_, _replacement_.

Fig. 13: Comparison of model performance with model complexity.
## Appendix A Performance Values of Compared Models
Tables II to IV list the _NMSE_ values of the models compared throughout this paper. The dark blue color highlights the model with the lowest _NMSE_ for the respective dataset. The light blue color indicates that the model has a similar performance (same value rounded to 3 decimal places) compared to the best model.
The validation performance values of the reference models are listed in Table II. All models except the Linear model were optimized using Optuna.
Tables III and IV list the performance values of the ReCA models for the test and validation set.
## Appendix B Supplementary Materials
Supplementary files to this paper can be found in the git repository at [https://github.com/jkantic/ReLiCADA](https://github.com/jkantic/ReLiCADA).
## Appendix C Performance Values of ReLiCADA
The _NMSE_ values of all tested ReLiCADA models are listed in the file _relicada_performance.xlsx_ (supplementary materials). The table lists the number of rules selected by ReLiCADA as well as the minimum, mean, and maximum _NMSE_ values of the selected rules. If the whole rule space for a given configuration was tested, these values are also calculated for the whole set of rules. Furthermore, the percentage of rules worse than the best/worst ReLiCADA rule is stated.
## Appendix D Tested Configurations
The file _configs.xlsx_ (supplementary materials) contains all ReLiCA model configurations tested. The trans/quant column lists the used transformation and quantization algorithm.
## Appendix E Raw Experiment Output
The supplementary materials to this paper include CSV files containing the raw experiment results for all tested configurations. For each dataset, a separate CSV file lists: \((N_{r},N_{c})\), \(I\), \(m\), \(\hat{n}\), transformation, quantization, encoding, mapping, seed, \(\mathbf{w}\), and the _NMSE_ performance value.
## Appendix F ReLiCADA Pseudocode
The pseudocode in algorithm 1 together with (43) to (46) and the explanation in section IV-B can be used to implement ReLiCADA.
|
2305.10818 | Diffusion Language Models Generation Can Be Halted Early | Diffusion Language models (DLMs) are a promising avenue for text generation
due to their practical properties on tractable controllable generation. They
also have the advantage of not having to predict text autoregressively.
However, despite these notable features, DLMs have not yet reached the
performance levels of their autoregressive counterparts. One of the ways to
reduce the performance gap between these two types of language models is to
speed up the generation of DLMs. Therefore, we propose a novel methodology to
address this issue in this work. It enables the execution of more generation
steps within a given time frame, leading to higher-quality outputs.
Specifically, our methods estimate DLMs completeness of text generation and
allow adaptive halting of the generation process. We evaluate our methods on
Plaid, SSD, and CDCD DLMs and create a cohesive perspective on their generation
workflows. Finally, we confirm that our methods allow halting these models and
decrease the generation time by $10$-$40$\% without a drop in the quality of
model samples. | Sofia Maria Lo Cicero Vaina, Nikita Balagansky, Daniil Gavrilov | 2023-05-18T08:56:05Z | http://arxiv.org/abs/2305.10818v4 | # Democratized Diffusion Language Model
###### Abstract
Despite the potential benefits of Diffusion Models for NLP applications, publicly available implementations, trained models, and reproducible training procedures are currently lacking. We present the Democratized Diffusion Language Model (DDLM), based on the Continuous Diffusion for Categorical Data (CDCD) framework, to address these challenges. We propose a simplified training procedure for DDLM using the C4 dataset and perform an in-depth analysis of the trained model's behavior. Furthermore, we introduce a novel early-exiting strategy for faster sampling with models trained with score interpolation. Since no previous works aimed at solving downstream tasks (e.g., classification) with a pre-trained Diffusion LM, we experimented with the GLUE Benchmark to study the ability of DDLM to transfer knowledge. With this paper, we provide training and evaluation pipelines and pre-trained DDLM models, which could be used in future research on Diffusion LMs.
## 1 Introduction
Language Models (LMs) have proven instrumental in Natural Language Processing (NLP) tasks, often trained autoregressively (Radford et al., 2019; Raffel et al., 2020; Chowdhery et al., 2022) or via Masked Language Models (MLMs) (Devlin et al., 2019; He et al., 2020; Liu et al., 2019; Lan et al., 2020). While these have shown great success across various benchmarks (Wang et al., 2018, 2019; Rajpurkar et al., 2016), exploring alternate models like Diffusion Models (Ho et al., 2020; Song et al., 2020) holds promise.
However, training Diffusion LMs has faced challenges like reproducibility issues and lack of pre-trained weights (Anonymous, 2023; Dieleman et al., 2022; Lin et al., 2022). Our work aims to rectify these issues by developing a reproducible training process, providing pre-trained weights, and assessing the efficacy of a Diffusion LM for downstream tasks.
We introduce the Democratized Diffusion Language Model (DDLM), an adaptation of the Continuous Diffusion for Categorical Data (CDCD) framework (Dieleman et al., 2022). Trained on the C4 dataset (Raffel et al., 2020), we test its performance and transferability using an innovative sampling method and the GLUE Benchmark (Wang et al., 2018). Although Diffusion LMs must catch up to other baselines, our work hopes to stimulate further research.
By sharing the pre-trained DDLM models and related code, we aim to encourage advances in Diffusion LMs. This paper also discusses the challenges in training Diffusion LMs and suggests possible future research directions.
Background
In this section, we briefly describe the Continuous Diffusion for Categorical Data (CDCD) framework (Dieleman et al., 2022), which we selected as the starting point for our model.
**Score interpolation**. The essential part of the CDCD work is the score interpolation objective, which replaces the score matching objective usually used to train diffusion models with continuous data (Hyvarinen, 2005). Given noised input embeddings \(x\), CDCD suggests predicting a distribution over the \(|V|\) possible embeddings for each token in the sequence, \(p(x_{0}|x,t)\). This distribution is obtained by predicting its logits and applying the softmax function over them. Cross-entropy loss is then used to estimate \(p(x_{0}|x,t)\); i.e., \(\mathcal{L}_{CE}(x_{0},x,t)=-\log\big{(}p(x_{0}|x,t)\big{)}\) is minimized. With this loss, the model takes noised embeddings and predicts the clean embedding through a discrete distribution.
Estimation of score function \(\hat{s}(x,t)\) used for sampling with ODE solver is then evaluated as \(\hat{s}(x,t)=\mathbb{E}_{p(x_{0}|x,t)}\big{[}s(x,t|x_{0})\big{]}=\frac{\hat{x} _{0}-x}{t^{2}}\), where \(\hat{x}_{0}=\mathbb{E}_{p(x_{0}|x,t)}\big{[}x_{0}\big{]}\) is predicted embeddings, and \(s(x,t|x_{0})=\frac{x_{0}-x}{t^{2}}\)(Karras et al., 2022).
Note that the std \(\sigma\) of the noise added at time \(t\) is equal to the time itself, i.e., \(\sigma=t\) (Dieleman et al., 2022). Because of this, after noise is added, the embeddings are scaled by \(\frac{1}{\sqrt{1+t^{2}}}\) so that the std of the embeddings passed to the model is again equal to \(1\).
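A minimal PyTorch sketch of score interpolation as described above; the tensor shapes and the model interface are assumptions.

```python
import torch
import torch.nn.functional as F

def interpolated_score(logits, x, t, embedding_table):
    """Score estimate from the predicted token distribution.

    logits:          (B, L, V) model output over the vocabulary
    x:               (B, L, D) noised embeddings
    t:               scalar noise level, sigma = t
    embedding_table: (V, D) clean token embeddings
    """
    p = F.softmax(logits, dim=-1)      # p(x0 | x, t)
    x0_hat = p @ embedding_table       # E_p[x0], shape (B, L, D)
    return (x0_hat - x) / t ** 2       # s_hat(x, t) = (x0_hat - x) / t^2
```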
**Embeddings normalization**. Since the model trained with the \(\mathcal{L}_{CE}\) loss is forced to predict correct embeddings from noisy ones, a naive application of this objective leads to uncontrolled growth of the embedding norms, as larger norms make the embeddings easier to distinguish. To prevent this, CDCD applies \(L_{2}\) normalization to the embeddings during training.
**Time warping**. During training, it is necessary to sample the time \(t\) from some distribution. CDCD learns the CDF of time, \(\mathrm{F}_{\phi}(t)\), following Kingma et al. (2021). More concretely, for the CDCD framework, it is trained with the loss \(\mathcal{L}_{TW}=\|\widetilde{\mathrm{F}}_{\phi}(t)-\mathcal{L}_{CE}(\dots,t)\|\), where \(\widetilde{\mathrm{F}}_{\phi}(t)\) is an unnormalized CDF parametrized by \(\phi\). By normalizing and inverting \(\widetilde{\mathrm{F}}_{\phi}(t)\), we can obtain samples from it. \(p(x_{0}|x,t)\) is then conditioned on \(t\) via conditional layer normalization (Perez et al., 2018). Following Dieleman et al. (2022), we further refer to shaping the noise schedule as _time warping_.
**Noise masking**. CDCD proposes practical implementations for training language models (LMs). The first approach involves injecting noise into the embedding sequence continuation while keeping its beginning intact, also known as prefix masking. Alternatively, noise can be injected at random sequence positions akin to Masked Language Models training (fully random masking) (Devlin et al., 2019; He et al., 2020; Liu et al., 2019; Lan et al., 2020). The third approach combines these two, injecting noise into random positions in a sequence continuation (mixed masking). The cross-entropy loss \(\mathcal{L}_{CE}\) is calculated, setting loss values for conditional embeddings to zero.
## 3 Democratizing Continuous Diffusion for Categorical Data
### Notes on Reproducibility of CDCD
The reproducibility of the CDCD framework, as presented by Dieleman et al. (2022), is limited. The original paper's lack of specific details and source code makes reproducing the results challenging, thus hindering further experiments. Reimplementing CDCD solely from the original paper does not ensure accuracy, as the primary evaluation metric, Autoregressive Negative Log-Likelihood (AR-NLL), was tested with an undisclosed Language Model.
In this work, we address these issues and provide a functional CDCD framework implementation for facilitating further research. Our replication attempts also revealed potential enhancements to the framework, discussed later in this paper.
### Experimental Setup
To evaluate our modifications to the CDCD framework, we trained DDLM-Base models (147M parameters) on the C4 dataset (Raffel et al., 2020) using a sequence length of \(64\) tokens. Training data was tokenized with \(|V|=32k\), and we utilized \(256\)-sized embeddings for tokens. The model was
trained on \(8\) NVidia A100 SXM4 80GB GPUs, with a million training steps completed in roughly two days. Hyperparameters are provided in Table 1.
We drew \(5\)k examples from the C4 validation set for text generation validation and sampled \(5\) continuations with different seeds. We used the Euler sampler (Karras et al., 2022) with 50 steps for the diffusion models, ensuring a similar speed to equivalent autoregressive models (See Figure 3). The prompt size was set at 32 tokens. Several metrics were employed to assess generated texts: AR-NLL measured by GPT-Neo-1.3B Black et al. (2021), MAUVE metric Pillutla et al. (2021), average distinct N-grams over samplings, Zipf's coefficient over token distribution, token entropy, and the self-BLEU score of the generated texts.
### Reproducing Time Warping
During our initial experiments, we found that time warping is redundant. One possible reason for this inconsistency with Dieleman et al. (2022) is the parametrization of \(\widetilde{\mathrm{F}}_{\phi}(t)\). Since \(\mathrm{F}_{\phi}(t)\) should reach a value of \(1\) in a finite number of steps, its support is a hyperparameter to choose. Dieleman et al. (2022) used \(t_{min}=1\) and \(t_{max}=300\) for their experiments.
However, we found that one can use smaller values of \(t_{max}\) (still large enough to make noised embeddings non-trivial to classify). Furthermore, with smaller values of \(t_{max}\), we observed that time warping can be omitted and \(t\) sampled uniformly between \(t_{min}\) and \(t_{max}\). Following Dieleman et al. (2022), we also used low-discrepancy sampling (Kingma et al., 2021).
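A sketch of such low-discrepancy uniform sampling of \(t\), assuming the common stratified-grid implementation of Kingma et al. (2021):

```python
import torch

def sample_t_low_discrepancy(batch_size, t_min, t_max):
    """Stratified ('low-discrepancy') uniform draw of noise levels:
    a single random offset shared across an evenly spaced grid."""
    u = torch.rand(())                                 # shared offset in [0, 1)
    grid = (torch.arange(batch_size) + u) / batch_size
    return t_min + (t_max - t_min) * grid

t = sample_t_low_discrepancy(batch_size=1024, t_min=0.0, t_max=10.0)
```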
| L | H | D | Seq. len. | Spans | Optim. |
| --- | --- | --- | --- | --- | --- |
| 8 | 8 | 1024 | 64 | 8 | Adam |

| LR | Scheduler | Warmup | Batch size | \(t_{max}\) | Steps |
| --- | --- | --- | --- | --- | --- |
| 3e-5 | Cos. w/ Warmup | 10k | 1024 | 10 | 1e6 |

Table 1: Pre-training hyperparameters used for our experiments. L stands for the number of layers, H for the number of heads per Transformer layer, and D for the hidden size. Note that we performed the ablation study with the DDLM-Base model, so in some cases we report results with different parameters (e.g., different \(t_{max}\) values); in such cases we explicitly state which parameters we used.
| \(t_{max}\) | TW | Unconditional AR-NLL \(\downarrow\) | Unconditional dist\({}_{1}\) \(\uparrow\) | Prefix\({}_{32}\) AR-NLL \(\downarrow\) | Prefix\({}_{32}\) MAUVE \(\uparrow\) | Prefix\({}_{32}\) dist\({}_{1}\) \(\uparrow\) |
| --- | --- | --- | --- | --- | --- | --- |
| 5 | - | 8.45 | **0.87** | 6.10 | 0.19 | **0.90** |
| 5 | + | 8.45 | 0.86 | 5.86 | 0.36 | 0.88 |
| 8 | - | 5.41 | 0.65 | 4.54 | 0.83 | 0.68 |
| 8 | + | 5.76 | 0.67 | 7.48 | 0.01 | 0.98 |
| 10 | - | 4.63 | 0.57 | **4.07** | **0.87** | 0.58 |
| 10 | + | **4.44** | 0.56 | 4.80 | 0.67 | 0.61 |
| 20 | - | 3.25 | 0.36 | 3.57 | 0.73 | 0.39 |
| 20 | + | 3.34 | 0.37 | 3.64 | 0.72 | 0.37 |
| 50 | - | 3.10 | 0.21 | 3.51 | 0.36 | 0.24 |
| 50 | + | 5.97 | 0.66 | 3.36 | 0.41 | 0.22 |
| 100 | - | NaN | 0.06 | 5.05 | 0.56 | 0.68 |
| 100 | + | 5.02 | 0.60 | 3.45 | 0.13 | 0.13 |
| 300 | - | – | – | – | – | – |
| 300 | + | 2.15 | 0.10 | 3.42 | 0.08 | 0.11 |

Table 2: Comparison of different \(t_{max}\) values and the presence of time warping (TW) for text generation with the DDLM-Base model. We marked dist\({}_{1}\) values lower than \(0.5\) with red color. We bolded the best results among models that do not produce low dist\({}_{1}\). An analysis of the results is given in Section 3.3.
See Table 2 for the results of this experiment. First, we observed that large values of \(t_{max}\) produced incomprehensible samples. For \(t_{max}\geq 50\) without time warping, trained models showed a near-zero number of distinct n-grams. While adding time warping increased the number of distinct n-grams, it resulted in a dramatically increased AR-NLL metric. For smaller values of \(t_{max}\), it is harder to decide which model performed best. \(t_{max}=20\) showed a better AR-NLL metric, though with higher token repetition. See Table 4 for examples of text samplings with \(t_{max}\in[10,20]\). Due to the lower repetition, we decided that \(t_{max}=10\) performed best; \(t_{max}<10\) performed poorly.
It is notable that while using time warping with \(t_{max}=10\) results in slightly lower repetition for samplings and better AR-NLL for unconditional generation, for conditional generation sampling \(t\) uniformly leads to better AR-NLL and MAUVE. Since the MAUVE metric directly measures the alignment of text continuation with ground truth continuation, this could indicate better natural language understanding capabilities for a model trained with uniform sampling. Thus, we used a setup with \(t_{max}=10\) and uniform sampling of \(t\) for the following experiments.
### Span Masking Strategy
While Dieleman et al. (2022) preferred mixed masking, we suggest that prefix masking, a component of mixed masking, could be extended to span masking (Anonymous, 2023b). In span masking, a token sequence is divided into \(k\) parts (\(k\) being a random integer from \(1\) to a constant \(k_{max}=9\)) by randomly selecting \(k-1\) indices. These indices delineate \(k\) spans, each masked entirely with 50% probability. This method aligns with practical applications of DDLM by training the model to handle scenarios where only the sequence end is provided as a condition. When we refer to "masked" tokens, we are talking about tokens that will have their embedding noised.
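A possible NumPy implementation of this span masking procedure (the returned boolean mask marks positions whose embeddings get noised):

```python
import numpy as np

def span_mask(seq_len, k_max=9, seed=None):
    """Boolean mask for span masking: split the sequence into k random
    spans (k uniform in [1, k_max]) and noise each span with probability 0.5."""
    rng = np.random.default_rng(seed)
    k = rng.integers(1, k_max + 1)
    cuts = np.sort(rng.choice(np.arange(1, seq_len), size=k - 1, replace=False))
    bounds = np.concatenate(([0], cuts, [seq_len]))
    mask = np.zeros(seq_len, dtype=bool)
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        if rng.random() < 0.5:
            mask[lo:hi] = True  # True = embedding gets noised
    return mask

print(span_mask(64, seed=0))
```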
We trained DDLM with span masking as per Anonymous (2023b), comparing it with the prefix, random, and mixed masking strategies. Models were evaluated on unconditional and conditional text generation tasks. In the latter, we sampled texts with a prefix length of \(32\) (as in Section 3.3) and in the middle of sequences, enclosed by beginning and ending prompts of length \(16\). We anticipated that enclosed sampling would offer insights into the model's ability to sample text at arbitrary sequence positions. We refer to this setup as "Inpainting\({}_{32}\)" in Table 3.
Prefix masking demonstrated lower AR-NLL for unconditional and prefix-conditioned generation but higher repetition. Notably, MAUVE for prefix conditioning was lower than for the span and mixed strategies. For enclosed conditioning, span sampling showed the lowest AR-NLL, albeit with fewer distinct tokens. Prefix masking for enclosed conditioning had a lower MAUVE compared to the other strategies.
There is no definitive conclusion, as all masking strategies excelled in specific tasks per certain metrics. As per Section 3.3, we speculate that a higher MAUVE metric could indicate superior language understanding abilities, where prefix masking showed the lowest values. However, the differences in metric values were minimal. Hence, we continued further experiments using the span masking strategy.
See Table 5 for the final metrics of the trained model.
| Generation Task | Metric | Span | Random | Mixed | Prefix |
| --- | --- | --- | --- | --- | --- |
| Unconditional | AR-NLL \(\downarrow\) | 4.279 | 4.442 | 4.283 | **4.155** |
| Unconditional | dist\({}_{1}\) \(\uparrow\) | **0.563** | 0.556 | 0.562 | 0.551 |
| Prefix\({}_{32}\) | AR-NLL \(\downarrow\) | 3.928 | 4.070 | 3.939 | **3.917** |
| Prefix\({}_{32}\) | dist\({}_{1}\) \(\uparrow\) | 0.588 | 0.576 | 0.588 | **0.589** |
| Prefix\({}_{32}\) | MAUVE \(\uparrow\) | 0.884 | 0.876 | **0.885** | 0.873 |
| Inpainting\({}_{32}\) | AR-NLL \(\downarrow\) | **3.955** | 4.089 | 3.974 | 4.060 |
| Inpainting\({}_{32}\) | dist\({}_{1}\) \(\uparrow\) | 0.587 | 0.581 | 0.589 | **0.596** |
| Inpainting\({}_{32}\) | MAUVE \(\uparrow\) | 0.891 | **0.894** | 0.892 | 0.888 |

Table 3: Comparison of different masking strategies for text generation with DDLM-Base. The best metric value across masking strategies is bolded. See Section 3.4 for more details.
## 4 Experiments
### Early Exiting Strategy
Our initial experiment aimed to understand token behavior during generation. Given DDLM's categorical operation, we evaluated token switches throughout the generation, hypothesizing a high initial rate that decreases over time. We assessed token switches at each timestep \(t\) during generation.
\begin{table}
\begin{tabular}{p{113.8pt}|p{113.8pt} p{113.8pt} p{113.8pt} p{113.8pt} p{113.8pt} p{113.8pt}} \hline \hline \multicolumn{2}{c}{Prefix} & \multicolumn{2}{c}{w/o TW} & \multicolumn{2}{c}{w/ TW} \\ \hline \multicolumn{2}{c}{\(t_{max}\) = 10} \\ \hline The woman who died after falling from a bridge over the A21 has been identified as a Sevenoaks mum. Marta Kendle, 37, fell & in on the night and she has been with several children on bail when she was 17. She was taken shortly after the death of her family wedding in South Africa & into peacefully in the tragic last accident after Belfast and was at the result of a woman walking on at a collision on the same hospital. SheShe traveled to her \\ \hline messaging system designed as a plugin-based system. Doubtless the kopete people would disagree with me here, but I’d say & I would. No, no. I would guess the kopete project will thank all of me for the system. I love the problem one, but & that OmnindM software is far for the most K system solution. I wanted to argue that the multiseasoning plugin is simply as a computer that K \\ \hline \hline As the nation’s largest no-reserve internet auction firm specializing in construction and agricultural equipment, Purple Wave is focused on transforming the way in which sellers reach & their goals by selling buyers, selling communities, and helping the people in the world to sell to the marketplace. & buyers and sellers buyers. In the auction, sellers of buyers, brokers, and brokers from the largest auction grew over the largest markets in the nation. \\ \hline Stretch Therapy (ST) is a comprehensive system that includes stretching, fascia remodelling, strengthening, neural re-patterning, and relaxation. & The ST therapy also helps to improve the pain, pain and nervous condition. & Stretch includes physical and neck therapies, including neck, neck, neck, neck and neck, neck pain and massage and muscle disorders. \\ \hline \hline \end{tabular}
\end{table}
Table 4: Conditional sampling for different \(t_{max}\) values obtained with DDLM-Base. We show in color tokens that appeared in the text more than once. See more details in Section 3.3.
| Model | MAUVE \(\uparrow\) | AR-NLL \(\downarrow\) | dist\({}_{1}\) \(\uparrow\) | dist\({}_{2}\) \(\uparrow\) | dist\({}_{3}\) \(\uparrow\) | s.-BLEU \(\downarrow\) | t. ent. | zipf |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Data | N/A | 3.31 | N/A | N/A | N/A | 0.19 | 7.71 | 0.90 |
| _Unconditional_ | | | | | | | | |
| DDLM-Base | N/A | 4.28 | 0.56 | 0.89 | **0.95** | **0.37** | 6.80 | 1.12 |
| GPT-Neo | N/A | **2.27** | 0.66 | 0.88 | 0.89 | 0.45 | **7.03** | 1.05 |
| GPT-2 | N/A | 2.62 | **0.67** | **0.90** | 0.90 | 0.43 | 6.87 | **1.10** |
| _Prefix\({}_{32}\)_ | | | | | | | | |
| DDLM-Base | **0.88** | 3.93 | 0.59 | **0.87** | **0.91** | **0.23** | **7.49** | **0.94** |
| GPT-Neo | 0.83 | **3.20** | 0.58 | 0.85 | 0.88 | 0.27 | 7.38 | 0.96 |
| GPT-2 | 0.86 | 3.21 | **0.60** | 0.86 | 0.89 | 0.26 | 7.45 | 0.96 |
| _Inpainting\({}_{32}\)_ | | | | | | | | |
| DDLM-Base | 0.89 | 3.96 | 0.59 | 0.87 | 0.91 | 0.24 | 7.51 | 0.95 |

Table 5: Final metrics of the DDLM model trained with \(t_{max}=10\), without time warping, and with the Span Masking Strategy. We used \(50\) diffusion steps for generation. Token entropy and Zipf's coefficient are expected to be close to the values from the dataset. See Section 3.2 for details of the experimental setup, and Sections 3.3 and 3.4 for details of the ablation study.
We also tracked these quantities at various DDLM pre-training checkpoints, alongside the entropy of the embedding prediction \(p(x_{0}|x,t)\). For this, we sampled sequences with \(200\) steps (refer to Figure 1 (top)).
Interestingly, the trained model exhibited zero token switches after approximately the 100th sampling step, suggesting an early exit possibility in DDLM generation. This could be facilitated by introducing a 'patience' hyperparameter \(n_{p}\), which stops generation once token switches reach zero for \(n_{p}\) steps. Figure 1 (bottom) displays a comparison of plain and early exiting generation on the AR-NLL metric.
Comparing AR-NLL for different numbers of generation steps \(n_{steps}\) and different \(n_{p}\) values revealed that early exiting does not affect performance, reinforcing the observation that generation can be halted once the number of token switches reduces to zero. Interestingly, adjusting \(n_{p}\) and \(n_{steps}\) can enhance the base algorithm's performance for a fixed number of generation steps. For instance, comparing vanilla sampling with \(n_{steps}=200\) to \(n_{steps}=500\) and \(n_{p}=25\) (both methods finish generation at around \(200\) steps) suggested that early exit with larger \(n_{steps}\) performs better.
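A schematic sketch of this patience-based early exit follows; the model interface and the Euler update are assumptions that only illustrate the mechanism, not the exact DDLM sampler.

```python
import torch

def generate_with_early_exit(model, x, ts, vocab_emb, n_patience=25):
    """Patience-based early exit: stop once the argmax tokens stay
    unchanged for n_patience consecutive diffusion steps.
    `model(x, t)` is assumed to return logits of shape (B, L, V)."""
    prev_tokens, stable = None, 0
    for i in range(len(ts) - 1):
        t, t_next = ts[i], ts[i + 1]
        logits = model(x, t)
        tokens = logits.argmax(dim=-1)
        if prev_tokens is not None and torch.equal(tokens, prev_tokens):
            stable += 1
            if stable >= n_patience:
                return tokens, i + 1  # early exit after i + 1 steps
        else:
            stable = 0
        prev_tokens = tokens
        # schematic Euler ODE step using the interpolated score:
        # dx/dt = (x - x0_hat) / t for sigma(t) = t
        p = torch.softmax(logits, dim=-1)
        x0_hat = p @ vocab_emb
        x = x + (t_next - t) * (x - x0_hat) / t
    return prev_tokens, len(ts) - 1
```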
To understand why the trained model tends towards minimal token switches early in generation, we also examined the L2 norm of \(\hat{x}_{0}\) and \(x\) during generation (refer to Figure 2). We found that \(\hat{x}_{0}\) rapidly reaches an L2 norm of \(16\), the L2 norm of normalized embeddings during pre-training. This aligns with our observation of the entropy of \(p(x_{0}|x,t)\) reaching near-zero values within \(100\) generation steps. Fascinatingly, the L2 norm of \(x\) first decreases, then increases from its large initialization value, suggesting that \(x\) travels from one point on the surface of the embedding sphere to another via its interior.

Figure 1: The number of token switches (top left), the entropy of \(p(x_{0}|x,t)\) during the generation process (top right), the AR-NLL metric as a function of the number of diffusion steps \(n_{steps}\) for different \(n_{p}\) (bottom left), and the AR-NLL metric as a function of actually performed steps for different \(n_{p}\) (bottom right). By the end of the training, models behave such that it is possible to introduce an early exiting criterion for the generation process based on the number of token switches. Performing early exit does not hurt performance and produces samples with the same AR-NLL value. At the same time, in terms of actually performed generation steps, using early exit allows us to increase the quality of generated texts, since we can increase \(n_{steps}\) and reduce \(n_{p}\). See Section 4.1 for more details.
To support this hypothesis, we evaluated the cosine between the score \(\hat{s}\) and the final score \(\hat{s}_{0}\), and the cosine between \(x\) and the final \(x_{0}\), during the generation process. After the 100th step, the score angle remains stable, indicating that the direction of the final embedding improvement is already determined by mid-generation. This constant direction forces \(x\) to the boundary of the embedding sphere, leading to high-confidence results and near-zero token switches.
Empirical evidence suggests that \(x\) traverses between two points on the surface of a sphere via its interior. By reducing the initial noise scale, we can effectively shorten the trajectory of \(x\). See Figure 3 and Table 8 for results. We found that a lower initial noise scale enables \(||x_{0}||_{2}\) to reach its minimum value more rapidly during generation. However, this approach also reduces the total number of unique tokens, limiting the variability of the samples.
### Down-stream Fine-Tuning
We utilize the GLUE benchmark (Wang et al., 2018) to evaluate the model's ability to transfer knowledge to downstream tasks. We conduct a comparative analysis by fine-tuning the pre-trained DDLM, RoBERTa (Liu et al., 2019), and GPT-2 (Radford et al., 2019) models. We performed a grid hyper-parameter search with the ranges from Table 8. Each task was trained for \(10\) epochs, except for RTE, which we trained for \(20\). After each epoch, we evaluated the appropriate task metric on the validation set and used the best metric observed as the final result. We evaluated the best-performing hyper-parameter set \(5\) times and report the mean values across task metrics and seeds.
The mean results are presented in Table 6. It is evident that DDLM significantly underperforms compared to RoBERTa on these benchmarks and mostly underperforms compared to GPT-2, although it demonstrates comparable performance on certain tasks such as MRPC and QQP. Interestingly, when comparing only the diffusion models, there is a substantial performance gap between Span and the other masking strategies, indicating better natural language understanding for the model trained with Span masking.

Figure 2: The L2 norm of embeddings \(||\hat{x}_{0}||_{2}\) (top left), the L2 norm of embeddings \(||x_{0}||_{2}\) (top right), the \(\cos\) of the angle between the score estimation \(\hat{s}\) and the final score at the end of generation (bottom left), and the \(\cos\) of the angle between the embedding \(x\) and the final embedding at the end of generation (bottom right) during the generation process for DDLM-Base. See Section 4.1 for more details.
These findings highlight the need for further research on the application of Diffusion LMs in solving downstream tasks, an area that has been largely overlooked in recent studies (Anonymous, 2023b; Dieleman et al., 2022; Li et al., 2022).
### Sampling Speed
Our study also involved examining the sampling speed of DDLM compared to GPT-NeoX with 128M parameters. For this analysis, sequences of varying lengths were sampled 50 times, and the average result was reported. Our tests were run on Nvidia A100 SXM4 80GB GPUs, using mixed precision for evaluation. See Figure 3 for results.
Although DDLM and autoregressive LMs share the same quadratic asymptotic sampling complexity, DDLM tends to be quicker for sequences beyond a certain length. For instance, with 50 diffusion steps (\(n_{steps}\)), DDLM was slower than GPT-NeoX for sequences shorter than 64 tokens. For longer sequences, however, DDLM outperforms autoregressive sampling. This is because DDLM executes a fixed number of generation steps regardless of sequence length; once the sequence length surpasses \(n_{steps}\), this method is faster than autoregressive sampling.
| Model Name | Parameters | COLA | SST-2 | MRPC | QQP | MNLI | QNLI | RTE |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| From Scratch | 147.0M | 12.7 | 81.5 | 75.1 | 78.6 | 65.1 | 63.0 | 54.3 |
| DDLM-Base (Random) | 147.0M | 22.0 | 84.8 | 76.4 | 85.0 | 71.0 | 78.5 | 55.4 |
| DDLM-Base (Mixed) | 147.0M | 23.2 | 84.9 | 79.3 | 85.2 | 72.4 | 64.2 | 54.5 |
| DDLM-Base (Span) | 147.0M | 30.3 | 88.6 | 81.8 | 87.3 | 75.9 | 82.9 | 54.0 |
| RoBERTa-base | 126.6M | **63.0** | **94.8** | **90.6** | 86.5 | **89.9** | **90.8** | **77.3** |
| GPT-2 | 124.4M | 46.6 | 92.2 | 82.8 | **87.9** | 80.9 | 86.7 | 66.8 |

Table 6: Comparison of different models on the GLUE Benchmark Dev. set. We bolded the best result for each dataset and underlined the best result across diffusion models. For DDLM, we evaluated models trained with the Random, Mixed, and Span masking strategies. See Section 4.2 for details.
Figure 3: (left) The L2 norm of embeddings \(||x_{0}||_{2}\) during the generation process for different initial scales of \(||x_{0}||_{2}\) for DDLM-Base. See Section 4.1 for details. (right) Inference time measurements. See Section 4.3 for details.
## 5 Related Work
Diffusion models on discrete data have already shown promising results in image generation and image captioning tasks (Chen et al., 2022). In NLP, diffusion models can be successfully applied to translation and summarization tasks (Savinov et al., 2021; Anonymous, 2023; Yuan et al., 2022), achieving state-of-the-art performance among non-autoregressive models. Additionally, there is progress in controllable text generation conditioned on complex attributes with classifier-guided text diffusion (Li et al., 2022).
Since pre-training procedures are necessary to achieve the best performance of transformer-based models, some works utilize large text corpora to obtain promising results on unconditional text generation tasks (Anonymous, 2023; Dieleman et al., 2022). Furthermore, the GENIE model (Lin et al., 2022) uses the weights obtained from its pre-training procedure to improve performance on downstream tasks. However, GENIE incurs significant computational overhead from its 1,000 reverse diffusion steps and cannot be viewed as a standard NLP baseline.
## 6 Conclusion and Future Work
This paper aimed to standardize a pipeline for training the Diffusion Language Model (LM) based on the CDCD framework. Through our research, we simplified the original training process and proposed the DDLM model, providing the model weights and code to aid further research.
We evaluated our model and found that Diffusion LMs currently underperform conventional baselines on downstream tasks, such as those in the GLUE benchmark. Interestingly, we discovered that early exiting is feasible during the DDLM generation process. By leveraging early exiting, we can improve sample quality, as measured by automatic metrics, by increasing the number of generation steps while reducing the patience hyperparameter, which decreases the expected number of generation steps.
As for future work, the reasons behind DDLM's underperformance in downstream tasks are yet to be fully understood. More precise fine-tuning of DDLM (including a broader evaluation of hyperparameters) and scaling up the model could enhance its performance. We also look forward to novel approaches to early exiting strategies with Diffusion LM models. For instance, training a Diffusion LM to allow even earlier exits could further reduce the total number of required generation steps.
| Model Name | Parameters | COLA | SST-2 | MRPC | QQP | MNLI | QNLI | RTE |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| DDLM-Base (Random) | 147.0M | 10.9 | 85.3 | 81.2/69.9 | 65.4/87.1 | 73.3/71.1 | 79.7 | **50.8** |
| DDLM-Base (Span) | 147.0M | **25.9** | **86.8** | **81.2/74.8** | **67.1/86.8** | **75.3/74.5** | **83.2** | 50.2 |

Table 7: Comparison of different models on the GLUE Benchmark Test set. We bolded the best result. See Section 4.2 for details.
| Hyper-parameter | Range |
| --- | --- |
| Batch Size | [16, 32, 64, 128] |
| Learning Rate | [3e-4, 1e-4, 3e-5, 1e-5] |
| \(t\) | [1, 5, 10] |
| Optimizer | AdamW |
| Scheduler | Constant |
| Sequence Length | 64 |

| Noise | AR-NLL \(\downarrow\) | dist\({}_{1}\uparrow\) | dist\({}_{2}\uparrow\) | dist\({}_{3}\uparrow\) |
| --- | --- | --- | --- | --- |
| 0.5 | 2.38 | 0.13 | 0.29 | 0.44 |
| 0.8 | 3.83 | 0.48 | 0.81 | 0.90 |
| 0.9 | 4.05 | 0.53 | 0.86 | 0.93 |
| 1.0 | 4.27 | 0.56 | 0.89 | 0.95 |
| 1.1 | 4.52 | 0.69 | 0.91 | 0.96 |
| 1.2 | 4.79 | 0.61 | 0.92 | 0.96 |

Table 8: (left) Hyper-parameter ranges for fine-tuning on GLUE. \(t\) is the diffusion time passed to the model (see Section 2 for details). (right) Performance of DDLM-Base depending on the initial noise scale of \(x\). See Section 4.1 for details. |
2303.12359 | From Clean Room to Machine Room: Commissioning of the First-Generation
BrainScaleS Wafer-Scale Neuromorphic System | The first-generation of BrainScaleS, also referred to as BrainScaleS-1, is a
neuromorphic system for emulating large-scale networks of spiking neurons.
Following a "physical modeling" principle, its VLSI circuits are designed to
emulate the dynamics of biological examples: analog circuits implement neurons
and synapses with time constants that arise from their electronic components'
intrinsic properties. It operates in continuous time, with dynamics typically
matching an acceleration factor of 10000 compared to the biological regime. A
fault-tolerant design allows it to achieve wafer-scale integration despite
unavoidable analog variability and component failures. In this paper, we
present the commissioning process of a BrainScaleS-1 wafer module, providing a
short description of the system's physical components, illustrating the steps
taken during its assembly and the measures taken to operate it. Furthermore, we
reflect on the system's development process and the lessons learned to conclude
with a demonstration of its functionality by emulating a wafer-scale
synchronous firing chain, the largest spiking network emulation run with analog
components and individual synapses to date. | Hartmut Schmidt, José Montes, Andreas Grübl, Maurice Güttler, Dan Husmann, Joscha Ilmberger, Jakob Kaiser, Christian Mauch, Eric Müller, Lars Sterzenbach, Johannes Schemmel, Sebastian Schmitt | 2023-03-22T07:50:51Z | http://arxiv.org/abs/2303.12359v1 | From Clean Room to Machine Room: Commissioning of the First-Generation BrainScaleS Wafer-Scale Neuromorphic System
###### Abstract
The first generation of BrainScaleS, also referred to as BrainScaleS-1, is a neuromorphic system for emulating large-scale networks of spiking neurons. Following a "physical modeling" principle, its VLSI circuits are designed to emulate the dynamics of biological examples: analog circuits implement neurons and synapses with time constants that arise from their electronic components' intrinsic properties. It operates in continuous time, with dynamics typically matching an acceleration factor of 10 000 compared to the biological regime. A fault-tolerant design allows it to achieve wafer-scale integration despite unavoidable analog variability and component failures. In this paper, we present the commissioning process of a BrainScaleS-1 wafer module, providing a short description of the system's physical components, illustrating the steps taken during its assembly and the measures taken to operate it. Furthermore, we reflect on the system's development process and the lessons learned to conclude with a demonstration of its functionality by emulating a wafer-scale synchronous firing chain, the largest spiking network emulation run with analog components and individual synapses to date.
Neuromorphic hardware, wafer-scale integration, spiking neural networks, emulated networks, analog neuromorphic devices, synfire chains.
## I Introduction
Simulating the dynamic properties of large-scale spiking neural networks is challenging due to the massively parallel interactions of their neurons and synapses. The BrainScaleS neuromorphic architecture proposes a solution to this dilemma by providing inherently parallel computation at nodes operating as neurons and synapses and communicating through asynchronous spikes. It thereby achieves a constant emulation speed with increasing network sizes [1].
BrainScaleS implements physical models of neurons and synapses on a CMOS substrate with analog circuits, while the spike communication is digital. On the one hand, the physical models inherently provide solutions to neuron and synapse dynamics in continuous time, in contrast to the time-discretized and numerically integrated solutions of digital systems and software simulations. On the other hand, the programmable digital communication of action potentials allows for flexible network topologies and the possibility of using digital logic to feed and read spike events from outside the system. Furthermore, circuits are operated in strong inversion, targeting dynamics with a typical speedup factor of 10 000 compared to biological real-time.
The BrainScaleS-1 system utilizes wafer-scale integration to achieve large ASIC counts with energy efficiency and high communication bandwidth. The structure of its underlying neuromorphic chip and the technology to achieve its wafer-scale integration are introduced in [2, 3, 4, 5]. Turning the silicon wafer into a ready-to-use system, though, entails bringing several additional components, shown in fig. 1, to work hand in hand. To that end, a commissioning chain is established, which is this paper's focus.
We first illustrate the different components that constitute the system and how they are tested. Then, we show the steps to assemble the module before it is finally placed in the machine room, as shown in fig. 2. In the second part of the paper, we describe the methods devised to turn such a system into a reliable substrate for neuromorphic experiments: a large number of VLSI analog components inevitably leads to malfunctioning parts and analog variability, for which an underlying fault-tolerant design and suitable handling have to be put in place. To demonstrate its operation and the successful implementation of these measures, a biologically motivated network of spiking neurons, a synchronous firing chain, is emulated on a fully commissioned BrainScaleS-1 wafer module.
The system belongs to the still-nascent field of neuromorphic computing and remains under continuous development. Having pioneered a neuromorphic wafer-scale integration of VLSI analog and digital circuits, we also discuss the lessons learned while solving or circumventing the challenges faced along the way.
## II System Components and Individual Tests
A BrainScaleS-1 wafer module is depicted in fig. 1. Each of its constituent boards is individually tested before its integration into the system, which permits differentiating errors in the parts from those arising from the assembly. A short description of each component and the tests it undergoes is given in the following.
### _The BrainScaleS-1 Wafer_
The heart of each module is an uncut \(20\,\mathrm{cm}\) wafer, displayed in fig. 3a, fabricated in UMC \(180\,\mathrm{nm}\) technology comprising 384 High Input Count Analog Neural Network (HICANN) ASICs. Each HICANN contains 512 analog neuron circuits implementing the adaptive exponential integrate-and-fire model [5]. Single neuron circuits receive input from up to 220 analog synapses. Since neuron membranes can interconnect in groups of up to 64, a maximum of \(14\,080\) synapses can provide input to each of these composite neurons. Synapse weights are stored with 4-bit resolution in local SRAM at each synapse.
Each HICANN stores \(12\,384\) analog quantities for parameterization of its analog circuits in Single-Poly Floating Gate (FG) CMOS cells that retain their operation levels according to their isolated gate's accumulated charge [7, 8]. These FGs are written via an onboard 10-bit-resolution DAC, enabling reprogramming via incremental loops with feedback. Then, the stored values get translated to either a voltage or a current using a source follower or a current mirror, respectively, to set neuron parameters and other onboard circuit operation levels. While these FGs present a low-power, small-space solution to store analog operation settings, they introduce write-cycle to write-cycle variability, as will be further discussed.
Wafer-wide communication is achieved with a custom-developed redistribution layer applied post-wafer-production, creating around \(160\,000\) lateral connections across chip borders [9]. These connections provide the modules with on-wafer spike event communication through low-voltage differential signaling (LVDS) buses utilizing an asynchronous serial event transmission protocol. Furthermore, connections through top-layer pads on the wafer provide the modules with parallel per-HICANN off-wafer communication, which, in conjunction with programmable and redundant components, constitutes the system's fault tolerance [2].
_Testing:_ In order to assess the effect of wafer post-processing on the digital yield of an entire wafer, initial needle card tests were carried out on two unprocessed1 wafers to determine their yield immediately after production. Since the wafers undergoing these tests cannot be further processed, comparing results on the same wafers before and after the post-processing is, however, not possible.
Footnote 1: “unprocessed” in this context means untested wafers straight from the manufacturer, before the custom redistribution layers have been added.
Fig. 1: (a) 3D-schematic of a BrainScaleS-1 wafer module (dimensions: \(50\,\mathrm{cm}\times 50\,\mathrm{cm}\times 15\,\mathrm{cm}\)) hosting the wafer (A) and 48 communication boards (B). The positioning mask (C) aligns elastomeric connectors that link the wafer to the large Main PCB (D). Support PCBs provide power supply (E & F) for the on-wafer circuits as well as access (G) to analog dynamic variables such as neuron membrane voltages. The connectors for inter-wafer and off-wafer/host connectivity (48 \(\times\) Gigabit-Ethernet) are distributed over all four edges (H) of the Main PCB. Mechanical stability is provided by an aluminum frame (I). (b) Photograph of a fully assembled wafer module. Taken from [6].

Fig. 2: The BrainScaleS-1 machine room comprising 20 wafer modules organized in 5 racks. A slot in the middle of each rack hosts the Analog Readout Module and the Main Control Units of its neighboring wafer modules. Gigabit-Ethernet cables connect each wafer module via aggregation switches to the control cluster positioned in the middle rack. Taken from [6].

Fig. 3: (a) The BrainScaleS-1 wafer with applied post-processing to achieve wafer-scale integration and to establish its connection to the (b) bottom side of the Main PCB. There, the wafer connects through elastomeric connectors to the center, marked with 1. At the borders, 48 connectors, marked with 2, accommodate the communication boards.

The setup for these tests in the institute's clean room is
shown in fig. 4, and the procedure is as follows. The needle card is used to contact each individual ASIC. Immediately after contacting and powering up, the total current on the lab supply is measured to detect potential power shorts. Thereafter, all digital memory cells on the HICANN circuits are tested using a built-in Joint Test Action Group (JTAG) access mode. During these tests, 448 HICANNs on each of the two wafers were tested, and 93 % of them showed not a single digital error. For comparison, UMC's yield calculator estimates a yield of approximately 85 % when taking the process parameters and circuit size into account. However, our results are only an estimate: on the one hand, the tested digital memory cells only cover a fraction of the whole silicon area, which is dominated by analog circuitry; the digital test yield could therefore be too optimistic. On the other hand, perfect power and signal integrity could not be ensured while connecting the circuits through the needles, possibly leading to false negatives caused, for example, by slightly underpowered memory cells. In addition, only wafers from the initial engineering sample production have been available for testing, and no documentation has been available to relate the production yield data from UMC to small batch-size engineering runs. Nonetheless, the results match the expectations given this high level of uncertainty. Also, a yield in the order of, e.g., 85 % would not mean that 15 % of the dies cannot be used. Instead, benefiting from the fault-tolerant design, and depending on the defect type, it can suffice to disable single neuron or synapse circuits on affected HICANNs that are otherwise fully functional and remain available for experiments.
### _Main PCB_
The Main PCB, displayed in fig. 3b, is a 43 cm \(\times\) 43 cm passive interconnector board for most parts of the wafer-scale integration system. Seven of its \(14\) layers are used to distribute \(23\) power rails carrying up to 200 A of current. The rest of the layers are used to route \(1152\) power monitoring, \(1472\) high-speed differential communication, and different sideband signals. Auxiliary boards, communication infrastructure, and the silicon wafer are connected via various kinds of detachable connectors. These enable system modularity for development and upgrades, desirable for research and development in dynamic environments over longer timespans.
_Testing:_ The manufacturer1 performs complete optical inspection and electrical tests of the Main PCB. The BrainScaleS-1 wafer modules are assembled using exclusively fully validated, error-free Main PCBs.
Footnote 1: Manufactured by Würth Elektronik, Germany
### _Auxiliary Boards_
The wafer module is completed by populating it with 48 communication boards and auxiliary boards for power delivery, control, monitoring, and inter-module communication.
#### Iii-C1 Communication Boards
Each communication board2 contains a field-programmable gate array (FPGA) and connects to one HICANN group consisting of 8 HICANNs. These boards communicate through separate high-speed LVDS interfaces with each of the connected HICANNs to configure, monitor, and coordinate the experiment runs; they feed and collect generated spikes into/from the experiments. Furthermore, they synchronize the start of experiments to allow for wafer wide execution. Trigger signals generated on these boards also align experiments with analog recordings using the Analog Readout Module (AnaRM).
Footnote 2: Developed at the chair of Hochparallele VLSI-Systeme und Neuromikroelektronik at TU Dresden
_Testing:_ The communication boards are tested on a standalone setup that implements loopback connections for the high-speed interfaces. For this purpose, a test board accommodates and tests four PCBs in parallel, as shown in fig. 5a. Primarily automated and controlled via software, the tests switch the power supply via General Purpose Interface Bus. Programming is performed via JTAG and Power Management Bus. Tests comprising current consumption measurements, loading and communicating with the FPGA design, as well as memory tests are conducted. In addition, communication with the host computer as well as the links to the wafer and neighboring communication boards are tested. As per data logs, only 18 out of \(1404\) produced PCBs had to be discarded after failed tests.
#### Iii-C2 Wafer I/O PCB
Each one of the module's four Wafer I/O PCBs (WIOs)2 attaches to twelve communication boards, aggregating Gbit-Ethernet and connections to other communication boards.
_Testing:_ A manual approach is followed as the number of boards is smaller than that of the communication boards. The board, shown in fig. 5b, is supplied with power, and the proper functioning of the DC/DC converters is checked with a multimeter. Individual communication ports are tested. In addition, the proper transmission of signals using a signal generator and differential probes is measured. A partial test of the JTAG pins is also carried out. As per data logs, only 2 out of 120 produced WIOs were discarded after failed tests.
#### Iii-C3 Main Power Supply
The Main Power Supply (PowerIt) has three output channels: two 1.8 V outputs as the main analog and digital supplies of the wafer with a current limit of 200 A each, as well as a 9.6 V output capable of up to 110 A to supply the communication boards. Multiple custom-milled copper parts ensure a low-resistance screw connection between the PowerIt and the Main PCB. Additionally, digital control of the voltages, with sensors as near to the wafer as possible, allows for compensation of the IR drop. An integrated microcontroller can measure input and output currents and voltages via shunt resistors, Hall sensors, and voltage dividers.

Fig. 4: (a) Photograph of the wafer prober and (b) a close-up of a wafer under test. Different needle cards have been developed and used for tests carried out before wafer post-processing (section II-A, visible in this setup) and before wafer module assembly (section III-B), respectively.
_Testing:_ Commissioning of the PowerIt involves basic functionality tests and calibration of the current and voltage measurement circuits using an external electronic load capable of sinking 4.8 kW and precision multimeters, see fig. 5c.
#### Iii-B4 Auxiliary Power Supply
The Auxiliary Power Supply PCB (AuxPwr) designed in [10], receives 9.6 V from the PowerIt and provides ten different voltage outputs for the wafer module. The currents drawn at the derived voltages vary from 50 mA, for the common-mode voltage of the LVDS on-wafer communication, to 60 A for the synapse driver output. The board has an L-shape with linear and switching regulators placed on different axes to reduce the coils' electromagnetic-noise induction. In addition, the usage of intermediate voltages reduces the power dissipation for the voltage scaling. An onboard microcontroller monitors all the voltages and currents. Four voltages can be controlled digitally through the Inter-Integrated Circuit (I2C) protocol.
_Testing:_ The AuxPwr components' functionality is tested during the calibration process of the board, during which an external voltmeter permits adjusting voltage offsets. A two-point linear calibration under load is performed for the currents. The test stand can be seen in fig. 5d.
#### Iii-B5 Control Unit for Reticles
Since the BrainScaleS-1 wafer is not cut into individual chips, the wafer module must be fault-tolerant to individual HICANN problems. For this purpose, the Main PCB features power-FETs for the supply rails of each HICANN group of the wafer; overcurrents manifest as a large voltage drop across these power transistors. The Control Unit for Reticles (CURe) controls the gates of these transistors and monitors the supply voltages of the wafer. Three microcontrollers manage the measured data and react to fault conditions by shutting off the power of the affected HICANN groups. Thus, the CURe allows identifying individual fatal faults and excluding the respective HICANN groups from the usable components. The term reticle stems from the semiconductor manufacturing process; on BrainScaleS-1, one reticle comprises one HICANN group.
_Testing:_ The CURe is tested using a custom setup producing the voltages expected inside the actual BrainScaleS-1 wafer module, simulating all possible fault conditions while the response time is measured. Likewise, the drive strength of the control signals for the power transistors on the Main PCB is quantified. The test setup is displayed in fig. 5e.
#### Iii-B6 Analog Readout Module
Further insight into the neuron dynamics can be obtained via measurements of its membrane potential, allowing for a better understanding of experiment results and the implementation of calibration routines. To this end, each neuron contains a switchable analog output amplifier that connects to one of two 50 \(\Omega\) output buffers per die. These two outputs are each short-circuited across dies in the same HICANN group. Therefore, each of these groups has two analog outputs, totaling 96 independent analog channels available on each wafer module.
The AnaRM system consists of twelve FPGA-controlled 12-bit ADC modules that allow for the digitization of the membrane voltages on one wafer module per BrainScaleS-1 system rack. Each of the modules in the AnaRM system connects through a ribbon cable to one of two Analog Breakout PCBs mounted on the Main PCB, receiving eight analog signals that are multiplexed into the ADC. An additional digital signal acts as a trigger; four HICANN groups share one, allowing synchronization during an experiment between the involved communication boards, HICANNs and the AnaRM system. Overall, the AnaRM system can simultaneously sample 12 membrane traces per wafer module.
_Testing:_ The FPGA board in the AnaRM, displayed in fig. 5f, undergoes DRAM memory tests and basic functional testing of all its peripheral components. The analog front end is tested during the calibration of the modules. This calibration is performed using a source meter to generate a series of ground-truth voltages, which are subsequently measured using each input channel. A 50 \(\Omega\) series impedance is used at the output of the source meter to match the impedance of the output buffers on the HICANN. The voltage divider formed by the output and input impedances halves the 1.8 V span of the HICANN output to the 0.9 V maximum input of the AnaRM. A linear function fits the recorded signal to the source meter voltages, and the per-board offset and gain are stored in a database.

Fig. 5: Auxiliary boards under test. (a) communication boards test setup and (b) Wafer I/O PCB board. (c) Main Power Supply connected to programmable power supply and electronic load. (d) Auxiliary Power Supply PCB test stand. (e) Control Unit for Reticles test stand. Each Power Emulation Systems for Testing (PEST) board emulates the supply voltages of one HICANN group. (f) FPGA board of the Analog Readout Module. During the calibration, the pins on the top left are connected via a 50 \(\Omega\) impedance to an external source meter, while the module is connected via USB to the host computer. Figures (a) and (b) made available by S. Schiefer, TU-Dresden.
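A minimal sketch of this per-channel fit, with invented readings in place of real AnaRM recordings, could look as follows:

```python
import numpy as np

# Source meter sweeps ground-truth voltages over the HICANN output span [V];
# the 50 ohm divider roughly halves them at the AnaRM input.
ground_truth = np.linspace(0.0, 1.8, 10)
adc_readings = ground_truth / 2.0 * 1.02 + 0.005  # fake gain/offset error

# Linear fit mapping recorded values back to source meter voltages;
# gain and offset would be stored in the calibration database per board.
gain, offset = np.polyfit(adc_readings, ground_truth, deg=1)
calibrated = gain * adc_readings + offset
print(f"gain={gain:.3f}, offset={offset * 1e3:.1f} mV")
```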
### _Main Control Unit_
The Main Control Unit (MaCU) consists of a Raspberry Pi powered by the standby voltage of the PowerIt. Using the I\({}^{2}\)C protocol to communicate with all other wafer module components, it controls the start-up sequence of the system. Additionally, it monitors the multitude of components of a wafer module, which is crucial to ensure robust operation. With this in mind, the MaCU aggregates over 1800 metrics per wafer, e.g., supply voltages, temperatures, or the active/inactive status of components. Most data is of a time-series nature and stored via Graphite [11], with visualization through Grafana dashboards [12]. These dashboards are hierarchically structured, allowing an intuitive drill-down navigation of the data. As it is not practical to manually oversee such a large amount of metrics, alerts are set up to check for unexpected events. For example, supply voltages are checked to be in a valid range and to remain constant over time. Furthermore, event data, e.g., powering up components, is handled via the ELK stack [13] but also integrated into Grafana and displayed as marks. These allow easily matching the events with changes in the time-series data.
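As a toy example of such an alert, a range-and-drift check over a metric's recent samples might look like this (metric name and thresholds invented):

```python
def check_metric(name: str, samples: list, lo: float, hi: float) -> list:
    """Return alert strings for out-of-range values or drift over time."""
    alerts = []
    if any(not (lo <= v <= hi) for v in samples):
        alerts.append(f"{name}: value outside [{lo}, {hi}]")
    if max(samples) - min(samples) > 0.05 * (hi - lo):
        alerts.append(f"{name}: drifting over time")
    return alerts

print(check_metric("vdd_1v8_analog", [1.79, 1.80, 1.74], lo=1.75, hi=1.85))
```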
_Testing:_ The Raspberry Pi computers used for the MaCUs are purchased and commissioned without further tests. However, the maintenance and deployment of the control and monitoring software they run is part of the system's continuous integration development methodology [14].
## III System Assembly and Integration Tests
In addition to the tests devised for the individual components, the BrainScaleS-1 wafer module assembly process is carried out along with additional tests that allow pinpointing problems to the individual steps. In the following, we discuss the module assembly method and the different tests it undergoes during this phase.
### _Wafer to Main PCB Marriage and Module Integration_
The wafer is connected to a total of 11 904 pads on the Main PCB via 384 elastomeric connectors, shown in fig. 6a. Mounting the Main PCB and the silicon wafer in custom-milled aluminum brackets allows reaching the compression forces required by the connectors. The station used to align the two components is shown in fig. 6b. Electrical resistance tests, described in section III-B2, are performed while compressing the elastomeric connectors to ensure correct positioning and even pressure distribution. Then, the wafer module is populated with the auxiliary boards and, when fully assembled, connected to the MaCU. Afterward, it is put on a test stand for initial full-system tests using the same communication chain later used for experiments. Following this step, the wafer module is placed in a rack in the machine room and attached to the AnaRM system.
### _Tests at Different Assembly Stages_
Stage-specific tests allow mapping arising errors to individual assembly steps of the BrainScaleS-1 wafer module, which enables evaluating and improving the procedure. This section shows the test results obtained for one wafer as an example.
#### Iii-B1 Pre-Assembly Tests of All HICANNs on the Wafer
Before placing a wafer in a module, digital and analog tests are performed on a wafer prober in the institute's clean room, see fig. 4. These tests distinguish production problems from those arising in the wafer module assembly procedure.
Similar to the initial needle card tests on the unprocessed wafers, described in section II-A, a test system was built using a different needle card connecting to the redistribution layer of a pair of HICANNs on a wafer with post-processing. Extended analog and digital tests are run on the connected dies, a process that is repeated until the entire wafer is analyzed. These tests serve two purposes: first, to sort out wafers with a high error count that might arise from disrupted connections in the post-processing, and second, to establish a base level for the following assembly tests. Figure 7a shows the results of a high-level test for all HICANNs of one wafer. It shows test results for more dies than are visible on the assembled wafer module: design constraints and limited routing resources on the Main PCB meant that not all HICANN groups could be electrically connected and thus used within the module context; those at the edge of the wafer were left out. For the same reason, the two HICANN groups at the center are without high-speed connection.

Fig. 6: (a) Detail view of the elastomeric connectors that connect the pads on the BrainScaleS-1 wafer with the Main PCB. (b) Station used to align the Main PCB to the silicon wafer. The Main PCB is fixed by springs that apply a constant force (blue arrows). Its position is controlled with a micrometer linear stage (red arrows). Angular errors can be corrected by rotating the wafer (purple arrows). (c) Test PCBs mounted on the Main PCB to measure the connectivity to the wafer during assembly.
#### Iii-C2 Tests During the Assembly Phase
For these additional tests the Main PCB is equipped with test PCBs1, shown in fig. 6c, which measure ESD diode currents and termination resistances between the LVDS lines on the wafer. The tests determine whether a good connection of the wafer to the Main PCB exists. Figure 7b shows the result of one of these tests, where only the same faulty device on HICANN group 29, also detected in the needle card test, can be seen. The absence of additional faulty devices validates that the wafer-to-Main-PCB marriage was appropriate.
Footnote 1: Developed by the group of Yasar Gübrüz at Sabanci University, Istanbul
#### Iii-C3 Post-Assembly Tests of All HICANNs on the Wafer
After the assembly of the wafer module is completed, the same tests run in the pre-assembly phase are conducted, and the results are compared. The results for one test are shown in fig. 7c. The errors in HICANN groups 15 and 29 are still present, while the errors in groups 36 and 42 are not. Further investigations could trace these last errors to connection problems of the needle card used in the wafer prober.
## IV Commissioning Software
After assembly, additional steps are necessary to bring the BrainScaleS-1 wafer module into readiness for experiments. These include digital tests to find and exclude malfunctioning components and calibrating the individual neurons to address manufacturing-process-induced circuit mismatches. Databases store the results from these two steps, allowing serialized data storage to disk. See [14] for details. Furthermore, all steps are fully automated and periodically executed after installation of the module in the machine room to track the systems' current state.
### _Communication Tests_
The first test that is executed on a newly assembled wafer module is the communication test, which is used to find unresponsive HICANNs. Communication problems most likely arise from insufficient connection quality between the Main PCB and the wafer, cf. [9], or from scratches or similar defects on the post-processing layers.
During the test, an individual connection is established to each of the 384 HICANNs of one wafer. The test is split into a high-speed test and a JTAG test, which reflects the two possibilities to communicate with the HICANN. Failures are stored separately in the availability database. The result of a communication test is shown in fig. 7d. In this example, the result comparison between the test stand and the rack-mounted fully assembled wafer module shows one additional HICANN group and 3 individual HICANNs that cannot communicate via JTAG.
### _Memory Tests_
By using a whole uncut wafer, each BrainScaleS-1 wafer module profits from better energy efficiency and higher communication bandwidth between its ASICs than if these were produced separately and then integrated. This approach presents a challenge, though: an error-free wafer-scale system cannot be produced this way, since ASICs with manufacturing-induced defects cannot be removed. The BrainScaleS-1 system addresses this through a digital memory test, which, in conjunction with the fault-tolerant system design, enables dynamic handling of malfunctioning components. Executed after assembly as well as periodically, the test also tracks the state of the systems over time. It therefore allows operating wafer modules despite a subset of malfunctioning components or connections, consequently increasing the yield of functional systems.
The test builds upon the communication test and establishes a connection to a HICANN group. First, it initializes the connected communication board and the HICANN under test. Subsequently, each digital memory is repeatedly write/read-tested using random values. If a mismatch is found, the largest functional unit that depends on the malfunctioning component is excluded so that it is not utilized in experiments. HICANNs that can communicate only via JTAG are exclusively used for spike route-through to and from neighboring HICANNs on the same wafer. For these, a routing-specific reduced memory test minimizes the runtime over the slower connection. In total, more than 42 MiB of digital memory are tested per wafer. Results for a fully assembled wafer module are shown in table I. Tested components and their position on the HICANN are visualized in fig. 8.

Fig. 7: Test results of one BrainScaleS-1 wafer for the different assembly steps: (a) before assembly, (b) during assembly, (c) after assembly. In (a) and (c), the number in the smallest rectangles shows the amount of errors found on the corresponding HICANN. Purple or red indicate that all tests were successful or failed, respectively. For grey HICANNs the test was skipped since no connection could be established using the wafer prober. In (b), test results are shown per elastomeric connector and a yellow rectangle indicates a problem in the high-speed communication of one HICANN. (d) Communication test result. HICANNs without high-speed communication are marked yellow, those without JTAG communication red. The center two HICANN groups have no high-speed interface by design. Consequently, they are marked faulty in all tests requiring high-speed communication to the Main PCB.
With 110 KiB per HICANN, the configuration registers of the synapses make up the largest part of the tested memory. They are split into two synapse arrays per HICANN, each of which is programmed by a custom on-chip SRAM controller described in [15]. In the tests, unstable behavior is observed on 1.97 % of the synapse arrays, meaning that consecutive write/read operations with a fixed value on a single synapse register yield varying results. Since problems in individual synapse registers are very unlikely and could also stem, e.g., from the control chain, a special stability test is introduced: each register is tested several times with the same value, and if a single register shows unstable behavior, the whole synapse array is excluded. Thereby, at the expense of some functional components, only stably programmable synapses are used during experiments.
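The following sketch illustrates the two-stage test; the register access functions are hypothetical stand-ins for the real hardware access layer, modeled by a dictionary with one deliberately flaky cell:

```python
import random

memory = {}  # stand-in for the on-chip SRAM

def write_register(addr, value):
    memory[addr] = value

def read_register(addr):
    if addr == 3:  # emulate one unstable register
        return memory[addr] ^ random.getrandbits(1)
    return memory[addr]

def test_array(addresses, n_random=10, n_stability=10):
    for addr in addresses:
        for _ in range(n_random):          # write/read test with random values
            value = random.getrandbits(4)  # 4-bit synapse weight register
            write_register(addr, value)
            if read_register(addr) != value:
                return False               # mismatch: exclude the whole array
        write_register(addr, 0b1010)       # stability test: fixed value
        if any(read_register(addr) != 0b1010 for _ in range(n_stability)):
            return False                   # unstable register: exclude array
    return True

print("synapse array usable:", test_array(range(8)))
```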
A test with ten write/reads of random data per component and a stability test with ten repetitions takes approximately 70 s per HICANN. Since the tests can be executed in parallel for each HICANN group, a full wafer test takes approximately 10 min and can be executed periodically to track the state of the systems.
### _Effective Exclusion of Components_
In special cases, it is not enough to skip malfunctioning components during an experiment; it is also necessary to account for hardware-specific dependencies linked to these components. This is achieved through an additional step, the effective exclusion of components, in which functional but dependent components are excluded as well. Several dependencies lead to an effective exclusion; some of them are visualized in fig. 8.
* Unstable repeater controller: To enhance the signal integrity of spike events that have to be routed across several HICANNs, the signal is regenerated between dies by repeaters. These repeaters are organized in blocks where each block has a custom on-chip controller used to program its repeaters. Since failures in the digital memory of the repeaters are very unlikely, more than one failing repeater per block indicates that there could also be a problem in the control chain. To ensure no unstable components are used, all repeaters connected to the corresponding repeater block are removed from the availability database in such cases.
* Buses connected to malfunctioning repeaters: Buses are used to route spike events between neuron circuits. On boundaries between two HICANNs, the buses are connected to repeaters that regenerate the signal. Each repeater is connected to a bus on its own HICANN as well as on a neighboring one. If a repeater fails the memory test, there is no possibility to test whether it sends wrong signals to its connected buses. To circumvent this, all buses connected to such a repeater are excluded and thus not used during an experiment. The same holds for repeaters on HICANNs without JTAG connection. As the repeaters cannot be initialized correctly, all neighboring buses connected to repeaters on the problematic HICANN are excluded.

Fig. 8: Left: Picture of the HICANN with labeled components and marked areas shown on the right side. Top right: Detail of the synapse array. Two synapse rows are connected to one synapse driver. All synapses of the same column are connected to one neuron circuit. Middle right: Left half of the merger tree. Neuron input from the top gets routed to the buses on the bottom. Several inputs can be merged on the same bus. Background generators are used to inject additional signals generated on-chip. Bottom right: Sketch of the bus system. Buses are connected by a sparse switch matrix. Repeaters, used to regenerate the signals, connect buses of neighboring HICANNs.
* Malfunctioning FG controller: The FGs are not only used to configure the neurons but also to supply bias voltages to the spike event routing. If an error in the controller programming the FGs is found, the whole HICANN is excluded from the availability database and, in the following, treated as if it had no JTAG connection. Such a HICANN is not used at all in experiments.
* Without high-speed: HICANNs that have no high-speed connection are, due to the higher bandwidth requirements, not used to emulate neurons or external inputs but only used to route spike events. This is achieved by removing all neurons and external input mergers from the availability database.
* No routing options: To improve the placement and prevent lost connections, the algorithm checks that all the components required to establish a route from each neuron and external input merger are available. If not, the neuron or the external input merger is excluded and therefore skipped in the process of building a network.
* Handling hardware versioning: In an earlier version of the post-processing, connections were established to HICANNs on the edges of the wafer that must not be connected. To prevent leakage currents from these dies, the connected buses are excluded. Therefore, it is unnecessary to distinguish wafer versions in all the following steps.
An overview of removed components before and after the effective exclusion can be seen in table I. The availability database, used to handle the excluded components, allows for storing different states on disk, so malfunctioning and effectively excluded components can be differentiated afterward. This is important, for example, during the initialization of the HICANNs, where only malfunctioning components have to be handled specifically.
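A minimal sketch of how such dependency rules can be propagated; the dependency map below is invented for illustration, the actual rules are those listed above:

```python
from collections import deque

# component -> components that become unusable once it is excluded
depends_on = {
    "repeater_block_3": ["repeater_3_0", "repeater_3_1"],
    "repeater_3_0": ["bus_left_12", "bus_right_12"],
}

def effective_exclusions(malfunctioning):
    """Propagate exclusions along the dependency map."""
    excluded, queue = set(malfunctioning), deque(malfunctioning)
    while queue:
        for dep in depends_on.get(queue.popleft(), []):
            if dep not in excluded:
                excluded.add(dep)
                queue.append(dep)
    return excluded

print(effective_exclusions({"repeater_block_3"}))
```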
### _Analog Readout Tests_
Before usage, the analog recording system gets verified for correct connectivity and configuration by running a series of tests. Each HICANN is set in sequence to generate two different voltage levels, which the AnaRM measures. The voltage levels originate from the configuration of one of the FGs. A recording that agrees with the settings and whose noise levels are within a tolerance threshold indicates that the system is ready for experiments or calibration runs.
### _Calibration_
VLSI transistors are subject to manufacturing variations translating into differences in signal response. This problem and the potential impacts have been noted since the first approaches to neuromorphic computing using VLSI [16]. Consequently, the HICANN's microelectronic analog circuits require correction mechanisms to deliver homogeneous responses.
As the manufacturing variability is stationary within the components' operating ranges, and thus termed fixed-pattern noise, it can be reduced by suitable calibration. To this end, a framework has been developed for the BrainScaleS-1 wafer module that performs a one-time circuit characterization by running sequences of experiments that sweep neuron parameters, measure the effect on the observable, and fit suitable models. The process creates a database that holds the calibration results and is loaded during routine hardware usage, allowing for automatic translation between biological-space parameters and FG-stored parameters. Such a conversion is automated and transparent to the users when running an experiment. See [14] for details.

The calibration procedure configures all the neuron circuits at once and then processes the individual measurements to allow for programming the FGs in parallel. In addition, parallelizing the analysis algorithms on the already measured steps further reduces the time required for calibration. Still, increasing the number of calibration steps could improve the quality of the fits, and parameters that are more sensitive to FG variability benefit from an increased number of measurement repetitions. Consequently, calibration time and precision of the results have to be balanced.
#### Iv-D1 Calibration Methodology
In the BrainScaleS-1 system, the only analog neuron property that can be directly recorded is the membrane voltage. Accordingly, all parameter calibrations are based on membrane recordings under different parameter configurations. In general, the calibration of one parameter sweeps over its operating range while maintaining the rest of the parameters constant. The execution order is relevant, as some calibration routines require an already calibrated subset of parameters. Furthermore, the calibration accounts for analog readout noise, and measurements can be repeated to factor in FG parameter variability.
The main neuron calibration parameters are summarized in fig. 9. In the following, the calibration procedure is shown exemplarily for the parameter \(I_{\text{pulse}}\), which controls the refractory period \(\tau_{\text{ref}}\), i.e., the time after the emission of a neuron's action potential during which its membrane is clamped to the reset potential and the neuron can elicit no further spike. The higher \(I_{\text{pulse}}\) is, the shorter the achieved \(\tau_{\text{ref}}\). Each \(I_{\text{pulse}}\) calibration step sets the resting potential \(E_{\text{leak}}\) above the level at which a spike event is elicited, i.e., \(V_{\text{threshold}}\), which causes the neurons to spike continuously. The inter-spike interval (ISI) is the measurable result.
In the first step, \(I_{\text{pulse}}\) is set to maximum, and the corresponding ISI is regarded as \(\text{ISI}_{0}\), the minimum attainable interval under the current settings. Larger refractory periods are referenced to \(\text{ISI}_{0}\) by using
\[\tau_{\text{ref}}(I_{\text{pulse}})=\text{ISI}(I_{\text{pulse}})-\text{ISI}_{0}, \tag{1}\]
making the minimum \(\tau_{\text{ref}}\) zero seconds by definition.
Afterward, each step's distinct target FG values of \(I_{\text{pulse}}\) are programmed, causing changes observable in the ISI and thus in \(\tau_{\text{ref}}\). The obtained set of configured parameters and their achieved refractory periods is then fit to a model, which in the case of \(\tau_{\text{ref}}\) corresponds to
\[I_{\text{pulse}}=\frac{1}{(c_{0}+c_{1}\cdot\tau_{\text{ref}})}. \tag{2}\]
Such a model derives from transistor-level simulations described in [17]. The resulting fits for five neurons are shown in fig. 10.
The pair of constants \(c_{0}\) and \(c_{1}\) corresponding to model eq. (2) is stored in the calibration database for each neuron, which is then used for translation from \(\tau_{\text{ref}}\) in seconds to \(I_{\text{pulse}}\) in digital value. Further details for each parameter calibration are provided in the supplementary material.
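For illustration, the fit of eq. (2) can be reproduced in a few lines; the ISI values below are invented stand-ins for real membrane recordings:

```python
import numpy as np
from scipy.optimize import curve_fit

def i_pulse_model(tau_ref, c0, c1):
    """Eq. (2): I_pulse = 1 / (c0 + c1 * tau_ref)."""
    return 1.0 / (c0 + c1 * tau_ref)

# Programmed FG values [LSB] and resulting ISIs [us] (illustrative numbers)
i_pulse = np.array([900.0, 520.0, 300.0, 170.0, 95.0])
isi = np.array([2.1, 2.6, 3.4, 4.9, 7.8])
isi0 = 2.0                # ISI measured at maximum I_pulse
tau_ref = isi - isi0      # eq. (1)

(c0, c1), _ = curve_fit(i_pulse_model, tau_ref, i_pulse, p0=(1e-3, 1e-3))
# c0 and c1 go to the calibration database; at experiment time they translate
# a requested tau_ref into the digital I_pulse value:
print(i_pulse_model(0.5, c0, c1))
```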
Depending on each parameter's sensitivity to the programmed FG values, some calibrations enable a more precise setting of parameters than others. An increased sensitivity due to non-linear hardware dependencies is found where small changes in FG values cause large changes in the observables. Furthermore, for some FGs only a limited range of their available parameter space is used, reducing the ability to set their corresponding parameters precisely. As can be seen from the measured values in fig. 10, such is the case for \(I_{\text{pulse}}\). For comparison, fig. 11 shows how the leak potential \(E_{\text{leak}}\), which is easier to control, obtains a more precise calibration than \(I_{\text{pulse}}\). For this reason, the control precision of several parameters was improved in the second-generation BrainScaleS-2 chip [18] partly by enabling digital value storage.
Fig. 9: Simplified neuron circuit schematic, displayed on the bottom, with the most relevant calibrated parameters in the BrainScaleS-1 system. The leak conductance controlled by \(I_{\text{gl}}\) is constantly driving the membrane potential \(V_{\text{mem}}\) towards the rest voltage \(E_{\text{leak}}\). A spike is elicited when the membrane potential reaches the threshold voltage \(V_{\text{threshold}}\). After a neuron spikes, its membrane's potential is connected to the reset voltage \(V_{\text{reset}}\) for a period controlled by the parameter \(I_{\text{pulse}}\). For simplicity, one synaptic input is displayed out of the two through which a neuron integrates excitatory and inhibitory input currents \(I_{\text{gal}}\); this controls a conductance between the reversal potential \(E_{\text{syn}}\) and the membrane with a synaptic time constant controlled by \(V_{\text{gate}}\). Each input receives currents from all the synapses connected to one column in the synaptic array, displayed on top. Additional parameters \(V_{\text{syn}}\), \(V_{\text{comoff}}\), and \(I_{\text{com}}\) provide further control over the synaptic input, as further discussed in the supplementary material.

Fig. 10: Exemplified calibration procedure for the refractory period. Sample fits obtained for a set of neurons, relating the \(I_{\text{pulse}}\) parameter configured with the Floating Gates to the measured \(\tau_{\text{ref}}\). Seven values within the dynamic range of \(I_{\text{pulse}}\) were used for the fits.

Fig. 11: Histograms of achieved parameter settings on all neurons of one HICANN for (a) the refractory time constant controlled by the parameter \(I_{\text{pulse}}\) and (b) the leak potential controlled by the parameter \(E_{\text{leak}}\). Pale and intense colors correspond to the hardware-achieved time constants and voltages for different target values (shown as black dashed lines), before and after the calibration is applied, respectively. The narrowing and centering of the achieved value distributions is better for \(E_{\text{leak}}\) than for \(\tau_{\text{ref}}\).

#### Iv-B2 Synapse Weight Calibration

The calibration of the synaptic input differs from the other calibrations due to its additional dependency on the synapse drivers. The strength of a synapse is configured by three hardware parameters: the 4-bit digital weight \(w\) stored per synapse, a scaling factor \(gmax\_div\) stored per synapse row, and the FG-stored reference parameter \(V_{\text{gmax}}\). This last parameter is set per synapse row and selects one of four possible values shared by blocks of 128 neurons. Calibrating this large parameter space for each of the 512 neurons with 110 connected synapse drivers using the analog readout system, which allows for measuring only 12 membrane traces in parallel, is not possible in a reasonable time frame. Therefore, a per-wafer translation is performed, where only some of the components are taken into account to find the average circuit behavior. The measurement requires the results of all previous calibrations. Neurons on different HICANNs are stimulated by a single spike for different combinations of the three hardware parameters to cover the whole parameter range. Subsequently, a fit of the conductance-based neuron model is applied to the recorded membrane traces to extract the ratio between biological weight and membrane capacitance \(\frac{w_{\text{max}}}{C_{\text{thr}}}\). Since the membrane capacitance is fixed during experiments, it is unnecessary to determine both values separately. During the fit, the model parameter of the already calibrated reversal potential is fixed. The reduced \(\chi^{2}\) value of the fit is used to identify and exclude saturation effects of the involved operational transconductance amplifier, cf. fig. 9, which might occur for large weight values. Finally, the weight translation is found by fitting the expected hardware behavior
\[A\left(\frac{w\cdot V_{\text{gmax}}}{gmax\_div}+i_{0}+i_{1}\cdot w_{1}+i_{2}\cdot w_{2}+i_{4}\cdot w_{4}+i_{8}\cdot w_{8}\right), \tag{3}\]
adapted from [19], to the results of the first fits. The fit parameters \(i_{0\text{-}8}\) characterize the effect of parasitic capacitances found in the synaptic circuit for each enabled bit of the 4-bit weight value \(w\). Figure 12a demonstrates the large parameter space of the synapse weight calibration: it shows the measurement of a single neuron, stimulated by a single synapse driver, for a single \(V_{\text{gmax}}\) value without rewriting the FGs. The performance of the fit applied to the whole measured parameter space is shown for fixed values of \(gmax\_div\) in fig. 12b and for fixed digital weight values \(w\) in fig. 12c. Although the whole neuron circuit, and consequently the expected noise of each individual component, is involved, the error of each measurement does not exceed the variations observed in other calibrations. However, additional deviations arise from rewriting the FGs, as demonstrated in fig. 12d; this renders the search for a more precise fit function unbeneficial. In addition, the per-wafer calibration, chosen over a per-neuron-circuit calibration, introduces a dominant error due to the deviations between neuron circuits, shown in fig. 12e. A precise weight calibration within a reasonable runtime would be achievable via a parallel measurement of each neuron circuit, which would also allow excluding neurons showing unintended behavior. However, this is not possible with the currently used analog readout system. Nonetheless, the lack of a perfect weight calibration can be circumvented via in-the-loop training on the BrainScaleS-1 system, as shown for inference tasks in previous results [6].
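A sketch of how the fitted translation can be applied at mapping time; the scale \(A\) and the parasitic terms \(i_{0\text{-}8}\) below are invented placeholders for the fitted constants:

```python
A = 1.0e-3                 # overall scale (placeholder)
i = {0: 0.02, 1: 0.01, 2: 0.015, 4: 0.02, 8: 0.03}  # per-bit parasitic terms

def hardware_weight(w, v_gmax, gmax_div):
    """Expected synaptic strength of eq. (3) for a 4-bit digital weight w."""
    bits = {b: float(bool(w & b)) for b in (1, 2, 4, 8)}
    return A * (w * v_gmax / gmax_div + i[0]
                + i[1] * bits[1] + i[2] * bits[2]
                + i[4] * bits[4] + i[8] * bits[8])

def digital_weight_for(target, v_gmax, gmax_div):
    """Pick the 4-bit weight whose modeled strength is closest to the target."""
    return min(range(16),
               key=lambda w: abs(hardware_weight(w, v_gmax, gmax_div) - target))

print(digital_weight_for(0.35, v_gmax=700, gmax_div=2))
```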
#### Iv-B3 Calibration Based Exclusion of Components
The operation of the HICANNs during the calibration is similar to their operation during experiments: all components have to work correctly for the calibration to succeed, and failing calibrations indicate unintended behavior. This allows testing the whole die, especially the analog circuits that cannot be tested directly. Additionally, thresholds can be defined to exclude outliers. Consequently, neurons that do not pass all calibration steps are excluded from the availability database. The numbers of neurons excluded based on calibration for a typical wafer are given in table II.
Fig. 12: Results of the synapse weight calibration. (a) Weight measurement for a fixed neuron circuit for different settings of the digital weight \(w\) and hardware parameter \(gmax\_div\) with \(V_{\text{gmax}}=700\,\mathrm{LSB}\). Horizontal dashed lines indicate cuts with fixed values of the hardware parameter \(gmax\_div\) shown in (b); vertical dashed lines indicate cuts with fixed digital weight values shown in (c). In (b) and (c), solid lines represent measured values, dashed lines the results of the fit of eq. (3) applied to the whole measured parameter space. (d) Variations of weight measurements with and without rewriting of the Floating Gates. Values are extracted for 3 digital weight parameters \(w\) from a fixed neuron with fixed hardware parameters (\(V_{\text{gmax}}=700\,\mathrm{LSB}\), \(gmax\_div=2\,\mathrm{LSB}\)). (e) Comparison of a per-wafer and a per-neuron weight calibration. Measurements for the entire parameter space are performed on a subset of neurons. The calibration is then performed for the whole subset or per individual neuron. The histogram shows the difference between the measured and expected values using the obtained calibrations.

## V Experiment Showcase - Synchronous Firing Chain

Previous experiments on the BrainScaleS-1 system relied on a small subset of the available neurons [6, 20, 21]. In this section, we use a synchronous firing chain (synfire chain) to utilize a large number of the available wafer module resources. We start with a relatively short chain to illustrate the behavior of the network and finally present a longer one that utilizes a large part of a single wafer module.
Synfire chains can filter for synchronous activity and propagate the activity along a chain of neuron groups [22, 23]. We choose synfire chains since they can easily be scaled up to arbitrary sizes by increasing the chain length as well as the number of neurons in a single group and have been studied extensively in previous publications [24, 25, 26]. Furthermore, synfire chains were used to showcase the functionality of the predecessor of BrainScaleS-1 [27] and to characterize the behavior of the current system in software simulations [28].
Figure 13 displays a synfire chain with feed-forward inhibition. Each chain link consists of an excitatory and inhibitory population. The inhibitory populations are connected to the excitatory population within the same group. This feed-forward inhibition can enhance the filtering properties of the chain [29, 26]. The excitatory population forwards its outputs to both populations within the next group. External stimulus is injected in the form of Gaussian pulse packages [24]. The strength \(a\) denotes the number of input spikes per stimulus neuron and \(\sigma\) the standard deviation of the Gaussian from which the spike times are drawn. We will use \((a,\sigma)\) to refer to specific packages.
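A pulse packet \((a,\sigma)\) is straightforward to generate; the neuron count and packet center below are arbitrary:

```python
import numpy as np

def pulse_packet(a, sigma, n_stimulus, t_center, seed=42):
    """One sorted spike-time array per stimulus neuron: a spikes each,
    drawn from a Gaussian of width sigma around t_center."""
    rng = np.random.default_rng(seed)
    return [np.sort(rng.normal(t_center, sigma, size=a))
            for _ in range(n_stimulus)]

spikes = pulse_packet(a=1, sigma=1.0, n_stimulus=64, t_center=50.0)  # ms
```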
### _Network Behavior_
As a first step, we look at a relatively short chain with six chain links, shown in fig. 14, to illustrate how the filtering properties of the chain can be tuned. Table III summarizes some key properties of the network. We used the manual placement described in [14] to place the different populations on the wafer. Specifically, we distribute the external stimulus over several HICANNs in order to minimize spike loss due to limited bandwidth.
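For illustration, such a chain can be expressed in PyNN, the network description interface used by the BrainScaleS-1 software stack [14]; population sizes, weights, and the all-to-all connectivity below are placeholders rather than the values behind table III:

```python
import pyNN.nest as sim  # any PyNN backend; BrainScaleS-1 ships its own

sim.setup(timestep=0.1)
groups = []
for _ in range(6):  # six chain links as in the short chain
    exc = sim.Population(100, sim.IF_cond_exp())
    inh = sim.Population(25, sim.IF_cond_exp())
    # feed-forward inhibition within the group
    sim.Projection(inh, exc, sim.AllToAllConnector(),
                   synapse_type=sim.StaticSynapse(weight=0.01),
                   receptor_type="inhibitory")
    if groups:  # previous excitatory population drives both new populations
        prev_exc, _ = groups[-1]
        for target in (exc, inh):
            sim.Projection(prev_exc, target, sim.AllToAllConnector(),
                           synapse_type=sim.StaticSynapse(weight=0.005),
                           receptor_type="excitatory")
    groups.append((exc, inh))
```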
As mentioned previously, synfire chains are able to filter for synchronous input and to synchronize less-synchronous input as it travels along the chain [24, 23]. Figure 14a shows the propagation of three different input stimuli along the chain. In the case of a relatively weak and synchronous input \((1,1)\), a single, narrow package travels along the chain. If the input is stronger and more asynchronous, we observe a broader response in the first groups of the chain, which is synchronized as the signal propagates such that the responses in the final group are comparable. Input that is too weak and asynchronous, here \((1,4)\) as an example, dies out and does not cause a response in the final group. This is in agreement with previous results [24, 29, 27, 28].
Figure 14b shows in more detail for which input stimuli the propagation along the chain is successful. In agreement with the previous observations, weak and asynchronous input is not transmitted to the final group. The response in the final group is almost uniform, indicating that the packages are synchronized as they travel along the chain. Setting appropriate parameters that reproduce the expected results from simulations relies on the calibration routines introduced in section IV-E. The calibration allows setting model parameters in the biological domain and reduces the inherent mismatch between the physical components.
### _Wafer-Scale Network_
The previous section demonstrates the implementation and control of a short synfire chain on the BrainScaleS-1 system. This section shows that the commissioning efforts described in section IV also facilitate the implementation of wafer-scale networks. The properties of this synfire chain are summarized in table III.
The complexity of the emulation increases with the size of the model. While for a relatively short chain it is possible to investigate the behavior of individual neurons and manually detect malfunctioning and badly calibrated entities, this is not feasible for larger experiments. Therefore, the digital tests described in section IV-B are essential to automatically avoid these components during the experiment.
To simplify the automatic routing of the abstract network description to physical entities on the wafer, we once again employ manual mapping, see fig. 15a. We place the
Fig. 13: Structure of the synfire chains presented in this section. The synfire chain is made up of several groups of excitatory (blue) and inhibitory (red) populations. The inhibitory population connects to the excitatory population within the same group and aims to improve the chain’s filtering for synchronous input [26, 29]. Each excitatory population is connected to the excitatory and inhibitory population of the next group. By repeating this construction schema (grey), chains of arbitrary length can be realized. The network is excited by a stimulus population (orange) which projects to the excitatory and inhibitory population of the first group.
different groups in a zig-zag pattern starting from the top-left side towards the bottom of the wafer and then back up towards the top-right side. This placement schema allows the BrainScaleS-1 operating system [14] to find appropriate connections between the different populations and minimizes synapse loss, i.e. synaptic connections that could not be mapped to the hardware.
We were able to successfully emulate a synfire chain with 190 chain links on the BrainScaleS-1 system. Figure 15b shows an example of a pulse package that travels along the full length of the chain. The activity of the individual groups still depends on the exact neuron and synapse properties, but the calibration ensures that the pulse package remains compact. A synchronous pulse reaches the final group after a signal propagation time of about \(600\,\mathrm{ms}\) in the biological regime, which corresponds to \(60\,\upmu\mathrm{s}\) wall-clock time at the system's \(10^{4}\) acceleration factor.
## VI Discussion
Starting its development more than ten years ago, the first-generation BrainScaleS wafer-scale neuromorphic system represents a milestone toward a large-scale analog neural network emulation platform. Over the years in which several modules have been commissioned and experiments run, we have learned important lessons about building and operating such a complex system. We discovered drawbacks in our first implementation; some of them could successfully be circumvented via our commissioning software. Our second-generation neuromorphic BrainScaleS-2 chip [18] addresses BrainScaleS-1's design weaknesses. Moreover, it enables the application of advanced learning mechanisms by introducing a digital plasticity processor, neuron multi-compartment capabilities, as well as extended analog-to-digital conversion capacities.
In this paper, we described the individual components of a BrainScaleS-1 wafer module and showed the necessary steps to assemble it. A wafer-scale analog system is complex and requires many hardware components working concurrently.
Fig. 14: Hardware emulation of a chain with six chain links. (a) Propagation of pulse packages along the chain. Successful propagation depends on both the strength \(a\) and the synchronicity \(\sigma\) of the initial stimulus, represented by (\(a\), \(\sigma\)). Broad input stimuli synchronize along the chain or do not reach the end of the chain. (b) Average number of spikes per neuron in the final group \(\tilde{a}_{\text{out}}\) of the chain as a function of the initial strength \(a\) and synchronicity \(\sigma\). Each input package was presented 40 times and the results are averaged over all presentations. The pulse packages propagate if the initial input is strong and synchronous enough. In the region of stable propagation the output strength is almost constant; near the separation of the two regimes the average strength of the final pulse package decreases. This separation line between successful propagation and failure of transmission can be controlled by several parameters, such as the synaptic weights.
Fig. 15: Hardware emulation of a chain with 19 000 neurons. Further parameters of the network can be found in table III. (a) Mapping of the network to a BrainScaleS-1 wafer. HICANNs excluded from the availability database are marked in red, cf. section IV-B. HICANNs which cannot host an entire group are marked in orange and are not used in the experiment. On each HICANN colored in blue an entire group of neurons is placed. Colored lines indicate synaptic connections. (b) Response of the chain to an input packet of strength \(a=1\) and spread \(\sigma=1\).
Once a wafer module is assembled, it is often not possible to pinpoint defects in individual components. To alleviate this, each component must get tested on its own; malfunctioning ones must be repaired or replaced before they are added to the system. Additional tests during the assembly are also crucial to allow for finding and solving errors that arise during that process. The remaining problems are handled by the exclusion of affected components or circuits from the availability database to ensure the correct operation of the system.
The importance of the tests and monitoring remains after the wafer module gets placed in the rack. For example, tight monitoring during system operation is necessary to uncover the wear out of system components. Automated alerts are fundamental for warning in case of values deviating over time. Furthermore, the tests executed nightly help keep track of the wafer modules' state.
Concerning the wafer in the core of the BrainScaleS-1 system, the probability of fabrication defects in microelectronics is proportional to the circuit area [30]. Thus, it is unfeasible to build such a large analog system without malfunctioning components. This will most likely further intensify in the future by utilizing novel materials. With this in mind, the digital tests introduced are executed nightly to identify such malfunctioning components and exclude them from our availability database. These tests enable storing different states of the database on disk and allow to differentiate actual malfunctioning components from those not usable due to a dependency. The users can then utilize reliable components, possibly even using a custom availability database.
An additional challenge using analog hardware is the fixed-pattern noise introduced by unavoidable manufacturing process variations. In the BrainScaleS-1 system, this is worsened by the design decision to use FGs to store the neuron configuration. These cells allow for long-term storage of analog parameters without storing digital values onboard. However, the current implementation introduces write-cycle to write-cycle variability. Though small, these variations lead to noticeable errors if they are further enlarged by non-linear dependencies between control signal and observable. To minimize these effects, we presented our calibration framework, which also allows non-expert users to configure experiments in the biological domain without specific knowledge of the hardware. We demonstrated the narrowing and centering of the achieved value distribution for exemplary parameters after the calibration was applied, though still limited by thermal noise and the variations caused by the FGs. Since single-poly floating-gate cells are non-standard devices and not supported by the manufacturer, the second-generation BrainScaleS-2 chip reverts to a digital parameter storage scheme employed in a previous neuromorphic architecture [31], thereby vastly improving analog parameter accuracy. Since the second generation uses a manufacturing process with much smaller geometry, namely 65 nm vs. 180 nm, the area penalty for the digital parameter storage is manageable. A further advantage of the novel parameter storage is the reduced programming time [32]. In the presented wafer-scale implementation, the single-poly floating-gate parameter storage was the only feasible solution to achieve the required number of analog parameters for the neuron circuits.
On top of explaining the calibration methodology, we demonstrated the necessity for parallel execution of the calibrations. The large parameter space of the synapse weight calibration exceeds reasonable runtimes using the current readout system. In order to circumvent this, we introduced a per wafer calibration which, compared to a per circuit calibration, shows larger errors but can be generated in a reasonable time frame. To improve this, we developed a new readout system, which will replace the external set of ADCs with on-wafer-module boards, increasing the parallel readout capabilities from 12 to 96 channels [33]. Moreover, in the BrainScaleS-2 chip, we introduce a per neuron-circuit ADC system, which allows for a massive parallel calibration [18]. A per-circuit calibration before each experiment becomes feasible with such a solution.
Finally, we demonstrated the operation of a fully commissioned BrainScaleS-1 wafer module implementing synfire chains. While small chains portray the capability to fine-tune the network parameters, extending to a long chain of 190 links illustrates the possibility to scale up networks. Successfully mapped to an inherently imperfect substrate, it consists of the largest spiking network emulation run with analog components and individual synapses to date.
Our endeavor in developing and maintaining the BrainScaleS-1 system has demonstrated, while illustrating the field's challenges, that building wafer-scale analog neuromorphic hardware is feasible. Furthermore, the BrainScaleS-1 wafer module with its operating system laid the foundation for the next-generation systems; all lessons learned from the first generation contribute to the success of future large-scale neuromorphic systems.
## Acknowledgments
The authors wish to thank all present and former members of the Electronic Vision(s) research group contributing to the BrainScaleS-1 platform, development and operation methodologies, as well as software development. We thank S. Schiefer and S. Hartmann from the group Hochparallele VLSI-Systeme und Neuromikroelektronik at TU-Dresden for the development and before-assembly routine testing of the communication boards and the WIOs, as well as for providing test details and images for this writing. We thank M. Yaziki and O. Ceylan from the group of Yasar Gurbuz at Sabanci University, Istanbul for the development of test boards that are being used during assembly of the wafer modules. We thank Würth Elektronik, Germany for dedicated and detailed support during the development of the Main PCB. This work has received funding from the EU ([FP7/2007-2013], [H2020/2014-2020]) under grant agreements 604102 (HBP), 269921 (BrainScaleS), 243914 (Brain-i-Nets), 720270 (HBP SGA1), 785907 (HBP SGA2) and 945539 (HBP SGA3), the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy
EXC 2181/1-390900948 (the Heidelberg STRUCTURES Excellence Cluster), the Helmholtz Association Initiative and Networking Fund (ACA, Advanced Computing Architectures) under Project SO-092, as well as from the Manfred Stärk Foundation.
## References
* [1] D. Brüderle, J. Bill, B. Kaplan, J. Kremkow, K. Meier, E. Müller, and J. Schemmel, "Simulator-like exploration of cortical network architectures with a mixed-signal VLSI system," in Proceedings of the 2010 IEEE International Symposium on Circuits and Systems (ISCAS), Jul. 2010, pp. 2784-2787.
* [2] J. Schemmel, J. Fieres, and K. Meier, "Wafer-scale integration of analog neural networks," in Proceedings of the 2008 International Joint Conference on Neural Networks (IJCNN), 2008.
* [3] J. Schemmel, J. Fieres, and K. Meier, "Wafer-scale integration of analog neural networks," in Proceedings of the 2008 International Joint Conference on Neural Networks (IJCNN), 2008.
* [4] J. Schemmel, K. Meier, and E. Müller, "A new VLSI model of neural microcircuits including spike time dependent plasticity," in Proceedings of the 2004 International Joint Conference on Neural Networks (IJCNN'04), Oct. 2004, pp. 1711-1716.
* [5] S. S.
Supplemental Material
From Clean Room to Machine Room:
Commissioning of the First-Generation BrainScaleS
Wafer-Scale Neuromorphic System
Hartmut Schmidt*, Jose Montes*, Andreas Grubl, Maurice Guttler, Dan Husmann, Joscha Ilmberger, Jakob Kaiser, Christian Mauch, Eric Muller, Lars Sterzenbach, Johannes Schemmel, Sebastian Schmitt
*Contributed equally
## I Calibration Details for the System's Neuron and Synapse Circuits
A calibration procedure is in place for the BrainScaleS-1 system, which compensates for manufacture-induced analog circuit variability. It accounts for analog readout noise by averaging the features extracted from the membrane traces over time. In addition, measurements are repeated and then averaged after rewriting the Single-Poly Floating Gates (FGs), where stated, to consider FG write-cycle to write-cycle parameter storage variability.
A detailed explanation of each parameter calibration conducted on the wafer module is provided in the following. We first describe the parameter, explain the calibration approach and the settings used, and show plots illustrating the results. In addition to the synaptic-weight calibration presented in the main text, these constitute the complete neuron- and synapse-circuit calibrations performed in the system. Details of the sample points measured, the models utilized, and the average runtimes per parameter are summarized in table I.
_Readout shift_: On each High Input Count Analog Neural Network (HICANN), every neuron's membrane trace can be recorded by connecting its switchable analog output amplifier to one of two output buffers. Due to circuit variability, each amplifier adds a constant offset to the recorded traces, the so-called readout shift. It has to be determined first, since all further calibrations are influenced by it.
* _How_: Neuron membranes are interconnected in groups of 64 (the maximum possible). Their individual resting membranes are recorded and every neuron's deviation from the group's mean is stored.
* _Settings_: \(E_{\text{leak}}=0.9\,\mathrm{V}\), the middle of the range, \(V_{\text{threshold}}\) above resting potential, \(I_{\text{conv}}\) set to 0 A to switch off both operational transconductance amplifiers (OTAs) for the excitatory and inhibitory synaptic input conductances.
* _Effects_: The offset is automatically corrected for all subsequent calibrations by loading the calibration backend. The distribution of the analog output amplifier offsets of all neurons on one HICANN is shown in fig. 1a.
\(V_{\text{reset}}\): The potential to which a neuron's membrane is set after a spike is generated. It is shared among a group of 128 neurons. Each HICANN contains four of these groups.
* _How_: Neurons are set to spike continuously by setting their leak potential \(E_{\text{leak}}\) above their threshold potential \(V_{\text{threshold}}\). A recording time of 80 \(\upmu\)s per target value collects an average of 39 inter-spike intervals on each membrane. The refractory time \(\tau_{\text{ref}}\) is set to maximum in order to allow for long baseline traces between the spikes. The reset voltage is calculated as the average over all the inter-spike baseline samples to account for readout noise.
* _Settings_: \(I_{\text{conv}}=0\,\mathrm{A}\) for both excitatory and inhibitory synaptic inputs, shutting off the OTA of their synaptic conductance. \(I_{\text{gl}}=1.1\,\mathrm{\mu A}\), \(I_{\text{pulse}}=20\,\mathrm{nA}\) to set the refractory time to a high value.
* _Sweep_: \(V_{\text{reset}}\), with \(E_{\text{leak}}=V_{\text{reset}}+0.4\,\mathrm{V}\), \(V_{\text{threshold}}=V_{\text{reset}}+0.2\,\mathrm{V}\)
* _Effects_: The achieved hardware voltage distribution is shifted towards the correct target value from its original mean, as can be seen in fig. 1b. The standard deviation does not improve for all targets since the shared nature of the parameter limits the action of the correction over
the individual neurons.
\(V_{\text{threshold}}\): The threshold potential of the leaky integrate-and-fire model, at which an action potential is elicited and the membrane's voltage is forced into the reset potential for the refractory period.
* _How_: Synaptic inputs are minimized in order to isolate the membrane and the threshold detect circuits. The threshold potential \(V_{\text{threshold}}\) is set below the leak potential \(E_{\text{leak}}\) to elicit constant spiking. The maximum membrane voltage at several spike peaks is averaged and considered the true threshold voltage.
* _Settings_: \(I_{\text{conv}}=0\,\text{A}\), \(I_{\text{gl}}=1.5\,\upmu\text{A}\)
* _Sweep_: \(V_{\text{threshold}}\), \(V_{\text{reset}}=V_{\text{threshold}}-200\,\mathrm{mV}\), \(E_{\text{leak}}=V_{\text{threshold}}+200\,\mathrm{mV}\)
* _Effects_: The corrected hardware voltage distribution is centered around the correct target value. The standard deviation decreases, as can be seen in fig. 1c.
\(E_{\text{syni}}\): The inhibitory reversal potential towards which the OTA in the inhibitory synaptic input drives the membrane when processing synaptic input.
* _How_: \(V_{\text{convoffi}}\) of the inhibitory synaptic input is set to a small value so that the bias generator forces the membrane potential to the inhibitory reversal potential. No spikes are elicited since the threshold voltage is never reached. Once the neuron is at rest, the averaged membrane voltage characterizes the reversal potential.
* _Settings_: \(E_{\text{leak}}=0.8\,\mathrm{V}\), \(I_{\text{convx}}=0\,\text{A}\), \(I_{\text{gl}}=0\,\text{A}\), \(V_{\text{convoffi}}=0.1\,\mathrm{V}\), \(V_{\text{syntcx,i}}=1.8\,\mathrm{V}\), \(V_{\text{threshold}}=1.2\,\mathrm{V}\)
* _Sweep_: \(E_{\text{syni}}\)
* _Effects_: The achieved inhibitory reversal potential voltages before and after calibration are shown in fig. 1d.
\(I_{\text{pulse}}\): Bias current that controls how fast the neuron's timing mechanism recovers from the reset state after a spike is generated.
* _How_: Neurons are set to spike continuously by setting \(E_{\text{leak}}\) above \(V_{\text{threshold}}\). For the refractory time constant measurements, the baseline traces corresponding to the reset state of the membranes are extracted. \(I_{\text{pulse}}\) is first set to its maximum and the effective refractory period is measured and recorded; this constitutes the minimum achievable period, denoted \(\tau_{0}\). The subsequent measured refractory periods are referenced to \(\tau_{0}\) by subtracting \(\tau_{0}\) from them, and fitting eq. (Main-2) from the main text.
* _Settings_: \(E_{\text{leak}}=1.2\,\mathrm{V}\), \(V_{\text{threshold}}=0.8\,\mathrm{V}\), \(E_{\text{synx}}=1.2\,\mathrm{V}\), \(E_{\text{syni}}=0.8\,\mathrm{V}\), \(V_{\text{reset}}=0.5\,\mathrm{V}\)
* _Sweep_: \(I_{\text{pulse}}\)
* _Effects_: The achieved refractory time constants' mean is closer to the target value after the calibration is obtained and applied, as can be observed in fig. 11a in the main text. The standard deviations reduce. In fig. 10 in the main text the limited precision to configure the refractory time constant is demonstrated, as only a fraction of the possible parameter range of \(I_{\text{pulse}}\) results in reasonable configurations.
\(E_{\text{leak}}\): The reference voltage towards which the membrane potential is constantly driven through the leak conductance.
* _How_: Synaptic inputs are minimized and the membranes are read in a resting state.
* _Settings_: \(I_{\text{conv}}=0\,\text{A}\), \(V_{\text{threshold}}=1.2\,\mathrm{V}\), \(V_{\text{reset}}=0.9\,\mathrm{V}\)
* _Sweep_: \(E_{\text{leak}}\)
* _Effects_: The corrected hardware voltage distribution is centered around the correct target value. The standard deviation decreases, as can be seen in fig. 11b in the main text.
\(V_{\text{convoffx}}\): Offset voltage for the integrator on the excitatory synaptic input. The voltage parameter is used by a bias generator that controls the reference of OTA\({}_{1}\), compensating for mismatches. The offset should balance two effects: minimizing an undesired permanent current flowing onto the membrane, which shifts the neuron's resting potential, against the weakening of the synaptic input caused by too strong a compensation. Consequently, the goal of the calibration is to find the sweet spot in between, where the bias generator compensates precisely for the mismatch of OTA\({}_{1}\).
* _How_: The point of interest is the transition from a zero to a non-zero conductance on OTA\({}_{1}\). It is measured by the shift of the resting potential arising for different values of \(V_{\text{convoffx}}\). The calibrated value of \(V_{\text{convoffx}}\) corresponds to the first value where the resting potential is no longer shifted. In addition, the linear range of the relation between the membrane rest-voltage shift and \(V_{\text{convoffx}}\) is characterized. Effects from the inhibitory synaptic input are minimized by using low values for \(E_{\text{syni}}\), \(I_{\text{convi}}\) and a high \(V_{\text{convoffi}}\). Furthermore, the effect is more pronounced for lower values of \(E_{\text{leak}}\).
* _Settings_: \(E_{\text{leak}}=0.8\,\mathrm{V}\), \(E_{\text{syni}}=0.4\,\mathrm{V}\), \(E_{\text{synx}}=1.2\,\mathrm{V}\), \(I_{\text{convi}}=0\,\mathrm{A}\), \(I_{\text{gl}}=0.2\,\upmu\text{A}\), a low value that limits the
Fig. 1: (a) Analog readout offset distribution for the 512 neurons of one HICANN. Calibration results for the parameters (b) \(V_{\text{reset}}\), (c) \(V_{\text{threshold}}\) and (d) \(E_{\text{syni}}\). Pale and intense colors correspond to the hardware-achieved voltages for different target values (shown as black dashed lines) before and after the calibration is applied, respectively. For \(V_{\text{reset}}\) the correction effect is limited by the parameter being shared by 128 neurons.
leakage current from the synapse onto the membrane, \(V_{\text{convoffi}}=1.8\,\mathrm{V}\), \(V_{\text{threshold}}=1.2\,\mathrm{V}\), \(V_{\text{reset}}=0.3\,\mathrm{V}\)
* _Sweep_: \(V_{\text{convoffx}}\)
* _Effects_: A calibrated \(V_{\text{convoffx}}\) parameter limits the deviations in the effective resting potential arising from leaks through the excitatory synaptic input, as shown in fig. 2. Nevertheless, a minimal \(I_{\text{gl}}\) is required to allow the neuron membranes to exhibit uniform effective resting potentials.
\(V_{\text{convoffi}}\): Offset voltage for the integrator on the inhibitory synaptic input conductance. The calibration principle is the same as for \(V_{\text{convoffx}}\), but it should be performed independently as both inputs introduce leak currents into the membrane.
* _How_: A low \(E_{\text{synx}}\), a low \(I_{\text{convx}}\) and a high \(V_{\text{convoffx}}\) minimize effects from the excitatory synaptic input.
* _Settings_: \(E_{\text{leak}}=0.8\,\mathrm{V}\), \(E_{\text{synx}}=0.4\,\mathrm{V}\), \(E_{\text{syni}}=1.2\,\mathrm{V}\), \(I_{\text{convx}}=0\,\mathrm{A}\), \(I_{\text{gl}}=0.2\,\upmu\mathrm{A}\), a low value that limits the leakage current from the synapse onto the membrane, \(V_{\text{convoffx}}=1.8\,\mathrm{V}\), \(V_{\text{threshold}}=1.2\,\mathrm{V}\), \(V_{\text{reset}}=0.3\,\mathrm{V}\)
* _Sweep_: \(V_{\text{convoffi}}\)
The following parameter calibrations use input spikes to generate post synaptic potentials (PSPs) on the membrane. From the shape of the voltage traces, it is possible to approximate parameters related to the time constants of synaptic inputs (\(\tau_{\text{syn}}\)) and the membrane (\(\tau_{\text{mem}}\)). For a single input spike arriving while the membrane of a LIF neuron is in a steady state, the PSP shape can be either described by an \(\alpha\)-function, if both time constants are the same, or by a difference of exponentials if one of the time constants is smaller [1]. This behavior is described by \(V(t)\approx\)
\[\begin{cases}E_{\text{leak}}+\theta(t-t_{\text{s}})\,A\left(\exp\left(\frac{t_{\text{s}}-t}{\tau_{1}}\right)-\exp\left(\frac{t_{\text{s}}-t}{\tau_{2}}\right)\right)&\text{if }\tau_{1}\neq\tau_{2}\\ E_{\text{leak}}+\theta(t-t_{\text{s}})\,h\exp\left(1-\frac{t-t_{\text{s}}}{\tau_{1}}\right)\frac{t-t_{\text{s}}}{\tau_{1}}&\text{if }\tau_{1}=\tau_{2},\end{cases} \tag{1}\]
with
\[A=\frac{h}{\tau^{\frac{1}{1-\tau}}-\tau^{\frac{\tau}{1-\tau}}} \tag{2}\]
and \(\tau=\tau_{1}/\tau_{2}\), the ratio between \(\tau_{\text{mem}}\) and \(\tau_{\text{syn}}\), derived in [2] and further developed in [3]. It relates the membrane's voltage course to both relevant time constants and the height \(h\) of the PSP. The fitting algorithm fixes one of the time constants and varies the other. Although the PSPs are symmetric in \(\tau_{\text{mem}}\) and \(\tau_{\text{syn}}\), the fact that typically \(\tau_{\text{mem}}>\tau_{\text{syn}}\) is used to resolve this ambiguity. Once the parameters are determined from the measurements through fitting the model, a linear fit is used to obtain a calibration relating parameters with FG values, as with the previously treated calibrations.
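As an illustration of this fitting step, the sketch below implements the \(\tau_{1}\neq\tau_{2}\) branch of eq. (1) with the normalization of eq. (2) and fits it to an averaged membrane trace. Unlike the procedure described above, which fixes one time constant and varies the other, this simplified version lets all parameters vary; the helper names, initial guesses and synthetic trace are assumptions made for the example.

```python
import numpy as np
from scipy.optimize import curve_fit

def psp(t, t_s, h, tau_1, tau_2, e_leak):
    """Eq. (1) for tau_1 != tau_2; eq. (2) normalizes the peak to height h.

    The alpha-function branch (tau_1 == tau_2) is omitted for brevity.
    """
    tau = tau_1 / tau_2
    amp = h / (tau ** (1.0 / (1.0 - tau)) - tau ** (tau / (1.0 - tau)))
    dt = np.clip(np.asarray(t) - t_s, 0.0, None)   # PSP starts at t_s
    return e_leak + amp * (np.exp(-dt / tau_1) - np.exp(-dt / tau_2))

def fit_psp(t, v, t_spike):
    """Extract h, tau_mem, tau_syn and E_leak from one averaged trace."""
    model = lambda t, h, t1, t2, e: psp(t, t_spike, h, t1, t2, e)
    p0 = (0.1, 1.0e-5, 2.0e-6, np.median(v))       # rough initial guesses
    popt, _ = curve_fit(model, t, v, p0=p0, maxfev=20000)
    return dict(zip(("h", "tau_mem", "tau_syn", "E_leak"), popt))

# Self-test on a synthetic trace (hardware-time seconds, volts):
t = np.linspace(0.0, 1.0e-4, 500)
v = psp(t, 1e-5, 0.05, 1.2e-5, 1.8e-6, 0.8)
v += 1e-4 * np.random.default_rng(1).normal(size=t.size)
print(fit_psp(t, v, t_spike=1e-5))
```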
\(I_{\text{gl}}\): Bias current that controls the membrane's leakage conductance. This parameter, together with the chosen membrane capacitance, which can be set to two different values, determines the membrane time constant.
* _How_: The input spikes should arrive with enough spacing to allow the membrane to return to a steady state after each perturbation. A strong excitatory synaptic input is set to achieve a better signal-to-noise ratio. Fitting eq. (1) returns both the membrane and the synaptic input time constant from the PSP shape. A fit of the softplus function \[\tau_{\text{mem}}=\frac{a\cdot\log(1+\exp(c\cdot(b-I_{\text{gl}})))}{c}+\text{offset}\] (3) is subsequently used to translate between biological-space parameters and FG-stored parameters; a fitting sketch follows this list.
* _Settings_: \(E_{\text{leak}}=0.8\,\mathrm{V}\), \(E_{\text{synx}}=1.3\,\mathrm{V}\), \(E_{\text{syni}}=0.6\,\mathrm{V}\), \(V_{\text{syntcx}}=1.6\,\mathrm{V}\), \(V_{\text{convoffx}}=0.9\,\mathrm{V}\), \(V_{\text{convoffi}}=0.9\,\mathrm{V}\), \(V_{\text{threshold}}=1.2\,\mathrm{V}\), \(V_{\text{reset}}=0.3\,\mathrm{V}\), \(V_{\text{gmax0}}=1\,\mathrm{V}\), \(\text{gmax\_div}=30\,\mathrm{LSB}\), _big capacitor_, \(I_{\text{gl}}\) speedup normal.
* _Sweep_: \(I_{\text{gl}}\)
* _Effects_: The achieved membrane time constants' mean is closer to the target value after the calibration is obtained and applied, as can be observed in fig. 3b. The standard deviations are reduced. However, as seen in fig. 3a, the precision to configure \(\tau_{\text{mem}}\) is limited as only a fraction of the possible parameter range of \(I_{\text{gl}}\) results in reasonable configurations.
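The translation fit of eq. (3) mentioned in the list above can be sketched as follows; the sampled \(I_{\text{gl}}\) values and time constants are synthetic stand-ins for measured data, and the final step numerically inverts the monotonic softplus to find the \(I_{\text{gl}}\) setting for a requested \(\tau_{\text{mem}}\).

```python
import numpy as np
from scipy.optimize import curve_fit

def softplus_tau(i_gl, a, b, c, offset):
    """Eq. (3): membrane time constant as a function of the I_gl setting."""
    return a * np.log1p(np.exp(c * (b - i_gl))) / c + offset

# Eight sweep points, echoing fig. 3a; values are synthetic placeholders.
i_gl_lsb = np.linspace(100, 1000, 8)
tau_meas = softplus_tau(i_gl_lsb, 2e-6, 600, 0.01, 1e-6)
tau_meas += np.random.default_rng(0).normal(0, 2e-8, i_gl_lsb.size)

popt, _ = curve_fit(softplus_tau, i_gl_lsb, tau_meas,
                    p0=(1e-6, 500, 0.01, 1e-6), maxfev=20000)

target = 1.5e-6                                     # requested tau_mem in s
grid = np.linspace(100, 1000, 10_000)
i_gl_target = grid[np.argmin(np.abs(softplus_tau(grid, *popt) - target))]
print(f"I_gl for tau_mem = {target:.1e} s: {i_gl_target:.0f} LSB")
```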
\(V_{\text{syntcx}}\): Voltage controlling the excitatory synapse time constant, \(\tau_{\text{syn,x}}\), by varying the voltage integrator's resistive element. Large values of \(V_{\text{syntcx}}\) shift \(E_{\text{leak}}\) towards the reversal potential, since leak currents in the synaptic input integrator increase for higher voltages.
* _How_: Similar to the \(I_{\text{gl}}\) calibration, input spikes that arrive with enough separation are used. Equation (1) is fitted to
Fig. 3: (a) Fits for the parameter \(I_{\text{gl}}\) against the achieved membrane time constant on five neurons, using a softplus function model and eight measurement steps. (b) Distribution of membrane time constants before and after the \(I_{\text{gl}}\) calibration is applied for all neurons, with pale and intense colors, respectively.
Fig. 2: Effective resting potential of the neurons on one HICANN (a) before and (b) after calibration of parameter \(V_{\text{convoffx}}\).
extract the time constants. Afterwards, a fit of the softplus function
\[\tau_{\text{syn}}=\frac{a\cdot\log(1+\exp(c\cdot(b-V_{\text{syntcx}})))}{c}+\text{offset} \tag{4}\]
to the extracted values is used to translate between biological-space parameters and FG-stored parameters.
* _Settings_: \(E_{\text{leak}}=0.8\,\text{V}\), \(E_{\text{syni}}=0.6\,\text{V}\), \(E_{\text{synx}}=1.3\,\text{V}\), \(I_{\text{gl}}=0.3\,\upmu\text{A}\), \(V_{\text{convoffx}}=1.8\,\text{V}\), \(V_{\text{convoffi}}=1.8\,\text{V}\), \(V_{\text{threshold}}=1.2\,\text{V}\), \(V_{\text{reset}}=0.3\,\text{V}\), \(V_{\text{gmax0}}=0.05\,\text{V}\), \(\text{gmax\_div}=30\,\text{LSB}\)
* _Sweep_: \(V_{\text{syntcx}}\)
* _Effects_: The distribution of the achieved excitatory synaptic time constants, \(\tau_{syn,x}\) before and after calibration of \(V_{syntcx}\) is shown in fig. 4a.
\(V_{\text{syntci}}\): Voltage controlling the inhibitory synapse time constant, \(\tau_{\text{syn,i}}\), by varying the voltage integrator's resistive element.
* _How_: Similar to the \(V_{\text{syntcx}}\) calibration.
* _Settings_: \(E_{\text{leak}}=0.8\,\text{V}\), \(E_{\text{syni}}=0.3\,\text{V}\), \(E_{\text{synx}}=1.3\,\text{V}\), \(I_{\text{gl}}=0.3\,\upmu\text{A}\), \(V_{\text{convoffx}}=1.8\,\text{V}\), \(V_{\text{convoffi}}=1.8\,\text{V}\), \(V_{\text{threshold}}=1.2\,\text{V}\), \(V_{\text{reset}}=0.3\,\text{V}\), \(V_{\text{gmax0}}=0.05\,\text{V}\), \(\text{gmax\_div}=30\,\text{LSB}\)
* _Sweep_: \(V_{\text{syntci}}\)
* _Effects_: The distribution of the achieved inhibitory synaptic time constants \(\tau_{\text{syn,i}}\) before and after calibration of \(V_{\text{syntci}}\) is shown in fig. 4b.
\(E_{\text{synx}}\): In biologically plausible networks, the excitatory reversal potential is above the threshold and thus never reached by the membrane potential. Its calibration is a good showcase for pitfalls during the operation of analog circuits. Intuitively, a direct measurement using the membrane potential would be used for both reversal potentials. However, similar to their biological counterparts, the circuits of the HICANN chip are not designed for the membrane potential to get close to the excitatory reversal potential. Thus, the circuits show non-linear behavior when approaching the reversal potential, as they deviate from the center of their design ranges. This can be observed in fig. 5a. Therefore, the excitatory reversal potential is measured indirectly.
* _How_: The height of the PSP of a stimulated neuron is measured for different resting potentials in the linear regime of the circuits. A linear extrapolation is used to extract the resting potential where the height reaches zero, shown in fig. 5a; a numerical sketch of this extrapolation follows this list. In the conductance-based synapse model this resting potential is equal to the reversal potential. The measurements are repeated for different reversal potentials to extract the linear dependency between hardware value and applied voltage.
* _Settings_: \(I_{\text{convi}}=0\,\text{A}\), \(I_{\text{gl}}\) set via the calibration to \(\tau_{\text{mem}}=10^{-7}\,\text{s}\), \(V_{\text{convoffx,i}}=0.9\,\text{V}\), \(V_{\text{syntcx,i}}\) set via the calibration to \(\tau_{\text{syn}}=2\times 10^{-7}\,\text{s}\), \(V_{\text{threshold}}=1.8\,\text{V}\), \(V_{\text{gmax}}=0.9\,\text{V}\), \(gmax\_div=2\,\text{LSB}\), \(w=15\,\text{LSB}\)
* _Sweep_: \(E_{\text{leak}}\), \(E_{\text{synx}}\)
* _Results_: Results of the calibration compared to a direct measurement can be seen in fig. 5b. The disadvantage of the indirect measurement is the increased runtime and the dependency on the shape of the PSP. Small variations of hardware parameters, most likely due to the necessity to rewrite the FG value of the resting potential, are enlarged by the linear extrapolation performed to find the reversal potential. As a result, fig. 5b shows larger variations for the indirect calibration than the direct measurement. Nevertheless, the technique allows for correctly calibrating the excitatory reversal potential without directly measuring it.
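The zero-height extrapolation at the core of this indirect measurement, referenced in the _How_ item above, reduces to a one-line linear fit; the PSP heights below are made-up values purely for illustration.

```python
import numpy as np

# PSP heights measured at several resting potentials in the linear regime
# (illustrative numbers in volts). In the conductance-based synapse model
# the height vanishes where E_leak equals the reversal potential.
e_leak = np.array([0.6, 0.7, 0.8, 0.9])            # swept resting potentials
height = np.array([0.210, 0.175, 0.142, 0.108])    # measured PSP heights

slope, intercept = np.polyfit(e_leak, height, 1)
e_synx = -intercept / slope                         # zero-height extrapolation
print(f"extrapolated excitatory reversal potential: {e_synx:.2f} V")
```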
|
2305.09391 | A Possible Quantum Gravity Hint in Binary Black Hole Merger | We present a semi-rigorous justification of Bekenstein's Generalized Second
Law of Thermodynamics applicable to a universe with black holes present, based
on a generic quantum gravity formulation of a black hole spacetime, where the
bulk Hamiltonian constraint plays a central role. Specializing to Loop Quantum
Gravity, and considering the inspiral and post-ringdown stages of binary black
hole merger into a remnant black hole, we show that the Generalized Second Law
implies a lower bound on the non-perturbative LQG correction to the
Bekenstein-Hawking area law for black hole entropy. This lower bound itself is
expressed as a function of the Bekenstein-Hawking area formula for entropy.
Results of the analyses of LIGO-VIRGO-KAGRA data recently performed to verify
the Hawking Area Theorem for binary black hole merger, are shown to be entirely
consistent with this Loop Quantum Gravity-induced inequality. However, the
consistency is independent of the magnitude of the Loop Quantum Gravity
corrections to black hole entropy, depending only on the negative algebraic
sign of the quantum correction. We argue that results of alternative quantum
gravity computations of quantum black hole entropy, where the quantum entropy
exceeds the Bekenstein-Hawking value, may not share this consistency. | Parthasarathi Majumdar | 2023-05-16T12:25:41Z | http://arxiv.org/abs/2305.09391v5 | # Quantum Gravity Effect in Binary Black Hole Merger
###### Abstract
We present a semi-rigorous justification of Bekenstein's Generalized Second Law of Thermodynamics applicable to a universe with black holes present, based on a generic quantum gravity formulation of a black hole spacetime, where the bulk Hamiltonian constraint plays a central role. Specializing to Loop Quantum Gravity, and considering the inspiral and post-ringdown stages of binary black hole merger into a remnant black hole, we show that the Generalized Second Law implies a lower bound on the non-perturbative LQG correction to the Bekenstein-Hawking area law for black hole entropy. This lower bound itself is expressed as a function of the Bekenstein-Hawking area formula for entropy. Using the analyses of LIGO-VIRGO-KAGRA data recently performed to verify the Hawking Area Theorem for binary black hole merger, this Loop Quantum Gravity-induced lower bound is shown to be entirely consistent with the data.
## I Introduction
It is a consensus view that GW150914 and subsequent similar observations by the LIGO consortium pertain to binary black hole (BBH) mergers to a black hole remnant [1]-[9]. To reinforce this standpoint, several research groups [11]-[13] have recently sought to investigate the validity of Hawking's theorem [14] on the impossibility of decrease of the area of black hole horizons in any physical process, by more detailed analyses of the data on BBH coalescence. Recall that this theorem, as well as the other Laws of Black Hole Mechanics [15] are based directly on classical general relativity, and as such, their verification from observational data is also an endorsement of that theory as the correct description of physical spacetime.
Inspired by ref. [15], Bekenstein [16] proposed that in a universe with black holes present, a Generalized Second law of Thermodynamics must hold, in which the entropy of black holes (which was supposed to originate from a quantum theory of gravity) is taken into account. Taken together with Bekenstein's other hypothesis that black hole entropy must be a (linear) function of the horizon area, and adopting confirmatory arguments from Hawking's seminal work on black hole radiance [17], these proposals are the key pillars on which Black Hole Thermodynamics is founded. The Generalized Second Law reduces to Hawking's area theorem when restricted to classical general relativity. Calculations of black hole entropy in LQG [18] and in superstring theory (restricted to five dimensional extremal black holes) [19] both confirm the BH area law. However, following Bekenstein's argument that black hole entropy must have quantum gravity origins, one expects specific corrections to the classical area theorem for every serious proposal of quantum gravity.
In this paper, we first attempt a semi-rigorous justification of both of Bekenstein's hypotheses based on a generic formulation of quantum gravity where the role of the quantum Hamiltonian constraint is highlighted. We next specialize to Loop Quantum Gravity (LQG), where an ab initio non-perturbative computation of black hole entropy has been performed by different groups over two decades [23] - [33], leading to specific corrections to the semi-classical Bekenstein-Hawking (BH) entropy. These LQG corrections are themselves functions only of the BH entropy. Incorporating these corrections into the Generalized Second Law as applied to BBH coalescence, studied for almost a decade by the LVK collaboration, an inequality emerges, giving an estimate of the magnitude of the LQG corrections. This inequality can be expressed directly in terms of the measured total horizon area (BH entropy) of the inspiralling black holes, well before merger, and in terms of the 'area (BH entropy) excess' deduced much later, post-ringdown, from the measured area of the merger remnant. The successful verification of the Hawking area theorem [11] - [13] is then used to show that our LQG bound is entirely consistent with the results of these analyses of LVK data.
## II Generalized Second Law
A generic classical black hole spacetime, depicted in Fig. 1, can be described mathematically by \(\mathcal{B}=\mathcal{M}-\mathcal{J}^{-}(\mathcal{I}^{+})\), where \(\mathcal{M}\) is the entire spacetime and \(\mathcal{J}^{-}(\mathcal{I}^{+})\) is the chronological past of asymptotic future null infinity. The _inner_ boundary of \(\mathcal{B}\), \(\partial\mathcal{B}=h_{+}\), is called the future event horizon.
The quantum description of such a spacetime may begin from the assumption of the Hilbert space of the system \(\mathcal{H}\) having the structure \(\mathcal{H}_{\mathcal{B}}\otimes\mathcal{H}_{h_{+}}\). Any general state \(|\Psi\rangle\in\mathcal{H}\) can then be expanded as
\[|\Psi\rangle=\sum_{\mathcal{B},h_{+}}C_{\mathcal{B}h_{+}}|\psi_{\mathcal{B}} \rangle\otimes|\psi_{h_{+}}\rangle \tag{1}\]
where, the complex matrix coefficients \(C_{\mathcal{B}h_{+}}\) are not necessarily diagonal, thereby permitting possible entanglement between the bulk (\(\mathcal{B}\)) and horizon (boundary) states. The Hamiltonian for the spacetime is assumed to have the structure \(\hat{H}=\hat{H}_{\mathcal{B}}\otimes\mathds{I}_{h_{+}}\oplus\mathds{I}_{ \mathcal{B}}\otimes\hat{H}_{h_{+}}\), i.e., \(\hat{H}_{\mathcal{B}}\) acts only on \(|\psi_{\mathcal{B}}\rangle\in\mathcal{H}_{\mathcal{B}}\), while \(\hat{H}_{h_{+}}\) acts only on \(|\psi_{h_{+}}\rangle\in\mathcal{H}_{h_{+}}\). A third and important assumption is that states of the
black hole Hilbert space \(\mathcal{H}_{\mathcal{B}}\) are _solutions_ of the quantum Hamiltonian constraint : \(\hat{H}_{\mathcal{B}}|\psi_{\mathcal{B}}\rangle=0\).
As a consequence, it follows that the 'average energy' of the system
\[\langle\Psi|\hat{H}|\Psi\rangle=\sum_{h_{+}}D_{h_{+}}\langle\psi_{h_{+}}|\hat{H}_{h_{+}}|\psi_{h_{+}}\rangle\,\qquad D_{h_{+}}=\sum_{\mathcal{B}}|C_{\mathcal{B}h_{+}}|^{2}\,\||\psi_{\mathcal{B}}\rangle\|^{2}. \tag{2}\]
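Spelling out the step behind eq. (2): assuming the bulk states \(\{|\psi_{\mathcal{B}}\rangle\}\) and horizon states \(\{|\psi_{h_{+}}\rangle\}\) form mutually orthogonal families (an assumption implicit in the expansion (1)), the expectation value splits as
\[\langle\Psi|\hat{H}|\Psi\rangle=\sum_{\mathcal{B},h_{+}}|C_{\mathcal{B}h_{+}}|^{2}\left[\langle\psi_{\mathcal{B}}|\hat{H}_{\mathcal{B}}|\psi_{\mathcal{B}}\rangle\,\langle\psi_{h_{+}}|\psi_{h_{+}}\rangle+\||\psi_{\mathcal{B}}\rangle\|^{2}\,\langle\psi_{h_{+}}|\hat{H}_{h_{+}}|\psi_{h_{+}}\rangle\right];\]
the first term vanishes identically by the Hamiltonian constraint \(\hat{H}_{\mathcal{B}}|\psi_{\mathcal{B}}\rangle=0\), and for normalized horizon states the remainder is precisely eq. (2).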
We now consider a canonical ensemble of such spacetimes in equilibrium with a heat bath with an inverse temperature \(\beta\); the canonical partition function is given by the standard definition : \(\mathcal{Z}=Tr\exp-\beta\hat{H}\) where the trace is over all states \(|\Psi\rangle\in\mathcal{H}\). Eqn (2) is now seen to imply that \(\mathcal{Z}=Tr_{h_{+}}\exp-\beta\hat{H}_{h_{+}}\equiv\mathcal{Z}_{h_{+}}(\beta)\). Thus the Hamiltonian constraint reduces the thermodynamics of the system to the thermodynamics of the horizon states which then serve as microstates for computation of the canonical entropy of the system. If these horizon states also diagonalize a suitably defined _area operator_, then the canonical entropy
\[S(\beta)\equiv\left(1-\frac{\partial}{\partial\log\beta}\right)\log\mathcal{Z}_{h_{+}}=S(A_{h_{+}}) \tag{3}\]
Thus, somewhat heuristically, we are led to Bekenstein's contention that black holes must have an entropy (gravitational in character) which is to be a function of the horizon area. Further, he hypothesized [16] that the functional form of this entropy must be _linear_, which when reinforced by Hawking's seminal work on black hole radiance [17], leads to the Bekenstein-Hawking area law for black hole entropy \(S_{BH}(A_{h_{+}})=A_{h_{+}}/4l_{P}^{2}\) with \(l_{P}\) being the Planck length. The veracity of the area law has been verified in ab initio calculations in several serious proposals of quantum gravity, including loop quantum gravity [18] (for four dimensional generic black holes), and for five dimensional extremal black holes in string theory [19]. In the former case however, quantum spacetime fluctuations [23] - [33] lead to a whole slew of quantum corrections to the Bekenstein-Hawking area law, as briefly recapitulated in the next section.
We end this section with the observation that if two black holes, initially far apart, orbit around each other and eventually merge into a remnant black hole with emission of gravitational waves, then, treating this as an isolated system, the thermodynamic second law of entropy increase would imply that
\[S_{bh}(A_{h_{+}})+S_{GW}\geq S_{bh1}(A_{h_{1+}})+S_{bh2}(A_{h_{2+}}). \tag{4}\]
where, \(S_{bh}\) is the entropy of the remnant black hole, while \(S_{bh1},S_{bh2}\) are entropies of the inspiralling ones. This is known as the Generalized Second Law of thermodynamics in a universe where black holes are present and may merge emitting gravitational waves. It is obvious that if \(S_{bh}=S_{BH}\), then (4) is just a simple addendum to Hawking's classical black hole area theorem [14]. However, with quantum spacetime corrections to \(S_{bh}\) beyond the area law, eqn (4) may imply further non-trivial predictions.
## III Quantum spacetime corrections to Bekenstein-Hawking entropy
_Isolated_ horizons [20]-[21], a non-stationary generalization of stationary event horizons, are a particularly useful concept for the ab initio computation of black hole entropy. Classically, the symplectic structure on such horizons is that of an \(SU(2)\) Chern-Simons theory of connections which are pullbacks of the spacetime connection in the first order formulation of general relativity, to the spherical foliation of the horizons. Solder two-forms constructed from bulk densitized triads in the Sen-Ashtekar formulation of general relativity represent sources for the horizon Chern-Simons connection fields. In bulk LQG [22], holonomies of connections along the edges of spin network and the fluxes of the densitized triads through surfaces bounded by the edges, represent the quantum degrees of freedom. The inner boundary of this quantum geometrical structure is a punctured \(S^{2}\) (for non-rotating isolated horizons), with punctures carrying spin deposited on the \(S^{2}\) by bulk spin network edges. In this framework, fluxes are distributional, thus providing 'pointlike' sources for the quantum Chern-Simons field strength. The states on the punctured \(S^{2}\) are the microstates in a microcanonical ensemble, being the states of the \(SU(2)\) Chern-Simons theory coupled to the spins at punctures [23].
The dimensionality of the Hilbert space of these states is itself related to the _number_ of conformal blocks of the conformally invariant \(SU(2)_{k}\) Wess-Zumino-Witten model that exists on a spatial foliation of the isolated horizon with punctures at the location of the sources.
Figure 1: Classical black hole spacetime
For large \(k\), this number can be computed in terms of the spins [23], yielding, for a spin configuration \(j_{1},...j_{P}\)
\[\mathcal{N}(j_{1},\ldots,j_{P})=\prod_{i=1}^{P}\sum_{m_{i}=-j_{i}}^{j_{i}}\left[\delta_{\sum_{n=1}^{P}m_{n},0}-\frac{1}{2}\delta_{\sum_{n=1}^{P}m_{n},-1}-\frac{1}{2}\delta_{\sum_{n=1}^{P}m_{n},1}\right]. \tag{5}\]
The total number of states is given by
\[\mathcal{N}=\sum_{P}\prod_{i=1}^{P}\sum_{j_{i}}\mathcal{N}(j_{1},...j_{P}). \tag{6}\]
Usual Boltzmann entropy is given by \(S=\log\mathcal{N}\), and in the limit of large \(k=A/l_{P}^{2}\), one obtains for the microcanonical entropy of quantum isolated horizons[25]-[33], the result
\[S_{bh}=S_{BH}-\frac{3}{2}\log S_{BH}+\mathcal{O}(S_{BH}^{-1})\, \tag{7}\]
where, \(S_{BH}\equiv A_{h+}/4l_{P}^{2}\) is the semiclassical Bekenstein-Hawking area law for any black hole with \(A_{h+}\) being the cross-sectional area of the horizon, and \(l_{P}=(G\hbar/c^{3})^{1/2}\) is the Planck length. In some of the cited works it has been claimed that the isolated horizon states are those of a \(U(1)\) Chern-Simons theory; however, as shown in ref. [34], taking account of the additional gauge fixing in these papers, the corrections given in (7) remain valid.
## IV Prediction from the generalized second law
We define the remnant Bekenstein-Hawking entropy \(S_{BH}(A_{h_{+}})\equiv S_{BHr}\), and the inspiral black holes have \(S_{BH}(A_{h_{1+}})\equiv S_{BH1}\), \(S_{BH}(A_{h_{2+}})\equiv S_{BH2}\), so that the Generalized Second Law (4) can be re-expressed, including the LQG corrections in (7), as
\[S_{BHr}+S_{GW}-\frac{3}{2}\log S_{BHr}\geq S_{BH1}+S_{BH2}-\frac{3}{2}\log(S_{BH1}S_{BH2}) \tag{8}\]
Defining \(S_{BHi}\equiv S_{BH1}+S_{BH2}\) as the inspiral black hole entropy, and \(\Delta S_{BH}\equiv S_{BHr}-S_{BHi}\) as the change in entropy due to the coalescence, the inequality (8) can be rewritten as
\[\Delta S_{BH}+S_{GW}\geq\log\left(\frac{S_{BH1}S_{BH2}}{S_{BHr}}\right)^{-3/2}. \tag{9}\]
This can be reorganized and expressed in terms of direct measurables
\[S_{BHi}^{-1}\log\left\{\frac{S_{BHi}\left[1-(\delta_{12}S_{BH}/S_{BHi})^{2}\right]}{4\left[1+(\Delta S_{BH}/S_{BHi})\right]}\right\}\geq-\frac{2}{3}\,\frac{\Delta S_{BH}+S_{GW}}{S_{BHi}} \tag{10}\]
where \(\delta_{12}S_{BH}\equiv|S_{BH1}-S_{BH2}|\).
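The passage from (9) to (10) is elementary but worth recording: with \(\delta_{12}S_{BH}\) as defined,
\[S_{BH1}S_{BH2}=\frac{1}{4}\left[S_{BHi}^{2}-(\delta_{12}S_{BH})^{2}\right]\,\qquad S_{BHr}=S_{BHi}\left(1+\frac{\Delta S_{BH}}{S_{BHi}}\right),\]
so that taking the logarithm of the ratio \(S_{BH1}S_{BH2}/S_{BHr}\) in (9) and dividing through by \(S_{BHi}\) yields (10).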
A perusal of the analyses in refs. [11]-[13] reveals that the relative entropy excess \(\Delta S_{BH}/S_{BHi}\in[\Delta_{min}S_{BH}/S_{BHi},\,\Delta_{max}S_{BH}/S_{BHi}]\), where error bars have been taken into account. This enables rewriting (10) as a strict inequality
\[S_{BHi}^{-1}\log\left\{\frac{S_{BHi}\left[1-(\delta_{12}S_{BH}/S_{BHi})^{2}\right]}{4\left[1+(\Delta_{min}S_{BH}/S_{BHi})\right]}\right\}>-\frac{2}{3}\,\frac{\Delta_{max}S_{BH}+S_{GW}}{S_{BHi}} \tag{11}\]
We now make a few approximations: clearly, for BBH mergers like GW150914, the inspiralling black holes are similar, such that \((\delta_{12}S_{BH}/S_{BHi})^{2}\ll 1\); likewise, the analyzed data from refs. [11]-[13] show that \(\Delta_{min}S_{BH}/S_{BHi}\ll 1\). As regards the gravitational wave entropy \(S_{GW}\), a preliminary estimate made in ref. [35] implies that \(S_{GW}/S_{BHi}\ll\Delta_{max}S_{BH}/S_{BHi}\). With these approximations, the inequality (11) reduces to
\[\frac{\log(S_{BHi}/4)}{S_{BHi}}>-\frac{2}{3}\frac{\Delta_{max}S_{BH}}{S_{BHi}}. \tag{12}\]
That this inequality is valid is obvious from the data analyses of refs. [11]-[13]: \(S_{BHi}\gg 4\), ensuring that the _lhs_ is strictly positive. Also, for most data \(\Delta_{max}S_{BH}/S_{BHi}>1\), rendering the _rhs_ strictly negative. Thus, as mentioned earlier, the lower bound on LQG corrections to the BH entropy, derived by substitution in the Generalized Second Law, is entirely consistent with the analyzed LVK data. This is perhaps the first time that a prediction based on a quantum gravity proposal has been fully borne out by LVK data on BBH mergers.
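A quick numerical illustration of (12) is possible with GW150914-like values, treating all three black holes as non-rotating (Schwarzschild), as assumed throughout. The sketch below uses round component masses of \(36\,M_{\odot}\) and \(29\,M_{\odot}\) and a \(62\,M_{\odot}\) remnant rather than the published posteriors, and ignores spins, error bars and \(S_{GW}\), so it only exhibits the sign structure of the two sides of (12).

```python
import numpy as np

G, c, hbar = 6.674e-11, 2.998e8, 1.055e-34        # SI units
M_sun = 1.989e30
l_P2 = G * hbar / c**3                             # Planck length squared

def s_bh(m):
    """Bekenstein-Hawking entropy A/(4 l_P^2) of a Schwarzschild horizon."""
    area = 16.0 * np.pi * (G * m / c**2) ** 2
    return area / (4.0 * l_P2)

s1, s2, sr = s_bh(36 * M_sun), s_bh(29 * M_sun), s_bh(62 * M_sun)
s_i = s1 + s2
lhs = np.log(s_i / 4.0) / s_i          # strictly positive since s_i >> 4
rhs = -(2.0 / 3.0) * (sr - s_i) / s_i  # negative whenever the area grows
print(f"S_BHi ~ {s_i:.2e}, lhs = {lhs:.2e}, rhs = {rhs:.2f}")
print("inequality (12) holds:", lhs > rhs)
```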
Figure 2: Quantum black hole
## V Discussion
We would like to draw attention to the fact that the validity of the bound, seemingly unsurprising, rests entirely on the results presented in refs. [11]-[13]. Had the cited works not verified the Hawking area theorem with the accuracy they have, we would not be able to claim the validity of the inequality (12) on the basis of data. Indeed, the theoretical derivation made no assumptions to this effect.
At this point we should mention that a key assumption regarding comparison with LVK data is that the inspiralling as well as post-merger remnant black holes in a BBH merger are slowly spinning so that the non-rotating approximation is approximately applicable. The LQG corrections to the Bekenstein-Hawking entropy constitute a robust result in the non-rotating regime. For rotating horizons, there are ambiguities in the LQG approach to the calculation of black hole entropy, which are yet to be satisfactorily resolved. It is hoped that once these issues are resolved, a similar consistency with LVK data, as we have sought to present here, will emerge. We hope to report on this in a future publication.
## VI Acknowledgements
I thank Prof. Badri Krishnan and Soumendra Kishore Roy for providing me with refs. [10]-[13].
|
2302.02263 | Vaccination in a two-strain model with cross-immunity and
antibody-dependent enhancement | Dengue and Zika incidence data and the latest research have raised questions
about how dengue vaccine strategies might be impacted by the emergence of Zika
virus. Existing antibodies to one virus might temporarily protect or promote
infection by the other through antibody-dependent enhancement (ADE). With this
condition, understanding the dynamics of propagation of these two viruses is of
great importance when implementing vaccines. In this work, we analyze the
effect of vaccination against one strain, in a two-strain model that accounts
for cross-immunity and ADE. Using basic and invasion reproductive numbers, we
examined the dynamics of the model and provide conditions to ensure the
stability of the disease-free equilibrium. We provide conditions on
cross-immunity, ADE and vaccination rate under which the vaccination could
ensure the global stability of the disease-free equilibrium. The results
indicate scenarios in which vaccination against one strain may improve or
worsen the control of the other, as well as contribute to the eradication or
persistence of one or both viruses in the population. | Lorena C. Bulhosa, Juliane F. Oliveira | 2023-02-04T23:57:16Z | http://arxiv.org/abs/2302.02263v2 | # Vaccination in a two-strain model with cross-immunity and antibody-dependent enhancement
###### Abstract
Dengue and Zika incidence data and the latest research have raised questions about how dengue vaccine strategies might be impacted by the emergence of Zika virus. Existing antibodies to one virus might temporarily protect or promote infection by the other through antibody-dependent enhancement (ADE). With this condition, understanding the dynamics of propagation of these two viruses is of great importance when implementing vaccines. In this work, we analyze the effect of vaccination against one strain, in a two-strain model that accounts for cross-immunity and ADE. Using basic and invasion reproductive numbers, we examined the dynamics of the model and provide conditions to ensure the stability of the disease-free equilibrium. We provide conditions on cross-immunity, ADE and vaccination rate under which the vaccination could ensure the global stability of the disease-free equilibrium. The results indicate scenarios in which vaccination against one strain may improve or worsen the control of the other, as well as contribute to the eradication or persistence of one or both viruses in the population.
_Keywords: Vaccine, cross-immunity, antibody-dependent enhancement, two-strain model, dengue virus, Zika virus._
## 1 Introduction
Dengue and Zika are two important arbovirus affecting humans. Global dengue incidence has increased dramatically, putting about half the world's population at risk. According to one estimate, around 100 to 400 million cases of dengue occur worldwide each year, resulting in around 20,000 deaths [6, 10, 18]. Zika virus (ZIKV) became better known in 2016, when pregnant women pre-exposed to ZIKV infection caused disabilities and microcephaly in newborns. ZIKV has been detected in 89 countries, posing infected individuals at a higher risk for severe neurologic sequelae. Its infections cause symptoms similar to dengue disease, leading to massive misdiagnosis in co-spreading countries [21, 22].
Planning prevention and control strategies to reduce the burden of dengue virus (DENV) and ZIKV is no easy task. Both viruses are mainly transmitted by the Aedes Aegypti mosquito, which is abundant in settings with environmental conditions favorable to its development and proliferation. In addition, DENV transmission is strongly influenced by the dynamic spread of its four serotypes, and it have been affected by the emergence of ZIKV [20]. The level of antibodies against one dengue serotype can cause different body reactions in the case of a secondary infection [13]. Recovery from infection by a dengue serotype provides lifelong immunity to that serotype. However, individuals who later become infected with a different serotype may experience _antibody-dependent enhancement_ (ADE), where antibodies from a previous infection do not protect (in a long term) against a new infection, but increase the individual susceptibility and the risk of severe outcomes [32]. DENV and ZIKV both come from the flavivirus family and are genetically similar. Therefore, the interactions between these viruses could be similar to those between two different dengue virus serotypes [24, 26].
The issues surrounding the interactions between Zika and dengue fever have potential implications for case surveillance and vaccine development [7, 16, 23, 31]. As researchers continue to analyze the data and
the biological basis of their findings, mathematical models provide a tool that can contribute to understand vaccination strategies and its implications on the dynamics of multi-strain circulation [4, 19, 35]. By assuming that the vaccine would be less effective against the serotype with the highest transmission intensity, the authors in [19] found that vaccine may be effective against the weaker strain but contribute to an increase in incidence of secondary infections of the stronger one. However, in the long term, vaccination strategies could reduce the overall proportion of infections, but still with periodic yearly outbreaks of the strong strain.
In [4], ADE effect was studied in a two-sorotype dengue model with vaccination against one and both strains. They concluded that if vaccination is against only one strain, eradication of the other will not be achieved if it was currently in an endemic state. If the population is vaccinated against both strains separately, there are conditions on vaccination rates to ensure the eradication of both diseases. But using dengue parameters, this strategy might not work if a person cannot receive both vaccines. Zika and dengue interaction was investigated in [35]. The authors constructed a vaccination model considering the ADE effect and the possibility of co-infection by both viruses. They analyzed the dynamics of the model through basic and invasion numbers. Their results show a positive vaccination effect in controlling dengue. However, their simulations indicate an increase in Zika incidence due to ADE.
Few works in the literature examine the full specificities of the interaction of dengue and Zika. In this work, we develop a more general approach by modelling the effect of dengue vaccination in a two-strain model, considering both temporary cross-immunity and the ADE effect between strains. After model formulation, we calculate the main equilibria and the basic and invasion reproductive numbers of both viruses. We study the dynamics of the model and provided conditions for the local and global stability of the main equilibria and the persistence of one or both diseases. Finally, the effect of the ADE factor and temporary cross-immunity are examined. Vaccination criteria are established and simulations are performed to illustrate the possible outcomes of vaccination strategies.
## 2 Model formulation
The model describes the circulation of two strains, denoted 1 and 2, with vaccination against strain 1. Note that in the dengue and Zika transmission scenario, the strains can represent the dengue and Zika viruses, respectively. To simplify the model, the mosquito population is not taken into account.
The population is divided into the following groups: individuals susceptible to both strains, \(S\); individuals vaccinated against strain 1, \(V\); individuals infected with strain \(i\) but still susceptible to strain \(j\), \(I_{i}\), for \(i,j=1,2\) and \(i\neq j\); individuals immune to strain \(i\) and with temporary cross-immunity to strain \(j\), \(C_{i}\), for \(i,j=1,2\) and \(i\neq j\); individuals immune to strain 1 and still susceptible to strain 2, vaccinated and unvaccinated, \(R_{v1}\) and \(R_{1}\), respectively; individuals immune to strain 2 and still susceptible to strain 1, \(R_{2}\); individuals immune to strain \(j\) and infected with strain \(i\), \(Y_{i}\), for \(i,j=1,2\) and \(i\neq j\); and individuals immune to both strains, \(R_{12}\). The total population infected by strain \(i\) is denoted \(J_{i}\):
\[J_{i}=I_{i}+Y_{i},\quad i=1,2.\]
Thus, the total population is given by:
\[N(t)=S(t)+V(t)+J_{1}(t)+J_{2}(t)+C_{1}(t)+C_{2}(t)+R_{1}(t)+R_{2}(t)+R_{v1}(t) +R_{12}(t).\]
The flowchart of the model can be seen in Figure 1.
Individuals are born at a constant rate \(\Lambda\) and die at a per capita rate \(\mu\). A fraction \(v\), \(0<v\leq 1\), of the population is vaccinated against strain 1 at birth. The remaining unvaccinated susceptible population becomes infected with virus \(i\) at a per capita rate \(\beta_{i}J_{i}/N\). Individuals infected with virus \(i\) recover at a rate \(\gamma_{i}\). Individuals who recover from infection with virus \(i\) become immune to this virus and have temporary cross-immunity to virus \(j\), for \(i,j=1,2\) and \(i\neq j\). This cross-immunity against virus \(j\) wanes at a per capita rate \(\theta_{j}\). The vaccine's immune response is assumed to also confer temporary cross-immunity to virus 2, which wanes at a per capita rate \(\theta_{v2}\). We assume that, after the loss of cross-immunity to virus \(j\), individuals immune to virus \(i\) may be more or less susceptible to a secondary infection by virus \(j\) due to antibody-dependent enhancement. Thus, unvaccinated individuals are infected at a per capita rate \(\alpha_{j}\beta_{j}J_{j}/N\), for \(j=1,2\), while vaccinated individuals are infected with virus 2 at a per capita rate \(\alpha_{v2}\beta_{2}J_{2}/N\). The parameters \(\alpha_{k}\), for \(k=1,2,v2\), represent the factor that decreases (\(0<\alpha_{k}<1\)) or increases (\(\alpha_{k}>1\)) the susceptibility to secondary infections; if the antibodies have no effect, then \(\alpha_{k}=1\). After recovery from both infections, individuals can no longer become infected. Table 1 summarizes the parameters and compartments of the model.
The following equations, with appropriate initial conditions, represent the disease dynamics model:
\[\frac{dS}{dt} =(1-v)\Lambda-\beta_{1}J_{1}\frac{S}{N}-\beta_{2}J_{2}\frac{S}{N}-\mu S\] \[\frac{dV}{dt} =v\Lambda-(\theta_{v2}+\mu)V\] \[\frac{dI_{1}}{dt} =\beta_{1}J_{1}\frac{S}{N}-(\gamma_{1}+\mu)I_{1}\] \[\frac{dI_{2}}{dt} =\beta_{2}J_{2}\frac{S}{N}-(\gamma_{2}+\mu)I_{2}\] \[\frac{dC_{1}}{dt} =\gamma_{1}I_{1}-(\theta_{2}+\mu)C_{1}\] \[\frac{dC_{2}}{dt} =\gamma_{2}I_{2}-(\theta_{1}+\mu)C_{2}\] \[\frac{dR_{1}}{dt} =\theta_{2}C_{1}-\alpha_{2}\beta_{2}J_{2}\frac{R_{1}}{N}-\mu R_{1}\] \[\frac{dR_{2}}{dt} =\theta_{1}C_{2}-\alpha_{1}\beta_{1}J_{1}\frac{R_{2}}{N}-\mu R_{2}\] \[\frac{dR_{v1}}{dt} =\theta_{v2}V-\alpha_{v2}\beta_{2}J_{2}\frac{R_{v1}}{N}-\mu R_{v1}\] \[\frac{dY_{1}}{dt} =\alpha_{1}\beta_{1}J_{1}\frac{R_{2}}{N}-(\gamma_{1}+\mu)Y_{1}\] \[\frac{dY_{2}}{dt} =\alpha_{2}\beta_{2}J_{2}\frac{R_{1}}{N}+\alpha_{v2}\beta_{2}J_{2}\frac{R_{v1}}{N}-(\gamma_{2}+\mu)Y_{2}\] \[\frac{dR_{12}}{dt} =\gamma_{1}Y_{1}+\gamma_{2}Y_{2}-\mu R_{12}, \tag{1}\]
\begin{table}
\begin{tabular}{c|l}
**Parameter** & **Description (for \(i,j=1,2\))** \\ \hline \(\Lambda\) & Birth rate \\ \(\mu\) & Per capita death rate \\ \(\beta_{i}\) & Transmission rate of virus \(i\) \\ \(\gamma_{i}\) & Per capita recovery rate of infected people with virus \(i\) \\ \(\theta_{i}\) & Per capita loss rate of cross-immunity to virus \(i\) after previous infection with virus \(j\) \\ \(\theta_{v2}\) & Per capita loss rate of cross-immunity to virus 2 obtained by vaccination \\ \(\alpha_{i}\) & ADE factor that can alter the susceptibility of unvaccinated individuals to the virus \(i\) \\ \(\alpha_{v2}\) & ADE factor that can alter the susceptibility of vaccinated individuals to virus 2 \\ \(v\) & Per capita vaccination rate \\
**Compartments** & **Description** \\ \hline \(S\) & Individuals susceptible to both viruses \\ \(V\) & Individuals vaccinated against virus 1 \\ \(I_{i}\) & Individuals with a primary infection by virus \(i\) \\ \(C_{i}\) & Individuals recovered from infection with virus \(i\), with temporary cross-immunity to virus \(j\) \\ \(R_{i}\) & Unvaccinated individuals immune to virus \(i\) and susceptible to virus \(j\) \\ \(R_{v1}\) & Vaccinated individuals, immune to virus 1 and susceptible to virus 2 \\ \(Y_{i}\) & Individuals infected by virus \(i\) and immune to virus \(j\) \\ \(R_{12}\) & Individuals immune to both viruses \\ \hline \end{tabular}
\end{table}
Table 1: Parameters and compartments of the model.
Figure 1: Schematic representation of the infection status due to the concomitant transmission of viruses \(1\) and \(2\), considering that the population is vaccinated against the virus \(1\).
It follows from the equations that
\[\frac{dN(t)}{dt}=\Lambda-\mu N(t).\]
Therefore,
\[\lim_{t\rightarrow+\infty}N(t)=\frac{\Lambda}{\mu}.\]
Then, without loss of generality, we assume that \(N(t)=\Lambda/\mu\), for \(t\geq 0\).
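For readers who want to experiment with the dynamics, system (1) is straightforward to integrate numerically. The sketch below uses SciPy's `solve_ivp`; all parameter values are illustrative assumptions chosen for demonstration, not estimates fitted to dengue or Zika data.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameter values (assumptions for demonstration only).
p = dict(Lam=100.0, mu=1/70, b1=0.6, b2=0.5, g1=0.2, g2=0.2,
         th1=0.5, th2=0.5, thv2=0.5, a1=1.1, a2=1.1, av2=1.1, v=0.4)

def rhs(t, y, p):
    """Right-hand side of system (1); y holds the 12 compartments."""
    S, V, I1, I2, C1, C2, R1, R2, Rv1, Y1, Y2, R12 = y
    N = p['Lam'] / p['mu']        # total population at its limit value
    J1, J2 = I1 + Y1, I2 + Y2     # totals infected with each strain
    f1, f2 = p['b1'] * J1 / N, p['b2'] * J2 / N
    return [(1 - p['v']) * p['Lam'] - (f1 + f2) * S - p['mu'] * S,
            p['v'] * p['Lam'] - (p['thv2'] + p['mu']) * V,
            f1 * S - (p['g1'] + p['mu']) * I1,
            f2 * S - (p['g2'] + p['mu']) * I2,
            p['g1'] * I1 - (p['th2'] + p['mu']) * C1,
            p['g2'] * I2 - (p['th1'] + p['mu']) * C2,
            p['th2'] * C1 - p['a2'] * f2 * R1 - p['mu'] * R1,
            p['th1'] * C2 - p['a1'] * f1 * R2 - p['mu'] * R2,
            p['thv2'] * V - p['av2'] * f2 * Rv1 - p['mu'] * Rv1,
            p['a1'] * f1 * R2 - (p['g1'] + p['mu']) * Y1,
            p['a2'] * f2 * R1 + p['av2'] * f2 * Rv1 - (p['g2'] + p['mu']) * Y2,
            p['g1'] * Y1 + p['g2'] * Y2 - p['mu'] * R12]

N0 = p['Lam'] / p['mu']
y0 = [N0 - 20, 0, 10, 10, 0, 0, 0, 0, 0, 0, 0, 0]  # start with N = Lam/mu
sol = solve_ivp(rhs, (0, 500), y0, args=(p,), rtol=1e-8)
```

The objects `p`, `rhs` and `y0` are reused in the numerical snippets later in the text.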
Since the system variables represent populations, their values must be non-negative and the system solutions must be bounded. Proposition 2.1 establishes the boundedness and positivity of solutions.
**Proposition 2.1**.: _Consider the system of equations (1). Given an initial condition in \(\mathbb{R}^{12}_{+}\), the following conditions hold:_

1. _There exists a unique bounded solution in_ \(\mathbb{R}^{12}_{+}\) _of the system (_1_), for all_ \(t\geq 0\)_;_

2. \(\mathbb{R}^{12}_{+}\) _is positively invariant under the flow of (_1_);_

3. _If_ \(S(0)\) _is strictly positive, then_ \(S(t)\)_,_ \(V(t)\) _and_ \(R_{v1}(t)\) _are strictly positive for all_ \(t>0\)_._
The proof of Proposition 2.1 can be found in Appendix A.
In the remainder of this work, we consider \(S(0)>0\). It follows from the previous Proposition that the model is well-posed in the set
\[\Gamma = \left\{(S,V,I_{1},I_{2},C_{1},C_{2},R_{1},R_{2},R_{v1},Y_{1},Y_{ 2},R_{12})\in\mathbb{R}^{12}_{+};\right.\] \[\left.S+V+I_{1}+I_{2}+C_{1}+C_{2}+R_{1}+R_{2}+R_{v1}+Y_{1}+Y_{2}+ R_{12}=\Lambda/\mu\right\}.\]
## 3 Relevant equilibria and reproduction numbers
In this section, we will find the relevant equilibria of the system from an epidemiological point of view, and calculate the basic and invasion reproduction numbers, which are threshold parameters for the stability of the equilibria.
### Disease-free equilibrium and the basic reproductive number
The disease-free equilibrium (DFE), \(E^{0}\), is the equilibrium point at which there are no infections in the population, that is, where \(I_{1}=I_{2}=Y_{1}=Y_{2}=0\). Thus, \(E^{0}=(S^{0},V^{0},0,0,C_{1}^{0},C_{2}^{0},R_{1}^{0},R_{2}^{0},R_{v1}^{0},0,0,R_{12}^{0})\) has coordinates
\[S^{0}=(1-v)\frac{\Lambda}{\mu},\,V^{0}=\frac{v\mu}{\theta_{v2}+\mu}\frac{ \Lambda}{\mu},\,R_{v1}^{0}=\frac{\theta_{v2}v}{\theta_{v2}+\mu}\frac{\Lambda}{\mu} \tag{2}\]
and \(C_{1}^{0}=C_{2}^{0}=R_{1}^{0}=R_{2}^{0}=R_{12}^{0}=0\).
The basic reproduction number, \(\mathcal{R}_{0}\), is defined as the average number of secondary infections produced when an infectious individual is introduced into a fully susceptible population. Its importance lies in the fact that it is a threshold parameter for the stability of the disease-free equilibrium. In the following, applying the next-generation matrix method [33], we define the basic reproduction number for the system (1).
The vector referring to the compartments with infected individuals, \(x=(J_{1},J_{2})\), satisfies
\[\dot{x}=f(x)-v(x),\]
where \(f\) represents the rate of new infections and \(v\) represents the transfer rate of individuals by other means:
\[f=\left(\begin{array}{c}\frac{\beta_{1}J_{1}S}{N}+\frac{\alpha_{1}\beta_{1}J_{1}R_{2}}{N}\\ \frac{\beta_{2}J_{2}S}{N}+\frac{\beta_{2}J_{2}(\alpha_{2}R_{1}+\alpha_{v2}R_{v1})}{N}\end{array}\right)\text{ and }v=\left(\begin{array}{c}(\gamma_{1}+\mu)J_{1}\\ (\gamma_{2}+\mu)J_{2}\end{array}\right).\]
The matrices \(F\) and \(V\) are the Jacobian matrices of \(f(x)\) and \(v(x)\), respectively, evaluated in the \(E^{0}\):
\[F=\left(\begin{array}{cc}\frac{\beta_{1}S^{0}}{N}&0\\ 0&\frac{\beta_{2}S^{0}}{N}+\frac{\alpha_{v2}\beta_{2}R_{v1}^{0}}{N}\end{array}\right)\text{ and }V=\left(\begin{array}{cc}\gamma_{1}+\mu&0\\ 0&\gamma_{2}+\mu\end{array}\right).\]
We define the basic reproductive number as the spectral radius of the next generation matrix \(FV^{-1}\):
\[\mathcal{R}_{0}=\rho(FV^{-1})=\max\left\{\mathcal{R}_{1},\mathcal{R}_{2} \right\}, \tag{3}\]
where
\[\mathcal{R}_{1}=\frac{\beta_{1}S^{0}}{N(\gamma_{1}+\mu)}=\frac{\beta_{1}}{\gamma_{ 1}+\mu}(1-v) \tag{4}\]
and
\[\mathcal{R}_{2}=\frac{\beta_{2}(S^{0}+\alpha_{v2}R_{v1}^{0})}{N(\gamma_{2}+\mu) }=\frac{\beta_{2}}{\gamma_{2}+\mu}\left[1+v\left(\frac{\alpha_{v2}\theta_{v2}} {\theta_{v2}+\mu}-1\right)\right]. \tag{5}\]
The expression (4) for \(\mathcal{R}_{1}\) is the product of the transmissibility of strain 1, \(\beta_{1}\), the average time an individual spends in the infectious compartment, \(1/(\gamma_{1}+\mu)\), and the fraction of individuals susceptible to this strain (unvaccinated) at the disease-free equilibrium, \(S^{0}/N\). Thus, \(\mathcal{R}_{1}\) represents the average number of new infections caused by an individual infected with strain 1, during their infectious period, when there is no other infectious individual in the population.
The expression (5) for \(\mathcal{R}_{2}\) is the sum of two components. The first is the product of the transmissibility of strain 2, \(\beta_{2}\), the average time an individual spends in the infectious compartment, \(1/(\gamma_{2}+\mu)\), and the fraction of unvaccinated susceptible individuals at the disease-free equilibrium, \(S^{0}/N\). The second is the product of the transmissibility of strain 2, \(\beta_{2}\), the average time an individual spends in the infectious compartment, \(1/(\gamma_{2}+\mu)\), the fraction of vaccinated susceptible individuals at the disease-free equilibrium, \(R_{v1}^{0}/N\), and the susceptibility factor \(\alpha_{v2}\). Just like \(\mathcal{R}_{1}\), the value \(\mathcal{R}_{2}\) represents the average number of new infections caused by an individual infected with strain 2, during their infectious period, when there is no other infectious individual in the population.
**Remark 3.1**.: _In the model without vaccination (\(v=0\)), the basic reproductive number is the maximum between the reproductive numbers of each strain,_
\[\mathcal{R}_{1}^{wv}=\frac{\beta_{1}}{\gamma_{1}+\mu}\text{ and }\mathcal{R}_{2}^{ wv}=\frac{\beta_{2}}{\gamma_{2}+\mu}. \tag{6}\]
**Remark 3.2**.: _Vaccination decreases the value of \(\mathcal{R}_{1}^{wv}\), reducing the number of new infections by strain \(1\). The effect of vaccination on strain \(2\) depends on the vaccine parameters \(\alpha_{v2}\) and \(\theta_{v2}\), related to ADE and to the loss of cross-immunity against strain \(2\)._
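The thresholds (4) and (5) are easy to evaluate numerically; a minimal helper, reusing the illustrative parameter dictionary `p` from the sketch in Section 2:

```python
# Basic reproduction numbers, equations (4) and (5).
def R1_of(p):
    return p['b1'] / (p['g1'] + p['mu']) * (1 - p['v'])

def R2_of(p):
    frac = p['av2'] * p['thv2'] / (p['thv2'] + p['mu'])
    return p['b2'] / (p['g2'] + p['mu']) * (1 + p['v'] * (frac - 1))

R1, R2 = R1_of(p), R2_of(p)
R0 = max(R1, R2)   # spectral radius of F V^{-1}, equation (3)
```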
### Endemic boundary equilibria
In addition to the disease-free equilibrium, we look for two more relevant equilibria on the boundary: the endemic equilibrium with infections only by strain 1, \(E^{1}\), and the endemic equilibrium with infections only by strain 2, \(E^{2}\).
At the equilibrium \(E^{1}\), the values of \(I_{2},C_{2},R_{2},Y_{1},Y_{2}\) and \(R_{12}\) are zero. Then
\[E^{1}=(S^{*},V^{*},I_{1}^{*},0,C_{1}^{*},0,R_{1}^{*},0,R_{v1}^{*},0,0,0), \tag{7}\]
where
\[S^{*}=\frac{(\gamma_{1}+\mu)\Lambda}{\beta_{1}\mu},\quad V^{*}= \frac{v\Lambda}{\theta_{v2}+\mu},\quad I_{1}^{*}=\frac{(1-v)\Lambda}{\gamma_{ 1}+\mu}\left(1-\frac{1}{\mathcal{R}_{1}}\right),\] \[C_{1}^{*}=\frac{\gamma_{1}}{\theta_{2}+\mu}I_{1}^{*},\quad R_{1} ^{*}=\frac{\theta_{2}}{\mu}C_{1}^{*}\quad\text{and}\quad R_{v1}^{*}=\frac{ \theta_{v2}}{\mu}V^{*}.\]
The expression of \(\mathcal{R}_{1}\) is given in (4). Note that the endemic equilibrium \(E^{1}\) exists if and only if \(\mathcal{R}_{1}>1\).
At the equilibrium \(E^{2}\), the values of \(I_{1},C_{1},R_{1}\) and \(Y_{1}\) are zero. Then
\[E^{2}=(S^{*},V^{*},0,I_{2}^{*},0,C_{2}^{*},0,R_{2}^{*},R_{v1}^{*},0,Y_{2}^{*},R _{12}^{*}), \tag{8}\]
where
\[S^{*}=\frac{(1-v)\Lambda}{x+\mu},\quad V^{*}=\frac{v\Lambda}{ \theta_{v2}+\mu},\quad I_{2}^{*}=\frac{(1-v)x\Lambda}{(x+\mu)(\gamma_{2}+\mu)},\quad C_{2}^{*}=\frac{(1-v)x\gamma_{2}\Lambda}{(x+\mu)(\gamma_{2}+\mu)( \theta_{1}+\mu)},\] \[R_{2}^{*}=\frac{(1-v)x\gamma_{2}\theta_{1}\Lambda}{(x+\mu)(\gamma _{2}+\mu)(\theta_{1}+\mu)\mu},\quad R_{v1}^{*}=\frac{v\theta_{v2}\Lambda}{( \theta_{v2}+\mu)(\alpha_{v2}x+\mu)},\] \[Y_{2}^{*}=\frac{v\alpha_{v2}x\theta_{v2}\Lambda}{(\alpha_{v2}x+ \mu)(\theta_{v2}+\mu)(\gamma_{2}+\mu)},\quad R_{12}^{*}=\frac{v\alpha_{v2}x \theta_{v2}\gamma_{2}\Lambda}{(\alpha_{v2}x+\mu)(\theta_{v2}+\mu)(\gamma_{2}+ \mu)\mu},\] \[x=\frac{\beta_{2}\mu(I_{2}^{*}+Y_{2}^{*})}{\Lambda}\]
and \(x\) is a solution of the quadratic equation
\[ax^{2}+bx+c=0, \tag{9}\]
with coefficients \(a,b\) and \(c\) given by
\[a = \alpha_{v2}\] \[b = \mu\alpha_{v2}\left[1-\frac{\beta_{2}(1-v)}{\gamma_{2}+\mu}\right] +\mu\left[1-\frac{\beta_{2}\alpha_{v2}\theta_{v2}v}{(\gamma_{2}+\mu)(\theta_{v2 }+\mu)}\right]\] \[c = \mu^{2}\left(1-\mathcal{R}_{2}\right).\]
If \(\mathcal{R}_{2}\leq 1\), each fraction in the expression of \(b\) is at most one, and they cannot both equal one; therefore, \(b>0\). We also have \(c\geq 0\). Since \(a>0\), equation (9) has no roots with positive real part, which implies that there is no endemic equilibrium of the form \(E^{2}\). Thus, for an equilibrium \(E^{2}\) to exist, we must have \(\mathcal{R}_{2}>1\). In this case, \(c<0\); since the coefficient \(a\) is positive, equation (9) has two real roots, exactly one of which is positive. In summary, if \(\mathcal{R}_{2}>1\), there is a unique endemic equilibrium with infections only by strain 2. The value of \(I_{2}^{*}+Y_{2}^{*}\) at the equilibrium is obtained from the positive solution of equation (9).
The results above give us the following Theorem.
**Theorem 3.3**.: _Let \(\mathcal{R}_{1}\) and \(\mathcal{R}_{2}\) be given in (4) and (5), respectively. The system (1) has an endemic equilibrium with infections caused only by the strain \(1\) if and only if \(\mathcal{R}_{1}>1\). The system (1) has an endemic equilibrium with infections caused only by the strain \(2\) if and only if \(\mathcal{R}_{2}>1\). In both cases the equilibria are unique._
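When \(\mathcal{R}_{2}>1\), the coordinates of \(E^{2}\) can be obtained numerically from the positive root of (9). A small sketch, reusing `p`, `R2_of` and NumPy from the earlier snippets (illustrative only):

```python
# Positive root of the quadratic (9); meaningful only when R2 > 1.
def x_at_E2(p):
    mu, av2, b2 = p['mu'], p['av2'], p['b2']
    g2mu, thv2mu = p['g2'] + mu, p['thv2'] + mu
    a = av2
    b = mu * av2 * (1 - b2 * (1 - p['v']) / g2mu) \
        + mu * (1 - b2 * av2 * p['thv2'] * p['v'] / (g2mu * thv2mu))
    c = mu**2 * (1 - R2_of(p))
    return max(np.roots([a, b, c]).real)   # unique positive root if R2 > 1

x = x_at_E2(p)
J2_star = x * p['Lam'] / (p['b2'] * p['mu'])   # I2* + Y2* at E^2
```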
### Invasion reproduction numbers
Like the basic reproduction number, the invasion reproduction number is a relevant threshold parameter for the analysis of equilibrium stability. It represents the average number of new infections caused by an individual infected with one strain, during its infectious period, in a population that is susceptible to this strain but at the endemic equilibrium of the other strain. This concept is explained, for example, in [8, 33]. The invasion numbers, like the basic reproduction number, are calculated using the next-generation matrix [33].
To define the invasion reproduction number of strain 2 at the equilibrium of strain 1, \(\mathcal{R}_{1}^{2}\), we consider \(x=J_{2}\) and build the matrices \(f(x)\) and \(v(x)\) (which are scalars) of the new infections (caused by strain 2) and of the remaining transfer terms, respectively. The next-generation matrix is \(F_{2}V_{2}^{-1}\), where \(F_{2}\) and \(V_{2}\) are the Jacobian matrices of \(f\) and \(v\), evaluated at the equilibrium \(E^{1}\). Thus,
\[\mathcal{R}_{1}^{2}=\rho(F_{2}V_{2}^{-1})=\frac{\beta_{2}S^{*}}{(\gamma_{2}+ \mu)N}+\frac{\beta_{2}(\alpha_{2}R_{1}^{*}+\alpha_{v2}R_{v1}^{*})}{(\gamma_{2} +\mu)N}, \tag{10}\]
where \(S^{*}\), \(R_{1}^{*}\) and \(R_{v1}^{*}\) are given in the expression of \(E^{1}\) (7).
To interpret \(\mathcal{R}_{1}^{2}\), recall that at the equilibrium \(E^{1}\) an individual infected with strain 2 can infect the unvaccinated susceptible individuals (susceptible to both strains), \(S^{*}\), and the individuals immune to strain 1 who have lost their cross-immunity to strain 2. In the latter case there are two possibilities: individuals who were infected with strain 1, recovered and, after a period, lost the cross-immunity against strain 2, \(R_{1}^{*}\); or individuals who received the vaccine and, after a period, lost the cross-immunity conferred by the vaccine against strain 2, \(R_{v1}^{*}\). The parameter \(\beta_{2}\) is the transmissibility of strain 2, and \(1/(\gamma_{2}+\mu)\) is the duration of the infectious period of an individual infected with strain 2. The parameters \(\alpha_{2}\) and \(\alpha_{v2}\) are the ADE factors that can appear after recovery from an infection with strain 1 or after vaccination, respectively.
Analogously, we calculate the invasion reproduction number of strain 1 at the equilibrium of strain 2:
\[\mathcal{R}_{2}^{1}=\rho(F_{1}V_{1}^{-1})=\frac{\beta_{1}S^{*}}{(\gamma_{1}+ \mu)N}+\frac{\alpha_{1}\beta_{1}R_{2}^{*}}{(\gamma_{1}+\mu)N}, \tag{11}\]
where \(S^{*}\) and \(R_{2}^{*}\) are given in the expression of \(E^{2}\) (8).
At the equilibrium \(E^{2}\), an individual infected with strain 1 can infect the unvaccinated susceptible individuals (susceptible to both strains), \(S^{*}\), and the individuals immune to strain 2 and susceptible to strain 1, \(R_{2}^{*}\). The latter were infected with strain 2, recovered and, after a period, lost the cross-immunity against strain 1. The parameter \(\beta_{1}\) is the transmissibility of strain 1, and \(1/(\gamma_{1}+\mu)\) is the duration of the infectious period of an individual infected with strain 1. The parameter \(\alpha_{1}\) is the ADE factor that can appear after recovery from an infection with strain 2.
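Both invasion numbers can be evaluated directly from the closed-form coordinates of \(E^{1}\) and of \(E^{2}\). A sketch reusing the helpers defined in the previous snippets (again with assumed, illustrative parameter values):

```python
# Invasion reproduction numbers, equations (10) and (11).
def invasion_numbers(p):
    mu = p['mu']
    N = p['Lam'] / mu
    # Coordinates of E^1 from (7):
    S1 = (p['g1'] + mu) * p['Lam'] / (p['b1'] * mu)
    I1 = (1 - p['v']) * p['Lam'] / (p['g1'] + mu) * (1 - 1 / R1_of(p))
    R1s = p['th2'] / mu * p['g1'] / (p['th2'] + mu) * I1
    Rv1 = p['thv2'] / mu * p['v'] * p['Lam'] / (p['thv2'] + mu)
    R12inv = p['b2'] * (S1 + p['a2'] * R1s + p['av2'] * Rv1) \
             / ((p['g2'] + mu) * N)
    # Coordinates of E^2 from (8), via the positive root x of (9):
    x = x_at_E2(p)
    S2 = (1 - p['v']) * p['Lam'] / (x + mu)
    R2s = (1 - p['v']) * x * p['g2'] * p['th1'] * p['Lam'] \
          / ((x + mu) * (p['g2'] + mu) * (p['th1'] + mu) * mu)
    R21inv = p['b1'] * (S2 + p['a1'] * R2s) / ((p['g1'] + mu) * N)
    return R12inv, R21inv
```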
## 4 Stability analysis
In this section, we give results on the stability of the disease-free and endemic equilibria. Before starting this analysis, we comment on the stability of the DFE in the model without vaccination.
### The DFE in the model without vaccination
The model without vaccination [1, 30] describes the dynamics of a population with the circulation of two strains of a virus. In this model, \(v=0\) and the compartments \(V\) and \(R_{v1}\) are not considered. The model is well-posed in the set
\[\Gamma^{wv} = \left\{(S,I_{1},I_{2},C_{1},C_{2},R_{1},R_{2},Y_{1},Y_{2},R_{12}) \in\mathbb{R}_{+}^{10};\right.\] \[\left.S+I_{1}+I_{2}+C_{1}+C_{2}+R_{1}+R_{2}+Y_{1}+Y_{2}+R_{12}= \Lambda/\mu\right\}.\]
The DFE is the point
\[E_{0}^{wv}=\left(\frac{\Lambda}{\mu},0,0,0,0,0,0,0,0,0\right).\]
In [30], we can find the following Theorem.
**Theorem 4.1**.: _Let \(\mathcal{R}_{1}^{wv}\) and \(\mathcal{R}_{2}^{wv}\) be as given in (6). If \(\mathcal{R}_{0}^{wv}=\max\{\mathcal{R}_{1}^{wv},\mathcal{R}_{2}^{wv}\}<1\), then the DFE, \(E_{0}^{wv}\), is locally asymptotically stable. If \(\mathcal{R}_{0}^{wv}>1\), then the DFE is unstable._
Proof.: (Sketch) The proof is obtained by linearizing the system and evaluating at \(E_{0}^{wv}\). In the Jacobian matrix, eight eigenvalues are negative and the signs of the other two are determined by \(\mathcal{R}_{1}^{wv}\) and \(\mathcal{R}_{2}^{wv}\). If \(\mathcal{R}_{0}^{wv}<1\), all eigenvalues are negative. If \(\mathcal{R}_{0}^{wv}>1\), at least one of them is positive.
Next, we will analyze the dynamics of the model with vaccination, given by the system of equations (1).
### Local stability
With the definition of \(\mathcal{R}_{0}\), we prove the first result about the local stability of the disease-free equilibrium.
**Theorem 4.2**.: _Let \(\mathcal{R}_{0}\) be as defined in (3). The disease-free equilibrium of the model (1), \(E^{0}\), is locally asymptotically stable if \(\mathcal{R}_{0}<1\), and unstable if \(\mathcal{R}_{0}>1\)._
Proof.: Linearizing the system (1) at the equilibrium point \(E^{0}\) and calculating the characteristic polynomial, we obtain:
\[P(\lambda) = (\lambda+\theta_{v2}+\mu)(\lambda+\theta_{2}+\mu)(\lambda+\theta _{1}+\mu)(\lambda+\mu)^{5}(\lambda+\gamma_{1}+\mu)(\lambda+\gamma_{2}+\mu)\] \[[(\gamma_{1}+\mu)(\mathcal{R}_{1}-1)-\lambda][(\gamma_{2}+\mu)( \mathcal{R}_{2}-1)-\lambda]\]
If \(\mathcal{R}_{0}=\max\{\mathcal{R}_{1},\mathcal{R}_{2}\}<1\), then all the eigenvalues are negative real numbers and the equilibrium is locally asymptotically stable. If \(\mathcal{R}_{0}>1\), at least one of the eigenvalues is positive and the equilibrium is unstable.
With the definition of the invasion reproduction numbers, we prove the following results about the local stability of the endemic boundary equilibria.
**Theorem 4.3**.: _Let \(\mathcal{R}_{1}\) and \(\mathcal{R}_{1}^{2}\) be as defined in (4) and (10), respectively. Suppose \(\mathcal{R}_{1}>1\). The endemic boundary equilibrium of the model (1), \(E^{1}\), is locally asymptotically stable if \(\mathcal{R}_{1}^{2}<1\), and unstable if \(\mathcal{R}_{1}^{2}>1\)._
Proof.: Linearizing the system (1) at the equilibrium point \(E^{1}\) and calculating the characteristic polynomial, we obtain:
\[P(\lambda) = (-\mu-\lambda)^{3}[-(\theta_{v2}+\mu)-\lambda][-(\theta_{2}+\mu) -\lambda][-(\theta_{1}+\mu)-\lambda][-(\gamma_{1}+\mu)-\lambda][-(\gamma_{2} +\mu)-\lambda]\] \[\left[-\left(\frac{\alpha_{1}\beta_{1}I_{1}^{*}}{N}+\mu\right)- \lambda\right][(\gamma_{2}+\mu)(\mathcal{R}_{1}^{2}-1)-\lambda]Q(\lambda),\]
where
\[Q(\lambda)=\lambda^{2}+b\lambda+c,\]
and the coefficients \(b\) and \(c\) are
\[b=(\gamma_{1}+2\mu)+\frac{\beta_{1}I_{1}^{*}}{N}-\frac{\beta_{1}S^{*}}{N},\]
\[c=\mu(\gamma_{1}+\mu)+\frac{\beta_{1}I_{1}^{*}(\gamma_{1}+\mu)}{N}-\frac{\beta_{ 1}S^{*}\mu}{N}.\]
Substituting the expressions for \(S^{*}\) and \(I_{1}^{*}\), given in (7), into the expressions for \(b\) and \(c\), we have
\[b = \mu+\frac{\beta_{1}\mu(1-v)}{\gamma_{1}+\mu}\left(1-\frac{1}{ \mathcal{R}_{1}}\right),\] \[c = \beta_{1}\mu(1-v)\left(1-\frac{1}{\mathcal{R}_{1}}\right).\]
Since \(\mathcal{R}_{1}>1\), we have \(b>0\) and \(c>0\). Therefore, the two roots of the polynomial \(Q(\lambda)\) have negative real parts. It follows that if \(\mathcal{R}_{1}^{2}<1\), then all roots of \(P(\lambda)\) have negative real parts and the point \(E^{1}\) is locally asymptotically stable, while if \(\mathcal{R}_{1}^{2}>1\), \(P(\lambda)\) has a positive real root and the point \(E^{1}\) is unstable.
**Theorem 4.4**.: _Let \(\mathcal{R}_{2}\) and \(\mathcal{R}_{2}^{1}\) be as defined in (5) and (11), respectively. Suppose \(\mathcal{R}_{2}>1\). The endemic boundary equilibrium of the model (1), \(E^{2}\), is locally asymptotically stable if \(\mathcal{R}_{2}^{1}<1\), and unstable if \(\mathcal{R}_{2}^{1}>1\)._
Proof.: To simplify the calculations, we consider the system (1) with the variables ordered as
\[J_{1},Y_{1},J_{2},Y_{2},S,V,C_{1},C_{2},R_{1},R_{2},R_{v1},R_{12}.\]
Linearizing the system at the equilibrium \(E^{2}\), we have the characteristic polynomial
\[P(\lambda) = -(-\mu-\lambda)^{2}[-(\theta_{v2}+\mu)-\lambda][-(\theta_{2}+\mu )-\lambda][-(\theta_{1}+\mu)-\lambda][-(\gamma_{1}+\mu)-\lambda][-(\gamma_{2}+ \mu)-\lambda]\] \[\left[-\left(\frac{\alpha_{2}\beta_{2}J_{2}^{*}}{N}+\mu\right)- \lambda\right][(\gamma_{1}+\mu)(\mathcal{R}_{2}^{1}-1)-\lambda]Q(\lambda).\]
Using that
\[\beta_{2}S^{*}/N+\alpha_{v2}\beta_{2}R_{v1}^{*}/N-(\gamma_{2}+\mu)=0,\]
\(Q(\lambda)\) is the polynomial \(Q(\lambda)=\lambda^{3}+b\lambda^{2}+c\lambda+d\), with positive coefficients given in Appendix C. The signs of the real parts of the roots of \(Q\) can be studied by the Routh-Hurwitz criterion [2]. The table of the method is
\[\left(\begin{array}{cccc}1&c&0&0\\ b&d&0&0\\ \frac{bc-d}{b}&0&0&0\\ d&0&0&0\end{array}\right).\]
Further calculations show that the first column is positive. Since there is no sign change in this column, by the Routh criterion, the real parts of the roots of \(Q\) are negative.
It follows that the equilibrium \(E^{2}\) is locally asymptotically stable if \(\mathcal{R}_{2}^{1}<1\), and unstable if \(\mathcal{R}_{2}^{1}>1\).
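For a cubic \(Q(\lambda)=\lambda^{3}+b\lambda^{2}+c\lambda+d\), the Routh table used in the proof reduces to the familiar Hurwitz conditions; a generic one-line check (a helper written for this text, not taken from [2]):

```python
# Hurwitz criterion for l^3 + b*l^2 + c*l + d: all roots have negative
# real parts iff b > 0, d > 0 and b*c > d (positivity of c then follows).
def hurwitz_stable_cubic(b, c, d):
    return b > 0 and d > 0 and b * c > d
```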
**Remark 4.5**.: _Note that in the case \(\alpha_{1}\leq 1\), it holds that \(\mathcal{R}_{2}^{1}\leq\mathcal{R}_{1}\) (see Appendix B). That is, if \(\mathcal{R}_{1}<1\), strain \(1\) cannot invade the endemic equilibrium of strain \(2\). If \(\alpha_{1}>1\), even with \(\mathcal{R}_{1}<1\), strain \(1\) may or may not persist. The analogous statement holds for the cases \(\alpha_{2}\leq 1\) and \(\alpha_{2}>1\)._
### Analysis of the subsystems
In this section, we study the dynamics of the two following subsystems, in which there are infections by only one of the strains:
\[\frac{dS}{dt} = (1-v)\Lambda-\frac{\beta_{1}I_{1}S}{N}-\mu S\] \[\frac{dV}{dt} = v\Lambda-(\theta_{v2}+\mu)V\] \[\frac{dI_{1}}{dt} = \frac{\beta_{1}I_{1}S}{N}-(\gamma_{1}+\mu)I_{1}\] \[\frac{dC_{1}}{dt} = \gamma_{1}I_{1}-(\theta_{2}+\mu)C_{1}\] \[\frac{dR_{1}}{dt} = \theta_{2}C_{1}-\mu R_{1}\] \[\frac{dR_{v1}}{dt} = \theta_{v2}V-\mu R_{v1} \tag{12}\]
\[\frac{dS}{dt} = (1-v)\Lambda-\frac{\beta_{2}J_{2}S}{N}-\mu S\] \[\frac{dV}{dt} = v\Lambda-(\theta_{v2}+\mu)V\] \[\frac{dI_{2}}{dt} = \frac{\beta_{2}J_{2}S}{N}-(\gamma_{2}+\mu)I_{2}\] \[\frac{dC_{2}}{dt} = \gamma_{2}I_{2}-(\theta_{1}+\mu)C_{2}\] \[\frac{dR_{2}}{dt} = \theta_{1}C_{2}-\mu R_{2}\] \[\frac{dR_{v1}}{dt} = \theta_{v2}V-\frac{\alpha_{v2}\beta_{2}J_{2}R_{v1}}{N}-\mu R_{v1}\] \[\frac{dY_{2}}{dt} = \frac{\alpha_{v2}\beta_{2}J_{2}R_{v1}}{N}-(\gamma_{2}+\mu)Y_{2}\] \[\frac{dR_{12}}{dt} = \gamma_{2}Y_{2}-\mu R_{12}, \tag{13}\]
defined in the sets
\[\Gamma_{1}=\left\{(S,V,I_{1},C_{1},R_{1},R_{v1})\in\mathbb{R}_{+}^{6};S+V+I_{1 }+C_{1}+R_{1}+R_{v1}=\frac{\Lambda}{\mu}\right\}\]
and
\[\Gamma_{2}=\left\{(S,V,I_{2},C_{2},R_{2},R_{v1},Y_{2},R_{12})\in\mathbb{R}_{+} ^{8};S+V+I_{2}+C_{2}+R_{2}+R_{v1}+Y_{2}+R_{12}=\frac{\Lambda}{\mu}\right\},\]
respectively. The dynamics of the systems (12) and (13) correspond to the dynamics of the full system (1) when strain 2 is extinct and when strain 1 is extinct, respectively. From Proposition 2.1, the systems (12) and (13) are well-defined in the sets \(\Gamma_{1}\) and \(\Gamma_{2}\), respectively. The disease-free equilibria \(E_{1}^{0}\) and \(E_{2}^{0}\) of the subsystems (12) and (13), respectively, correspond to the disease-free equilibrium of the full system, \(E^{0}\). That is, their coordinates \(S\), \(V\) and \(R_{v1}\) equal \(S^{0}\), \(V^{0}\) and \(R_{v1}^{0}\), respectively, given in (2), and the other coordinates are null. The interior equilibrium of each subsystem can also be deduced directly from the endemic boundary equilibria of the full system. The interior equilibrium of the system (12) is \(E_{1}^{1}=(S^{*},V^{*},I_{1}^{*},C_{1}^{*},R_{1}^{*},R_{v1}^{*})\), where the coordinates of \(E_{1}^{1}\) are the positive coordinates given in (7) for \(E^{1}\). The interior equilibrium of the system (13) is \(E_{2}^{2}=(S^{*},V^{*},I_{2}^{*},C_{2}^{*},R_{2}^{*},R_{v1}^{*},Y_{2}^{*},R_{12}^{*})\); in the same way, the coordinates of \(E_{2}^{2}\) are the positive coordinates given in (8) for \(E^{2}\).
Next, we prove the global stability of the disease-free and endemic equilibria of the subsystems (12) and (13). A version of LaSalle's Invariance Principle [27] (see [29], Chap. 2, page 29) is the main tool used in the proofs. Along the way, Lyapunov functions are constructed using combinations of the classical Lyapunov function \(L=x-x^{*}\ln x\), used since the 1980s in ecological models [11], quadratic functions [34], and the methods described in [14].
**Theorem 4.6**.: _If \(\mathcal{R}_{1}\leq 1\), then the disease-free equilibrium, \(E_{1}^{0}\), is globally asymptotically stable for system (12) in \(\Gamma_{1}\)._
Proof.: It is clear that the set \(\Gamma_{1}\) is invariant by the solution of the system.
At the equilibrium \(E_{1}^{0}\), the following holds:
\[(1-v)\Lambda-\mu S^{0}=0. \tag{14}\]
Let \(L\) be the Lyapunov function
\[L(t)=\left(S-S^{0}-S^{0}\ln\frac{S}{S^{0}}\right)+I_{1} \tag{15}\]
in \(G=\{(S,V,I_{1},C_{1},R_{1},R_{v1})\in\Gamma_{1};S>0\}\).
Differentiating \(L\) with respect to \(t\), along solutions of (12), and using equation (14), gives
\[L^{\prime}(t) = (S-S^{0})\left[\frac{(1-v)\Lambda}{S}-\mu-\frac{\beta_{1}I_{1}}{N }\right]+\frac{\beta_{1}I_{1}S}{N}-(\gamma_{1}+\mu)I_{1}\] \[=(1-v)\Lambda(S-S^{0})\left(\frac{1}{S}-\frac{1}{S^{0}}\right)+ \frac{\beta_{1}I_{1}S^{0}}{N}-(\gamma_{1}+\mu)I_{1}\] \[=(1-v)\Lambda\left(2-\frac{S}{S^{0}}-\frac{S^{0}}{S}\right)+I_{1 }(\gamma_{1}+\mu)(\mathcal{R}_{1}-1).\]
We have \(2-S/S^{0}-S^{0}/S\leq 0\), with equality only if \(S=S^{0}\). Since \(\mathcal{R}_{1}\leq 1\), we have \(L^{\prime}(t)\leq 0\) in \(G\).

If \(\mathcal{R}_{1}<1\), then \(L^{\prime}(t)=0\) if and only if \(I_{1}=0\) and \(S=S^{0}\). If \(\mathcal{R}_{1}=1\), then \(L^{\prime}(t)=0\) if and only if \(S=S^{0}\). Note that \(V\) tends to \(V^{0}\) as \(t\) tends to infinity. Also, if \(V=V^{0}\), integrating the equation for \(dR_{v1}/dt\), we have that \(R_{v1}\) tends to \(R_{v1}^{0}\) as \(t\) tends to infinity. Since \(S^{0}+V^{0}+R_{v1}^{0}=\Lambda/\mu\), the largest set invariant under (12) contained in \(E=\{(S,V,I_{1},C_{1},R_{1},R_{v1})\in G;L^{\prime}(t)=0\}\) is the singleton \(\{E_{1}^{0}\}\). Thus, the disease-free equilibrium \(E_{1}^{0}\) is globally asymptotically stable in \(G\), by LaSalle's Invariance Principle [29]. Every orbit of the system (12) starting at a point in \(\Gamma_{1}\) belongs to \(G\) for \(t>0\). Thus, the equilibrium \(E_{1}^{0}\) is globally asymptotically stable in \(\Gamma_{1}\).
**Theorem 4.7**.: _If \(\mathcal{R}_{2}\leq 1\), then the disease-free equilibrium \(E_{2}^{0}\) is globally asymptotically stable for system (13) in \(\Gamma_{2}\)._
Proof.: In this proof, we will use the method in [14].
At the equilibrium \(E_{2}^{0}\), the following hold:
\[(1-v)\Lambda-\mu S^{0}=0\] \[v\Lambda-(\theta_{v2}+\mu)V^{0}=0\] \[\theta_{v2}V^{0}-\mu R_{v1}^{0}=0. \tag{17}\]
Let \(L\) be the Lyapunov function
\[L(t)=\left(S-S^{0}-S^{0}\ln\frac{S}{S^{0}}\right)+\left(V-V^{0}-V^{0}\ln\frac {V}{V^{0}}\right)+\left(R_{v1}-R_{v1}^{0}-R_{v1}^{0}\ln\frac{R_{v1}}{R_{v1}^{ 0}}\right)+I_{2}+Y_{2}\]
defined in \(G=\{(S,V,I_{2},C_{2},R_{2},R_{v1},Y_{2},R_{12})\in\Gamma_{2};S>0,V>0,R_{v1}>0\}.\)
Differentiating \(L(t),\) with respect to \(t,\) along solutions of (13), and using the equations in (17), we have
\[L^{\prime}(t) = (S-S^{0})\left[\frac{(1-v)\Lambda}{S}-\frac{\beta_{2}J_{2}}{N}- \mu\right]+(V-V^{0})\left[\frac{v\Lambda}{V}-(\theta_{v2}+\mu)\right]\] \[+(R_{v1}-R_{v1}^{0})\left(\frac{\theta_{v2}V}{R_{v1}}-\frac{ \alpha_{v2}\beta_{2}J_{2}}{N}-\mu\right)+\frac{\beta_{2}J_{2}S}{N}+\frac{ \alpha_{v2}\beta_{2}J_{2}R_{v1}}{N}-(\gamma_{2}+\mu)J_{2}\] \[= (1-v)\Lambda(S-S^{0})\left(\frac{1}{S}-\frac{1}{S^{0}}\right)+v \Lambda(V-V^{0})\left(\frac{1}{V}-\frac{1}{V^{0}}\right)+\theta_{v2}(R_{v1}- R_{v1}^{0})\left(\frac{V}{R_{v1}}-\frac{V^{0}}{R_{v1}^{0}}\right)\] \[+J_{2}(\gamma_{2}+\mu)(\mathcal{R}_{2}-1)\] \[= F(S,V,R_{v1})+J_{2}(\gamma_{2}+\mu)(\mathcal{R}_{2}-1),\]
where
\[F(S,V,R_{v1})=(1-v)\Lambda(S-S^{0})\left(\frac{1}{S}-\frac{1}{S^{0}}\right)+v \Lambda(V-V^{0})\left(\frac{1}{V}-\frac{1}{V^{0}}\right)+\theta_{v2}(R_{v1}- R_{v1}^{0})\left(\frac{V}{R_{v1}}-\frac{V^{0}}{R_{v1}^{0}}\right).\]
We will show that \(F(S,V,R_{v1})\leq 0\), with equality only if \(S=S^{0}\), \(V=V^{0}\) and \(R_{v1}=R_{v1}^{0}\). To this end, denote \(x=\frac{S}{S^{0}}\), \(y=\frac{V}{V^{0}}\), \(z=\frac{R_{v1}}{R_{v1}^{0}}\). Rewriting \(F(S,V,R_{v1}):=F(x,y,z)\), we have
\[F(x,y,z) = (1-v)\Lambda(x-1)\left(\frac{1}{x}-1\right)+v\Lambda(y-1)\left( \frac{1}{y}-1\right)+\theta_{v2}V^{0}(z-1)\left(\frac{y}{z}-1\right)\] \[= 2(1-v)\Lambda+2v\Lambda+\theta_{v2}V^{0}-(1-v)\Lambda x-(1-v) \Lambda\frac{1}{x}+(-v\Lambda+\theta_{v2}V^{0})y-v\Lambda\frac{1}{y}\] \[-\theta_{v2}V^{0}z-\theta_{v2}V^{0}\frac{y}{z}.\]
Using the method in [14], we rewrite \(F(x,y,z)\) as
\[F(x,y,z)=(1-v)\Lambda\left(2-x-\frac{1}{x}\right)+(v\Lambda-\theta_{v2}V^{0}) \left(2-y-\frac{1}{y}\right)+\theta_{v2}V^{0}\left(3-z-\frac{1}{y}-\frac{y}{z }\right).\]
Lastly, using the two first equations in (17), \(F(x,y,z)\) can be rewritten as
\[F(x,y,z)=\mu S^{0}\left(2-x-\frac{1}{x}\right)+\mu V^{0}\left(2-y-\frac{1}{y} \right)+\mu R_{v1}^{0}\left(3-z-\frac{1}{y}-\frac{y}{z}\right).\]
Since the arithmetic mean is greater than or equal to the geometric mean, \(F(x,y,z)\leq 0\), with equality if and only if \(x=y=z=1\).

Thus, since \(\mathcal{R}_{2}\leq 1\), we have \(L^{\prime}(t)\leq 0\). If \(\mathcal{R}_{2}<1\), then \(L^{\prime}(t)=0\) if and only if \(J_{2}=0\) and \(F(S,V,R_{v1})=0\). If \(\mathcal{R}_{2}=1\), then \(L^{\prime}(t)=0\) if and only if \(F(S,V,R_{v1})=0\). Note that \(S=S^{0}\) for all \(t\) implies \(J_{2}=0\). Thus, the largest set invariant under (13) contained in
\[E = \{(S,V,I_{2},C_{2},R_{2},R_{v1},Y_{2},R_{12})\in G;L^{\prime}(t)=0\}\] \[= \{(S,V,I_{2},C_{2},R_{2},R_{v1},Y_{2},R_{12})\in G;S=S^{0},V=V^{0},R_{v1}=R_{v1}^{0}\}\]
is the singleton \(\{E_{2}^{0}\}\). It follows from LaSalle's Invariance Principle [29] that the equilibrium \(E_{2}^{0}\) is globally asymptotically stable in \(G\). From calculations similar to those in Proposition 2.1, every orbit of (13) belongs to \(G\) for all \(t>0\). Therefore, \(E_{2}^{0}\) is globally asymptotically stable in \(\Gamma_{2}\).
The following theorems give us information about the stability of the interior equilibrium of each subsystem.
**Theorem 4.8**.: _Consider \(\mathcal{R}_{1}>1.\) The equilibrium \(E_{1}^{1}\) is globally asymptotically stable for system (12) in \(\{(S,V,I_{1},C_{1},R_{1},R_{v1})\in\Gamma_{1};I_{1}>0\}.\)_
Proof.: We will use the Lyapunov function described in [34].
At the equilibrium \(E_{1}^{1}\), the following hold:
\[(1-v)\Lambda-\frac{\beta_{1}I_{1}^{*}S^{*}}{N}-\mu S^{*}=0\] \[\frac{\beta_{1}I_{1}^{*}S^{*}}{N}-(\gamma_{1}+\mu)I_{1}^{*}=0. \tag{18}\]
Let \(L\) be the Lyapunov function
\[L(t)=\frac{1}{2}\left[(S-S^{*})+(I_{1}-I_{1}^{*})\right]^{2}+k\left(I_{1}-I_{1 }^{*}-I_{1}^{*}\ln\frac{I_{1}}{I_{1}^{*}}\right),\]
where \(k=\frac{2\mu+\gamma_{1}}{\beta_{1}}\frac{\Lambda}{\mu},\) defined in
\[G=\{(S,V,I_{1},C_{1},R_{1},R_{v1})\in\Gamma_{1};I_{1}>0\}.\]
Differentiating \(L\) with respect to \(t,\) along solutions of (12) gives
\[L^{\prime}(t) = [(S-S^{*})+(I_{1}-I_{1}^{*})][(1-v)\Lambda-\mu S-(\gamma_{1}+\mu )I_{1}]+k(I_{1}-I_{1}^{*})\left[\frac{\beta_{1}S}{N}-(\gamma_{1}+\mu)\right].\]
Using the equations in (18), we have
\[L^{\prime}(t) = -[(S-S^{*})+(I_{1}-I_{1}^{*})][(\gamma_{1}+\mu)(I_{1}-I_{1}^{*})+ \mu(S-S^{*})]+\frac{k\beta_{1}}{N}(I_{1}-I_{1}^{*})(S-S^{*})\] \[= -\mu(S-S^{*})^{2}-(\gamma_{1}+\mu)(I_{1}-I_{1}^{*})^{2}.\]
Thus, \(L^{\prime}(t)\leq 0\), with equality if and only if \(S=S^{*}\) and \(I_{1}=I_{1}^{*}\).

Lastly, since \(\{E_{1}^{1}\}\) is the largest set invariant under (12) contained in

\[\{(S,V,I_{1},C_{1},R_{1},R_{v1})\in G;L^{\prime}(t)=0\}=\{(S,V,I_{1},C_{1},R_{1},R_{v1})\in G;S=S^{*},I_{1}=I_{1}^{*}\},\]

by LaSalle's Invariance Principle [29], the equilibrium \(E_{1}^{1}\) is globally asymptotically stable in \(G\).
**Theorem 4.9**.: _Consider \(\mathcal{R}_{2}>1.\) The equilibrium \(E_{2}^{2}\) is globally asymptotically stable for system (13) in \(\{(S,V,I_{2},C_{2},R_{2},R_{v1},Y_{2},R_{12})\in\Gamma_{2};I_{2}+Y_{2}>0\}.\)_
Proof.: Recall that \(J_{2}=I_{2}+Y_{2}\). At the equilibrium \(E_{2}^{2}\), the following hold:
\[(1-v)\Lambda-\frac{\beta_{2}J_{2}^{*}S^{*}}{N}-\mu S^{*}=0\] \[v\Lambda-(\theta_{v2}+\mu)V^{*}=0\] \[\frac{\beta_{2}J_{2}^{*}S^{*}}{N}+\frac{\alpha_{v2}\beta_{2}J_{2} ^{*}R_{v1}^{*}}{N}-(\gamma_{2}+\mu)J_{2}^{*}=0\] \[\theta_{v2}V^{*}-\frac{\alpha_{v2}\beta_{2}J_{2}^{*}R_{v1}^{*}}{ N}-\mu R_{v1}^{*}=0. \tag{19}\]
Define the Lyapunov function
\[L(t)=\left(S-S^{*}-S^{*}\ln\frac{S}{S^{*}}\right)+\left(V-V^{*}-V^{*}\ln\frac{V}{V ^{*}}\right)+\left(J_{2}-J_{2}^{*}-J_{2}^{*}\ln\frac{J_{2}}{J_{2}^{*}}\right)+ \left(R_{v1}-R_{v1}^{*}-R_{v1}^{*}\ln\frac{R_{v1}}{R_{v1}^{*}}\right)\]
in \(G=\{(S,V,I_{2},C_{2},R_{2},R_{v1},Y_{2},R_{12})\in\Gamma_{2};S>0,V>0,J_{2}>0,R_{v 1}>0\}\).
Differentiating \(L\) along of the solution of (13) and using the equations (19), we have
\[L^{\prime}(t) = (S-S^{*})\left[(1-v)\Lambda\left(\frac{1}{S}-\frac{1}{S^{*}} \right)-\frac{\beta_{2}(J_{2}-J_{2}^{*})}{N}\right]+v\Lambda(V-V^{*})\left( \frac{1}{V}-\frac{1}{V^{*}}\right) \tag{20}\] \[+(J_{2}-J_{2}^{*})\left[\frac{\beta_{2}(S-S^{*})}{N}+\frac{ \alpha_{v2}\beta_{2}(R_{v1}-R_{v1}^{*})}{N}\right]\] \[+(R_{v1}-R_{v1}^{*})\left[\theta_{v2}\left(\frac{V}{R_{v1}}- \frac{V^{*}}{R_{v1}^{*}}\right)-\frac{\alpha_{v2}\beta_{2}(J_{2}-J_{2}^{*})}{ N}\right]\] \[= (1-v)\Lambda(S-S^{*})\left(\frac{1}{S}-\frac{1}{S^{*}}\right)+v \Lambda(V-V^{*})\left(\frac{1}{V}-\frac{1}{V^{*}}\right)+\theta_{v2}(R_{v1}- R_{v1}^{*})\left(\frac{V}{R_{v1}}-\frac{V^{*}}{R_{v1}^{*}}\right)\]
After calculations similar to those in Theorem 4.7, we conclude that the expression in (20), obtained for \(L^{\prime}(t)\), is non-positive. Furthermore, \(L^{\prime}(t)=0\) if and only if \(S=S^{*}\), \(V=V^{*}\) and \(R_{v1}=R_{v1}^{*}\). Thus,
\[E = \{(S,V,I_{2},C_{2},R_{2},R_{v1},Y_{2},R_{12})\in G;L^{\prime}(t)=0\}\] \[= \{(S,V,I_{2},C_{2},R_{2},R_{v1},Y_{2},R_{12})\in G;S=S^{*},V=V^{*},R_{v1}=R_{v1}^{*}\}.\]
The largest set invariant under (13) contained in \(E\) is the singleton \(\{E_{2}^{2}\}\), so the endemic equilibrium \(E_{2}^{2}\) is globally asymptotically stable in \(G\), by LaSalle's Invariance Principle [29]. From Proposition 2.1, every orbit of the system (13) starting at a point in \(\Gamma_{2}\) with \(J_{2}=I_{2}+Y_{2}>0\) belongs to \(G\) for \(t>0\). Thus, the equilibrium \(E_{2}^{2}\) is globally asymptotically stable in \(\{(S,V,I_{2},C_{2},R_{2},R_{v1},Y_{2},R_{12})\in\Gamma_{2};I_{2}+Y_{2}>0\}\).
### Global stability
Next, we establish conditions for the global stability of the DFE.
**Lemma 4.10**.: _Suppose \(J_{1}(0)>0\). Denote \(x=(S,V,I_{1},I_{2},C_{1},C_{2},R_{1},R_{2},R_{v1},Y_{1},Y_{2},R_{12})\in\Gamma\). Denote \(\Sigma(t)=S(t)+I_{1}(t)+I_{2}(t)+C_{2}(t)+Y_{1}(t)+R_{2}(t)\) for \(t\geq 0\). Every orbit of (1) in \(\Gamma\) enters the set_
\[H=\left\{x\in\Gamma;\Sigma\leq\frac{(1-v)\Lambda}{\mu}\right\}, \tag{21}\]
_and \(H\) is positively invariant under the flow of (1)._
Proof.: From the equations of the system, we have
\[\Sigma^{\prime}(t) = (1-v)\Lambda-\mu\Sigma(t)-\gamma_{1}J_{1}(t).\]
Using the Comparison Theorem (Theorem B.1, [29]), \(\Sigma(t)\leq\frac{(1-v)\Lambda}{\mu}\) for all \(t>0\), if
\[\Sigma(0)=S(0)+I_{1}(0)+I_{2}(0)+C_{2}(0)+Y_{1}(0)+R_{2}(0)\leq\frac{(1-v) \Lambda}{\mu}.\]
This implies that \(H\) is positively invariant under the flow of (1).

If \(J_{1}(0)>0\), it follows from the equations (1) that \(J_{1}(t)>0\) for all \(t>0\). If \(J_{1}>0\) and \(\Sigma\geq\frac{(1-v)\Lambda}{\mu}\), then \(\Sigma^{\prime}<0\). Thus, every forward orbit enters \(H\) after a certain time.
With this lemma, we will show the asymptotic stability of the DFE in \(H\).
**Theorem 4.11**.: _Suppose \(\mathcal{R}_{0}\leq 1\) and \(\alpha_{1}\leq\frac{1}{\mathcal{R}_{1}}\). Let \(H\) be as defined in (21). The DFE, \(E^{0}\), is globally asymptotically stable in \(H\)._
Proof.: Let \(L\) be the Lyapunov function, defined in \(H\), by \(L=J_{1}\). Differentiating \(L\), with respect to \(t\), along of solutions of the model, we have
\[L^{\prime}(t) = J_{1}^{\prime}(t)\] \[= J_{1}(\gamma_{1}+\mu)\left[\frac{\beta_{1}S}{(\gamma_{1}+\mu)N}+ \frac{\alpha_{1}\beta_{1}R_{2}}{(\gamma_{1}+\mu)N}-1\right].\]
Suppose that \(\alpha_{1}\leq 1\). In this case,
\[\frac{\beta_{1}S}{(\gamma_{1}+\mu)N}+\frac{\alpha_{1}\beta_{1}R_{2}}{(\gamma_{ 1}+\mu)N}-1\leq\frac{\beta_{1}S}{(\gamma_{1}+\mu)N}+\frac{\beta_{1}R_{2}}{( \gamma_{1}+\mu)N}-1=\mathcal{R}_{1}\frac{S+R_{2}}{S^{0}}-1. \tag{22}\]
Suppose that \(\alpha_{1}>1\). In this case,
\[\frac{\beta_{1}S}{(\gamma_{1}+\mu)N}+\frac{\alpha_{1}\beta_{1}R_{2}}{(\gamma_ {1}+\mu)N}-1\leq\frac{\alpha_{1}\beta_{1}S}{(\gamma_{1}+\mu)N}+\frac{\alpha_{ 1}\beta_{1}R_{2}}{(\gamma_{1}+\mu)N}-1=\alpha_{1}\mathcal{R}_{1}\frac{S+R_{2}} {S^{0}}-1. \tag{23}\]
In the set \(H\), it holds that \(S+I_{1}+I_{2}+C_{2}+Y_{1}+R_{2}\leq\frac{(1-v)\Lambda}{\mu}=S^{0}\). Note that, if \(J_{1}=I_{1}+Y_{1}>0\), then \(S+R_{2}<S^{0}\). Thus, using the hypothesis, in both cases, if \(J_{1}>0\), the expressions (22) and (23) are negative. It follows that \(L^{\prime}(t)\leq 0\), and \(L^{\prime}(t)=0\) if and only if \(J_{1}=0\).
Denote by \(M\) the largest invariant set contained in
\[E = \{(S,V,I_{1},I_{2},C_{1},C_{2},R_{1},R_{2},R_{v1},Y_{1},Y_{2},R_{1 2})\in H;L^{\prime}(t)=0\}\] \[= \{(S,V,I_{1},I_{2},C_{1},C_{2},R_{1},R_{2},R_{v1},Y_{1},Y_{2},R_{ 12})\in H;J_{1}=0\}.\]
It is easy to see that if \(J_{1}=0\), then \(C_{1}\) and \(R_{1}\) tend to zero as \(t\) tends to infinity. Thus,
\[M\subseteq\{(S,V,I_{1},I_{2},C_{1},C_{2},R_{1},R_{2},R_{v1},Y_{1},Y_{2},R_{1 2})\in\Gamma;I_{1}=C_{1}=R_{1}=Y_{1}=0\}.\]
It follows from Theorem 4.7 that \(M=\{E^{0}\}\). Thus, by LaSalle's Invariance Principle [27], the DFE is globally asymptotically stable in \(H\).
Next, we will give other conditions for the global stability of the DFE.
From the equation \(V^{\prime}=v\Lambda-(\theta_{v2}+\mu)V\), we have
\[\lim_{t\rightarrow+\infty}V(t)=\frac{v\Lambda}{\theta_{v2}+\mu}=V^{0}.\]
It is clear that if \(V(0)=V^{0}\), then \(V(t)=V^{0}\) for all \(t\geq 0\).
**Lemma 4.12**.: _Suppose \(J_{2}(0)>0\) and \(V(0)=V^{0}\). Denote \(x=(S,V,I_{1},I_{2},C_{1},C_{2},R_{1},R_{2},R_{v1},Y_{1},Y_{2},R_{12})\in\Gamma\). Denote \(\Sigma(t)=S(t)+I_{1}(t)+I_{2}(t)+C_{1}(t)+R_{1}(t)\) for \(t\geq 0\). Every orbit of (1) in \(\Gamma\) enters the set_
\[H=\left\{x\in\Gamma;R_{v1}\leq\frac{\theta_{v2}v\Lambda}{\mu(\theta_{v2}+\mu) }\text{ and }\Sigma\leq\frac{(1-v)\Lambda}{\mu}\right\}, \tag{24}\]
_and \(H\) is positively invariant under the flow of (1)._
Proof.: Supposing the initial condition \(V(0)=V^{0}\) for the variable \(V\),
\[R_{v1}^{\prime}=\theta_{v2}V^{0}-\alpha_{v2}\beta_{2}J_{2}\frac{R_{v1}}{N}-\mu R _{v1}.\]
If \(J_{2}(0)>0\), it follows from the equations (1) for \(dI_{2}/dt\) and \(dY_{2}/dt\) that \(J_{2}>0\) for \(t>0\). If \(J_{2}>0\) and \(R_{v1}\geq\theta_{v2}v\Lambda/(\mu(\theta_{v2}+\mu))\), then \(R_{v1}^{\prime}<0\) and \(R_{v1}\) decreases to a value smaller than \(\theta_{v2}v\Lambda/(\mu(\theta_{v2}+\mu))\).
From the equations of the system, we have
\[\Sigma^{\prime}(t) = (1-v)\Lambda-\mu\Sigma(t)-\gamma_{2}I_{2}-\frac{\beta_{2}\alpha_{2 }J_{2}R_{1}}{N}.\]
If \(J_{2}(0)>0\), then \(I_{2}>0\) for \(t>0\). Thus, if \(J_{2}>0\) and \(\Sigma\geq\frac{(1-v)\Lambda}{\mu}\), then \(\Sigma^{\prime}<0\) and \(\Sigma\) decreases to a value smaller than \(\frac{(1-v)\Lambda}{\mu}\).
Therefore, every forward orbit of (1) enters \(H\) after a certain time.
Using the Comparison Theorem, \(R_{v1}(t)\leq\theta_{v2}v\Lambda/(\mu(\theta_{v2}+\mu))\) for all \(t>0\), if \(R_{v1}(0)\leq\dfrac{\theta_{v2}v\Lambda}{\mu(\theta_{v2}+\mu)}\). In the same way, \(\Sigma(t)\leq\dfrac{(1-v)\Lambda}{\mu}\) for all \(t>0\), if
\[\Sigma(0)=S(0)+I_{1}(0)+I_{2}(0)+C_{1}(0)+R_{1}(0)\leq\dfrac{(1-v)\Lambda}{\mu}.\]
Thus, \(H\) is positively invariant under the flow of (1).
Next, we will show the global stability of the DFE in the set \(H\), defined in the previous Lemma.
**Theorem 4.13**.: _Suppose \(\mathcal{R}_{0}\leq 1\) and \(\alpha_{2}\leq\dfrac{1}{\mathcal{R}_{2}}\). Suppose also \(V(0)=V^{0}\) and let \(H\) be as defined in (24). The orbits of (1) in \(H\) converge to the DFE, \(E^{0}\)._
Proof.: Let \(L\) be the Lyapunov function, defined in \(H\), by \(L=J_{2}\). Differentiating \(L\), with respect to \(t\), along of solutions of the model, we have
\[L^{\prime}(t) = J_{2}^{\prime}(t)\] \[= J_{2}(\gamma_{2}+\mu)\left[\dfrac{\beta_{2}S}{(\gamma_{2}+\mu)N }+\dfrac{\alpha_{2}\beta_{2}R_{1}}{(\gamma_{2}+\mu)N}+\dfrac{\alpha_{v2}\beta_ {2}R_{v1}}{(\gamma_{2}+\mu)N}-1\right].\]
If \(\alpha_{2}\leq 1\), then
\[\dfrac{\beta_{2}S}{(\gamma_{2}+\mu)N}+\dfrac{\alpha_{2}\beta_{2} R_{1}}{(\gamma_{2}+\mu)N}+\dfrac{\alpha_{v2}\beta_{2}R_{v1}}{(\gamma_{2}+\mu)N} -1 \leq \dfrac{\beta_{2}S}{(\gamma_{2}+\mu)N}+\dfrac{\beta_{2}R_{1}}{( \gamma_{2}+\mu)N}+\dfrac{\alpha_{v2}\beta_{2}R_{v1}}{(\gamma_{2}+\mu)N}-1 \tag{25}\] \[= \dfrac{\beta_{2}(S+R_{1}-S^{0})}{(\gamma_{2}+\mu)N}+\dfrac{ \alpha_{v2}\beta_{2}(R_{v1}-R_{v1}^{0})}{(\gamma_{2}+\mu)N}+\mathcal{R}_{2}-1.\]
If \(\alpha_{2}>1\), then
\[\dfrac{\beta_{2}S}{(\gamma_{2}+\mu)N}+\dfrac{\alpha_{2}\beta_{2} R_{1}}{(\gamma_{2}+\mu)N}+\dfrac{\alpha_{v2}\beta_{2}R_{v1}}{(\gamma_{2}+\mu)N} -1 \leq \dfrac{\alpha_{2}\beta_{2}S}{(\gamma_{2}+\mu)N}+\dfrac{\alpha_{2 }\beta_{2}R_{1}}{(\gamma_{2}+\mu)N}+\dfrac{\alpha_{v2}\beta_{2}R_{v1}}{( \gamma_{2}+\mu)N}-1 \tag{26}\] \[\leq \dfrac{\alpha_{2}\beta_{2}(S+R_{1}-S^{0})}{(\gamma_{2}+\mu)N}+ \dfrac{\alpha_{v2}\beta_{2}(R_{v1}-R_{v1}^{0})}{(\gamma_{2}+\mu)N}+\alpha_{2} \mathcal{R}_{2}-1.\]
Using the hypothesis, we have \(\mathcal{R}_{2}-1\leq 0\) and \(\alpha_{2}\mathcal{R}_{2}-1\leq 0\).
In the set \(H\), \(R_{v1}\leq R_{v1}^{0}\) holds, so

\[\dfrac{\alpha_{v2}\beta_{2}(R_{v1}-R_{v1}^{0})}{(\gamma_{2}+\mu)N}\leq 0.\]
Furthermore, it holds that \(\Sigma=S+I_{1}+I_{2}+C_{1}+R_{1}\leq S^{0}\). If \(J_{2}>0\), it follows from the equation of the system for \(dI_{2}/dt\) that \(I_{2}(t)>0\), and, therefore, \(S+R_{1}<\Sigma\). Thus, if \(J_{2}>0\), then \(S+R_{1}<S^{0}\).

In both cases, we conclude that if \(J_{2}>0\), the expressions (25) and (26) are negative. It follows that \(L^{\prime}(t)\leq 0\), and \(L^{\prime}(t)=0\) if and only if \(J_{2}=0\).
Denote by \(M\) the largest invariant set contained in
\[E = \{(S,V,I_{1},I_{2},C_{1},C_{2},R_{1},R_{2},R_{v1},Y_{1},Y_{2},R_{ 12})\in H;L^{\prime}(t)=0\}\] \[= \{(S,V,I_{1},I_{2},C_{1},C_{2},R_{1},R_{2},R_{v1},Y_{1},Y_{2},R_{ 12})\in H;J_{2}=0\}.\]
It is easy to see that if \(J_{2}=0\), then \(C_{2}\), \(R_{2}\), \(Y_{1}\) and \(R_{12}\) tend to zero as \(t\) tends to infinity. Thus,
\[M\subseteq\{(S,V,I_{1},I_{2},C_{1},C_{2},R_{1},R_{2},R_{v1},Y_{1},Y_{2},R_{ 12})\in\Gamma;I_{2}=C_{2}=R_{2}=Y_{1}=Y_{2}=R_{12}=0\}.\]
It follows from Theorem 4.6 that \(M=\{E^{0}\}\). By LaSalle's Invariance Principle [27], the orbits of (1) in \(H\) converge to the DFE.
In summary, we obtain the following theorem about the global stability of the DFE:
**Theorem 4.14**.: _Suppose \(\mathcal{R}_{0}\leq 1\). Suppose also \(\alpha_{1}\leq\dfrac{1}{\mathcal{R}_{1}}\) or \(\alpha_{2}\leq\dfrac{1}{\mathcal{R}_{2}}\). The DFE, \(E^{0}\), is globally asymptotically stable in \(\Gamma\)._
Proof.: If \(\mathcal{R}_{0}\leq 1\) and \(\alpha_{1}\leq 1/\mathcal{R}_{1}\), the result follows from Lemma 4.10 and Theorem 4.11. If \(\mathcal{R}_{0}\leq 1\) and \(\alpha_{2}\leq 1/\mathcal{R}_{2}\), it follows from Lemma 4.12 and Theorem 4.13. Note that, in Theorem 4.13, the omega-limit set was assumed to lie in a restricted set (where \(V=V^{0}\)), and the equations were analyzed on that set. Since \(E^{0}\) is globally asymptotically stable in this set, we conclude that the asymptotic behavior of the original system is the same (see, for example, Appendix F of [29] on this topic).
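As an illustration of Theorem 4.14 (not part of the proof), one can choose parameters satisfying its hypotheses and verify numerically that the infections die out. The values below are assumptions picked only so that \(\mathcal{R}_{0}\leq 1\) and \(\alpha_{1}\leq 1/\mathcal{R}_{1}\) hold; `rhs`, `p`, `y0`, `R1_of` and `R2_of` come from the earlier sketches.

```python
# Illustrative check of Theorem 4.14: parameters with R0 <= 1 and a1*R1 <= 1.
q = dict(p, v=0.9, b1=0.3, b2=0.2, a1=0.8, a2=0.8, av2=0.8)
assert max(R1_of(q), R2_of(q)) <= 1 and q['a1'] * R1_of(q) <= 1
sol = solve_ivp(rhs, (0, 2000), y0, args=(q,), rtol=1e-8)
I1, I2, Y1, Y2 = sol.y[[2, 3, 9, 10]]
print(I1[-1] + I2[-1] + Y1[-1] + Y2[-1])   # expected: close to zero
```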
Next, we obtain conditions for the global stability of the boundary endemic equilibria.
**Theorem 4.15**.: _Let \(J_{1}(0)>0\). Suppose \(\mathcal{R}_{2}\leq 1\), \(\alpha_{2}\mathcal{R}_{2}\leq 1\) and \(\mathcal{R}_{1}>1\). Then, the solution tends to endemic equilibrium \(E^{1}\)._
Proof.: Suppose \(\mathcal{R}_{2}\leq 1\) and \(\alpha_{2}\mathcal{R}_{2}\leq 1\). Taking the Lyapunov function \(L=J_{2}\) and following the same ideas as in Theorem 4.13, the solution of the system tends to the invariant set \(M\), where \(I_{2}=C_{2}=R_{2}=Y_{1}=Y_{2}=R_{12}=0\). Since \(\mathcal{R}_{1}>1\), it follows from Theorem 4.8 that \(M=\{E^{1}\}\). Thus, by LaSalle's Invariance Principle, the solution tends to \(E^{1}\).
**Theorem 4.16**.: _Let \(J_{2}(0)>0\). Suppose \(\mathcal{R}_{1}\leq 1\), \(\alpha_{1}\mathcal{R}_{1}\leq 1\) and \(\mathcal{R}_{2}>1\). Then, the solution tends to endemic equilibrium \(E^{2}\)._
Proof.: The proof is analogous to that of the previous theorem: first follow the ideas of Theorem 4.11 and then use Theorem 4.9.
**Remark 4.17**.: _Recall from Theorem 4.3 that if \(\mathcal{R}_{2}<1\), \(\mathcal{R}_{1}>1\) and \(\mathcal{R}_{1}^{2}<1\), then the equilibrium \(E^{1}\) is locally asymptotically stable. It is important to note that the conditions \(\mathcal{R}_{2}<1\) and \(\alpha_{2}\mathcal{R}_{2}<1\) imply \(\mathcal{R}_{1}^{2}<1\). In the same way, by Theorem 4.4, if \(\mathcal{R}_{1}<1\), \(\mathcal{R}_{2}>1\) and \(\mathcal{R}_{2}^{1}<1\), then the equilibrium \(E^{2}\) is locally asymptotically stable, and the conditions \(\mathcal{R}_{1}<1\) and \(\alpha_{1}\mathcal{R}_{1}<1\) imply \(\mathcal{R}_{2}^{1}<1\). The calculations are shown in Appendix B._
### Uniform persistence
Here, based on the previous results, we find conditions that ensure the uniform persistence of the system. We use the classical persistence theorem (Theorem 4.3 in [9]), also used in [15, 35].

In the following, denote the boundary and the interior of \(\Gamma\) by \(\partial\Gamma\) and \(\mathring{\Gamma}\), respectively.
**Definition 4.18**.: _The system \(x^{\prime}=f(t,x)\) is uniformly persistent if there is a positive constant \(\epsilon\) such that_

\[\liminf_{t\to\infty}x_{i}(t)\geq\epsilon,\ i=1,...,n,\]

_for every trajectory with positive initial conditions, that is, \(x_{i}(0)>0\), \(i=1,...,n\)._
**Theorem 4.19**.: _Suppose \(\mathcal{R}_{1}>1\), \(\mathcal{R}_{2}>1\), \(\mathcal{R}_{1}^{2}>1\) and \(\mathcal{R}_{2}^{1}>1\). The system (1) is uniformly persistent in \(\mathring{\Gamma}\)._
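Before turning to the proof, note that Theorems 4.14, 4.15, 4.16 and 4.19 together suggest a simple numerical triage of a parameter set. The helper below combines the earlier snippets and reports which sufficient condition, if any, applies; it is an illustrative aid, not a complete classification of the dynamics.

```python
# Illustrative triage by the sufficient conditions of Theorems 4.14-4.16, 4.19.
def regime(p):
    R1, R2 = R1_of(p), R2_of(p)
    if max(R1, R2) <= 1 and (p['a1'] * R1 <= 1 or p['a2'] * R2 <= 1):
        return 'disease-free (Theorem 4.14)'
    if R1 > 1 and R2 <= 1 and p['a2'] * R2 <= 1:
        return 'strain 1 endemic, strain 2 extinct (Theorem 4.15)'
    if R2 > 1 and R1 <= 1 and p['a1'] * R1 <= 1:
        return 'strain 2 endemic, strain 1 extinct (Theorem 4.16)'
    if min(R1, R2) > 1:
        inv12, inv21 = invasion_numbers(p)
        if min(inv12, inv21) > 1:
            return 'both strains persist uniformly (Theorem 4.19)'
    return 'not resolved by the sufficient conditions above'
```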
To prove the Theorem, we use the following Lemmas.
**Lemma 4.20**.: _Suppose \(\mathcal{R}_{1}>1\) and \(\mathcal{R}_{2}>1\). The largest positively invariant set under the flow of (1), contained in \(\partial\Gamma\), is \(\{E^{0}\}\cup\{E^{1}\}\cup\{E^{2}\}\)._
Proof.: Let \(M_{\partial}\), \(M_{\partial 0}\), \(M_{\partial 1}\) and \(M_{\partial 2}\) be the sets

\[M_{\partial} = \{x(0);x(t)\in\partial\Gamma\quad\forall t\geq 0\},\] \[M_{\partial 0} = \{(S,V,I_{1},I_{2},C_{1},C_{2},R_{1},R_{2},R_{v1},Y_{1},Y_{2},R_{12})\in\Gamma;J_{1}=J_{2}=0\},\] \[M_{\partial 1} = \{(S,V,I_{1},I_{2},C_{1},C_{2},R_{1},R_{2},R_{v1},Y_{1},Y_{2},R_{12})\in\Gamma;J_{1}>0\ \text{and}\ J_{2}=0\},\] \[M_{\partial 2} = \{(S,V,I_{1},I_{2},C_{1},C_{2},R_{1},R_{2},R_{v1},Y_{1},Y_{2},R_{12})\in\Gamma;J_{1}=0\ \text{and}\ J_{2}>0\}.\]
It follows from the system (1) that
\[J_{1}(t) = J_{1}(0)e^{-(\gamma_{1}+\mu)t}e^{\int_{0}^{t}[\beta_{1}(S(s)+\alpha_{1}R_{2}(s))/N]ds}\] \[J_{2}(t) = J_{2}(0)e^{-(\gamma_{2}+\mu)t}e^{\int_{0}^{t}[\beta_{2}(S(s)+\alpha_{2}R_{1}(s)+\alpha_{v2}R_{v1}(s))/N]ds}.\]
For \(i=1,2\), it is clear that if \(J_{i}(0)=0\), then \(J_{i}(t)=0\) for all \(t>0\). In the same way, if \(J_{i}(0)>0\), then \(J_{i}(t)>0\) for all \(t>0\). Thus, \(M_{\partial 0}\), \(M_{\partial 1}\) and \(M_{\partial 2}\) are invariant under the flow of (1).
Clearly, \(M_{\partial 0}\cup M_{\partial 1}\cup M_{\partial 2}\subseteq M_{\partial}\). We now show that \(M_{\partial}\subseteq M_{\partial 0}\cup M_{\partial 1}\cup M_{\partial 2}\).
Suppose \(x(0)\in M_{\partial}\). If \(x(0)\) has coordinates satisfying \(J_{1}(0)=J_{2}(0)=0\), then \(x(0)\in M_{\partial 0}\). If \(x(0)\) has coordinates satisfying \(J_{1}(0)>0\) and \(J_{2}(0)=0\), then \(x(0)\in M_{\partial 1}\). If \(x(0)\) has coordinates satisfying \(J_{1}(0)=0\) and \(J_{2}(0)>0\), then \(x(0)\in M_{\partial 2}\).
Finally, suppose \(x(0)\in M_{\partial}\) with coordinates satisfying \(J_{1}(0)>0\) and \(J_{2}(0)>0\). From Proposition 2.1, \(S(t)>0\), \(V(t)>0\) and \(R_{v1}(t)>0\) for \(t>0\). Note that, for \(i=1,2\),
\[I_{i}(t)=I_{i}(0)e^{-(\gamma_{i}+\mu)t}+\int_{0}^{t}\frac{\beta_{i}J_{i}(s)S(s)}{N}e^{-(\gamma_{i}+\mu)(t-s)}ds.\]
Since \(J_{i}(0)>0\), we have \(J_{i}(t)>0\) for \(t\geq 0\). As \(S(t)>0\) for all \(t\), it follows that \(I_{i}(t)>0\) for \(t>0\). Next, consider the equations for \(C_{i}(t)\) and \(R_{i}(t)\):
\[C_{i}(t) = C_{i}(0)e^{-(\theta_{j}+\mu)t}+\int_{0}^{t}\gamma_{i}I_{i}(s)e^{-(\theta_{j}+\mu)(t-s)}ds\] \[R_{i}(t) = R_{i}(0)e^{-\int_{0}^{t}\left[\frac{\alpha_{j}\beta_{j}J_{j}}{N}+\mu\right]ds}+\int_{0}^{t}\theta_{j}C_{i}(s)e^{-\int_{s}^{t}\left[\frac{\alpha_{j}\beta_{j}J_{j}}{N}+\mu\right]du}ds,\]
for \(i,j\in\{1,2\}\), \(i\neq j\). As \(I_{i}(t)>0\) for \(t>0\), then \(C_{i}(t)>0\) for \(t>0\); and as \(C_{i}(t)>0\) for \(t>0\), then \(R_{i}(t)>0\) for \(t>0\).
Lastly, we have
\[Y_{1}(t) = Y_{1}(0)e^{-(\gamma_{1}+\mu)t}+\int_{0}^{t}\frac{\alpha_{1}\beta_{1}J_{1}(s)R_{2}(s)}{N}e^{-(\gamma_{1}+\mu)(t-s)}ds\] \[Y_{2}(t) = Y_{2}(0)e^{-(\gamma_{2}+\mu)t}+\int_{0}^{t}\left(\frac{\alpha_{2}\beta_{2}J_{2}(s)R_{1}(s)}{N}+\frac{\alpha_{v2}\beta_{2}J_{2}(s)R_{v1}(s)}{N}\right)e^{-(\gamma_{2}+\mu)(t-s)}ds\] \[R_{12}(t) = R_{12}(0)e^{-\mu t}+\int_{0}^{t}\left(\gamma_{1}Y_{1}(s)+\gamma_{2}Y_{2}(s)\right)e^{-\mu(t-s)}ds.\]
Since \(J_{1},J_{2},R_{1},R_{2},R_{v1}>0\) for \(t>0\), it follows from the above equations that \(Y_{1},Y_{2}>0\) for \(t>0\), and, therefore, \(R_{12}>0\) for \(t>0\).

We conclude that if \(x(0)\in M_{\partial}\) satisfies \(J_{1}(0)>0\) and \(J_{2}(0)>0\), then all coordinates \(x_{i}(t)\) are positive for \(t>0\). This implies that \(x(t)\notin\partial\Gamma\) for \(t>0\), which contradicts the assumption that \(x(0)\in M_{\partial}\).

Thus, we have proved that \(M_{\partial}\subseteq M_{\partial 0}\cup M_{\partial 1}\cup M_{\partial 2}\), which implies \(M_{\partial}=M_{\partial 0}\cup M_{\partial 1}\cup M_{\partial 2}\).
Now, note that \(E^{0}\) is globally asymptotically stable in \(M_{\partial 0}\). From Theorems 4.8 and 4.9, since \(\mathcal{R}_{1}>1\) and \(\mathcal{R}_{2}>1\), \(E^{1}\) is globally asymptotically stable in \(M_{\partial 1}\) and \(E^{2}\) is globally asymptotically stable in \(M_{\partial 2}\). Thus, the largest positively invariant set in \(\partial\Gamma\) is \(\{E^{0}\}\cup\{E^{1}\}\cup\{E^{2}\}\).
**Lemma 4.21**.: _Suppose \(\mathcal{R}_{1}>1,\)\(\mathcal{R}_{2}>1,\)\(\mathcal{R}_{1}^{2}>1\) and \(\mathcal{R}_{2}^{1}>1.\) Then,_
1. _there is a neighborhood_ \(V_{0}\) _of_ \(E^{0}\) _such that_ \(J_{1}^{\prime}>0\) _and_ \(J_{2}^{\prime}>0\) _for all_ \(x\in V_{0}\setminus\{E^{0}\}\cap\mathring{\Gamma}\)_,_

2. _there is a neighborhood_ \(V_{1}\) _of_ \(E^{1}\) _such that_ \(J_{2}^{\prime}>0\) _for all_ \(x\in V_{1}\setminus\{E^{1}\}\cap\mathring{\Gamma}\)_,_

3. _there is a neighborhood_ \(V_{2}\) _of_ \(E^{2}\) _such that_ \(J_{1}^{\prime}>0\) _for all_ \(x\in V_{2}\setminus\{E^{2}\}\cap\mathring{\Gamma}\)_._
Proof.: (i) Suppose \(\mathcal{R}_{1}>1.\) Let \(\delta_{0}^{1}\) be
\[\delta_{0}^{1}=\frac{(\mathcal{R}_{1}-1)(\gamma_{1}+\mu)N}{2\beta_{1}(1+\alpha_ {1})}>0.\]
Consider a neighborhood \(V_{0}\) of \(E^{0},\) contained in \(\Gamma,\) such that for all \(x\in V_{0},\)\(||x-E^{0}||<\delta_{0}^{1}.\) Thus, we have \(|S-S^{0}|<\delta_{0}^{1}\) and \(|R_{2}|=|R_{2}-R_{2}^{0}|<\delta_{0}^{1}.\)
We have
\[J_{1}^{\prime} = J_{1}(\gamma_{1}+\mu)\left[\frac{\beta_{1}(S-S^{0}+\alpha_{1}R_{2})}{(\gamma_{1}+\mu)N}-1+\mathcal{R}_{1}\right]\] \[> J_{1}(\gamma_{1}+\mu)\left[\frac{-\delta_{0}^{1}\beta_{1}(1+\alpha_{1})}{(\gamma_{1}+\mu)N}-1+\mathcal{R}_{1}\right]\] \[= J_{1}(\gamma_{1}+\mu)\left(\frac{\mathcal{R}_{1}-1}{2}\right).\]
Thus, for \(x\in V_{0}\setminus\{E^{0}\}\cap\mathring{\Gamma}\), we have \(J_{1}^{\prime}>0\), since \(\mathcal{R}_{1}>1\) and \(J_{1}>0\) in \(\mathring{\Gamma}\).
If \(\mathcal{R}_{2}>1\), let \(\delta_{0}^{2}\) be the constant

\[\delta_{0}^{2}=\frac{(\mathcal{R}_{2}-1)(\gamma_{2}+\mu)N}{2\beta_{2}(1+\alpha_{2}+\alpha_{v2})}>0.\]
By the same reasoning, for \(x\in V_{0}\setminus\{E^{0}\}\cap\mathring{\Gamma}\), we have \(J_{2}^{\prime}>0\).
Thus, just take \(\delta_{0}=\min\{\delta_{0}^{1},\delta_{0}^{2}\}\).
(ii) Let \(\delta_{1}\) be
\[\delta_{1}=\frac{(\mathcal{R}_{1}^{2}-1)(\gamma_{2}+\mu)N}{2\beta_{2}(1+ \alpha_{2}+\alpha_{v2})}>0.\]
Consider a neighborhood \(V_{1}\) of \(E^{1}\), contained in \(\Gamma\), such that for all \(x\in V_{1}\), \(||x-E^{1}||<\delta_{1}\). Thus, we have \(|S-S^{*}|<\delta_{1}\), \(|R_{1}-R_{1}^{*}|<\delta_{1}\) and \(|R_{v1}-R_{v1}^{*}|<\delta_{1}\).
We have
\[J_{2}^{\prime} = J_{2}(\gamma_{2}+\mu)\left[\frac{\beta_{2}(S-S^{*}+\alpha_{2}(R _{1}-R_{1}^{*})+\alpha_{v2}(R_{v1}-R_{v1}^{*}))}{(\gamma_{2}+\mu)N}-1+\mathcal{R}_{1}^{2}\right]\] \[> J_{2}(\gamma_{2}+\mu)\left[\frac{-\delta_{1}\beta_{2}(1+\alpha_{2}+\alpha_{v2})}{(\gamma_{2}+\mu)N}-1+\mathcal{R}_{1}^{2}\right]\] \[= J_{2}(\gamma_{2}+\mu)\left(\frac{\mathcal{R}_{1}^{2}-1}{2} \right).\]
Thus, for \(x\in(V_{1}\setminus\{E^{1}\})\cap\mathring{\Gamma}\), we have \(J_{2}^{\prime}>0\), since \(\mathcal{R}_{1}^{2}>1\) and \(J_{2}>0\) in \(\mathring{\Gamma}\).
(iii) Let \(\delta_{2}\) be
\[\delta_{2}=\frac{(\mathcal{R}_{2}^{1}-1)(\gamma_{1}+\mu)N}{2\beta_{1}(1+ \alpha_{1})}>0.\]
Consider a neighborhood \(V_{2}\) of \(E^{2}\), contained in \(\Gamma\), such that for all \(x\in V_{2}\), \(||x-E^{2}||<\delta_{2}\). Thus, we have \(|S-S^{*}|<\delta_{2}\) and \(|R_{2}-R_{2}^{*}|<\delta_{2}\).
We have
\[J_{1}^{\prime} = J_{1}(\gamma_{1}+\mu)\left[\frac{\beta_{1}(S-S^{*}+\alpha_{1}(R _{2}-R_{2}^{*}))}{(\gamma_{1}+\mu)N}-1+\mathcal{R}_{2}^{1}\right]\] \[> J_{1}(\gamma_{1}+\mu)\left[\frac{-\delta_{2}\beta_{1}(1+\alpha_{ 1})}{(\gamma_{1}+\mu)N}-1+\mathcal{R}_{2}^{1}\right]\] \[= J_{1}(\gamma_{1}+\mu)\left(\frac{\mathcal{R}_{2}^{1}-1}{2} \right).\]
Thus, for \(x\in(V_{2}\setminus\{E^{2}\})\cap\mathring{\Gamma}\), we have \(J_{1}^{\prime}>0\), since \(\mathcal{R}_{2}^{1}>1\) and \(J_{1}>0\) in \(\mathring{\Gamma}\).
Proof.: (Theorem 4.19) We will show that system (1) satisfies the hypotheses of Theorem D.2 in [29] (or Theorem 4.3 in [9]). We already know that \(\Gamma\) is positively invariant under the flow of (1).
From Lemma 4.20, the largest invariant set in \(\partial\Gamma\) is \(M=\{E^{0}\}\cup\{E^{1}\}\cup\{E^{2}\}\). Take \(M\) as its own cover. From the proofs of the previous lemmas, each singleton in \(M\) is isolated and \(M\) is acyclic. Thus, Hypothesis (H) of the theorem is satisfied.
From Lemma 4.21, there is a neighborhood \(V_{0}\) of \(E^{0}\), contained in \(\Gamma\), such that \(J_{1}^{\prime}>0\) and \(J_{2}^{\prime}>0\) for all \(x\in V_{0}\setminus\{E^{0}\}\). Since \(J_{1}^{\prime}+J_{2}^{\prime}>0\), at least one among the coordinates \(I_{1},Y_{1},I_{2},Y_{2}\) increases, so a solution with initial condition in \((V_{0}\setminus\{E^{0}\})\cap\mathring{\Gamma}\) moves away from \(E^{0}\). In Lemma 4.21, \(V_{0}\) is the ball \(B(E^{0},\delta_{0})\). Let \(B_{0}\) be the open ball \(B_{0}=B(E^{0},\delta_{0}/2)\). In the same way, a solution with initial condition in \((V_{1}\setminus\{E^{1}\})\cap\mathring{\Gamma}\) moves away from \(E^{1}\), and a solution with initial condition in \((V_{2}\setminus\{E^{2}\})\cap\mathring{\Gamma}\) moves away from \(E^{2}\). Analogously, construct the balls \(B_{1}=B(E^{1},\delta_{1}/2)\) and \(B_{2}=B(E^{2},\delta_{2}/2)\).
Let \(\delta\) be the constant \(\delta=\min\{\delta_{0},\delta_{1},\delta_{2}\}\). A solution with initial condition \(y\in S[\partial\Gamma,\delta]\cap\mathring{\Gamma}\) remains in the interior of the compact set \(\Gamma\setminus(B_{0}\cup B_{1}\cup B_{2})\) for all \(t>t(y)\), for some \(t(y)>0\). Thus, the flow is point dissipative in \(S[\partial\Gamma,\delta]\cap\mathring{\Gamma}\).
It follows that any solution of (1) with initial condition in \(\mathring{\Gamma}\) stays away from the boundary equilibria.
The proof is concluded by observing that the necessary and sufficient condition for uniform persistence in Theorem D.2 is equivalent to the instability of \(E^{0}\), \(E^{1}\) and \(E^{2}\).
Following the steps of the previous theorem, it is possible to find other conditions for the uniform persistence of the system. Here, we will just state the following theorem:
**Theorem 4.22**.: _Assume that one of the following assumptions is valid:_
1. \(\mathcal{R}_{1}>1\)_,_ \(\mathcal{R}_{2}<1\) _and_ \(\mathcal{R}_{1}^{2}>1\)_, or_
2. \(\mathcal{R}_{1}<1\)_,_ \(\mathcal{R}_{2}>1\) _and_ \(\mathcal{R}_{2}^{1}>1\)_._
_Then, the system (1) is uniformly persistent in \(\mathring{\Gamma}\)._
Note that, if hypothesis (i) holds, then there are only two boundary equilibria, \(E^{0}\) and \(E^{1}\); if hypothesis (ii) holds, then there are only two boundary equilibria, \(E^{0}\) and \(E^{2}\). As before, these hypotheses imply the instability of these equilibria.
**Theorem 4.23**.: _Under the hypotheses of Theorem 4.19 or 4.22, the system (1), with initial condition in \(\mathring{\Gamma}\), is uniformly persistent, and there is an endemic equilibrium in \(\mathring{\Gamma}\)._
Proof.: We verify that the hypotheses of Theorem 2.8.6 in [3] are satisfied. We have already proved that the system is uniformly persistent in \(\mathring{\Gamma}\). From this, together with the uniform boundedness of the solutions, there is a compact set \(A\subset\mathring{\Gamma}\) which is an attractor for the flow of (1); namely, for every point \(x\in A\), \(\epsilon\leq x_{i}\leq N-\epsilon\). The attraction region of \(A\) is \(\mathring{\Gamma}\). Thus, from Theorem 2.8.6 of [3], \(A\) contains an equilibrium point. Since \(A\subset\mathring{\Gamma}\), this point is an endemic equilibrium.
In summary, we conclude that the local dynamics is determined by the thresholds \(\mathcal{R}_{1}\), \(\mathcal{R}_{2}\), \(\mathcal{R}_{1}^{2}\) and \(\mathcal{R}_{2}^{1}\). The local stability of the disease-free equilibrium (DFE) and of the endemic equilibria was determined by the basic and invasion reproductive numbers. For \(i=1,2\), we denoted by \(\mathcal{R}_{i}^{wv}\) the basic reproductive number of strain \(i\) in the model without vaccination, and by \(\mathcal{R}_{i}\) that of our model with vaccination. In addition to the basic reproductive numbers, we used the invasion reproductive numbers \(\mathcal{R}_{i}^{j}\), \(i,j\in\{1,2\},i\neq j\). We showed that if \(\mathcal{R}_{1}<1\) and \(\mathcal{R}_{2}<1\), then the DFE is stable; otherwise, it is unstable. For \(i,j\in\{1,2\}\), \(i\neq j\), if \(\mathcal{R}_{i}>1\), \(\mathcal{R}_{j}<1\) and \(\mathcal{R}_{i}^{j}<1\), then the endemic equilibrium \(E^{i}\), which has infections only by strain \(i\), is stable; otherwise, it is unstable. The proofs of global stability were obtained under stronger conditions. Assuming \(\mathcal{R}_{i}<1\) and \(\alpha_{i}\mathcal{R}_{i}<1\), for \(i\in\{1,2\}\), we proved that the solution tends to a set where strain \(i\) is eradicated, that is, \(J_{i}=0\). We emphasize that this hypothesis implies \(\mathcal{R}_{i}<1\) and \(\mathcal{R}_{j}^{i}<1\), for \(j\in\{1,2\}\), \(j\neq i\). If in addition \(\mathcal{R}_{j}<1\), we concluded that the DFE is globally asymptotically stable; and if \(\mathcal{R}_{j}>1\), the endemic equilibrium \(E^{j}\) is globally asymptotically stable. Lastly, we showed that the system is uniformly persistent if \(\mathcal{R}_{i}>1\), \(\mathcal{R}_{j}<1\) and \(\mathcal{R}_{i}^{j}>1\), for \(i,j\in\{1,2\},i\neq j\); or if \(\mathcal{R}_{i}>1\) and \(\mathcal{R}_{i}^{j}>1\), for \(i,j\in\{1,2\},i\neq j\).
## 5 Vaccination rate, temporary cross-immunity and ADE effect
The objective of a vaccination strategy is to reduce the basic reproductive number to a value less than one, so that the number of new infections decreases and the diseases eventually disappear from the population. Depending on the parameters related to the diseases and to the vaccine, the vaccination strategy may or may not eradicate one or both diseases. In the following, we consider \(\mathcal{R}_{1}^{wv}>1\).
### Vaccination strategies
The basic reproductive numbers of the model with and without vaccination are related as
\[\mathcal{R}_{1} =\mathcal{R}_{1}^{wv}(1-v)<1\Longleftrightarrow v>1-\frac{1}{ \mathcal{R}_{1}^{wv}} \tag{27}\] \[\mathcal{R}_{2} =\mathcal{R}_{2}^{wv}[1+v(K-1)]<1\Longleftrightarrow v(K-1)< \frac{1}{\mathcal{R}_{2}^{wv}}-1, \tag{28}\]
where \(K=\frac{\alpha_{v2}\theta_{v2}}{\theta_{v2}+\mu}\). Thus, vaccination is always beneficial for the control of strain \(1\). For the control of strain \(2\), it may or may not be beneficial, depending on the value of \(K=K(\alpha_{v2},\theta_{v2})\).
**Remark 5.1**.: _For any vaccination rate \(v>0\),_
\[\frac{d\mathcal{R}_{2}}{d\alpha_{v2}}=\frac{v\mathcal{R}_{2}^{wv}\theta_{v2}}{ \theta_{v2}+\mu}>0\text{ and }\frac{d\mathcal{R}_{2}}{d(1/\theta_{v2})}=-\frac{v\alpha_{v2}\mathcal{R}_{2}^{ wv}\mu}{(1+\mu/\theta_{v2})^{2}}<0.\]
_Thus, the greater the parameter \(\alpha_{v2}\), the greater the basic reproductive number for strain \(2\), \(\mathcal{R}_{2}\); the greater the cross-immunity period \(1/\theta_{v2}\), the smaller \(\mathcal{R}_{2}\)._
**Remark 5.2**.: _If \(\alpha_{v2}<1+\mu/\theta_{v2}\)\((K<1)\), vaccination is beneficial for the control of strain \(2\), but if \(\alpha_{v2}>1+\mu/\theta_{v2}\)\((K>1)\), vaccination worsens the control of strain \(2\)._
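As a quick numerical illustration of Remarks 5.1 and 5.2, the following Python sketch evaluates \(\mathcal{R}_{2}\) from Eq. (28); the parameter values are assumptions chosen within the ranges of Table 2, not values fixed by the model.

```python
# Sketch: numerical check of Remarks 5.1 and 5.2 (parameter values assumed).
mu = 1.0 / (75 * 52)       # mortality rate [1/week], from Table 2
v, R2_wv = 0.4, 0.8569     # assumed vaccination rate and R_2 without vaccination

def R2(alpha_v2, s):
    """R_2 of Eq. (28) as a function of alpha_v2 and the cross-immunity period s = 1/theta_v2."""
    K = alpha_v2 / (1.0 + mu * s)   # K = alpha_v2*theta_v2/(theta_v2 + mu)
    return R2_wv * (1.0 + v * (K - 1.0))

h = 1e-6
print((R2(1.5 + h, 104.0) - R2(1.5, 104.0)) / h)  # > 0: R_2 grows with alpha_v2 (Remark 5.1)
print((R2(1.5, 104.0 + h) - R2(1.5, 104.0)) / h)  # < 0 and tiny: longer cross-immunity barely lowers R_2
print(R2(1.0, 104.0) < R2_wv)                     # True: here K < 1, so vaccination helps strain 2 (Remark 5.2)
```

The second derivative estimate is negative but of magnitude below \(\mathcal{R}_{2}^{wv}\mu\), consistent with the observation made for Scenario 2 below.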
Consider \(\mathcal{R}_{2}^{wv}<1\). If \(K\leq 1/\mathcal{R}_{2}^{wv}\iff\alpha_{v2}\leq(1+\mu/\theta_{v2})(1/ \mathcal{R}_{2}^{wv})\), then from equation (28), \(\mathcal{R}_{2}<1\) for any vaccination rate. On the other hand, if \(K>1/\mathcal{R}_{2}^{wv}\), the vaccination rate \(v\) must satisfy \(v<\frac{1/\mathcal{R}_{2}^{wv}-1}{K-1}\) to ensure the stability of the DFE. In this case, combining this inequality with equation (27), \(v\) has lower and upper bounds:
\[1-\frac{1}{\mathcal{R}_{1}^{wv}}<v<\frac{1/\mathcal{R}_{2}^{wv}-1}{K-1}.\]
Furthermore, we must have
\[1-\frac{1}{\mathcal{R}_{1}^{wv}}<\frac{1/\mathcal{R}_{2}^{wv}-1}{K-1}\iff \alpha_{v2}<\left(1+\frac{\mu}{\theta_{v2}}\right)\left(1+\frac{1/\mathcal{R }_{2}^{wv}-1}{1-1/\mathcal{R}_{1}^{wv}}\right).\]
Consider \(\mathcal{R}_{2}^{wv}\geq 1\). If \(K<1/\mathcal{R}_{2}^{wv}\), then in addition to equation (27), the vaccination rate \(v\) must satisfy \(v>\frac{1-1/\mathcal{R}_{2}^{wv}}{1-K}\). Otherwise, \(\mathcal{R}_{2}>1\) for any value of \(v\).
In summary, we have the following theorem.
**Theorem 5.3**.: _Consider \(\mathcal{R}_{1}^{wv}>1\) and denote \(v_{1}^{*}=1-\frac{1}{\mathcal{R}_{1}^{wv}}\), \(K=\frac{\alpha_{v2}\theta_{v2}}{\theta_{v2}+\mu}\), \(v_{2}^{*}=\frac{1/\mathcal{R}_{2}^{wv}-1}{K-1}\), \(\alpha_{1}^{*}=\left(1+\frac{\mu}{\theta_{v2}}\right)\frac{1}{\mathcal{R}_{2 }^{wv}}\) and \(\alpha_{2}^{*}=\left(1+\frac{\mu}{\theta_{v2}}\right)\left(\frac{1/ \mathcal{R}_{2}^{wv}-1}{1-1/\mathcal{R}_{1}^{wv}}+1\right)\). About the stability of the DFE, we have:_
1. _Suppose_ \(\mathcal{R}_{2}^{wv}<1\)_._
    1. _If_ \(\alpha_{v2}\leq\alpha_{1}^{*}\)_, the DFE is stable if the vaccination rate_ \(v\) _satisfies_ \(v>v_{1}^{*}\)_._
    2. _If_ \(\alpha_{1}^{*}<\alpha_{v2}<\alpha_{2}^{*}\)_, a vaccination rate_ \(v\) _satisfying_ \(v_{1}^{*}<v<v_{2}^{*}\) _ensures the stability of the DFE._
    3. _If_ \(\alpha_{v2}\geq\alpha_{2}^{*}\)_, then the DFE is unstable regardless of the vaccination rate_ \(v\)_._
2. _Suppose_ \(\mathcal{R}_{2}^{wv}\geq 1\)_._
    1. _If_ \(\alpha_{v2}<\alpha_{1}^{*}\)_, a vaccination rate_ \(v\) _satisfying_ \(v>\max\left\{v_{1}^{*},v_{2}^{*}\right\}\) _ensures the stability of the DFE._
    2. _If_ \(\alpha_{v2}\geq\alpha_{1}^{*}\)_, the DFE is unstable regardless of the vaccination rate_ \(v\)_._
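As an illustration of Theorem 5.3, the sketch below (Python; the function name and the numerical values are illustrative assumptions) evaluates the thresholds and returns the interval of vaccination rates for which the DFE is stable.

```python
# Sketch: regime classification of Theorem 5.3 (assumed parameter values).
mu = 1.0 / (75 * 52)         # [1/week]
theta_v2 = 1.0 / (2 * 52)    # [1/week], i.e., 2 years of cross-immunity
R1_wv, R2_wv = 1.3996, 0.8569

def dfe_window(alpha_v2):
    """Interval of vaccination rates v stabilizing the DFE, or None if impossible."""
    K = alpha_v2 * theta_v2 / (theta_v2 + mu)
    v1 = 1.0 - 1.0 / R1_wv                                     # v_1^*
    a1 = (1.0 + mu / theta_v2) / R2_wv                         # alpha_1^*
    a2 = (1.0 + mu / theta_v2) * (1.0 + (1.0 / R2_wv - 1.0) / (1.0 - 1.0 / R1_wv))  # alpha_2^*
    if R2_wv < 1.0:
        if alpha_v2 <= a1:
            return (v1, 1.0)                                   # case 1(a)
        if alpha_v2 < a2:
            return (v1, (1.0 / R2_wv - 1.0) / (K - 1.0))       # case 1(b), upper bound v_2^*
        return None                                            # case 1(c)
    if alpha_v2 < a1:                                          # case 2(a)
        return (max(v1, (1.0 - 1.0 / R2_wv) / (1.0 - K)), 1.0)
    return None                                                # case 2(b)

print(dfe_window(1.5))   # ~ (0.286, 0.362); cf. Scenario 1, where v = 0.35 stabilizes the DFE
```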
In the previous theorem, we showed conditions on \(\alpha_{v2}\), \(\theta_{v2}\) and \(v\) that ensure \(\mathcal{R}_{0}<1\). In the stability analysis, we saw that the local stability of the endemic equilibria depends on the invasion numbers, as do the persistence conditions for the system. These numbers, in turn, also depend on the parameters \(\alpha\), \(\theta\) and \(v\). An analogous fact was observed in [35], in a model with only the factor \(\alpha\). Next, we perform simulations to illustrate our results as functions of these parameters.
### Numerical simulations
The parameter values were chosen to represent infections by the Zika and dengue viruses and can be seen in Table 2. The transmission rates were calculated to obtain the basic reproductive numbers referenced in the literature. There are not many estimates for the basic reproductive number of Zika; based on the references, we chose two values (one less than and one greater than one) to run the simulations. It was assumed that \(\alpha_{1}=\alpha_{2}=\alpha_{v2}=\alpha\) and \(\theta_{1}=\theta_{2}=\theta_{v2}=\theta\). We also assume \(\mathcal{R}_{1}^{wv}=1.3996>1\). The simulations illustrate the results of Theorems 4.2, 4.3, 4.4, 4.19, 4.22 and 5.3.
#### 5.2.1 Scenario 1
In this scenario, we assume \(\mathcal{R}_{2}^{wv}=0.8569<1\). The period of cross-immunity \(1/\theta\) is assumed to be \(2\) years. Figure 2 shows the regions where the invasion and basic reproductive numbers are greater or less than one, as functions of the parameters \(\alpha\) and \(v\). The curves can be obtained (implicitly or explicitly) from the expressions for \(\mathcal{R}_{1}(v)\), \(\mathcal{R}_{2}(\alpha,v)\), \(\mathcal{R}_{2}^{1}(\alpha,v)\) and \(\mathcal{R}_{1}^{2}(\alpha,v)\).
In accordance with Theorem 5.3 and the comments preceding it, for values of \(\alpha\) below \(\alpha_{1}^{*}\) it is possible to obtain \(\mathcal{R}_{0}<1\) if \(v>v_{1}^{*}\); for values of \(\alpha\) between \(\alpha_{1}^{*}\) and \(\alpha_{2}^{*}\), if \(v\) satisfies \(v_{1}^{*}<v<v_{2}^{*}(\alpha)\), we have \(\mathcal{R}_{0}<1\). Lastly, if \(\alpha>\alpha_{2}^{*}\), it is not possible to obtain \(\mathcal{R}_{2}<1\) and, therefore, \(\mathcal{R}_{0}>1\) for any value of \(v\).
From the theoretical results, in region \(I\) the DFE \(E^{0}\) is stable; in region \(II\) the endemic equilibrium \(E^{1}\) is stable; and in region \(IV\) the endemic equilibrium \(E^{2}\) is stable. In the other regions, \(III\), \(V\) and \(VI\), we proved that the strains coexist. Next, we illustrate this analysis with some examples.
First, suppose \(\alpha=1.5\). Note that \(\alpha_{1}^{*}<\alpha<\alpha_{2}^{*}\). We vary the values of \(v\). Figure 3 shows the solution tending to the equilibrium \(E^{1}\) for \(v=0.2\) (\(v<v_{1}^{*}\)); in this case, \(\mathcal{R}_{1}=1.1197>1\) and \(\mathcal{R}_{1}^{2}=0.9697<1\). Figure 4 shows the solution tending to the equilibrium \(E^{0}\) for \(v=0.35\) (\(v_{1}^{*}<v<v_{2}^{*}\)); in this case, \(\mathcal{R}_{1}=0.9098<1\) and \(\mathcal{R}_{2}=0.9952<1\). Figure 5 shows the solution tending to the equilibrium \(E^{2}\) for \(v=0.5\) (\(v>v_{2}^{*}\)); in this case, \(\mathcal{R}_{2}=1.0545>1\) and \(\mathcal{R}_{2}^{1}=0.7128<1\). The values \(v=0.2\), \(v=0.35\) and \(v=0.5\) correspond to regions \(II\), \(I\) and \(IV\), respectively.
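These reproductive numbers follow directly from Eqs. (27) and (28); a minimal check, assuming the Table 2 values with \(1/\theta=2\) years:

```python
# Sketch: reproducing the reproductive numbers quoted in Scenario 1.
mu, theta = 1.0 / (75 * 52), 1.0 / (2 * 52)
R1_wv, R2_wv = 1.3996, 0.8569

R1 = lambda v: R1_wv * (1.0 - v)                                        # Eq. (27)
R2 = lambda v, a: R2_wv * (1.0 + v * (a * theta / (theta + mu) - 1.0))  # Eq. (28)

print(R1(0.2), R1(0.35), R1(0.5))   # approximately 1.1197, 0.9098, 0.6998
print(R2(0.35, 1.5), R2(0.5, 1.5))  # approximately 0.9952, 1.0545
print(R2(0.5, 3.0))                 # approximately 1.6805
```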
In accordance with Theorem 5.3, these simulations show that the DFE is stable for an intermediate vaccination rate. Now, we observe the effect of the ADE-related parameter \(\alpha\) for a fixed vaccination rate,
\begin{table}
\begin{tabular}{c c c c c}
**Parameter** & **Range** & **Assumed** & **Dimension** & **Reference** \\ \hline
\(N\) & \(-\) & \(2.1\times 10^{8}\) & \(Dimensionless\) & [12] \\
\(\Lambda\) & \(-\) & \(2.1\times 10^{8}\times\frac{1}{75\times 52}\) & \(week^{-1}\) & Calculated \\
\(\mu\) & \(-\) & \(\frac{1}{75\times 52}\) & \(week^{-1}\) & [12] \\
\(\beta_{1}\) & \(1-8\) & \(1.4\) & \(week^{-1}\) & [17] \\
\(\beta_{2}\) & \(1-5\) & \(1.0\) or \(1.3\) & \(week^{-1}\) & [35, 36, 5] \\
\(\gamma_{1}\) & \(-\) & \(\frac{7}{7}\) & \(week^{-1}\) & [6] \\
\(\gamma_{2}\) & \(-\) & \(\frac{7}{6}\) & \(week^{-1}\) & [28] \\
\(\alpha\) & \(0-5\) & \(-\) & \(Dimensionless\) & [37] \\
\(\theta\) & \(\frac{1}{3\times 52}-\frac{1}{52}\) & \(-\) & \(week^{-1}\) & [25] \\ \hline
\end{tabular}
\end{table}
Table 2: Parameters used in the simulations.
Figure 2: Basic and invasion reproductive numbers as a function of parameters \(\alpha\) and \(v\).
\(v=0.5\). We know that if \(v=0.5\), then \(\mathcal{R}_{1}<1\). From Figure 2, for small values of \(\alpha\), we have \(\mathcal{R}_{2}<1\) (region \(I\)). However, as seen, for \(\alpha=1.5\) we have \(\mathcal{R}_{2}>1\) and strain 2 persists (Figure 5, region \(IV\)); in this case \(\mathcal{R}_{2}^{1}<1\). Suppose now \(\alpha=3\) (region \(V\)). Figure 6 shows the persistence of both strains. In this case, \(\mathcal{R}_{1}=0.6998<1\), \(\mathcal{R}_{2}=1.6805>1\) and \(\mathcal{R}_{2}^{1}=1.0031>1\). Here, it is possible to see that although the basic reproductive number of strain 1 is less than one, its invasion reproductive number is greater than one and it can persist in the population. Note that, according to Theorem 4.6, in the absence of strain 2, strain 1 would be eradicated. The high value of \(\alpha\) causes a synergy between the strains, which allows them to coexist.
#### 5.2.2 Scenario 2
Suppose \(\mathcal{R}_{2}^{wv}=1.1140>1\). The parameter related to ADE is assumed to be \(\alpha=1.0\); that is, primary infections neither enhance nor protect against secondary infections. Figure 7a shows the regions where the invasion and basic reproductive numbers are greater or less than one, as functions of the parameters \(1/\theta\) and \(v\). The blue region corresponds to \(\mathcal{R}_{1}<1\), that is, \(v>v_{1}^{*}\). In all of this region, we have \(\mathcal{R}_{2}>1\); the invasion and basic reproductive numbers indicate the persistence of strain 2. This suggests the persistence of strain 2 regardless of the vaccination rate. Nonetheless, from Remark 5.2, with vaccination we expect a decrease in the number of new infections by strain 2.
As stated in Remark 5.1, \(\frac{d\mathcal{R}_{2}}{d(1/\theta)}=-\frac{v\alpha_{2}\mathcal{R}_{2}^{wv}\mu} {(1+\mu/\theta)^{2}}<0\). Nonetheless, since \(|\frac{d\mathcal{R}_{2}}{d(1/\theta)}|\leq\mathcal{R}_{2}^{wv}\mu=\frac{\mathcal{ R}_{2}^{wv}}{75\times 52}\), the derivative, although negative, has a very small absolute value, which explains why the variation in the period of cross-immunity has practically no effect.
#### 5.2.3 Scenario 3
In this last scenario, as before, \(\mathcal{R}_{2}^{wv}=1.1140>1\). Assume \(v=0.5\). This vaccination rate is enough to obtain \(\mathcal{R}_{1}<1\). We analyze whether there are values of \(\alpha\) and \(1/\theta\) such that both diseases can be eradicated. Figure 7b shows the invasion and basic reproductive numbers as functions of \(\alpha\) and \(1/\theta\). It is possible to see that the variation in \(\theta\) has no effect on these reproductive numbers. On the other hand, it is possible to achieve \(\mathcal{R}_{2}<1\) for \(\alpha\) below one. Intermediate values of \(\alpha\) lead to the persistence of strain 2. High values of \(\alpha\) cause synergy between the strains; this synergy allows infections by strain 1, leading to the persistence of both strains in the population despite \(\mathcal{R}_{1}<1\).
From item 2 of Theorem 5.3, to obtain \(\mathcal{R}_{2}<1\) with some vaccination strategy (for some rate \(v\)), we must have \(\alpha<\alpha_{1}^{*}=(1+\mu/\theta)(1/\mathcal{R}_{2}^{wv})\). For the considered values, \((1+\mu/\theta)\approx 1\). If \((1/\mathcal{R}_{2}^{wv})<1\), then \(\alpha_{1}^{*}\) is less than one or approximately one.
## 6 Discussion
Based on the latest findings concerning the Zika and dengue viruses, we analyzed the possible outcomes of a vaccination strategy against one strain in a two-strain model that takes into account temporary cross-immunity and antibody-dependent enhancement (ADE). When studying vaccination strategies, we look for a strategy that reduces the basic reproductive number, \(\mathcal{R}_{0}=\max\{\mathcal{R}_{1},\mathcal{R}_{2}\}\), to a value less than one, expecting that the number of new infections decreases until, eventually, the disease disappears from the population. Supposing vaccination against strain 1, it is important to note that \(\mathcal{R}_{2}\) is increasing as a function of the factor related to ADE (\(\alpha\)) and decreasing as a function of the period of cross-immunity \((1/\theta)\). As we have two strains with some competition and some synergy between them, we expect to reduce \(\mathcal{R}_{1}\) and, if possible, also \(\mathcal{R}_{2}\).
First, we studied the dynamics of the model through the basic and invasion reproductive numbers. We showed, for example, the local stability of the DFE when \(\mathcal{R}_{0}<1\). Global asymptotic stability was proved assuming also \(\alpha_{i}\mathcal{R}_{i}<1\), for \(i=1\) or \(2\). Note that if there is no ADE (\(\alpha\leq 1\)), \(\mathcal{R}_{0}<1\) ensures global stability. We also provided conditions for the stability of the endemic equilibria and for the coexistence of the strains.
Figure 6: Persistence of both strains. (a) Infections by strain \(1\). (b) Infections by strain \(2\).
Then, in Theorem 5.3, we exhibited the vaccination rates needed to obtain \(\mathcal{R}_{1}<1\) and, when possible, \(\mathcal{R}_{2}<1\), as functions of \(\alpha_{v2}\) and \(\theta_{v2}\), the parameters referring to cross-immunity and ADE from vaccination. First, it was assumed that \(\mathcal{R}_{2}^{wv}\) (for the model without vaccination) is less than one. For small values of \(\alpha_{v2}\) (\(\alpha_{v2}<\alpha_{1}^{\star}(\theta_{v2})\)), we found a minimum vaccination rate, \(v_{1}^{\star}\), that ensures the stability of the DFE. For intermediate values of \(\alpha_{v2}\) (\(\alpha_{1}^{\star}(\theta_{v2})<\alpha_{v2}<\alpha_{2}^{\star}(\theta_{v2})\)), a vaccination rate \(v\) with \(v_{1}^{\star}<v<v_{2}^{\star}(\alpha_{v2},\theta_{v2})\) ensures the stability of the DFE. Lastly, for high values of \(\alpha_{v2}\) (\(\alpha_{v2}\geq\alpha_{2}^{\star}(\theta_{v2})\)), it is not possible to eradicate both diseases. In the worst case, with \(\mathcal{R}_{2}^{wv}>1\), for small values of \(\alpha_{v2}\) (\(\alpha_{v2}<\alpha_{1}^{\star}(\theta_{v2})\)), a minimum vaccination rate is required, \(v>\max\{v_{1}^{\star},v_{2}^{\star}(\alpha_{v2},\theta_{v2})\}\). For greater values of \(\alpha_{v2}\) (\(\alpha_{v2}\geq\alpha_{1}^{\star}(\theta_{v2})\)), it is not possible to eradicate both diseases.
The simulations were performed assuming that the period of cross-immunity and the level of cross-susceptibility are the same for both strains, and that the vaccine has the same effect as an infection. The basic and invasion reproductive numbers were analyzed as functions of \(\alpha\), \(\theta\) and \(v\).
We simulated a case where \(\mathcal{R}_{2}^{wv}<1\). The possible outcomes of the vaccination strategies are the persistence of only one of the strains, coexistence, or the eradication of both. We fixed the period of cross-immunity (\(1/\theta\)) equal to 2 years. With the assumed parameters, for \(\alpha<\alpha_{2}^{\star}(\theta)\approx 1.6\), there is a vaccination strategy that ensures the stability of the DFE. Above this value, we have \(\mathcal{R}_{2}>1\), indicating the persistence of strain 2 or of both strains.
We also analyzed a case where \(\mathcal{R}_{2}^{wv}>1\). We assumed \(\alpha=1\), that is, there is no enhancement nor
Figure 7: Basic and invasion reproductive numbers as a function of the parameters (a) \(1/\theta\) and \(v\); (b) \(\alpha\) and \(1/\theta\).
protection from primary infections, and observed whether temporary cross-immunity allows the eradication of strain 2. For a period of cross-immunity in the expected range (1 to 3 years), it is not possible to obtain \(\mathcal{R}_{2}<1\). This suggests the persistence of strain 2 regardless of the vaccination rate.
Lastly, with the same \(\mathcal{R}_{2}^{wv}>1\), we fixed the vaccination rate \(v=0.5\). This vaccination rate ensures the eradication of strain 1. We examined whether there are values of \(\alpha\) and \(\theta\) such that the DFE is stable. The results do not vary much with \(\theta\). The values of \(\alpha\) determine whether \(\mathcal{R}_{0}<1\) or \(\mathcal{R}_{0}>1\). Intermediate values of \(\alpha\) keep \(\mathcal{R}_{2}>1\). High values of \(\alpha\) can cause synergy between the strains, with the persistence of strain 1 despite the vaccination ensuring \(\mathcal{R}_{1}<1\). For the considered values, the threshold for \(\alpha\) that allows \(\mathcal{R}_{2}<1\), \(\alpha<(1+\mu/\theta)\times 1/\mathcal{R}_{2}^{wv}\), is below one. Note that if the average life expectancy is much greater than the period of cross-immunity \((1/\mu\gg 1/\theta)\), then \(1+\mu/\theta\approx 1\).
In the case \(\mathcal{R}_{2}^{wv}>1\), even when it is not possible to obtain \(\mathcal{R}_{2}<1\), with vaccination we can expect a decrease in the number of new infections by strain 2. For this, we must have \(\alpha<1+\mu/\theta\). With the parameters used, this bound is greater than one, but very close to one. For example, for a cross-immunity period of 2 years and an average life expectancy of 75 years, this bound is 1.03.
These results indicate that vaccination may or may not be beneficial for the control of strain 2. If strain 2 has a basic reproductive number less than one, cross-immunity can contribute to the eradication of both strains. If, on the other hand, the basic reproductive number is greater than one, the presence or absence of antibody-dependent enhancement can determine whether strain 2 is eradicated.
## 7 Acknowledgement
Lorena C. Bulhosa was supported by a grant from the Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq) of Brazil [proc. 141180/2017-0]. Juliane F. Oliveira was supported by a grant from the Oswaldo Cruz Foundation [grant number VPGDI-050-FIO-20-2-10, 2022]. The funders of the study had no role in the study design, data collection, data analysis, data interpretation, or the writing of the manuscript.
## Appendix A Proof of Proposition 2.1
Proof: Consider \(x=(S,V,I_{1},I_{2},C_{1},C_{2},R_{1},R_{2},R_{v1},Y_{1},Y_{2},R_{12})\) and suppose that \(x(0)\geq 0\). Then for any \(t>0\), we have
\[S(t) = S(0)e^{-\int_{0}^{t}(\beta_{1}J_{1}(s)/N+\beta_{2}J_{2}(s)/N+ \mu)ds}+(1-v)\Lambda\int_{0}^{t}e^{-\int_{s}^{t}(\beta_{1}J_{1}(u)/N+\beta_{2} J_{2}(u)/N+\mu)du}ds\geq 0\] \[V(t) = \left[V(0)-\frac{\Lambda v}{\theta_{v2}+\mu}\right]e^{-(\theta_{ v2}+\mu)t}+\frac{\Lambda v}{\theta_{v2}+\mu}\geq 0\] \[R_{v1}(t) = R_{v1}(0)e^{-\int_{0}^{t}(\alpha_{v2}\beta_{2}J_{2}(s)/N+\mu)ds} +\theta_{v2}\int_{0}^{t}V(s)e^{-\int_{s}^{t}(\alpha_{v2}\beta_{2}J_{2}(u)/N+ \mu)du}ds\geq 0.\] \[J_{1}(t) = J_{1}(0)e^{-(\gamma_{1}+\mu)t}e^{\int_{0}^{t}\beta_{1}(S(s)+ \alpha_{1}R_{2}(s))/N\,ds}\geq 0\] \[J_{2}(t) = J_{2}(0)e^{-(\gamma_{2}+\mu)t}e^{\int_{0}^{t}\beta_{2}(S(s)+\alpha _{2}R_{1}(s)+\alpha_{v2}R_{v1}(s))/N\,ds}\geq 0.\] \[I_{i}(t) = I_{i}(0)e^{-(\gamma_{i}+\mu)t}+\int_{0}^{t}\frac{\beta_{i}J_{ i}(s)S(s)}{N}e^{-(\gamma_{i}+\mu)(t-s)}ds\geq 0,\quad i\in\{1,2\}\] \[C_{i}(t) = C_{i}(0)e^{-(\theta_{j}+\mu)t}+\int_{0}^{t}\gamma_{i}I_{i}(s)e^{ -(\theta_{j}+\mu)(t-s)}ds\geq 0,\quad i,j\in\{1,2\},i\neq j\] \[R_{i}(t) = R_{i}(0)e^{-\int_{0}^{t}(\alpha_{j}\beta_{j}J_{j}(s)/N+\mu)ds}+ \theta_{j}\int_{0}^{t}C_{i}(s)e^{-\int_{s}^{t}(\alpha_{j}\beta_{j}J_{j}(u)/N+ \mu)du}ds\geq 0,\quad i,j\in\{1,2\},i\neq j\] \[Y_{1}(t) = Y_{1}(0)e^{-(\gamma_{1}+\mu)t}+\int_{0}^{t}\frac{\alpha_{1} \beta_{1}J_{1}(s)R_{2}(s)}{N}e^{-(\gamma_{1}+\mu)(t-s)}ds\geq 0\] \[Y_{2}(t) = Y_{2}(0)e^{-(\gamma_{2}+\mu)t}+\int_{0}^{t}\left(\frac{\alpha_{2} \beta_{2}J_{2}(s)R_{1}(s)}{N}+\frac{\alpha_{v2}\beta_{2}J_{2}(s)R_{v1}(s)}{N} \right)e^{-(\gamma_{2}+\mu)(t-s)}ds\geq 0\] \[R_{12}(t) = R_{12}(0)e^{-\mu t}+\int_{0}^{t}\left(\gamma_{1}Y_{1}(s)+\gamma_{ 2}Y_{2}(s)\right)e^{-\mu(t-s)}ds\geq 0.\]
In particular, if \(S(0)>0\), then \(S(t)\), \(V(t)\) and \(R_{v1}(t)\) are strictly positive for all \(t>0\). Thus, the invariance of \(\mathbb{R}_{+}^{12}\) under the flow follows directly from the equations in (1).
Since we assumed that the total population is constant and equal to \(\Lambda/\mu\), together with the invariance of \(\mathbb{R}^{12}_{+}\), we conclude that the solutions are bounded.
Lastly, given an initial condition in \(\mathbb{R}^{12}_{+}\), the existence and uniqueness of solutions follow from the fact that the vector field is continuous and Lipschitz in \(\mathbb{R}^{12}_{+}\).
## Appendix B Calculations of Remark 4.5
\[\mathcal{R}^{2}_{1} = \frac{\beta_{2}S^{*}}{(\gamma_{2}+\mu)N}+\frac{\beta_{2}(\alpha_ {2}R^{*}_{1}+\alpha_{v2}R^{*}_{v1})}{(\gamma_{2}+\mu)N}=\frac{\beta_{2}}{ \gamma_{2}+\mu}\frac{S^{*}+\alpha_{2}R^{*}_{1}+\alpha_{v2}R^{*}_{v1}}{N}.\]
If \(\alpha_{2}\leq 1\), the above expression is less than or equal to
\[\frac{\beta_{2}}{\gamma_{2}+\mu}\frac{S^{*}+R^{*}_{1}+\alpha_{v2}R^{*}_{v1}}{N}.\]
If \(\alpha_{2}>1\), the above expression is less than or equal to
\[\frac{\alpha_{2}\beta_{2}}{\gamma_{2}+\mu}\frac{S^{*}+R^{*}_{1}+\alpha_{v2}R^ {*}_{v1}}{N}.\]
We have that
\[\frac{S^{*}+R^{*}_{1}}{N} = \frac{\gamma_{1}+\mu}{\beta_{1}}+\frac{\theta_{2}\gamma_{1}(1-v) }{(\theta_{2}+\mu)(\gamma_{1}+\mu)}\left(1-\frac{1}{\mathcal{R}_{1}}\right) \tag{29}\] \[= \frac{\theta_{2}\gamma_{1}}{(\theta_{2}+\mu)(\gamma_{1}+\mu)}(1- v)+\frac{\gamma_{1}+\mu}{\beta_{1}}\left(1-\frac{\theta_{2}\gamma_{1}}{(\theta_{2}+ \mu)(\gamma_{1}+\mu)}\right)\] \[\leq \frac{\theta_{2}\gamma_{1}}{(\theta_{2}+\mu)(\gamma_{1}+\mu)}(1- v)+(1-v)\left(1-\frac{\theta_{2}\gamma_{1}}{(\theta_{2}+\mu)(\gamma_{1}+\mu)} \right)=1-v,\]

since \(\frac{\gamma_{1}+\mu}{\beta_{1}}=\frac{1-v}{\mathcal{R}_{1}}\leq 1-v\) when \(\mathcal{R}_{1}\geq 1\).
We also have \(\frac{\alpha_{v2}R^{*}_{v1}}{N}=\frac{\alpha_{v2}\theta_{v2}v}{\theta_{v2}+\mu}\).
Thus,
\[\frac{\beta_{2}}{\gamma_{2}+\mu}\frac{S^{*}+R^{*}_{1}+\alpha_{v2}R^{*}_{v1}}{ N}\leq\frac{\beta_{2}}{\gamma_{2}+\mu}\left[1-v+\frac{v\alpha_{v2}\theta_{v2}}{ \theta_{v2}+\mu}\right]=\mathcal{R}_{2}.\]
In summary, if \(\alpha_{2}\leq 1\), then \(\mathcal{R}^{2}_{1}\leq\mathcal{R}_{2}\); if \(\alpha_{2}>1\), then \(\mathcal{R}^{2}_{1}\leq\alpha_{2}\mathcal{R}_{2}\).
In the same way, we have
\[\mathcal{R}^{1}_{2}=\frac{\beta_{1}S^{*}}{(\gamma_{1}+\mu)N}+\frac{\alpha_{1} \beta_{1}R^{*}_{2}}{(\gamma_{1}+\mu)N}=\frac{\beta_{1}}{\gamma_{1}+\mu}\frac{ S^{*}+\alpha_{1}R^{*}_{2}}{N}.\]
If \(\alpha_{1}\leq 1\), then the above expression is less than or equal to \(\frac{\beta_{1}}{\gamma_{1}+\mu}\frac{S^{*}+R^{*}_{2}}{N}\). If \(\alpha_{1}>1\), then it is less than or equal to \(\frac{\alpha_{1}\beta_{1}}{\gamma_{1}+\mu}\frac{S^{*}+R^{*}_{2}}{N}\).
We have that
\[\frac{S^{*}+R^{*}_{2}}{N}=(1-v)\left[\frac{\mu}{x+\mu}+\frac{x\gamma_{2}\theta _{1}}{(x+\mu)(\gamma_{2}+\mu)(\theta_{1}+\mu)}\right]\leq(1-v)\left[\frac{\mu }{x+\mu}+\frac{x}{(x+\mu)}\right]=1-v.\]
Thus,
\[\frac{\beta_{1}}{\gamma_{1}+\mu}\frac{S^{*}+R^{*}_{2}}{N}\leq\frac{\beta_{1}} {\gamma_{1}+\mu}(1-v)=\mathcal{R}_{1}.\]
In summary, if \(\alpha_{1}\leq 1\), then \(\mathcal{R}^{1}_{2}\leq\mathcal{R}_{1}\); if \(\alpha_{1}>1\), then \(\mathcal{R}^{1}_{2}\leq\alpha_{1}\mathcal{R}_{1}\).
## Appendix C Coefficients of Q
The coefficients \(b,c\) and \(d\) of \(Q(\lambda)\) are given by
\[b = \frac{\beta_{2}J_{2}}{N}+\frac{\alpha_{v2}\beta_{2}J_{2}}{N}+2\mu\] \[c = \left(\frac{\beta_{2}J_{2}}{N}+\mu\right)\left(\frac{\alpha_{v2} \beta_{2}J_{2}}{N}+\mu\right)+\frac{\alpha_{v2}^{2}\beta_{2}^{2}R_{v1}J_{2}}{ N^{2}}+\frac{\beta_{2}^{2}SJ_{2}}{N^{2}}\] \[d = \frac{\alpha_{v2}^{2}\beta_{2}^{2}R_{v1}J_{2}}{N^{2}}\left(\frac{ \beta_{2}J_{2}}{N}+\mu\right)+\frac{\beta_{2}^{2}SJ_{2}}{N^{2}}\left(\frac{ \alpha_{v2}\beta_{2}J_{2}}{N}+\mu\right).\] |
2310.02072 | A Variable Eddington Factor Model for Thermal Radiative Transfer with
Closure based on Data-Driven Shape Function | A new variable Eddington factor (VEF) model is presented for nonlinear
problems of thermal radiative transfer (TRT). The VEF model is a data-driven
one that acts on known (a-priori) radiation-diffusion solutions for material
temperatures in the TRT problem. A linear auxiliary problem is constructed for
the radiative transfer equation (RTE) with opacities and emission source
evaluated at the known material temperatures. The solution to this RTE
approximates the specific intensity distribution for the problem in all
phase-space and time. It is applied as a shape function to define the Eddington
tensor for the presented VEF model. The shape function computed via the
auxiliary RTE problem will capture some degree of transport effects within the
TRT problem. The VEF moment equations closed with this approximate Eddington
tensor will thus carry with them these captured transport effects. In this
study, the temperature data comes from multigroup $P_1$, $P_{1/3}$, and
flux-limited diffusion radiative transfer (RT) models. The proposed VEF model
can be interpreted as a transport-corrected diffusion reduced-order model.
Numerical results are presented on the Fleck-Cummings test problem which models
a supersonic wavefront of radiation. The presented VEF model is shown to
reliably improve accuracy by 1-2 orders of magnitude compared to the considered
radiation-diffusion model solutions to the TRT problem. | Joseph M. Coale, Dmitriy Y. Anistratov | 2023-10-03T14:14:04Z | http://arxiv.org/abs/2310.02072v1 | A Variable Eddington Factor Model for Thermal Radiative Transfer with Closure based on Data-Driven Shape Function
###### Abstract
A new variable Eddington factor (VEF) model is presented for nonlinear problems of thermal radiative transfer (TRT). The VEF model is a data-driven one that acts on known (a-priori) radiation-diffusion solutions for material temperatures in the TRT problem. A linear auxiliary problem is constructed for the radiative transfer equation (RTE) with opacities and emission source evaluated at the known material temperatures. The solution to this RTE approximates the specific intensity distribution for the problem in all phase-space and time. It is applied as a shape function to define the Eddington tensor for the presented VEF model. The shape function computed via the auxiliary RTE problem will capture some degree of transport effects within the TRT problem. The VEF moment equations closed with this approximate Eddington tensor will thus carry with them these captured transport effects. In this study, the temperature data comes from multigroup \(P_{1}\), \(P_{1/3}\), and flux-limited diffusion radiative transfer (RT) models. The proposed VEF model can be interpreted as a transport-corrected diffusion reduced-order model. Numerical results are presented on the Fleck-Cummings test problem which models a supersonic wavefront of radiation. The presented VEF model is shown to reliably improve accuracy by 1-2 orders of magnitude compared to the considered radiation-diffusion model solutions to the TRT problem.
keywords: Boltzmann transport equation, variable Eddington tensor, radiation diffusion, model order reduction, nonlinear PDEs
## 1 Introduction
The modeling and simulation of thermal radiation transport is an important consideration for many applications. Such phenomena can be found in a wide array of different fields, including: high-energy density (HED) physics, astrophysics, plasma physics, fire and combustion physics, and atmospheric and ocean sciences [1; 2; 3; 4; 5; 6]. Multiphysical models of these phenomena comprise complex systems of partial differential equations (PDEs) which must be solved by means of numerical simulation. Several challenges are associated with the numerical simulation of these systems. The involved equations are characterized by tight coupling, strong nonlinearities and multiscale behavior in space-time.
Since radiative transfer (RT) is an important mechanism for energy redistribution in these phenomena, a photon transport model must be included in the systems of equations to model RT effects. The equations of RT are high-dimensional, their solution typically depending on 7 independent variables in 3D geometry. Thus along with the aforementioned challenges, a massive number of degrees of freedom (DoF) must be used in calculations to adequately describe the overall solution. For a simulation that discretizes each independent variable with an \(x-\)point grid, \(x^{7}\) DoF are required, making it reasonable to reach trillions or quadrillions of DoF. The resulting computational load and memory occupation can become intractable for large simulations without the use of exascale computing resources.
A common approach to develop models for simulation of RT phenomena in multiphysics problems is to apply methods of dimensionality reduction for the RT component. This will significantly reduce the required computational resources in exchange for some level of approximation and modeling error in the resulting solution. The goal for an RT model is to balance computational load with the desired fidelity of solutions. There exist many well-known RT models based on moment equations with approximate closures which have seen extensive use in application, such as \(M_{N}\) methods that use maximum entropy closure relations, the \(P_{1}\) and \(P_{1/3}\) models, and flux-limited diffusion models [7; 8; 9; 10; 11; 12]. Variable Eddington factor (VEF) models make up another such class of RT models [13; 14; 15]. VEF models are constructed by reformulating the radiation pressure tensor in terms of the Eddington tensor, which brings closure to the moment equations. There exist many ways to construct approximations of the Eddington tensor. Some commonly used VEF models apply Wilson, Kershaw and Levermore closures [16; 17; 18; 19; 12]. Numerical methods for solving the Boltzmann transport equation have been developed based on the first two angular moment equations with exact closure defined by the Eddington tensor [20; 21; 22; 23; 24].
The anisotropic diffusion (AD) model has been developed for thermal radiative transfer (TRT) and other particle transport applications [25; 26; 27; 28]. A diffusion equation is constructed which is closed by means of an AD coefficient. The AD coefficient is defined via the solution to an auxiliary transport problem which takes the form of an angular and spatially dependent shape function. The special shape function accounts for some degree of transport effects in the TRT problem. This yields the tensor-diffusion moment equations with transport-corrected AD coefficients. The AD model uses just one transport sweep to solve for the auxiliary function.
A new approach based on data-driven reduced-order models (ROMs) has been gaining popularity in recent years, which makes use of data-based methodologies for dimensionality reduction. Data-driven models have been developed for (i) linear particle transport problems [29; 30; 31; 32; 33; 34; 35; 36; 37; 38; 39; 40; 41; 42; 43; 44; 45; 46; 47; 48], (ii) nonlinear RT problems [49; 50; 51; 52; 53; 54; 55; 56; 57; 58; 59; 60], and (iii) various problems in nuclear reactor-physics [61; 62; 63; 64; 65; 66; 67; 68; 69; 70; 71]. The fundamental idea behind these ROMs is to leverage databases of solutions to their problems of interest (known a-priori) to develop some reduction in the dimensionality for their involved equations. By nature, these models are problem-dependent since they are formulated using chosen datasets, and this allows for higher levels of accuracy than displayed by other types of ROMs (within the
regime of parameters covered by the datasets).
In this paper we present a data-driven VEF model for the fundamental TRT problem. This kind of TRT problem models a supersonic flow of radiation through matter and neglects hydrodynamic motion of the underlying material and heat conduction [72]. Note that the TRT problem is characterized by the same fundamental challenges as the more general class of problems (e.g. radiation-hydrodynamics problems) and serves as a useful computational platform for the development of new models. There exist several pathways to obtain an approximate Eddington tensor by data-driven means if data on the RT solution for a subset of problem parameters can be obtained [53; 55; 58; 59; 60]. For the model developed here, the Eddington tensor is computed from approximate solution data to the TRT problem generated with radiation-diffusion models. It has been previously shown that the solution of the Boltzmann transport equation computed with a scattering source term evaluated by a diffusion solution yields a sufficiently accurate shape function for estimation of the Eddington tensor in linear problems [73; 74]. An extension of this idea to the nonlinear TRT problem is to use the material temperatures evaluated with a radiation-diffusion model to compute the opacity and emission source in the radiative transfer equation (RTE). This step of the new model uses only one transport sweep to calculate a shape function accounting approximately for transport effects, and to generate the Eddington tensor for the TRT problem.
The remainder of the paper is as follows. The TRT problem is described in Sec. 2, along with definitions for several classical moment-based RT models applied to generate data for computation of the auxiliary shape function. The new data-driven VEF model is formulated in Sec. 3. Numerical results are given in Sec. 4, followed by conclusions in Sec. 5.
## 2 Thermal Radiative Transfer and Models Based on Moment Equations
The TRT problem is formulated by the multigroup RTE
\[\frac{1}{c}\frac{\partial I_{g}}{\partial t}+\mathbf{\Omega}\cdot \mathbf{\nabla}I_{g}+\varkappa_{g}(T)I_{g}=\varkappa_{g}(T)B_{g}(T), \tag{1a}\] \[I_{g}|_{\mathbf{r}\in\partial\Gamma}=I_{g}^{\text{in}}\ \ \text{for}\ \ \mathbf{\Omega}\cdot\mathbf{n}_{\Gamma}<0,\quad I_{g}|_{t=0}=I_{g}^{0},\] (1b) \[\mathbf{r}\in\Gamma,\quad t>0,\quad\mathbf{\Omega}\in\mathcal{S},\quad g=1,\dots,G,\]
and the material energy balance (MEB) equation
\[\frac{\partial\varepsilon(T)}{\partial t}=\sum_{g=1}^{G}\bigg{(}\int_{4\pi}I_ {g}d\Omega-4\pi B_{g}(T)\bigg{)}\varkappa_{g}(T),\quad T|_{t=0}=T^{0}\,, \tag{2}\]
where \(\mathbf{r}\) is spatial position, \(t\) is time, \(g\) is the frequency group index, \(\Gamma\) is the spatial domain, \(\partial\Gamma\) is the boundary surface of \(\Gamma\), \(\mathbf{n}_{\Gamma}\) is the unit outward normal to \(\partial\Gamma\), \(I_{g}(\mathbf{r},\mathbf{\Omega},t)\) is the group specific photon intensity, \(T(\mathbf{r},t)\) is the material temperature, \(\varkappa_{g}(\mathbf{r},t;T)\) is the group material opacity, \(\varepsilon(\mathbf{r},t;T)\) is the material energy density, and \(B_{g}(\mathbf{r},t;T)\) is the group Planckian function given by
\[B_{g}(T)=\frac{2}{c^{2}h^{2}}\int_{\nu_{g-1}}^{\nu_{g}}\frac{\nu^{3}}{e^{\frac {\nu}{T}}-1}d\nu. \tag{3}\]
Here \(c\) is the speed of light, \(h\) is Planck's constant, and \(\nu\) is the photon frequency.
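As an illustration of Eq. (3), the group Planckian can be evaluated by simple numerical quadrature; the following Python sketch assumes \(\nu\) and \(T\) in consistent units (e.g. KeV) and keeps the physical prefactor symbolic, since its numerical value depends on the unit system:

```python
import numpy as np

def planck_group(T, nu_lo, nu_hi, n=400):
    """Group Planckian B_g(T) of Eq. (3) by a simple Riemann sum.
    nu and T in KeV; C is a placeholder for the prefactor of Eq. (3)."""
    C = 1.0                                 # assumed unit prefactor
    nu = np.linspace(nu_lo, nu_hi, n)
    dnu = nu[1] - nu[0]
    integrand = nu**3 / np.expm1(nu / T)    # expm1 avoids precision loss for nu << T
    return C * np.sum(integrand) * dnu

# e.g. group 5 of Table 1 (Sec. 4) at the drive temperature T = 1 KeV
print(planck_group(1.0, 2.830, 3.538))
```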
There are several TRT models which apply moment equations for the group radiation energy density
\[E_{g}=\frac{1}{c}\int_{4\pi}I_{g}\ d\Omega, \tag{4}\]
and flux
\[\mathbf{F}_{g}=\int_{4\pi}\mathbf{\Omega}I_{g}\ d\Omega, \tag{5}\]
to approximate the RTE. The \(P_{1}\) model is defined by the multigroup \(P_{1}\) equations for radiative transfer, given by
\[\frac{\partial E_{g}}{\partial t}+\mathbf{\nabla}\cdot\mathbf{F}_{g}+c \varkappa_{g}(T)E_{g}=4\pi\varkappa_{g}(T)B_{g}(T), \tag{6a}\] \[\frac{1}{c}\frac{\partial\mathbf{F}_{g}}{\partial t}+\frac{c}{3}\mathbf{ \nabla}E_{g}+\varkappa_{g}(T)\mathbf{F}_{g}=0,\] (6b) \[\mathbf{n}_{\Gamma}\cdot\mathbf{F}_{g}|_{\mathbf{r}\in\partial\Gamma}=\frac{c}{2}E_{g}|_{\mathbf{r} \in\partial\Gamma}+2F_{g}^{\rm in},\] (6c) \[E_{g}|_{t=0}=E_{g}^{0},\quad\mathbf{F}_{g}|_{t=0}=\mathbf{F}_{g}^{0}. \tag{6d}\]
The hyperbolic time-dependent \(P_{1}\) equations are derived from the RTE by taking its \(0^{\rm th}\) and \(1^{\rm st}\) angular moments. Closure for the moment equations is formulated by defining the highest (\(2^{\rm nd}\)) angular moment
\[H_{g}=\int_{4\pi}\mathbf{\Omega}\otimes\mathbf{\Omega}I_{g}\ d\Omega \tag{7}\]
with the expansion
\[I_{g}=\frac{1}{4\pi}(cE_{g}+3\mathbf{\Omega}\cdot\mathbf{F}_{g})\,. \tag{8}\]
This approximation yields
\[H_{g}=\frac{c}{3}E_{g}\,. \tag{9}\]
The \(P_{1/3}\) model for radiative transfer is formulated by the balance equation (6a) and the modified first moment equation given by [7, 8]
\[\frac{1}{3c}\frac{\partial\mathbf{F}_{g}}{\partial t}+\frac{c}{3}\mathbf{\nabla}E_{g} +\varkappa_{g}(T)\mathbf{F}_{g}=0\,, \tag{10}\]
The factor \(\frac{1}{3}\) at the time-derivative term in Eq. (10) produces the correct propagation speed of radiation in vacuum.
The flux-limited diffusion (FLD) models are defined by the time-dependent multigroup diffusion equations [75, 76, 77, 78, 15]
\[\frac{\partial E_{g}}{\partial t}+c\mathbf{\nabla}\cdot(-D_{g}\mathbf{ \nabla}E_{g})+c\varkappa_{g}(T)E_{g}=4\pi\varkappa_{g}(T)B_{g}(T), \tag{11a}\] \[\mathbf{n}_{\Gamma}\cdot(-cD_{g}\mathbf{\nabla}E_{g})|_{\mathbf{r}\in\partial \Gamma}=\frac{c}{2}E_{g}|_{\mathbf{r}\in\partial\Gamma}+2F_{g}^{\rm in},\quad E_ {g}|_{t=0}=E_{g}^{0}, \tag{11b}\]
with the group radiation flux given by

\[\mathbf{F}_{g}=-cD_{g}\mathbf{\nabla}E_{g}\,, \tag{12}\]
where \(D_{g}\) is the group diffusion coefficient. In this model, the time derivative of the flux in the first moment equation is neglected. This leads to a parabolic time-dependent equation for \(E_{g}\) with the diffusion coefficient defined by
\[D_{g}=\frac{1}{3\varkappa_{g}(T)}\,. \tag{13}\]
In general, the solution of the diffusion equation (11a) with \(D_{g}\) defined by Eq. (13) does not satisfy the flux-limiting condition
\[\frac{|\mathbf{F}_{g}|}{cE_{g}}\leq 1\,, \tag{14}\]
stemming from definitions of the radiation density and flux (Eqs. (4) and (5)). The FLD models introduce modifications of the diffusion coefficient to meet the condition in Eq. (14). In this study, we consider the coefficient proposed by E. Larsen [7]
\[D_{g}(T,E_{g})=\Bigg{[}\,\left(3\varkappa_{g}(T)\right)^{2}+\left(\frac{1}{E_ {g}}\mathbf{\nabla}E_{g}\right)^{2}\,\Bigg{]}^{-\frac{1}{2}}. \tag{15}\]
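For reference, the limiter of Eq. (15) can be evaluated as in the sketch below (Python; the test values are illustrative only). In the optically thick limit it recovers Eq. (13), while in the streaming limit \(D_{g}|\mathbf{\nabla}E_{g}|\to E_{g}\), so that \(|\mathbf{F}_{g}|\leq cE_{g}\) and Eq. (14) is satisfied:

```python
import numpy as np

def larsen_diffusion_coefficient(kappa, grad_E, E, n=2):
    """Larsen's flux-limited diffusion coefficient, Eq. (15):
    D = [ (3*kappa)^n + (|grad E| / E)^n ]^(-1/n), with n = 2."""
    return ((3.0 * kappa)**n + (np.linalg.norm(grad_E) / E)**n) ** (-1.0 / n)

# Optically thick limit: recovers D = 1/(3*kappa) of Eq. (13)
print(larsen_diffusion_coefficient(1.0e3, np.array([1.0, 0.0]), 1.0))   # ~ 1/3000
# Streaming limit: D*|grad E| ~ E, so |F| = c*D*|grad E| <= c*E (Eq. (14))
print(larsen_diffusion_coefficient(1.0e-6, np.array([1.0, 0.0]), 1.0))  # ~ 1.0
```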
## 3 Variable Eddington Factor Model for TRT with Diffusion-Based Shape Function
The Variable Eddington factor method is defined by the balance equation (Eq. (6a)) and the first moment equation
\[\frac{1}{c}\frac{\partial\mathbf{F}_{g}}{\partial t}+c\mathbf{\nabla}\cdot(\mathfrak{f}_{g }E_{g})+\varkappa_{g}(T)\mathbf{F}_{g}=0\,, \tag{16}\]
where closure is defined with
\[H_{g}=c\mathfrak{f}_{g}[\tilde{I}]E_{g}, \tag{17}\]
by means of the Eddington tensor given as
\[\mathfrak{f}_{g}[\tilde{I}]=\frac{\int_{4\pi}\mathbf{\Omega}\otimes\mathbf{\Omega} \tilde{I}_{g}d\Omega}{\int_{4\pi}\tilde{I}_{g}d\Omega}\,. \tag{18}\]
Here \(\tilde{I}_{g}\) is an approximation of the photon intensity. There exists a group of VEF models which use an approximation of the Eddington tensor defined via the first two moments of the photon intensity [17; 79; 18; 80; 81; 82; 83; 78; 84; 12].
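On a discrete-ordinates grid, the angular integrals in Eq. (18) reduce to quadrature sums; a minimal Python sketch (with an assumed angular quadrature, not the one used in this work) is:

```python
import numpy as np

def eddington_tensor(I, omega, w):
    """Discrete Eddington tensor of Eq. (18): second angular moment of a
    shape function divided by its zeroth moment. I: (M,) intensities,
    omega: (M, 3) unit directions, w: (M,) quadrature weights (all assumed)."""
    num = np.einsum('m,mi,mj,m->ij', w, omega, omega, I)  # int Omega x Omega I dOmega
    return num / np.sum(w * I)                            # divided by int I dOmega

# Isotropic check on a 6-point quadrature: recovers (1/3) * identity,
# consistent with the P1 closure of Eq. (9) (for a quadrature exact on second moments).
omega = np.vstack([np.eye(3), -np.eye(3)])
w = np.full(6, 4.0 * np.pi / 6.0)
print(eddington_tensor(np.ones(6), omega, w))   # ~ diag(1/3, 1/3, 1/3)
```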
To define the Eddington tensor, we formulate a model in which the material temperature distribution \(\tilde{T}\) for a TRT problem is calculated with one of the radiation-diffusion models
described in Sec. 2. A linear RTE is then defined by available \(\tilde{T}(\mathbf{r},t)\) for \(t\in[0,t^{\text{end}}]\) and \(\mathbf{r}\in\Gamma\)
\[\frac{1}{c}\frac{\partial\tilde{I}_{g}}{\partial t}+\mathbf{\Omega} \cdot\mathbf{\nabla}\tilde{I}_{g}+\varkappa_{g}(\tilde{T})\tilde{I}_{g}=\varkappa _{g}(\tilde{T})B_{g}(\tilde{T}), \tag{19a}\] \[\tilde{I}_{g}|_{\mathbf{r}\in\partial\Gamma}=I_{g}^{\text{in}}\ \ \text{for}\ \ \mathbf{n}_{\Gamma}\cdot\mathbf{\Omega}<0,\quad\tilde{I}_{g}|_{t=t_{0}}=I_{g}^{0},\] (19b) \[\mathbf{r}\in\Gamma,\quad t\in[0,t^{\text{end}}],\quad\mathbf{\Omega}\in \mathcal{S},\quad g=1,\dots,G.\]
The solution of the auxiliary RTE problem (19) gives an approximate distribution of radiation intensities \(\tilde{I}_{g}\) which accounts for the transport effects of the TRT problem and can be used as a shape function to compute the approximate Eddington tensor (18). The boundary conditions for the VEF moment equations are defined in terms of \(\tilde{I}_{g}\) as follows [20, 22]:
\[\mathbf{n}_{\Gamma}\cdot\mathbf{F}_{g}|_{\mathbf{r}\in\partial\Gamma}=cC_{g}[\tilde{I}_{g }](E_{g}|_{\mathbf{r}\in\partial\Gamma}-E^{\text{in}})+F_{g}^{\text{in}}, \tag{20}\]
where
\[C_{g}[\tilde{I}_{g}]=\frac{\int_{\mathbf{n}_{\Gamma}\cdot\mathbf{\Omega}>0}\mathbf{\Omega} \tilde{I}_{g}\ d\Omega}{\int_{\mathbf{n}_{\Gamma}\cdot\mathbf{\Omega}>0}\tilde{I}_{g}d \Omega}. \tag{21}\]
The RTE (19) with the given temperature function can be efficiently solved with a single transport sweep per time step. To solve Eqs. (19), ray-tracing techniques (i.e., the method of long characteristics) are applied [85, 86, 87, 88, 89, 90, 91, 92]. In sum, the data-driven VEF model for TRT is constructed with:
* Radiation-diffusion solution data for material temperatures \(\tilde{T}\),
* The RTE with opacity and Planckian source evaluated with \(\tilde{T}\) (Eqs. (19)),
* The VEF equations (Eqs. (6a) & (16)), where the Eddington tensor is defined via Eq. (18) and boundary conditions given by Eq. (20).
Hereafter we refer to this model as the data-driven VEF model (DD-VEF).
```
Input: \(\{\mathbf{\tilde{T}}(t^{n})\}_{n=1}^{N}\)
\(n=0\)
while \(t^{n}<t^{\text{end}}\) do
    \(n=n+1\)
    \(\mathbf{\tilde{T}}(t^{n})\leadsto\mathbf{\tilde{I}}_{g}(t^{n})\)   (Eqs. (19))
    \(\mathbf{\tilde{I}}_{g}(t^{n})\leadsto\mathbf{\tilde{f}}_{g}(t^{n})\)   (Eq. (18))
end
Output: \(\{\mathbf{\tilde{f}}_{g}(t^{n})\}_{n=1}^{N}\)
```
**Algorithm 1:** Offline phase of the DD-VEF model
The process of solving TRT problems with the DD-VEF model can be explained as a two-phase methodology, which is outlined in Algorithms 1 and 2. The first (offline) phase, demonstrated by Algorithm 1, represents the 'data-processing' operations that prepare the Eddington tensor closure data. The required input is an already known approximate material temperature distribution \(\tilde{T}\) for the entire spatial and temporal interval of interest. If we define a simulation with \(X\) spatial grid cells and \(N\) time steps, then this input data is the set \(\{\mathbf{\tilde{T}}(t^{n})\}_{n=1}^{N}\), where \(\mathbf{\tilde{T}}(t^{n})\in\mathbb{R}^{X}\). At each \(n^{\text{th}}\) time step, Eq. (19) is solved using \(\mathbf{\tilde{T}}(t^{n})\) for the vector of discrete radiation intensities in phase space \(\mathbf{\tilde{I}}_{g}\), which gives rise to the approximate Eddington tensor on the discrete grid \(\mathbf{\tilde{f}}_{g}(t^{n})\) via Eq. (18). The discrete Eddington tensor values at each time step can be collected and stored in a dataset for later use with the DD-VEF model. This process of preparing the Eddington tensor data is referred to as the offline phase because it must only be completed once per temperature distribution \(\tilde{T}\), and the calculated Eddington tensor data can be stored away for later use.
The second (online) phase is outlined in Algorithm 2 and represents the operations required to solve a given TRT problem with the DD-VEF model. Taking as input the Eddington tensor data calculated in Algorithm 1, the DD-VEF equations are solved at each time step to generate vectors for the temperature \(\mathbf{T}(t^{n})\), radiation energy densities \(\mathbf{E}_{g}(t^{n})\) and radiation fluxes \(\mathbf{\mathcal{F}}_{g}(t^{n})\) over all phase space. In this configuration of offline/online phases, only Algorithm 2 must be completed for any given TRT simulation, assuming Algorithm 1 was completed some time in the past to prepare the required datasets. It is important to note, however, that both phases can be combined to save on storage requirements. In this case, given the input \(\{\mathbf{\tilde{T}}(t^{n})\}_{n=1}^{N}\), at each time step the approximate Eddington tensor is calculated and immediately used with the DD-VEF equations to generate the TRT solution for that time step.
```
Input: \(\{\mathbf{\tilde{f}}_{g}(t^{n})\}_{n=1}^{N}\)
\(n=0\)
while \(t^{n}<t^{\text{end}}\) do
    \(n=n+1\)
    \(\mathbf{\tilde{f}}_{g}(t^{n})\leadsto\{\mathbf{T}(t^{n}),~{}\mathbf{E}_{g}(t^{n}),~{}\mathbf{\mathcal{F}}_{g}(t^{n})\}\)   (Eqs. (6a), (16), (20))
end
Output: \(\{\mathbf{T}(t^{n}),~{}\mathbf{E}_{g}(t^{n}),~{}\mathbf{\mathcal{F}}_{g}(t^{n})\}_{n=1}^{N}\)
```
**Algorithm 2:** Online phase of the DD-VEF model
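For concreteness, a minimal Python sketch of the combined offline/online procedure is given below; the callables `sweep`, `eddington` and `vef_step` are hypothetical placeholders for the solvers of Eqs. (19), (18) and (6a)/(16)/(20) described above, not an API from this work.

```python
def dd_vef_simulation(T_tilde, state0, dt, sweep, eddington, vef_step):
    """Combined offline/online DD-VEF loop (Algorithms 1 and 2).

    T_tilde: list of diffusion-model temperature fields, one per time step;
    sweep, eddington, vef_step: user-supplied callables implementing one
    transport sweep (Eqs. (19)), the closure (Eq. (18)), and one implicit
    time step of the VEF system (Eqs. (6a), (16), (20)). All names are
    assumptions of this sketch."""
    state, history = state0, []
    for T_n in T_tilde:
        I_n = sweep(T_n)                 # one transport sweep per time step
        f_n = eddington(I_n)             # approximate Eddington tensor
        state = vef_step(state, f_n, dt) # advance T, E_g, F_g on the grid
        history.append(state)
    return history
```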
## 4 Numerical Results
The DD-VEF model is analyzed with numerical testing on the classical Fleck-Cummings (F-C) test [93] in 2D Cartesian geometry. This test takes the form of a homogeneous square domain with sides 6 cm in length, whose material is defined with spectral opacity
\[\varkappa_{\nu}=\frac{27}{\nu^{3}}(1-e^{-\nu/T}). \tag{22}\]
Here \(\nu\) and \(T\) are measured in KeV. The left boundary is subject to an isotropic, black-body radiation source at a temperature of \(T^{\rm in}=1\) KeV. All other boundaries are vacuum. The initial temperature of the domain is \(T^{0}=1\) eV. The material energy density is a linear function of temperature, \(\varepsilon=c_{v}T\), where \(c_{v}=0.5917a_{R}(T^{\rm in})^{3}\). The problem is solved on the interval \(t\in[0,6\,{\rm ns}]\) with 300 uniform time steps \(\Delta t=2\times 10^{-2}\) ns. The phase space is discretized using a \(20\times 20\) uniform orthogonal spatial grid, 17 frequency groups (see Table 1) and 144 discrete directions. The Abu-Shumays angular quadrature set is used [94]. The implicit backward-Euler time integration scheme is used to discretize all equations in time. The BTE is discretized in space with the method of conservative long characteristics [92], and all low-order equations use a second-order finite-volume scheme [95].
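For illustration, the spectral opacity of Eq. (22) and a group average over the bounds of Table 1 can be evaluated as below (Python). The Planck-mean weighting is an assumption of this sketch, since the paper does not state the group-averaging used:

```python
import numpy as np

def kappa_nu(nu, T):
    """Spectral opacity of Eq. (22); nu and T in KeV."""
    return 27.0 / nu**3 * (1.0 - np.exp(-nu / T))

def kappa_group(T, nu_lo, nu_hi, n=1000):
    """Planck-weighted group-average opacity (an assumed averaging choice)."""
    nu = np.linspace(nu_lo, nu_hi, n)
    w = nu**3 / np.expm1(nu / T)          # Planckian weight; constants cancel
    return np.sum(kappa_nu(nu, T) * w) / np.sum(w)

print(kappa_group(1.0, 0.7075, 1.415))    # group 2 of Table 1 at the drive temperature
```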
Note that the full-order model (FOM) for this TRT problem is formulated as the RTE coupled with the MEB. Three radiation-diffusion models are considered to generate \(\tilde{T}\): multigroup FLD, \(P_{1}\) and \(P_{1/3}\) (see Sec. 2). The physics embedded in \(\tilde{T}\) will vary depending on which diffusion-type model is used in its computation. For instance, the FLD, \(P_{1}\) and \(P_{1/3}\) models may all produce different propagation speeds (and spectral distributions) of the radiation wavefront [7]. These effects change how energy is redistributed in the F-C test and alter the distribution of material temperatures in space-time.
\begin{table}
\begin{tabular}{|l|l|l|l|l|l|l|l|l|l|} \hline
\(g\) & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 \\ \hline
\(\nu_{g}\) [KeV] & 0.7075 & 1.415 & 2.123 & 2.830 & 3.538 & 4.245 & 5.129 & 6.014 & 6.898 \\ \hline \hline
\(g\) & 10 & 11 & 12 & 13 & 14 & 15 & 16 & 17 & \\ \hline
\(\nu_{g}\) [KeV] & 7.783 & 8.667 & 9.551 & 10.44 & 11.32 & 12.20 & 13.09 & \(1\times 10^{7}\) & \\ \hline
\end{tabular}
\end{table}
Table 1: Upper boundaries for each frequency group
Figure 1: Relative errors in the 2-norm for \(\tilde{T}\), \(\tilde{E}\) produced by the FLD, \(P_{1}\) and \(P_{1/3}\) models, and for \(T\), \(E\) found with the DD-VEF model generated using each \(\tilde{T}\), plotted vs time. Errors are calculated vs the FOM.
Figure 1 plots relative errors (w.r.t. the FOM solution) for the material temperature and total radiation energy density calculated in the 2-norm over space at each instant of time in \(t\in[0,6\,\mathrm{ns}]\). Separate curves are shown for each considered diffusion model and for the DD-VEF model. In each case the DD-VEF solution shows an increase in accuracy for \(T\) and \(E\) compared to the radiation-diffusion solutions. The errors in \(T\) and \(E\) from the DD-VEF model are on the order of \(10^{-3}\) for the whole interval of time, whereas the diffusion model errors are on the order of \(10^{-2}\) for the majority of times. The DD-VEF model is seen to increase the accuracy of each diffusion model by roughly an order of magnitude. The FLD model possesses the highest accuracy of all tested diffusion ROMs, and the DD-VEF model with the highest accuracy is the one applied to the FLD solution.
Next we investigate the DD-VEF model's performance in capturing the radiation wavefront as it propagates through the spatial domain. Note that the F-C test mimics the class of supersonic radiation shock problems and experiments [72; 96; 97; 98]. One measurement of importance in these experiments concerns the time it takes for the radiation wavefront to reach the side of the test material opposite the radiation drive [72; 97; 98]. A measurement of accuracy based on this wavefront-arrival metric can be derived by comparing the TRT solutions at the right boundary of the F-C test, towards which the radiation wavefront propagates. The boundary-averaged material temperature, radiation energy density and radiation
Figure 3: Relative error for the FLD, \(P_{1}\) and \(P_{1/3}\) models, and the DD-VEF model generated with each diffusion model solution, for data located at and integrated over the right boundary of the domain.
Figure 2: FOM solution for data located at and integrated over the right boundary of the domain.
flux are defined as
\[\bar{F}_{R}=\frac{1}{L_{R}}\int_{0}^{L_{R}}\mathbf{e}_{x}\cdot\mathbf{F}(x_{R},y)\ dy, \tag{23}\]
\[\bar{E}_{R}=\frac{1}{L_{R}}\int_{0}^{L_{R}}E(x_{R},y)\ dy, \tag{24}\]
\[\bar{T}_{R}=\frac{1}{L_{R}}\int_{0}^{L_{R}}T(x_{R},y)\ dy, \tag{25}\]
where \(L_{R}=x_{R}=6\) cm. The FOM solution for these three quantities vs time is plotted in Figure 2. If the DD-VEF model can reproduce these integral quantities to acceptable levels of accuracy, it can be said to correctly reproduce the shape and propagation speed of the radiation wavefront. This is especially important to investigate given that the considered radiation-diffusion models are known to produce nonphysical effects [7, 8, 9]. Figure 3 plots the relative errors of the diffusion- and DD-VEF-produced values of \(\bar{F}_{R}\), \(\bar{E}_{R}\) and \(\bar{T}_{R}\). The relative errors for each quantity are decreased using the DD-VEF model by 1-2 orders of magnitude for most instances of time. There are several time intervals where the errors for a model 'spike' downwards and then come back up. These occur when there is a change of sign in the error and do not indicate that the solution is more accurate there than at other instants of time. The most dramatic increase in accuracy is for the FLD \(\bar{F}_{R}\), by about 3 orders of magnitude. In fact, the FLD \(\bar{F}_{R}\) is the least accurate and the DD-VEF model \(\bar{F}_{R}\) using the FLD solution is the most accurate of the models shown. The explanation for this effect comes from the fact that the DD-VEF model only acts on the approximate material temperature it is given, and the FLD solution for \(\bar{T}_{R}\) (and \(T\) in general, from Figure 1) is the most accurate of the considered radiation-diffusion models.
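On the uniform grid used here, Eqs. (23)-(25) reduce to simple quadratures along the boundary; a minimal sketch (the function name and midpoint rule are assumptions of this illustration):

```python
import numpy as np

def boundary_averages(T_b, E_b, Fx_b, dy, L_R=6.0):
    """Midpoint-rule evaluation of Eqs. (23)-(25) from cell values of T, E
    and the x-component of F along the right boundary x = x_R (assumed layout)."""
    avg = lambda q: np.sum(q) * dy / L_R
    return avg(Fx_b), avg(E_b), avg(T_b)

# e.g. for the 20x20 grid of the F-C test: dy = 6.0 / 20
```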
Figure 4: Radiation energy density spectrum located at two points on the right boundary of the domain of the F-C test, taken at several time instances.
Finally, we consider the spectrum of radiation present on the right boundary of the test domain. Figure 4 plots the frequency spectrum of radiation energy densities for the F-C test obtained with the FOM at two points on the right boundary face (\(x=6\) cm). The spectrum of radiation present at the midpoint of the right boundary (\(y=3\) cm) is shown on the left, and the radiation spectrum present at the corner of the test domain (\(y=0\) cm) is displayed on the right. Select instants of time are plotted to demonstrate how the radiation spectrum evolves. The points on the graphs are located at the center of each discrete energy group on the frequency axis, and the values they take on are the group-averaged radiation energy densities \(\bar{E}_{g}=E_{g}/(\nu_{g}-\nu_{g-1})\). The plots have been 'zoomed in' to the spectrum peak, which leaves off the final frequency group. However, this point does not deviate significantly from the position of the second-to-last frequency group.
Figure 5 plots the errors of each model in the radiation energy densities vs photon frequency at the midpoint of the right boundary. Errors have been collected in the relative _temporal_ 2-norms
\[\|x(t)\|_{2}^{t}=\bigg{(}\int_{0}^{t^{\text{end}}}x(t)^{2}\,dt\bigg{)}^{1/2}, \tag{26}\]
w.r.t. the full-order solution. The DD-VEF model is demonstrated to improve upon the low-frequency group errors by roughly an order of magnitude at each considered point in space. The increase in accuracy over the diffusion solutions improves significantly as frequency increases, starting from roughly \(\nu=3\) keV. This is where the peak of (non-local) radiation emanating from the boundary drive should be located in frequency, as the Planckian spectrum peaks at \(\nu=2.82T\). This makes sense, since the higher frequency groups are closer to the streaming regime and should be better approximated by the transport-effects correction provided within the VEF model. The same errors calculated in the \(\infty\)-norm are close to those shown in the 2-norm, indicating that these results represent well the overall errors produced by these models in the radiation spectrum at all instants of time. Note that although the last frequency group has not been included in the plots, its error value is close to that of the last shown frequency group for all models.

Figure 5: Relative errors of the radiation spectrum located at the midpoint of the right boundary in the temporal 2-norm.
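The norm in Eq. (26) is straightforward to evaluate numerically; a short NumPy sketch with made-up time series (illustrative only), where "relative" is read as the usual ratio \(\|x_{\rm ROM}-x_{\rm FOM}\|_{2}^{t}/\|x_{\rm FOM}\|_{2}^{t}\):

```python
import numpy as np

def trapezoid(f, t):
    """Composite trapezoidal rule for samples f on grid t."""
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(t)))

def temporal_2norm(x, t):
    """Eq. (26): ( integral_0^t_end x(t)^2 dt )^(1/2)."""
    return np.sqrt(trapezoid(x ** 2, t))

t = np.linspace(0.0, 6.0, 601)            # t in [0, 6 ns]
x_fom = 1.0 + 0.2 * np.sin(t)             # hypothetical FOM quantity
x_rom = x_fom * (1.0 + 1e-2 * np.cos(t))  # hypothetical ROM quantity

rel_err = temporal_2norm(x_rom - x_fom, t) / temporal_2norm(x_fom, t)
print(rel_err)  # relative temporal 2-norm error w.r.t. the FOM
```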
## 5 Conclusion
In this paper a data-driven VEF model is introduced for nonlinear TRT problems. An approximate Eddington tensor is constructed with a transport correction method applied to radiation diffusion-based solutions to the TRT problem. Three multigroup diffusion models were considered: a FLD model and the \(P_{1}\) and \(P_{1/3}\) models. The DD-VEF model provided an increase in accuracy of 1-2 orders of magnitude in the total radiation energy density and material temperature when applied to each diffusion-based solution. The entire spectrum of radiation present at the right boundary of the test domain was improved upon as well. The most significant reduction in error from the diffusion solutions in the frequency spectrum was in the high-frequency range with strong transport effects. Possible future extensions of this DD-VEF model include parameterization via interpolation between diffusion solutions for a series of TRT problems, or the use of other approximate models for TRT in place of radiation diffusion.
## 6 Acknowledgements
Los Alamos Report LA-UR-23-31255. This research project was funded by the Sandia National Laboratory, Light Speed Grand Challenge, LDRD, Strong Shock Thrust. This work was supported by the U.S. Department of Energy through the Los Alamos National Laboratory. Los Alamos National Laboratory is operated by Triad National Security, LLC, for the National Nuclear Security Administration of U.S. Department of Energy (Contract No. 89233218CNA000001). The content of the information does not necessarily reflect the position or the policy of the federal government, and no official endorsement should be inferred.
|
2303.15350 | Improving Neural Topic Models with Wasserstein Knowledge Distillation | Topic modeling is a dominant method for exploring document collections on the
web and in digital libraries. Recent approaches to topic modeling use
pretrained contextualized language models and variational autoencoders.
However, large neural topic models have a considerable memory footprint. In
this paper, we propose a knowledge distillation framework to compress a
contextualized topic model without loss in topic quality. In particular, the
proposed distillation objective is to minimize the cross-entropy of the soft
labels produced by the teacher and the student models, as well as to minimize
the squared 2-Wasserstein distance between the latent distributions learned by
the two models. Experiments on two publicly available datasets show that the
student trained with knowledge distillation achieves topic coherence much
higher than that of the original student model, and even surpasses the teacher
while containing far fewer parameters than the teacher's. The distilled model
also outperforms several other competitive topic models on topic coherence. | Suman Adhya, Debarshi Kumar Sanyal | 2023-03-27T16:07:44Z | http://arxiv.org/abs/2303.15350v2 | # Improving Neural Topic Models with Wasserstein Knowledge Distillation
###### Abstract
Topic modeling is a dominant method for exploring document collections on the web and in digital libraries. Recent approaches to topic modeling use pretrained contextualized language models and variational autoencoders. However, large neural topic models have a considerable memory footprint. In this paper, we propose a knowledge distillation framework to compress a contextualized topic model without loss in topic quality. In particular, the proposed distillation objective is to minimize the cross-entropy of the soft labels produced by the teacher and the student models, as well as to minimize the squared 2-Wasserstein distance between the latent distributions learned by the two models. Experiments on two publicly available datasets show that the student trained with knowledge distillation achieves topic coherence much higher than that of the original student model, and even surpasses the teacher while containing far fewer parameters than the teacher's. The distilled model also outperforms several other competitive topic models on topic coherence.
Keywords: Topic modeling · Knowledge distillation · Wasserstein distance · Contextualized topic model · Variational autoencoder.
## 1 Introduction
Topic modeling has emerged as an important technique to analyze large document corpora and extract their themes automatically [1], [30], [26]. Topic models are therefore frequently used to obtain an overview of the topics in document archives and web search results, match queries and documents, and diversify search results [11, 28]. While latent Dirichlet allocation (LDA) [5] is the classical topic modeling algorithm, recent approaches exploit deep neural networks, specifically, variational autoencoders (VAEs) [13]. ProdLDA [24] is a well-known VAE-based topic model that uses a product of experts and a Laplace approximation to the Dirichlet prior. Bianchi et al. [3] recently proposed _CombinedTM_, a contextualized topic model that feeds into the VAE of ProdLDA a distributed representation of the document built with a pre-trained language model (PLM) like sentence-BERT (SBERT) [22] along with a bag-of-words (BoW) representation of the document. It achieves state-of-the-art topic coherence on many benchmark data sets. Given a VAE-based topic model pre-trained on a corpus, one can pass a document from the corpus through the VAE encoder and recover its topics. A remarkable feature
of contextualized topic models is that, if the PLM is multilingual and the input to the encoder solely consists of contextualized representations from the PLM, it is possible to train the model in one language and test it in another, making it a zero-shot topic model, also called _ZeroShotTM_[4]. Increasing the network complexity like the depth or width of the neural networks in the VAE might improve the coherence of the generated topics but produces a larger memory footprint, thereby making it difficult to store and use the topic models on resource-constrained devices. Using only contextualized embeddings in the input would also reduce the model size but could hit the topic quality as well.
In this paper, we investigate if a VAE-based topic model can be compressed without compromising topic coherence. For this purpose, we use knowledge distillation (KD), which involves a teacher model to improve the performance of a smaller student model [12]. While KD has been used for classification tasks in image [10] and text processing [17], this paper tackles an unsupervised learning problem for a generative model. Specifically, we distill knowledge from a CombinedTM teacher to a smaller ZeroShotTM student. In standard KD [12], the aim is to minimize the cross-entropy between the soft labels produced by the student and the teacher models along with the Kullback-Leibler (KL) divergence between their respective output distributions. However, the KL-divergence may take very large values even when the two distributions differ only slightly, and if the two distributions do not overlap at all, it explodes to infinity [19]. To avoid these issues, we choose the 2-Wasserstein distance [18] instead of the KL-divergence in the distillation loss. Our distillation process minimizes the cross-entropy between the soft labels produced by the teacher and the student, _and_ the square of the 2-Wasserstein distance between the latent distributions learned by the two models. The Wasserstein distance arises in the theory of optimal transport and measures how 'close' two distributions are [21, 9, 27]. Unlike the KL-divergence, if the Wasserstein distance between two distributions is high, the underlying distributions really are very different from each other.
In summary, our contributions are: **(1)** We propose a 2-Wasserstein distance-based knowledge distillation framework for neural topic models. We call our method _Wasserstein knowledge distillation_. To the best of our knowledge, this is the first work on inter-VAE knowledge distillation for topic modeling. **(2)** Experiments on two public datasets show that in terms of topic coherence, the distilled model significantly outperforms the student and even scores better than the teacher. The distilled model also beats several strong baselines on topic coherence. This demonstrates the efficacy of our approach. We have made our code publicly available1.
Footnote 1: [https://github.com/AdhyaSuman/CTMKD](https://github.com/AdhyaSuman/CTMKD)
## 2 Background on Wasserstein Distance
Let \((\mathcal{X},d)\) be a complete separable metric space with metric \(d\) and equipped with a Borel \(\sigma\)-algebra. Let \(\mathcal{P}(\mathcal{X})\) denote the space of all probability measures
defined on \(\mathcal{X}\) with finite \(p\)-th moment for \(p\geq 1\). If \(\mathbb{P}_{1},\mathbb{P}_{2}\in\mathcal{P}(\mathcal{X})\), then \(\Pi(\mathbb{P}_{1},\mathbb{P}_{2})\) is defined to be the set of measures \(\pi\in\mathcal{P}(\mathcal{X}^{2})\) having \(\mathbb{P}_{1}\) and \(\mathbb{P}_{2}\) as marginals. The \(p^{\text{th}}\) Wasserstein distance between the two probability measures \(\mathbb{P}_{1}\) and \(\mathbb{P}_{2}\) in \(\mathcal{P}(\mathcal{X})\) is defined as
\[W_{p}(\mathbb{P}_{1},\mathbb{P}_{2})=\left(\inf_{\pi\in\Pi(\mathbb{P}_{1}, \mathbb{P}_{2})}\int_{\mathcal{X}^{2}}d(x,y)^{p}\,\mathrm{d}\,\pi(x,y)\right)^ {1/p} \tag{1}\]
\(W_{p}(\mathbb{P}_{1},\mathbb{P}_{2})\) is intuitively the minimum 'cost' of transforming \(\mathbb{P}_{1}\) to \(\mathbb{P}_{2}\) (or vice versa) [27]. Consider \(\mathcal{X}=\mathbb{R}^{n}\) with \(d\) as the Euclidean norm. Suppose \(\mathbb{P}_{1}=\mathcal{N}(\mu_{1},\Sigma_{1})\), and \(\mathbb{P}_{2}=\mathcal{N}(\mu_{2},\Sigma_{2})\) are normal distributions with means \(\mu_{1},\mu_{2}\in\mathbb{R}^{n}\) and symmetric positive semi-definite covariance matrices \(\Sigma_{1},\Sigma_{2}\in\mathbb{R}^{n\times n}\). From [18], the squared 2-Wasserstein distance between \(\mathbb{P}_{1}\) and \(\mathbb{P}_{2}\) is given by:
\[W_{2}(\mathbb{P}_{1},\mathbb{P}_{2})^{2}=\|\mu_{1}-\mu_{2}\|_{2}^{2}+\text{ trace}\left(\Sigma_{1}+\Sigma_{2}-2(\Sigma_{2}^{1/2}\Sigma_{1}\Sigma_{2}^{1/2} )^{1/2}\right) \tag{2}\]
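A minimal NumPy/SciPy sketch of Eq. (2) makes this concrete; the diagonal-covariance shortcut is the case that arises in Section 3 (function names are ours):

```python
import numpy as np
from scipy.linalg import sqrtm

def w2_squared_gaussians(mu1, Sigma1, mu2, Sigma2):
    """Squared 2-Wasserstein distance between N(mu1, Sigma1) and N(mu2, Sigma2), Eq. (2)."""
    root2 = sqrtm(Sigma2)
    cross = sqrtm(root2 @ Sigma1 @ root2)  # (Sigma2^{1/2} Sigma1 Sigma2^{1/2})^{1/2}
    return float(np.sum((mu1 - mu2) ** 2)
                 + np.trace(Sigma1 + Sigma2 - 2.0 * np.real(cross)))

def w2_squared_diag(mu1, sigma1, mu2, sigma2):
    """Same quantity when Sigma_i = diag(sigma_i^2): the trace term collapses
    to ||sigma1 - sigma2||^2, so no matrix square roots are needed."""
    return float(np.sum((mu1 - mu2) ** 2) + np.sum((sigma1 - sigma2) ** 2))
```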
Wasserstein distance has been used to train various machine learning models including classifiers [7], Boltzmann machines [16], and generative adversarial networks [2], where it is found to be a better loss metric than KL-divergence.
## 3 Proposed Framework for Knowledge Distillation
Figure 1: Framework for knowledge distillation from CombinedTM to ZeroShotTM.

Our framework for KD is shown in Figure 1. The teacher and the student models are both VAEs. The teacher \(T\) is a CombinedTM [3] that takes as input \(x\) a document encoded as the concatenation of the document's normalized BoW representation \(x_{\text{BoW}}\in\mathbb{R}^{V}\), where \(V\) is the vocabulary size, and its contextualized embedding \(x_{\text{ctx}}\) scaled to dimension \(V\) by a linear layer. The student is a ZeroShotTM [4]. While the student's encoder takes only the document's contextualized representation, its decoder still needs the BoW vector during training; the BoW is not necessary when we use only the trained encoder to infer the topics for a given document. The teacher's encoder is a multi-layer feed-forward neural network (FFNN) while we make the student's encoder an FFNN with one hidden layer.
A VAE-based topic model works as follows [24]. Suppose it has to learn \(K\) topics from a corpus. The VAE encoder having weights \(W\) learns the approximate posterior distribution \(q_{W}(z|x)\) represented by mean \(\mu\in\mathbb{R}^{K}\) and variance \(\sigma^{2}\in\mathbb{R}^{K}\) for an input instance \(x\). The decoder samples a vector \(z\sim q_{W}(z|x)\) using the reparameterization trick [13], and produces the document-topic vector \(\theta=\mathtt{softmax}(z)\), which is passed through a shallow FFNN with weight matrix \(\beta_{K\times V}\) to learn a distribution \(p_{\beta}(x|z)\). The VAE is trained by backpropagation to minimize the following loss \(\mathcal{L}_{\text{VAE}}\):
\[\mathcal{L}_{\text{VAE}}=\mathcal{L}_{\text{NLL}}+\mathcal{L}_{\text{KL}} \equiv-\mathbb{E}_{z\sim q_{W}(z|x)}\big{[}\log p_{\beta}(x|z)\big{]}+D_{\text {KL}}\big{(}q_{W}(z|x)\parallel p(z)\big{)} \tag{3}\]
where \(\mathcal{L}_{\text{NLL}}\) is the expected negative log-likelihood of the reconstructed BoW, and \(\mathcal{L}_{\text{KL}}\) is a regularizer measuring the KL-divergence of the encoder's output \(q_{W}(z|x)\) from the assumed prior \(p(z)\) of the latent distribution.
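A condensed PyTorch-style sketch of Eq. (3). For brevity we assume a standard normal prior \(p(z)\) (ProdLDA actually uses a Laplace approximation to the Dirichlet prior) and elide the encoder/decoder architectures; tensor names are ours:

```python
import torch

def vae_loss(x_bow, mu, log_var, beta):
    """Eq. (3): reconstruction NLL of the BoW plus KL(q_W(z|x) || p(z)).

    x_bow: (batch, V) bag-of-words; mu, log_var: (batch, K) encoder outputs;
    beta: (K, V) decoder topic-word weight matrix.
    """
    # Reparameterization trick: z = mu + sigma * eps, with eps ~ N(0, I)
    eps = torch.randn_like(mu)
    z = mu + torch.exp(0.5 * log_var) * eps
    theta = torch.softmax(z, dim=-1)          # document-topic vector

    # Expected negative log-likelihood of the reconstructed BoW
    log_recon = torch.log_softmax(theta @ beta, dim=-1)
    nll = -(x_bow * log_recon).sum(dim=-1)

    # Closed-form KL to a standard normal prior for diagonal Gaussians
    kl = 0.5 * (torch.exp(log_var) + mu ** 2 - 1.0 - log_var).sum(dim=-1)
    return (nll + kl).mean()
```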
Now suppose that the teacher has already been trained on a dataset to learn \(K\) topics, and that, after training, the weights of its encoder and decoder are \(W_{T}^{*}\) and \(\beta_{T}^{*}\), respectively. We will use this frozen teacher model to train the student _with KD_ to learn \(K\) topics from the same dataset and the same vocabulary. We denote this KD-trained student by \(S^{\prime}\). Let the weights in its encoder and decoder be \(W_{S^{\prime}}\) and \(\beta_{S^{\prime}}\), respectively, at the start of some iteration during the training of \(S^{\prime}\). Given an input instance \(x\), the student's loss function has two components:
_(i) Loss associated with student VAE:_ The VAE loss \(\mathcal{L}_{\text{VAE}}\) is given by Eq. (3).
_(ii) Loss associated with knowledge distillation:_ While training \(S^{\prime}\), every instance \(x\) is passed through both \(T\) and \(S^{\prime}\). Suppose the teacher's encoder outputs the \(K\)-variate Gaussian \(\mathcal{N}(z|\mu_{T},\sigma_{T}^{2})\) while the student's encoder outputs the \(K\)-variate Gaussian \(\mathcal{N}(z|\mu_{S^{\prime}},\sigma_{S^{\prime}}^{2})\). Note that instead of a full covariance matrix, a diagonal covariance matrix (encoded as a vector) is learned [3], [4]. Let \(\Sigma_{T}=\mathtt{diag}(\sigma_{T}^{2})\) and \(\Sigma_{S^{\prime}}=\mathtt{diag}(\sigma_{S^{\prime}}^{2})\), which are easily observed to be symmetric positive semi-definite. We calculate the squared 2-Wasserstein distance between the distributions learned by \(T\) and \(S^{\prime}\) using Eq. (2):
\[\mathcal{L}_{\text{KD-2W}}= \|\mu_{T}-\mu_{S^{\prime}}\|_{2}^{2}+\text{trace}\left(\Sigma_{T} +\Sigma_{S^{\prime}}-2(\Sigma_{S^{\prime}}^{1/2}\Sigma_{T}\Sigma_{S^{\prime}} ^{1/2})^{1/2}\right) \tag{4}\]
We propose to minimize \(\mathcal{L}_{\text{KD-2W}}\) so that the distribution learned by the student is pulled close to that of the teacher. The decoder of the teacher and that of the student produce unnormalized logits \(u_{T}=\beta_{T}^{\top}\theta\) and \(u_{S^{\prime}}=\beta_{S^{\prime}}^{\top}\theta\), respectively. We compute the cross-entropy loss \(\mathcal{L}_{\text{KD-CE}}\) between the soft labels \(\mathtt{softmax}(u_{T}/t)\) and \(\mathtt{softmax}(u_{S^{\prime}}/t)\) where \(t\) is the softmax temperature (hyperparameter) [12]. In addition to identifying the most probable class, the soft labels formed by a higher softmax temperature (\(t>1\)) capture the correlation between the labels,
which is desired in the distillation framework. The total loss due to KD is
\[\mathcal{L}_{\text{KD}}=\mathcal{L}_{\text{KD-2W}}+t^{2}\mathcal{L}_{\text{KD-CE}} \tag{5}\]
Finally, with \(\alpha\in[0,1]\) as a hyperparameter, the total loss for the student \(S^{\prime}\) is
\[\mathcal{L}_{S^{\prime}}=(1-\alpha)\mathcal{L}_{\text{VAE}}+\alpha\mathcal{L}_ {\text{KD}} \tag{6}\]
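Putting Eqs. (4)-(6) together, the student's training objective can be sketched as follows (PyTorch-style; the tensor shapes, the diagonal-covariance simplification of Eq. (4), and the assumption that teacher outputs are detached from the computation graph are ours):

```python
import torch

def kd_loss(mu_t, sigma_t, mu_s, sigma_s, u_t, u_s, t=2.0):
    """Eq. (5): L_KD = L_KD-2W + t^2 * L_KD-CE."""
    # Eq. (4) for diagonal covariances (sigma_* are standard deviations):
    # ||mu_T - mu_S'||^2 + ||sigma_T - sigma_S'||^2
    l_2w = ((mu_t - mu_s) ** 2).sum(dim=-1) + ((sigma_t - sigma_s) ** 2).sum(dim=-1)

    # Cross-entropy between soft labels softmax(u_T / t) and softmax(u_S' / t)
    p_teacher = torch.softmax(u_t / t, dim=-1)
    log_p_student = torch.log_softmax(u_s / t, dim=-1)
    l_ce = -(p_teacher * log_p_student).sum(dim=-1)

    return (l_2w + t ** 2 * l_ce).mean()

def student_loss(l_vae, l_kd, alpha=0.5):
    """Eq. (6): (1 - alpha) * L_VAE + alpha * L_KD."""
    return (1.0 - alpha) * l_vae + alpha * l_kd
```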
## 4 Experimental Setup
We have performed all experiments in OCTIS [25], which is an integrated framework for topic modeling. We use the following datasets from OCTIS: **20NG**, which contains \(16,309\) newsgroup documents on \(20\) different subjects [25], and **M10** comprising \(8355\) scientific publications from \(10\) distinct research areas [20]. For each dataset, the vocabulary contains the 2K most common words in the corpus. We represent each topic by its top-\(10\) words. We use **Normalized Pointwise Mutual Information** (**NPMI**) [15] and **Coherence Value** (**CV**) [14, 23] to measure topic coherence. NPMI of a topic is high if the words in the topic tend to co-occur. CV is calculated using an indirect cosine measure along with the NPMI score over a boolean sliding window. Higher values of NPMI and CV are better.
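For reference, a rough sketch of how a topic's NPMI can be computed from word co-occurrences (the experiments use OCTIS's implementation; the windowing, the smoothing constant, and the NPMI = -1 convention for non-co-occurring pairs below are simplifying assumptions, and we do not sketch the more involved CV measure):

```python
import itertools
import numpy as np

def topic_npmi(topic_words, contexts, eps=1e-12):
    """Average NPMI over all pairs of a topic's top words.

    contexts: one set of words per context (e.g., per sliding window);
    probabilities are estimated as context frequencies.
    """
    n = len(contexts)
    prob = lambda ws: sum(1 for c in contexts if ws <= c) / n
    scores = []
    for wi, wj in itertools.combinations(topic_words, 2):
        p_i, p_j, p_ij = prob({wi}), prob({wj}), prob({wi, wj})
        if p_ij == 0:
            scores.append(-1.0)  # pair never co-occurs
            continue
        pmi = np.log(p_ij / (p_i * p_j + eps))
        scores.append(pmi / (-np.log(p_ij) + eps))
    return float(np.mean(scores))
```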
The experiments are done for topic counts \(K\in\{20,50,100\}\) on the 20NG dataset and for topic counts \(K\in\{10,20,50,100\}\) on the M10 dataset, where \(20\) and \(10\) are the golden number of categories for 20NG and M10, respectively. We denote the teacher (CombinedTM) by \(\mathbf{T}\), the student (ZeroShotTM) by \(\mathbf{S}\), and the distilled student model (ZeroShotTM) by \(\mathbf{SKD}\). The encoder in \(\mathbf{T}\) uses \(768\)-dimensional contextualized sentence embeddings (SBERT) from paraphrase-distilroberta-base-v2. The encoders in \(\mathbf{S}\) and \(\mathbf{SKD}\) use \(384\)-dimensional SBERT embeddings from all-MiniLM-L6-v2 model.
Using the Bayesian optimization framework of OCTIS, we have calculated the optimal number of hidden layers \(H\) in the teacher's encoder (which takes as input the concatenation of a document's contextualized and BoW representations) from the set \(\{1,2,\ldots,10\}\) that maximizes the NPMI for the teacher. As shown in Table 1, on 20NG dataset, we found \(H=1\) for topic count \(K\in\{20,50\}\) and \(H=5\) for \(K=100\); on M10, we observed \(H=4\) for \(K=10\), \(H=5\) for \(K=20\), \(H=2\) for \(K=50\), and \(H=3\) for \(K=100\). Each hidden layer of the teacher contains \(100\) neurons.
We have tuned the hyperparameters \(\alpha\in[0,1]\) and \(t\in\{1,2,\ldots,5\}\) for \(\mathbf{SKD}\) in OCTIS. For performance analysis, we compare these models with **ProdLDA** [24], **NeuralLDA** [24], **Embedded Topic Model** (**ETM**) [6] and **LDA** [5], already implemented in OCTIS. We use the default parameters unless otherwise mentioned. All models are trained for 100 epochs with a batch size of 64. Each reported performance score is the median over 5 runs (except for **T**, where we use a single run as it must be frozen for KD).

Table 1: The optimal number of hidden layers \(H\) in the encoder of the teacher \(\mathbf{T}\) for each dataset and different topic counts \(K\).

| Dataset | \(K\) | \(H\) |
| --- | --- | --- |
| **20NG** | 20 | 1 |
| **20NG** | 50 | 1 |
| **20NG** | 100 | 5 |
| **M10** | 10 | 4 |
| **M10** | 20 | 5 |
| **M10** | 50 | 2 |
| **M10** | 100 | 3 |
## 5 Results
Models **S** and **SKD** contain the same number of parameters, which is smaller than that of **T**. The sizes of all the models depend on the SBERT dimension, the number and size of hidden layers, the number of topics, and the vocabulary size. For example, for 20 topics in 20NG, **T** takes 6.14 MB while **SKD** takes 2.74 MB (for parameters and buffers), a reduction in model size of 55.4%. In general, the compression ranged from 37.6% to 56.3%.
Fig. 2 shows the coherence scores for each topic model for all topic settings and datasets. **SKD** achieves the highest NPMI and CV scores. Among **T**, **S**, and **SKD**, we find **SKD** performs much better than **S** and even modestly better than **T**. On 20NG, the NPMI scores of (**T**, **S**, **SKD**) are \((0.125,0.106,0.132)\) for \(K=20\), \((0.121,0.098,0.130)\) for \(K=50\), and \((0.098,0.076,0.105)\) for \(K=100\), so the maximum gain of **SKD** over **S** is 38.2% and that over **T** is 7.4%. Similarly on M10, the NPMI scores are \((0.073,0.046,0.084)\) for \(K=10\), \((0.076,0.037,0.08)\) for \(K=20\), \((0.053,-0.027,0.073)\) for \(K=50\), and \((0.059,-0.06,0.07)\) for \(K=100\). Thus, on M10, **SKD** improves NPMI of **S** by over 100% for \(K\in\{50,100\}\), and that of **T** by at most 37.7%. Student outperforming the teacher is surprising but has been reported earlier for _supervised_ tasks [8, 29].
Figure 2: Coherence scores (**NPMI** and **CV**) for different topic models on two datasets: **20NG** and **M10**. The X-axis is marked with the topic counts used for each dataset.

When we removed either of the two loss terms from \(\mathcal{L}_{\text{KD}}\) in Eq. (5), the NPMI and CV of **SKD** dropped (see Table 2). Thus, although the simpler model and weaker SBERT lower the student's performance, the knowledge distilled from the teacher's encoder and decoder vastly improves it.
The higher performance of the contextualized topic models over other topic models agrees with similar results in [3, 4]. In Table 3, we compare qualitatively some aligned topics learned by \(\mathbf{T}\), \(\mathbf{S}\), and \(\mathbf{SKD}\) from the 20NG corpus. For the first three topics, \(\mathbf{SKD}\) displays more word overlap than \(\mathbf{S}\) with the corresponding topics from \(\mathbf{T}\), showing that \(\mathbf{T}\) and \(\mathbf{SKD}\) learn similar topic-word distributions. Interestingly, the fourth topic from \(\mathbf{SKD}\) contains more healthcare-related words than the fourth topic from \(\mathbf{T}\) although the latter is also primarily on healthcare; this shows that \(\mathbf{SKD}\) can produce more coherent topics than \(\mathbf{T}\).
## 6 Conclusion
We have proposed a 2-Wasserstein loss-based knowledge distillation framework to compress a contextualized topic model. Experiments on two datasets show that the distilled topic model produces topics whose coherence is better than that of the topics produced by the original student, and even by the larger teacher model. This is a new method for neural topic distillation. In the future, we would like to study it analytically and apply it to distill knowledge across other neural topic models.
Table 2: Ablation study for the distillation loss term defined in Eq. (5). For each metric, the median over five independent runs for each topic count is mentioned.

| KD-loss (\(\mathcal{L}_{\text{KD}}\)) | 20NG NPMI (\(K=20/50/100\)) | 20NG CV (\(K=20/50/100\)) | M10 NPMI (\(K=10/20/50/100\)) | M10 CV (\(K=10/20/50/100\)) |
| --- | --- | --- | --- | --- |
| \(\mathcal{L}_{\text{KD-2W}}+\mathcal{L}_{\text{KD-CE}}\) | **0.132** / **0.130** / **0.105** | **0.687** / **0.657** / **0.638** | **0.084** / **0.080** / **0.073** / **0.070** | **0.522** / **0.499** / **0.485** / **0.475** |
| \(\mathcal{L}_{\text{KD-2W}}\) | 0.109 / 0.114 / 0.089 | 0.659 / 0.638 / 0.615 | 0.051 / 0.049 / 0.037 / 0.043 | 0.498 / 0.479 / 0.459 / 0.452 |
| \(\mathcal{L}_{\text{KD-CE}}\) | 0.110 / 0.105 / 0.083 | 0.653 / 0.629 / 0.588 | 0.042 / 0.052 / 0.016 / 0.023 | 0.485 / 0.464 / 0.425 / 0.425 |
Table 3: Some selected topics output when \(\mathbf{T}\), \(\mathbf{S}\), and \(\mathbf{SKD}\) models are run on the 20NG corpus for 20 topics. If a word in a topic from \(\mathbf{S}\) or \(\mathbf{SKD}\) is shared with the corresponding topic in \(\mathbf{T}\), then it is in **bold**, otherwise it is in _italic_.

| Model | ID | Topic |
| --- | --- | --- |
| \(\mathbf{T}\) | 0 | gun, law, firearm, crime, weapon, assault, amendment, state, police, permit |
| \(\mathbf{T}\) | 11 | russian, turkish, people, village, genocide, armenian, muslim, population, greek, army |
| \(\mathbf{T}\) | 17 | oil, engine, ride, front, road, chain, bike, motorcycle, water, gas |
| \(\mathbf{T}\) | 3 | health, make, president, patient, medical, people, doctor, disease, work, year |
| \(\mathbf{S}\) | 0 | **law**, _people_, **state**, _government_, **gun**, **amendment**, _constitution_, **firearm**, **crime**, _privacy_ |
| \(\mathbf{S}\) | 1 | **armenian**, **village**, _soldier_, _soviet_, **muslim**, _troop_, **turkish**, **russian**, **genocide**, _land_ |
| \(\mathbf{S}\) | 17 | **engine**, _car_, _mile_, **ride**, **bike**, **oil**, **front**, _wheel_, **motorcycle**, _tire_ |
| \(\mathbf{S}\) | 7 | **medical**, **disease**, _study_, _treatment_, **doctor**, **patient**, **health**, _food_, _risk_, _percent_ |
| \(\mathbf{SKD}\) | 0 | **gun**, **law**, **weapon**, **firearm**, **amendment**, **crime**, _bill_, **assault**, _constitution_, **police** |
| \(\mathbf{SKD}\) | 11 | **turkish**, **genocide**, **armenian**, **russian**, **village**, **population**, _israeli_, _war_, _attack_, **muslim** |
| \(\mathbf{SKD}\) | 17 | **ride**, **engine**, _car_, **bike**, **motorcycle**, **front**, **oil**, _motor_, **road**, _seat_ |
| \(\mathbf{SKD}\) | 3 | **health**, **medical**, **doctor**, **disease**, **patient**, _insurance_, _treatment_, _drug_, _care_, _risk_ |
2302.02505 | A bijection between strongly stable and totally symmetric partitions | Artinian monomial ideals in $d$ variables correspond to $d$-dimensional
partitions. We define $d$-dimensional strongly stable partitions and show that
they correspond to strongly stable ideals in $d$ variables. We show a bijection
between strongly stable partitions and totally symmetric partitions which
preserves the side length of the minimal bounding box. | Seth Ireland | 2023-02-05T22:55:56Z | http://arxiv.org/abs/2302.02505v2 | # A bijection between strongly stable and totally symmetric partitions
###### Abstract.
Artinian monomial ideals in \(d\) variables correspond to \(d\)-dimensional partitions. We define \(d\)-dimensional strongly stable partitions and show that they correspond to strongly stable ideals in \(d\) variables. We then show a bijection between strongly stable partitions and totally symmetric partitions which preserves the side length of the minimal bounding box.
_Keywords_: Generic Initial Ideal, Borel Group, Borel Ideal, Symmetric Monomial Ideal, Strongly Stable Ideal, Strongly Stable Partition, Totally Symmetric Plane Partition, Plane Partition, Enumeration

_MSC classification_: Primary: 05A17; Secondary: 13F55
which depicts the 2-dimensional partition
\[P=\{(0,0),(0,1),(0,2),(1,0),(1,1),(2,0),(3,0)\}\in\mathcal{P}_{2}(4).\]
**Example 2.3**.: A 3-dimensional partition is commonly called a _plane partition_. An example of a plane partition of 10 is
\[P=\{(0,0,0),(0,0,1),(0,1,0),(0,1,1),(1,0,0),(1,0,1),(1,1,0),(1,2,0),(0,2,0),(2,0,0)\}\in\mathcal{P}_{3}(3)\]
which we can visualize with the diagram below
Plane partitions are also commonly represented in matrix notation. In matrix notation, the partition above could be written
\[\begin{array}{ccc}2&2&1\\ 2&1&1\\ 1&&\end{array}\]
**Definition 2.4**.: Let \(P\) be a \(d\)-dimensional partition. For a cell \(\alpha=(a_{1},\ldots,a_{d})\in P\), define the _\(j\)th arm length_ to be the largest integer \(h_{j}\) such that \((a_{1},a_{2},\ldots,a_{j}+h_{j},\ldots,a_{d})\in P\). Denote the vector of arm lengths by \(H(\alpha)=(h_{1},\ldots,h_{d})\) and call this the _Hook vector_ of \(\alpha\).
**Definition 2.5**.: A _strongly stable partition_ is a partition for which every cell's Hook vector is weakly increasing.
We will denote the set of strongly stable partitions which fit inside a box of side length \(n\) by
\[\tilde{\mathcal{B}}_{d}(n):=\{P\mid P\in\mathcal{P}_{d}(n)\text{ and }P\text{ is strongly stable}\}\]
**Example 2.6**.: An example of a 2-dimensional strongly stable partition \(P\in\tilde{\mathcal{B}}_{2}(7)\) is given below with Hook vectors given inside each cell.
Note that 2-dimensional strongly stable partitions are exactly the integer partitions with no repeats. These are commonly called _strict_ partitions.
**Example 2.7**.: An example of a 3-dimensional strongly stable partition \(P\in\tilde{\mathcal{B}}_{3}(4)\) is given below. One can verify that the Hook vectors of each cell are weakly increasing.
**Definition 2.8**.: A _totally symmetric partition_ is a partition for which \(\alpha\in P\implies\pi(\alpha)\in P\) for all \(\pi\in S_{d}\).
We denote the set of totally symmetric partitions which fit inside a box of side length \(n\) by
\[\tilde{\mathcal{T}}_{d}(n):=\{P\mid P\in\mathcal{P}_{d}(n)\text{ and }P\text{ is totally symmetric}\}\]
**Example 2.9**.: An example of a 2-dimensional totally symmetric partition \(P\in\tilde{\mathcal{T}}_{2}(7)\) is given below.
Note that 2-dimensional totally symmetric partitions are commonly called _self-conjugate_ partitions.
**Example 2.10**.: An example of a 3-dimensional totally symmetric partition (totally symmetric plane partition) \(P\in\tilde{\mathcal{T}}_{3}(4)\) is shown below.
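The definitions above are directly checkable by machine. A small Python sketch of our own, representing a partition as a set of integer tuples as in the examples:

```python
from itertools import permutations

def hook_vector(P, cell):
    """Arm lengths of Definition 2.4: h_j is the largest h with cell + h*e_j in P."""
    H = []
    for j in range(len(cell)):
        h, shifted = 0, list(cell)
        while True:
            shifted[j] += 1
            if tuple(shifted) not in P:
                break
            h += 1
        H.append(h)
    return H

def is_strongly_stable(P):
    """Definition 2.5: every cell's Hook vector is weakly increasing."""
    return all(all(H[i] <= H[i + 1] for i in range(len(H) - 1))
               for H in (hook_vector(P, c) for c in P))

def is_totally_symmetric(P):
    """Definition 2.8: P is closed under coordinate permutations."""
    return all(tuple(q) in P for c in P for q in permutations(c))

# The strict partition (4, 2, 1) as a 2-dimensional partition:
P = {(0, b) for b in range(4)} | {(1, b) for b in range(2)} | {(2, 0)}
print(is_strongly_stable(P), is_totally_symmetric(P))  # True False
```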
## 3. Monomial Ideals
Fix a positive integer \(n\) and let \(K\) be a field of characteristic zero. We will often refer to _monomials_ by their multi-exponent notation \(\mathbf{x}^{\alpha}:=x_{1}^{a_{1}}\cdots x_{d}^{a_{d}}\in K[x_{1},\ldots,x_{d}]\). In this notation, let \(deg(\alpha)=\sum_{i=1}^{d}a_{i}\). A _monomial ideal_ is an ideal \(I\subset K[x_{1},\ldots,x_{d}]\) which is generated by monomials. Every monomial ideal \(I\subset K[x_{1},\ldots,x_{d}]\) has a unique minimal subset of monomial generators \(G(I)\).
Elements \((a_{i,j})\in GL_{d}(K)\) induce a linear automorphism on \(K[x_{1},\ldots,x_{d}]\)
\[f(x_{1},\ldots,x_{d})\mapsto f\big{(}\sum_{i=1}^{d}a_{i,1}x_{i},\ldots,\sum_{ i=1}^{d}a_{i,d}x_{i}\big{)}\]
The _Borel group_ is the subgroup of \(GL_{d}(K)\) consisting of upper triangular matrices. The ideals which are fixed by the action of the Borel group on the variables \(x_{1},\ldots,x_{d}\) are called _Borel-fixed ideals_.
**Definition 3.1**.: Let \(I\subset K[x_{1},\ldots,x_{d}]\) be a monomial ideal. If \(m\frac{x_{i}}{x_{j}}\in I\) for all \(m\in I\) with \(x_{j}\mid m\) and \(i<j\), then \(I\) is called _strongly stable_.
We will refer to the defining property of strongly stable ideals as the _variable exchange condition_.
**Proposition 3.2**.: _Let \(I\) be a monomial ideal. If \(char(K)=0\), then \(I\) is strongly stable iff \(I\) is Borel-fixed._
Note that we are implicitly using the variable order \(x_{1}>x_{2}>\cdots>x_{d}\). By shuffling this order, we would get different (but equivalent) sets of strongly stable ideals. The next proposition shows that we can check if \(I\) is strongly stable by checking the variable exchange condition on generators of \(I\).
**Proposition 3.3**.: _Let \(I\subset K[x_{1},\ldots,x_{d}]\) be a monomial ideal. Suppose that for all \(m\in G(I)\) and all integers \(1\leq i<j\leq d\) such that \(x_{j}\mid m\), we have \(m\frac{x_{i}}{x_{j}}\in I\). Then \(I\) is strongly stable._
See [4] for proofs of the previous two propositions.
**Example 3.4**.: The ideal \(I=(x^{4},x^{3}y,x^{2}y^{3},xy^{4},y^{7})\) is strongly stable, because \(x^{3}y\frac{x}{y}=x^{4}\in I\), \(x^{2}y^{3}\frac{x}{y}=x^{3}y^{2}\in I\), \(xy^{4}\frac{x}{y}=x^{2}y^{3}\in I\), and \(y^{7}\frac{x}{y}=xy^{6}\in I\).
Given a set of monomials \(A=\{\mathbf{x}^{\alpha_{1}},\ldots,\mathbf{x}^{\alpha_{r}}\}\), we can consider the minimal subset of \(A\) which generates the same ideal. We will use the notation \(min(A)\) to refer to this minimal subset.
**Definition 3.5**.: Let \(m\in K[x_{1},\ldots,x_{d}]\) be a monomial. A _Borel move_ is an operation that sends \(m\) to a monomial \(m\frac{x_{i_{1}}}{x_{j_{1}}}\cdots\frac{x_{i_{s}}}{x_{j_{s}}}\), where \(i_{t}<j_{t}\) and \(m\) is divisible by \(x_{j_{t}}\) for all \(t\).
**Definition 3.6**.: A monomial ideal \(I\subset K[x_{1},\ldots,x_{d}]\) is a _strongly stable ideal_ if it is closed under Borel moves.
**Definition 3.7**.: Let \(A\) be a subset of monomials. Define \(Borel(A)\) to be the smallest strongly stable ideal containing \(A\). We call the monomials in \(A\)_Borel generators_ of \(Borel(A)\).
**Proposition 3.8**.: _Every strongly stable ideal \(I\) has a unique minimal set of Borel generators. Refer to this set as \(Bgens(I)\)._
**Proposition 3.9**.: _Suppose \(I\) is a strongly stable ideal and \(m\in I\). Then \(m\in Bgens(I)\) iff for all \(x_{q}\) dividing \(m\), \(\frac{m}{x_{q}}\notin I\) and \(m\frac{x_{q+1}}{x_{q}}\notin I\)._
Definitions 3.5, 3.6, 3.7 and Propositions 3.8 and 3.9 are due to [1]. Note that definitions 3.5 and 3.6 are equivalent to definition 3.1. Also, in [1], they refer to strongly stable ideals as _Borel ideals_. We have chosen to use the term strongly stable in this paper.
**Example 3.10**.: For the ideal \(I=(x^{4},x^{3}y,x^{2}y^{3},xy^{4},y^{7})\),
\[Bgens(I)=\{x^{3}y,xy^{4},y^{7}\}\]
**Definition 3.11**.: A monomial ideal \(I\) is _symmetric_ if it is closed under the action of the symmetric group \(S_{d}\) on the variables.
**Proposition 3.12**.: _A monomial ideal \(I\) is symmetric iff \(G(I)\) is closed under the action of the symmetric group \(S_{d}\) on the variables._
Proof.: (\(\Rightarrow\)) Let \(g\in G(I)\) and \(\pi\in S_{d}\). Since \(I\) is symmetric, \(\pi(g)\in I\). To see that \(\pi(g)\in G(I)\), suppose not. Then there exists \(j\) so that \(\frac{\pi(g)}{x_{j}}\in I\). Since \(I\) is symmetric, it follows that \(\pi^{-1}(\frac{\pi(g)}{x_{j}})=\frac{g}{x_{\pi^{-1}(j)}}\in I\), so \(g\notin G(I)\). Contradiction.
(\(\Leftarrow\)) Let \(u\in I\). Then there exists some \(g\in G(I)\) so that \(g\mid u\). It follows that \(\pi(g)\mid\pi(u)\) for all \(\pi\in S_{d}\), so \(\pi(u)\in I\).
Given a set of monomials \(A=\{\mathbf{x}^{\alpha_{1}},\ldots,\mathbf{x}^{\alpha_{r}}\}\), we can _symmetrize_ this set of monomials by letting \(S_{d}\) act on the variables. Denote this set by
\[sym(A):=\{\mathbf{x}^{\pi(\alpha_{j})}\mid\mathbf{x}^{\alpha_{j}}\in A;\pi\in S _{d}\}\]
When \(A=\{\mathbf{x}^{\alpha}\}\) consists of a single monomial, we refer to \(sym(A)\) as the _orbit_ of \(\mathbf{x}^{\alpha}\).
**Definition 3.13**.: A monomial \(\mathbf{x}^{\alpha}\) is a _pure power_ of \(x_{j}\) if \(\alpha=(a_{1},\ldots,a_{j},\ldots,a_{d})\) with \(a_{i}=0\) for \(i\neq j\).
**Definition 3.14**.: An ideal \(I\subset K[x_{1},\ldots,x_{d}]\) is _Artinian_ if the Krull dimension of \(R/I\) is zero.
Note that a monomial ideal is Artinian if and only if \(I\) contains a pure power of \(x_{j}\) for \(1\leq j\leq d\). Denote the set of Artinian ideals with the largest degree of any pure power equal
to \(n\) by \(\mathcal{A}_{d}(n)\). Denote the set of Artinian strongly stable ideals with pure power \(x_{d}^{n}\in G(I)\) by
\[\mathcal{B}_{d}(n):=\{I\in\mathcal{A}_{d}(n)\mid I\text{ is strongly stable}\}\]
Similarly, use
\[\mathcal{T}_{d}(n):=\{I\in\mathcal{A}_{d}(n)\mid I\text{ is symmetric}\}\]
to denote the set of Artinian symmetric monomial ideals with pure power \(x_{d}^{n}\in G(I)\).
**Proposition 3.15**.: _Let \(I\in\mathcal{B}_{d}(n)\) with pure power \(x_{d}^{n}\in G(I)\). Then for every pure power \(x_{j}^{k_{j}}\in G(I)\), we have \(k_{j}\leq n\)._
Proof.: We can use Borel moves to get \(x_{j}^{n}\in I\) for all \(j\). So every pure power \(x_{j}^{k_{j}}\in G(I)\) must satisfy \(k_{j}\leq n\).
**Proposition 3.16**.: _Let \(I\in\mathcal{T}_{d}(n)\) with pure power \(x_{d}^{n}\in G(I)\). Then for every pure power \(x_{j}^{k_{j}}\in G(I)\), we have \(k_{j}=n\)._
Proof.: This follows from Proposition 3.12.
## 4. Partitions Correspond to Artinian Monomial Ideals
**Proposition 4.1**.: _If \(I\in\mathcal{A}_{d}(n)\), then \(P=\{\alpha\mid\mathbf{x}^{\alpha}\notin I\}\in\mathcal{P}_{d}(n)\)._
Proof.: First, we need to show that \(P=\{\alpha\mid\mathbf{x}^{\alpha}\notin I\}\) is a partition. Just notice that if \(\alpha=(a_{1},\ldots,a_{d})\in P\), then \(\mathbf{x}^{\alpha}\notin I\). It follows that \(\mathbf{x}^{(a_{1},\ldots,a_{j}-1,\ldots,a_{d})}\notin I\) so \((a_{1},\ldots,a_{j}-1,\ldots,a_{d})\in P\). Since \(I\) is Artinian, \(P\) is finite.
For every cell \(\gamma=(c_{1},\ldots,c_{d})\in P\), we have \(c_{i}<n\), because \(I\) contains a pure power of every variable with degree less than or equal to \(n\). Since \(x_{d}^{n}\in G(I)\), \((0,0,\ldots,0,n-1)\in P\).
**Proposition 4.2**.: _If \(P\in\mathcal{P}_{d}(n)\), then \(\{\mathbf{x}^{\alpha}\mid\alpha\notin P\}\in\mathcal{A}_{d}(n)\)._
Proof.: Let \(I=\{\mathbf{x}^{\alpha}\mid\alpha\notin P\}\). It is clear that \(I\) is a monomial ideal. Since \((0,0,\ldots,n,\ldots,0,0)\notin P\), \(x_{j}^{n}\in I\) for \(1\leq j\leq d\).
There is some cell \(\gamma=(c_{1},\ldots,c_{j},\ldots,c_{d})\in P\) with \(c_{j}=n-1\) for some \(1\leq j\leq d\). By the definition of \(d\)-dimensional partition, \((0,0,\ldots,n-1,\ldots,0)\in P\), so \(x_{j}^{n}\in G(I)\).
The two propositions above allow us to define a map \(\varphi:\mathcal{A}_{d}(n)\to\mathcal{P}_{d}(n)\) by
\[\varphi(I)=\{\alpha\in\mathbb{N}^{d}\mid\mathbf{x}^{\alpha}\notin I\}\]
with inverse \(\varphi^{-1}:\mathcal{P}_{d}(n)\to\mathcal{A}_{d}(n)\) given by \(\varphi^{-1}(P)=\{\mathbf{x}^{\alpha}\mid\alpha\notin P\}\).
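The map \(\varphi\) is easy to realize computationally for an Artinian monomial ideal given by its generators; a short Python sketch of our own (by the proof of Proposition 4.1, every cell lies in the box \([0,n)^{d}\)):

```python
from itertools import product

def divides(g, a):
    """x^g divides x^a iff g <= a coordinatewise."""
    return all(gi <= ai for gi, ai in zip(g, a))

def phi(gens, n, d):
    """varphi(I) = {alpha : x^alpha not in I} for generator exponent vectors gens."""
    return {a for a in product(range(n), repeat=d)
            if not any(divides(g, a) for g in gens)}

# Example 4.3 below: I = (x^4, x^2 y, x y^2, y^3)
gens = [(4, 0), (2, 1), (1, 2), (0, 3)]
print(sorted(phi(gens, 4, 2)))
# [(0, 0), (0, 1), (0, 2), (1, 0), (1, 1), (2, 0), (3, 0)] -- Example 2.2
```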
**Example 4.3**.: Let \(I=(x^{4},x^{2}y,xy^{2},y^{3})\in\mathcal{A}_{2}(4)\). Then \(\varphi(I)\) is the partition from Example 2.2 and is shown below with \(\bullet\) used to represent generators of \(I\).
**Example 4.4**.: Let \(I=(x^{3},x^{2}y,y^{3},x^{2}z,xyz,y^{2}z,z^{2})\in\mathcal{A}_{3}(3)\). Then \(\varphi(I)\) is the plane partition from Example 2.3 shown below.
We will show that we can restrict \(\varphi\) to a bijection between strongly stable ideals and strongly stable partitions and between symmetric monomial ideals and totally symmetric partitions.
**Proposition 4.5**.: _A monomial ideal \(I\) is strongly stable iff \(\varphi(I)\) is a strongly stable partition._
Proof.: \((\Rightarrow)\) Let \(\beta\in\varphi(I)\) so that \(\mathbf{x}^{\beta}\notin I\) and consider \(H(\beta)=(h_{1},\ldots,h_{d})\). Fix \(1\leq i<j\leq d\). We have \((b_{1},\ldots,b_{i},\ldots,b_{j}+h_{j}+1,\ldots,b_{d})\notin\varphi(I)\), and we can use the fact that \(I\) is strongly stable to get
\[\mathbf{x}^{(b_{1},\ldots,b_{i},\ldots,b_{j}+h_{j}+1,\ldots,b_{d})}\in I\implies \mathbf{x}^{(b_{1},\ldots,b_{i}+h_{j}+1,\ldots,b_{j},\ldots,b_{d})}\in I\]
so \((b_{1},\ldots,b_{i}+h_{j}+1,\ldots,b_{j},\ldots,b_{d})\notin P\). Therefore, \(h_{i}<h_{j}+1\implies h_{i}\leq h_{j}\).
\((\Leftarrow)\) Let \(\mathbf{x}^{\alpha}\in G(I)\) such that \(x_{j}\mid\mathbf{x}^{\alpha}\). Then \(\frac{\mathbf{x}^{\alpha}}{x_{j}}=\mathbf{x}^{(a_{1},\ldots,a_{j}-1,\ldots,a_{ d})}\notin I\), so \(\tilde{\alpha}=(a_{1},\ldots,a_{j}-1,\ldots,a_{d})\in\varphi(I)\). If \(H(\tilde{\alpha})=(h_{1},\ldots,h_{d})\), then \(h_{j}=1\) because \(\alpha\notin P\). Since \(\varphi(I)\) is a strongly stable partition, \(h_{i}\leq h_{j}=1\) for \(i<j\). In particular, \((a_{1},\ldots,a_{i}+1,\ldots,a_{j}-1,\ldots,a_{d})\notin I\). So \(\mathbf{x}^{\alpha}\frac{x_{i}}{x_{j}}\in I\).
**Proposition 4.6**.: _A monomial ideal \(I\) is symmetric iff \(\varphi(I)\) is a totally symmetric partition._
Proof.: \((\Rightarrow)\) Let \(\beta\in\varphi(I)\) and let \(\pi\in S_{d}\). Since \(\mathbf{x}^{\beta}\notin I\), \(\mathbf{x}^{\pi(\beta)}\notin I\), so \(\pi(\beta)\in\varphi(I)\).
\((\Leftarrow)\) Let \(\mathbf{x}^{\alpha}\in I\). Then \(\alpha\notin\varphi(I)\implies\pi(\alpha)\notin\varphi(I)\), so \(\mathbf{x}^{\pi(\alpha)}\in I\).
In this section, we have shown that \(\varphi\mid_{\mathcal{B}_{d}(n)}:\mathcal{B}_{d}(n)\to\tilde{\mathcal{B}}_{d} (n)\) and \(\varphi\mid_{\mathcal{T}_{d}(n)}:\mathcal{T}_{d}(n)\to\tilde{\mathcal{T}}_{d}(n)\) are bijections. In the next section, we show a bijection \(\mathcal{B}_{d}(n)\to\mathcal{T}_{d}(n)\).
## 5. Bijection Between Monomial Ideals
Denote the set of all monomials in \(K[x_{1},\ldots,x_{d}]\) by \(M_{d}\) and the set of monomials with weakly increasing exponent vector by \(F_{d}\). Then use \(\mathcal{F}_{d}(n)\) to denote the set of minimal subsets of \(F_{d}\) which contain \(x_{d}^{n}\) and no coordinate of any exponent vector exceeding \(n\). Define a map \(\psi:M_{d}\to F_{d}\) by
\[\psi(\mathbf{x}^{(a_{1},a_{2},\ldots,a_{d})})=\mathbf{x}^{(a_{1},a_{1}+a_{2},a _{1}+a_{2}+a_{3},\ldots,a_{1}+a_{2}+\cdots+a_{d})}\]
and notice that \(\psi^{-1}:F_{d}\to M_{d}\) is given by
\[\psi^{-1}(\mathbf{x}^{(a_{1},\ldots,a_{d})})=\mathbf{x}^{(a_{1},a_{2}-a_{1},a_ {3}-a_{2},\ldots,a_{d}-a_{d-1})}\]
**Proposition 5.1**.: \(\psi(m\frac{x_{q+1}}{x_{q}})=\frac{\psi(m)}{x_{q}}\)
Proof.: Write \(m=\mathbf{x}^{(a_{1},\dots,a_{d})}\). Then
\[\psi(m\tfrac{x_{q+1}}{x_{q}})=\psi(\mathbf{x}^{(a_{1},\dots,a_{q}-1,a_{q+1}+1,\dots,a_{d})})=\mathbf{x}^{(a_{1},\,a_{1}+a_{2},\,\dots,\,a_{1}+\dots+a_{q}-1,\,a_{1}+\dots+a_{q+1},\,\dots,\,a_{1}+\dots+a_{d})}=\frac{\psi(m)}{x_{q}},\]
since the \(-1\) in position \(q\) and the \(+1\) in position \(q+1\) cancel in every partial sum past position \(q\).
For a subset of monomials \(A=\{\mathbf{x}^{\alpha_{1}},\dots,\mathbf{x}^{\alpha_{r}}\}\), we will use \(\psi(A):=\{\psi(\mathbf{x}^{\alpha_{1}}),\dots,\psi(\mathbf{x}^{\alpha_{r}})\}\). For an ideal \(I\), we will use \(\psi(I)\) to refer to the ideal generated by \(\psi(G(I))\).
**Proposition 5.2**.: _For a strongly stable ideal \(I\), \(m\in I\iff\psi(m)\in\psi(I)\)._
Proof.: (\(\Rightarrow\)) If \(m\in I\), then there exists \(g\in G(I)\) so that \(g\mid m\). If \(g=\mathbf{x}^{(c_{1},\dots,c_{d})}\) and \(m=\mathbf{x}^{(a_{1},\dots,a_{d})}\), then \(c_{i}\leq a_{i}\) for \(1\leq i\leq d\). It follows that \(c_{1}+\dots+c_{i}\leq a_{1}+\dots+a_{i}\) for all \(i\). So \(\psi(g)\mid\psi(m)\).
(\(\Leftarrow\)) Assume \(\psi(m)\in\psi(I)\) and let \(k_{i}:=a_{1}+\dots+a_{i}-(c_{1}+\dots+c_{i})\) so that
\[\mathbf{x}^{(k_{1},\dots,k_{d})}\psi(g)=\psi(m)\]
Note that \(k_{i}=a_{i}-c_{i}+k_{i-1}\) for \(i=2,\dots,d\). Then we have
\[\tilde{g} :=g\frac{x_{1}^{k_{1}}}{x_{2}^{k_{1}}}\frac{x_{2}^{k_{2}}}{x_{3}^ {k_{2}}}\cdots\frac{x_{d-1}^{k_{d-1}}}{x_{d}^{k_{d-1}}}\] \[=g\frac{x_{1}^{k_{1}-k_{2}}}{x_{2}^{k_{1}-k_{2}}}\frac{x_{1}^{k_{ 2}-k_{3}}}{x_{3}^{k_{2}-k_{3}}}\cdots\frac{x_{1}^{k_{d-2}-k_{d-1}}}{x_{d-1}^{k_ {d-2}-k_{d-1}}}\frac{x_{1}^{k_{d-1}}}{x_{d}^{k_{d-1}}}\] \[=\mathbf{x}^{(c_{1}+k_{1},c_{2}+k_{2}-k_{1},c_{3}+k_{3}-k_{2}, \dots,c_{d-1}+k_{d-1}-k_{d-2},c_{d}-k_{d-1})}\] \[=\mathbf{x}^{(a_{1},a_{2},\dots,a_{d-1},c_{d}-k_{d-1})}\]
since \(c_{i}+k_{i}-k_{i-1}=c_{i}+(a_{i}-c_{i}+k_{i-1})-k_{i-1}=a_{i}\). For the last coordinate, we have
\[c_{d}-k_{d-1} =c_{d}-(a_{1}+\dots+a_{d-1}-(c_{1}+\dots+c_{d-1}))\] \[=c_{1}+\dots+c_{d}-(a_{1}+\dots+a_{d-1})\] \[\leq a_{d}\]
We have shown that \(\tilde{g}\mid m\). Now, we claim that \(\tilde{g}\in I\) because it is a Borel move from \(g\). To see this, notice that \(k_{i-1}-k_{i}=c_{i}-a_{i}\) for \(i=2,\dots,d\) and \(k_{d-1}=a_{1}+\dots+a_{d-1}-(c_{1}+\dots+c_{d-1})\leq c_{d}\). So \(g\) is divisible by each of the denominators in the second line of \(\tilde{g}\).
**Proposition 5.3**.: _If \(I\) is a strongly stable ideal, then \(Bgens(I)=\psi^{-1}(min(\psi(G(I))))\)._
Proof.: (\(\subseteq\)) Let \(m\in Bgens(I)\subset G(I)\). For any \(x_{q}\mid\psi(m)\), \(\frac{\psi(m)}{x_{q}}=\psi(m\frac{x_{q+1}}{x_{q}})\notin\psi(I)\) because \(m\frac{x_{q+1}}{x_{q}}\notin I\). Therefore, \(\psi(m)\in min(\psi(G(I)))\).
(\(\supseteq\)) Let \(m\in\psi^{-1}(min(\psi(G(I))))\subseteq G(I)\). Since \(m\in G(I)\), we have \(\frac{m}{x_{q}}\notin I\) for every \(x_{q}\mid m\). Moreover, \(\psi(m)\) is minimal, so \(\frac{\psi(m)}{x_{q}}=\psi(m\frac{x_{q+1}}{x_{q}})\notin\psi(I)\), and it follows that \(m\frac{x_{q+1}}{x_{q}}\notin I\). By Proposition 3.9, \(m\in Bgens(I)\).
Note that this gives an algorithm to compute \(Bgens(I)\) given a generating set \(G(I)\). As a result of Proposition 3.15, observe that for any \(I\in\mathcal{B}_{d}(n)\), we have
\[\mathbf{x}^{\alpha}\in G(I)\implies deg(\alpha)\leq n.\]
It follows that every coordinate of \(\psi(\mathbf{x}^{\alpha})\) is less than or equal to \(n\). This observation combined with the previous proposition allows us to define a map
\[\Lambda:\mathcal{B}_{d}(n) \rightarrow\mathcal{F}_{d}(n)\] \[I \mapsto\psi(Bgens(I)).\]
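This algorithm is short enough to state in code; the following Python sketch (ours) reproduces Example 3.10:

```python
from itertools import accumulate

def psi(a):
    """psi: exponent vector -> vector of partial sums."""
    return tuple(accumulate(a))

def psi_inv(a):
    """Inverse of psi: first differences."""
    return tuple(x - y for x, y in zip(a, (0,) + a[:-1]))

def divides(g, a):
    return all(gi <= ai for gi, ai in zip(g, a))

def minimal(monos):
    """min(A): elements of A not divisible by another element of A."""
    return {a for a in monos if not any(divides(b, a) for b in monos if b != a)}

def bgens(gens):
    """Proposition 5.3: Bgens(I) = psi^{-1}(min(psi(G(I))))."""
    return {psi_inv(m) for m in minimal({psi(g) for g in gens})}

# Example 3.10: G(I) for I = (x^4, x^3 y, x^2 y^3, x y^4, y^7)
G = [(4, 0), (3, 1), (2, 3), (1, 4), (0, 7)]
print(sorted(bgens(G)))  # [(0, 7), (1, 4), (3, 1)] = {y^7, x y^4, x^3 y}
```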
**Proposition 5.4**.: \(\Lambda:\mathcal{B}_{d}(n)\rightarrow\mathcal{F}_{d}(n)\) _is a bijection._
Proof.: We claim that \(\Lambda^{-1}:\mathcal{F}_{d}(n)\rightarrow\mathcal{B}_{d}(n)\) is given by \(S\mapsto Borel(\psi^{-1}(S))\). Since \(x_{d}^{n}\in S\), \(\psi^{-1}(x_{d}^{n})=x_{d}^{n}\in\psi^{-1}(S)\). There are no other pure powers of \(x_{d}\) in \(\psi^{-1}(S)\), so \(x_{d}^{n}\in G(I)\). It follows that \(Borel(\psi^{-1}(S))\in\mathcal{B}_{d}(n)\) and
\[(\Lambda^{-1}\circ\Lambda)(I) =Borel(\psi^{-1}(\psi(Bgens(I))))\] \[=I.\]
**Proposition 5.5**.: _If \(I\subset K[x_{1},\dots,x_{d}]\) is a symmetric monomial ideal, then \(sym(G(I)\cap F_{d})=G(I)\)._
Proof.: (\(\subseteq\)) If \(u\in sym(G(I)\cap F_{d})\), then \(u=\pi(m)\) for some \(m\in G(I)\cap F_{d}\) and some \(\pi\in S_{d}\). Since \(G(I)\) is closed under operations of \(S_{d}\), \(u\in G(I)\).
(\(\supseteq\)) If \(u\in G(I)\), then there exists some \(\pi\in S_{d}\) so that \(\pi(u)\in G(I)\cap F_{d}\), so \(u\in sym(G(I)\cap F_{d})\).
For an ideal \(I\in\mathcal{T}_{d}(n)\), define a map \(\Omega:\mathcal{F}_{d}(n)\rightarrow\mathcal{T}_{d}(n)\) by
\[\Omega(A)=ideal(sym(A))\]
**Proposition 5.6**.: \(\Omega:\mathcal{F}_{d}(n)\rightarrow\mathcal{T}_{d}(n)\) _is a bijection_
Proof.: We claim that the inverse \(\Omega^{-1}:\mathcal{T}_{d}(n)\rightarrow\mathcal{F}_{d}(n)\) is given by \(\Omega^{-1}(I)=G(I)\cap F_{d}\). Notice that for \(I\in\mathcal{T}_{d}(n)\), we have \(x_{d}^{n}\in G(I)\), so \(x_{d}^{n}\in G(I)\cap F_{d}\). It follows that \(\Omega^{-1}(I)\in\mathcal{F}_{d}(n)\) and
\[\Omega(\Omega^{-1}(I)) =\Omega(G(I)\cap F_{d})\] \[=ideal(sym(G(I)\cap F_{d}))\] \[=I.\]
**Theorem 1**.: _There is a bijection between \(d\)-dimensional strongly stable partitions and \(d\)-dimensional totally symmetric partitions which preserves the side length \(n\) of the minimal \(d\)-dimensional box containing the partitions._
Proof.: We have the diagram of bijections below. The vertical bijections were given in section 4 and the horizontal map \(\Omega\circ\Lambda\) was shown to be a bijection by Propositions 5.4 and 5.6.
\[\begin{CD}\mathcal{B}_{d}(n)@>{\Omega\circ\Lambda}>>\mathcal{T}_{d}(n)\\ @V{\varphi}VV@VV{\varphi}V\\ \tilde{\mathcal{B}}_{d}(n)@>>>\tilde{\mathcal{T}}_{d}(n)\end{CD}\]
**Example 5.7**.: Consider the strongly stable ideal \(I=(x^{4},x^{3}y,x^{2}y^{3},xy^{4},y^{7})\in\mathcal{B}_{2}(7)\) from Example 3.10. The corresponding partition \(\varphi(I)\in\tilde{\mathcal{B}}_{2}(7)\) from Example 2.6 is shown on the left below. Minimal generators of \(I\) which are not in \(Bgens(I)\) are represented with \(\bullet\). Elements of \(Bgens(I)\) are represented by \(\ast\). The middle diagram depicts \(\psi(I)\) and illustrates that \(\psi(Bgens(I))=min(\psi(G(I)))\). The diagram on the right is the corresponding totally symmetric partition.
**Remark 5.8**.: For any \(\mathbf{x}^{\alpha}\notin I\), we have \(\psi(\mathbf{x}^{\alpha})\notin\Lambda(I)\). Since \(\psi(\mathbf{x}^{\alpha})\) is the unique representative of its orbit, the number of orbits of monomials not in \(\Omega(\Lambda(I))\) is exactly the total number of monomials not in the original ideal \(I\). In partition language, this means that the number of cells in the strongly stable partition is exactly the number of orbits in the totally symmetric partition. In the example above, there are 15 cells in the strongly stable partition on the left which correspond to 15 orbits in the totally symmetric partition on the right.
**Example 5.9**.: Consider the strongly stable partition \(P\in\tilde{\mathcal{B}}_{3}(4)\) from Example 2.7 shown below.
One can check that \(I=(x^{2},xy^{2},y^{3},xyz,y^{2}z,xz^{2},yz^{3},z^{4})\in\mathcal{B}_{3}(4)\) is the corresponding strongly stable ideal \((\varphi(I)=P)\) and that \(Bgens(I)=\{x^{2},xz^{2},y^{2}z,z^{4}\}\). To compute the
totally symmetric partition corresponding to \(P\), we first compute
\[\Omega(\Lambda(I)) =\Omega(\psi(Bgens(I)))\] \[=\Omega(\{x^{2}y^{2}z^{2},xyz^{3},y^{2}z^{3},z^{4}\})\] \[=ideal(sym(\{x^{2}y^{2}z^{2},xyz^{3},y^{2}z^{3},z^{4}\}))\]
Finally, notice that \(\varphi(ideal(sym(\{x^{2}y^{2}z^{2},xyz^{3},y^{2}z^{3},z^{4}\})))\in\tilde{ \mathcal{T}}_{3}(4)\) is the partition shown below.
## 6. Enumerations
In this last section, we will explore the enumeration of strongly stable partitions (equivalently totally symmetric partitions) in boxes. Let
\[B_{d}(n):=\sum_{k=0}^{n}|\tilde{\mathcal{B}}_{d}(k)|\]
denote the number of \(d\)-dimensional strongly stable partitions which fit in a box of side length \(n\). Similarly, use
\[T_{d}(n):=\sum_{k=0}^{n}|\tilde{\mathcal{T}}_{d}(k)|\]
By convention, we count the empty partition. As a result, we have \(B_{d}(0)=T_{d}(0)=1\) since the empty partition fits inside a box of side length \(0\). As a result of Theorem 1,
\[B_{d}(n)=T_{d}(n)\]
for positive integers \(d,n\).
In [3], Hawkes shows a bijection between \(d\)-dimensional totally symmetric partitions inside a box of side length \(n\) and \((n-1)\)-dimensional totally symmetric partitions inside a box of side length \(d+1\). As a result, \(T_{d}(n)=T_{n-1}(d+1)\) for \(n\geq 2\). Combining this with the formula above, we have
\[B_{d}(n)=B_{n-1}(d+1)\]
for positive integers \(d,n\).
Lastly, we consider some formulae for \(B_{d}(n)\) when \(d\) is fixed. When \(d=1\), partitions are simply nonnegative integers, and every \(1\)-dimensional partition is (trivially) strongly stable.
\[B_{1}(n)=T_{1}(n)=n+1\]
The number of \(2\)-dimensional strongly stable partitions which fit inside a box of side length \(n\) (strict integer partitions with largest part \(n\)) is given by
\[B_{2}(n)=T_{2}(n)=2^{n},\]
because for \(1\leq i\leq n\), we have only the choice of whether or not to include \(i\) in the integer partition.
In [6], Stembridge proved that the number of totally symmetric plane partitions which fit inside a box of side length \(n\) is given by the product formula
\[T_{3}(n)=\prod_{1\leq i\leq j\leq k\leq n}\frac{i+j+k-1}{i+j+k-2}\]
and so we have a formula for \(B_{3}(n)\).
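Stembridge's product formula can be sanity-checked against brute-force enumeration for very small boxes; an illustrative Python sketch of our own (exponential-time, only for tiny \(n\)):

```python
from itertools import product, permutations
from fractions import Fraction

def is_partition(P):
    """Downward-closed finite subset of N^d."""
    return all(c[:j] + (c[j] - 1,) + c[j + 1:] in P
               for c in P for j in range(len(c)) if c[j] > 0)

def is_totally_symmetric(P):
    return all(tuple(q) in P for c in P for q in permutations(c))

def count_ts(d, n):
    """T_d(n): totally symmetric d-dim partitions in an n-box, empty included."""
    box = list(product(range(n), repeat=d))
    count = 0
    for mask in range(2 ** len(box)):
        P = {box[i] for i in range(len(box)) if mask >> i & 1}
        if is_partition(P) and is_totally_symmetric(P):
            count += 1
    return count

def stembridge(n):
    r = Fraction(1)
    for i in range(1, n + 1):
        for j in range(i, n + 1):
            for k in range(j, n + 1):
                r *= Fraction(i + j + k - 1, i + j + k - 2)
    return r

print([stembridge(n) for n in range(1, 5)])  # [2, 5, 16, 66]
for n in (1, 2):                             # 2^(n^3) subsets: keep n tiny
    assert count_ts(3, n) == stembridge(n)
```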
In fact, in the case \(d=3\), an even stronger result has been shown for totally symmetric plane partitions. The \(q\)-TSPP formula
\[\sum_{P\in\overline{\mathcal{T}}_{3}(n)}q^{|P/S_{3}|}=\prod_{1\leq i\leq j \leq k\leq n}\frac{1-q^{i+j+k-1}}{1-q^{i+j+k-2}}\]
was shown to be the orbit-counting generating function for totally symmetric plane partitions fitting inside an \(n\times n\times n\) box in [5]. Remark 5.8 implies that this same formula is a _cell_-counting generating function for strongly stable plane partitions.
There is no known formula for \(B_{d}(n)=T_{d}(n)\) for \(d\geq 4\)[2]. |
2304.11191 | Relaxation breakdown and resonant tunneling in ultrastrong-coupling
cavity QED | We study the open relaxation dynamics of an asymmetric dipole that is
ultrastrongly coupled to a single electromagnetic cavity mode. By using a
thermalizing master equation for the whole interacting system we derive a phase
diagram of the Liouvillian gap. It emerges that the ultrastrong coupling
inhibits the system's relaxation toward the equilibrium state due to an
exponential suppression of the dipole tunneling rate. However, we find that
polaronic multi-photon resonances restore fast relaxation by a cavity-mediated
dipole resonant tunneling process. Aside from the numerical evidence, we develop
a fully analytical description by diagonalizing the Rabi model through a
generalized rotating-wave approximation, valid in the so-called polaron frame.
The relaxation physics of such ultrastrong-coupling systems is then reduced to
a multi-photon polaron version of the standard text-book dressed states
picture. Finally, we discuss an extension to a multi-well dipole that can form
the basis of a cascaded resonant-tunneling setup in the ultrastrong coupling
regime. | Daniele De Bernardis | 2023-04-21T18:00:36Z | http://arxiv.org/abs/2304.11191v2 | # Relaxation breakdown and resonant tunneling in ultrastrong-coupling cavity QED
###### Abstract
We study the open relaxation dynamics of an asymmetric dipole that is ultrastrongly coupled to a single electromagnetic cavity mode. By using a thermalizing master equation for the whole interacting system we derive a phase diagram of the Liouvillian gap. It emerges that the ultrastrong coupling inhibits the system's relaxation toward the equilibrium state due to an exponential suppression of the dipole tunneling rate. However, we find that polaronic multi-photon resonances restore fast relaxation by a cavity-mediated dipole resonant tunneling process. Aside from the numerical evidence, we develop a fully analytical description by diagonalizing the Rabi model through a generalized rotating-wave approximation, valid in the so-called polaron frame. The relaxation physics of such ultrastrong-coupling systems is then reduced to a multi-photon polaron version of the standard textbook dressed-states picture. Finally, we discuss an extension to a multi-well dipole that can form the basis of a cascaded resonant-tunneling setup in the ultrastrong coupling regime.
## I Introduction
Relaxation from a metastable state toward equilibrium is a central problem in many branches of physics, such as chemical reactions, radioactive decay and electronic transport, to name a few [1]. The energy barrier separating a local minimum from the stable equilibrium, i.e. the activation barrier of chemical reactions [2], can be overcome by thermal fluctuations, whereby, after an initial absorption of energy from the bath, the system is kicked out of the metastable state, rolling down to its absolute equilibrium state and releasing the excess energy.
In quantum physics similar processes can happen driven only by quantum fluctuations. When the temperature is too small to kick the system over the metastable energy barrier, relaxation is then dominated by the _tunnel effect_ (or quantum tunneling), which is one of the first surprising consequences of the quantum theory [3].
Following the intuition that quantum fluctuations replace thermal ones in kicking the system out of metastability, one might speculate that coupling such systems to a supplemental quantum reservoir could appreciably alter the tunneling dynamics. The work of Leggett et al. on tunneling systems coupled to an environment [4; 5] has shown that this is indeed the case: tunneling can change significantly as a function of the environment parameters. Since for most systems the natural environment is provided by electromagnetic radiation, quantum tunneling here crosses paths with another fundamental concept of quantum physics, the non-empty vacuum of quantum electrodynamics (QED) [6], raising the question: can vacuum fluctuations of the electromagnetic field affect tunneling and relaxation in material systems?
Considering only free-space radiation, the immediate answer is that the electromagnetic vacuum does not give any appreciable contribution to the tunneling dynamics, because vacuum effects scale with the small fine-structure constant \(\alpha_{\rm fs}\simeq 1/137\), giving rise to only minor modifications of materials [7].
However, when the electromagnetic field is confined, as in cavity QED setups, vacuum effects are boosted by an effective fine-structure constant \(\alpha_{\rm eff}>1\)[8; 9], entering the so-called _ultrastrong coupling regime_[10; 11]. Here the electromagnetic vacuum is no longer empty, being heavily populated by virtual photons [12]. This suggests an important interplay between the coupling to a cavity field and tunneling relaxation.
Experiments have already shown that this intuition is well founded, in particular in chemical reactions and electronic transport, where strong differences are observed when molecules, atoms or electrons are enclosed in such an extreme resonant electromagnetic environment [13; 14; 15; 16].
All these exciting observations have stimulated multiple theoretical debates in various communities, opening new research lines such as polaritonic chemistry [17; 18; 19], cavity QED control of electronic transport in mesoscopic devices or in quantum Hall systems [20; 21], and cavity QED modification of ferromagnetism, ferroelectricity and superconductivity [9; 22; 23; 24; 25; 26], all with the general aim of exploring and understanding to what degree the quantum vacuum of cavity QED can be a resource to modify and control the properties of matter [27; 28; 29].
Because of the complexity of treating the non-perturbative nature of these ultrastrong-coupling light-matter systems, at the beginning of this debate the focus was mainly on closed systems, trying to understand the correct way to write down these Hamiltonians with the sole intention of exploring their ground-state physics. Recently, however, there has also been growing interest in their out-of-equilibrium dynamics [30; 31; 32; 33; 34; 35], which is still a largely unexplored territory. Here even the simplest case, relaxation toward thermal equilibrium, can exhibit surprising non-trivial features.
In this article we address exactly this problem, studying how the electromagnetic vacuum of ultrastrong-coupling cavity QED can affect the relaxation toward equilibrium of a polarizable material. In order to isolate each individual effect we consider a simple paradigmatic setup:
an asymmetric double-well dipole in a single-mode resonant cavity. Its low-energy dynamics can be approximated by the _quantum Rabi model_, which is the simplest theoretical framework to study light-matter interactions. We complete the description of the model by including two basic dissipative mechanisms: Ohmic cavity dissipation and dipole radiative losses. Under these circumstances we describe the system relaxation dynamics through a master equation valid for arbitrary light-matter coupling values, whose steady state is the correct thermal equilibrium state. From the spectral gap \(\lambda\) of its Liouvillian operator we derive a phase diagram describing how relaxation toward equilibrium is modified by the coupling to the cavity.
Differently from previous studies [30; 31; 32], here we exploit a generalized rotating-wave approximation of the Rabi model, from which we analytically derive the transition rates of the master equation in the ultrastrong coupling regime. From this calculation we obtain a complete and simple picture of how relaxation and thermalization work in terms of polaronic dressed states.
As a result, we show that the ultrastrong coupling exponentially suppresses the relaxation rate of the dipole, being a prototype for the so-called localization transition in the spin-boson model [5]. However, special resonances exist where relaxation is restored thanks to a cavity-mediated multi-photon resonant tunneling process, in very close analogy to Franck-Condon physics describing electron tunneling assisted by vibrational transitions [36; 37; 38; 39]. After showing that this mechanism is already observable in current experimental platforms such as superconducting circuits, we comment on the possible consequences for cavity-assisted quantum transport and cascaded ultrastrong-coupling setups with multi-well dipoles.
The article is organized as follows. In Sec. II we introduce the physical system and its approximated description in terms of the asymmetric quantum Rabi model. By considering the Liouvillian gap of its open dynamics, in Sec. III we study how the ultrastrong coupling regime changes the relaxation and thermalization rate. By using a generalized rotating-wave approximation to diagonalize the Rabi model we explicitly show an exponential slow-down of the system's relaxation due to the ultrastrong coupling regime. In Sec. IV we show that the fast relaxation can be restored by a cavity assisted multi-photon resonant tunnelling process. Exploiting again the generalized rotating-wave approximation we develop the discussion in terms of multi-photon polaron dressed states. In Sec. V we extend this setup to the extended Dicke model leading to a cascaded resonant tunnelling device. Finally, in Sec. VI we draw our conclusions.
## II Model
We consider the paradigmatic cavity quantum electrodynamics (cQED) setup described in Fig. 1(a), where a single electrically polarizable object (a dipole) is placed into the planar capacitor of a resonant LC circuit. This system is among the simplest toy models able to reproduce most of the features of cQED in all the various coupling regimes, and is particularly important in giving a simple and intuitive description of many solid-state or circuit cQED setups relevant for experiments in the ultrastrong coupling (USC) regime in the GHz or THz range [9; 10; 40; 41].
### A dipole in an asymmetric double-well potential
The dipole dynamics is described as a single particle with mass \(m\) in a potential
\[H_{\text{dipole}}=\frac{p^{2}}{2m}+V(x). \tag{1}\]
The dipole has charge \(+q\) at one end and \(-q\) at the other, and \(qx\) is its dipole moment. So, the dipole displacement \(x\) is its main dynamical variable and \(p\) its canonical momentum.

Figure 1: (a) Cavity QED system. The cavity is modelled as an LC-circuit, where the inductor magnetic flux \(\Phi\) takes the role of the dynamical variable of the electromagnetic field, usually given by the vector potential \(\vec{A}\). The dipole inside the capacitor couples to the voltage drop between the plates, \(U=\dot{\Phi}\). (b) The dipole is described as a particle in a tilted double-well potential. The position \(x\) represents the dipole displacement and thus its dipole moment. When the central barrier of the potential is large enough the system is approximated by only the two lowest levels (two-level approximation). (c) Open-system schematic view. The cavity QED system can be interpreted as an element of a dissipative circuit.
As depicted in Fig. 1(b), we consider only the paradigmatic case of a dipole described by a double-well potential, very similar to Refs. [42; 43]. Considering only its low-energy dynamics, we are essentially studying the electromagnetism of quantum tunneling.
Differently from Ref. [9], here we introduce a linear tilt that makes the heights of the two wells asymmetric. The total dipole potential is then given by
\[V(x)=-\frac{\mu_{2}^{2}}{2}x^{2}+\frac{\mu_{4}^{4}}{4}x^{4}+q\mathcal{E}x. \tag{2}\]
The linear tilt \(\sim q\mathcal{E}x\) is physically implemented by a bias static external electric field of amplitude \(\mathcal{E}\) and has no influence on the LC resonator dynamics.
Whenever \(\mu_{2}/\mu_{4}\gg 1\) and \(q\mathcal{E}/\mu_{2}\ll 1\), the two lowest levels are below the central barrier and well separated from the other energy levels. We can then truncate the dipole's Hilbert space, keeping only these two lowest energy levels [9]. We perform the two-level approximation (TLA) on the dipole Hamiltonian by projecting onto the eigenstates without tilt (\(\mathcal{E}=0\)), and we obtain
\[H_{\text{dipole}}^{\text{TLA}}=\hbar\omega_{d}s_{z}+\hbar\epsilon s_{x}. \tag{3}\]
Here we have introduced the pseudospin operators \(s_{a}=\sigma_{a}/2\) (\(\sigma_{a}\) are the usual Pauli matrices). The dipole frequency \(\omega_{d}\) is the energy difference between the two lowest states without the tilt \(\mathcal{E}=0\), and \(\epsilon=2q\mathcal{E}x_{10}/\hbar\), where \(x_{10}=\langle 1|x|0\rangle\) is the dipole matrix element between the two lowest dipole states. The dipole operator now is given by \(x\approx x_{10}\sigma_{x}\).
When the tilt is on, the energy splitting between the two eigenstates of Eq. (3) is given by \(\omega_{\epsilon}=\sqrt{\omega_{d}^{2}+\epsilon^{2}}\), while the eigenfunctions are partially localized on the left or right well, with a small, but non-negligible, overlap with the opposite well, Fig. 1(b). In the two-level language these states are given by
\[\begin{split}|\text{L}_{\epsilon}\rangle&=\cos \frac{\theta_{\epsilon}}{2}|\downarrow\rangle+\sin\frac{\theta_{\epsilon}}{2}| \uparrow\rangle\\ |\text{R}_{\epsilon}\rangle&=-\sin\frac{\theta_{ \epsilon}}{2}|\downarrow\rangle+\cos\frac{\theta_{\epsilon}}{2}|\uparrow \rangle,\end{split} \tag{4}\]
where \(\tan\left(\theta_{\epsilon}\right)=\epsilon/\omega_{d}\).
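As a quick consistency check of Eq. (3) and of the splitting \(\omega_{\epsilon}\), the \(2\times 2\) Hamiltonian can be diagonalized numerically; a minimal NumPy sketch with arbitrary illustrative parameters:

```python
import numpy as np

wd, eps = 1.0, 0.7                 # arbitrary units (hbar = 1)
sx = np.array([[0.0, 1.0], [1.0, 0.0]]) / 2
sz = np.array([[1.0, 0.0], [0.0, -1.0]]) / 2

H = wd * sz + eps * sx             # Eq. (3)
E = np.linalg.eigvalsh(H)
print(E[1] - E[0], np.sqrt(wd**2 + eps**2))   # both give omega_eps
```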
### General cavity QED Hamiltonian
The Hamiltonian of the full cavity QED system can be written summing up the dipole energy and the total energy stored in the electromagnetic field
\[H_{\text{cQED}}=H_{\text{em}}+H_{\text{dipole}}. \tag{5}\]
For an LC-resonant system, the electromagnetic energy is described by
\[H_{\text{em}}=\frac{CU^{2}}{2}+\frac{\Phi^{2}}{2L}, \tag{6}\]
where \(U\) is the total voltage drop across the capacitor \(C\), \(\Phi\) is the magnetic flux through the inductance \(L\).
When the dipole is inside the capacitor the total voltage \(U\) is no longer the correct canonical variable conjugate to \(\Phi\). In order to have the correct canonical description we need to introduce the total capacitor charge variable \(Q\), such that \([\Phi,Q]=i\hbar\). Without the dipole, the total charge and the voltage drop are directly proportional through the usual relation \(U=Q/C\). When the dipole is inside the capacitor the charge responsible for the voltage drop is modified by the presence of the charge induced by the dipole on the metallic plates. This induced charge does not contribute to any voltage drop, and must be removed [9; 40]. The voltage drop becomes

\[U=\frac{Q-Q_{\text{ind}}}{C}. \tag{7}\]
For an ideal capacitor we have \(Q_{\text{ind}}\simeq qx/d\)[9], where \(d\) is the distance between the capacitor plates.
We introduce now the cavity creation/annihilation operators through the relations
\[\begin{split}\Phi&=i\frac{\Phi_{0}}{\sqrt{2}} \left(a-a^{\dagger}\right)\\ Q&=\frac{Q_{0}}{\sqrt{2}}\left(a+a^{\dagger} \right).\end{split} \tag{8}\]
Here \(\Phi_{0}=\sqrt{\hbar Z_{LC}}\), \(Q_{0}=\sqrt{\hbar/Z_{LC}}\) and \(Z_{LC}=\sqrt{L/C}\). The cavity QED Hamiltonian then reads
\[H_{\text{cQED}}=\hbar\omega_{c}a^{\dagger}a+H_{\text{dipole}}+F_{0}x\left(a+a ^{\dagger}\right)+\frac{F_{0}^{2}}{\hbar\omega_{c}}x^{2}, \tag{9}\]
where \(\omega_{c}=1/\sqrt{LC}\) and we introduced the zero-point electric force \(F_{0}=\sqrt{\hbar\omega_{c}q^{2}/(2Cd^{2})}\).
As detailed in Refs. [42; 43], implementing the TLA described in Section II.1, we can now approximate the cavity QED Hamiltonian with the so-called _quantum Rabi model_ (\(\hbar=1\))
\[H_{\text{cQED}}\approx H_{\text{Rabi}}=\omega_{c}a^{\dagger}a+\omega_{d}s_{z} +\epsilon s_{x}+g\left(a+a^{\dagger}\right)s_{x}, \tag{10}\]
where the light-matter coupling is given by \(g=2F_{0}x_{10}\).
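For the numerics below, the Rabi Hamiltonian of Eq. (10) can be assembled, for instance, with the QuTiP library; a minimal sketch in which the Fock cutoff `Nf` and the parameter values are our illustrative choices:

```python
from qutip import destroy, qeye, sigmax, sigmaz, tensor

Nf = 40                                 # Fock cutoff; must grow with g/omega_c
wc, wd, eps, g = 1.0, 1.0, 0.5, 2.0     # frequencies in units of omega_c

a  = tensor(destroy(Nf), qeye(2))       # cavity annihilation operator
sx = tensor(qeye(Nf), sigmax()) / 2     # pseudospin s_a = sigma_a / 2
sz = tensor(qeye(Nf), sigmaz()) / 2

H_rabi = wc * a.dag() * a + wd * sz + eps * sx + g * (a + a.dag()) * sx
print(H_rabi.eigenenergies()[:4])       # lowest part of the spectrum
```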
We will see in particular that the transverse term \(\sim\epsilon s_{x}\), that breaks the \(\mathbb{Z}_{2}\) symmetry of the usual Rabi model, is fundamental in our development, becoming a switch between slow and fast dissipation of the dipole. In circuit QED this term emerges quite naturally through a bias in the external magnetic flux and is typically used in the observation of the spectral features of the USC regime [44].
### Cavity dissipation
A standard way to introduce dissipation in an LC circuit is to couple it to a transmission line [45]. When it is traced out of the dynamics, the transmission line plays the role of a resistive element, effectively realizing the scheme described in Fig. 1(c). An input voltage can inject current into the system, and the resistance damps the excited oscillations of the LC circuit. This scheme provides a basic input/output theory describing the system's read-out from the energy dissipated into the resistance.
The formal description of this setup assumes a linear coupling to a multi-mode bath of harmonic modes, as described in Eq. (21) of Appendix A. To realize the resistive circuit, since \(\Phi=LI\) is linked to the current passing through the inductor (and so through the whole circuit), the system operator coupled to the bath is [45]
\[\hat{X}=\Phi. \tag{11}\]
To reproduce a standard Ohmic resistance we assume the standard linear spectral density
\[J_{\mathrm{Ohm}}(\omega)\simeq\gamma\omega. \tag{12}\]
Eliminating the bath's dynamics, we obtain the equations of motion of the circuit in terms of the Langevin equation (26) of Appendix A, and, considering \(A=\dot{\Phi}\) there, we have that
\[\ddot{\Phi}\sim-\gamma\dot{\Phi}, \tag{13}\]
correctly matching the standard Kirchhoff equations of a resistive circuit.
With the supplemental assumption of small coupling between the circuit and the resistance, the same type of calculation leads to the Lindblad master equation for the density matrix of the system, which will be used in the remainder of the paper.
### Dipole dissipation
In our simplified picture, the main source of dissipation of an oscillating electric dipole is radiative emission into free-space electromagnetic modes. Indeed, even if the dipole is strongly coupled to the sub-wavelength mode of the LC cavity, it still interacts with all the other transverse electromagnetic modes. These typically have a small effect on the coherent dynamics [7], but they provide a decay channel for the dipole.
Because of the harmonic dynamics of the electromagnetic field, and its general linear coupling with the dipole, we can again model the dipole dissipation as a linear damping, as described in Appendix A. In this case the system operator coupled to the dissipative bath is given by the dipole moment
\[\hat{X}=x\approx 2x_{10}s_{x}. \tag{14}\]
The bath spectral function is determined by that of the transverse electromagnetic modes. In free space it is given approximately by
\[J_{\mathrm{rad}}(\omega)\simeq\kappa\omega^{3}. \tag{15}\]
Considering the Langevin equation (26) for the dipole moment velocity \(A=\dot{x}\), and using this spectral density, we find
\[\ddot{x}\sim\kappa\dddot{x}, \tag{16}\]
recovering the Abraham-Lorentz formula for a radiating dipole [46].
More generally for our developments, one can choose any spectral density for the dipole dissipative bath of the shape \(J_{\mathrm{rad}}\sim\omega^{\nu}\), with \(\nu\geq 1\). We will see that having \(\nu>1\) gives particularly simple results.
## III Relaxation regimes of cavity QED
In this section we will explore the combined effect of light-matter coupling \(g\) and dipole asymmetry \(\epsilon\) on the open relaxation dynamics of the system. To this aim we will use a thermalizing master equation, as detailed in Appendix B, implementing the dissipation mechanisms described before, by including \(J_{\mathrm{Ohm}}(\omega)\) and \(J_{\mathrm{rad}}(\omega)\) to model the cavity/dipole bath spectral densities.
Moreover, in order to have a master equation that correctly describes relaxation toward the equilibrium state, the cavity/dipole jump operators are rewritten in the total system eigenbasis, reading
\[\begin{split} c_{nm}^{c}&=\langle n|a-a^{\dagger}|m \rangle\,|n\rangle\langle m|,\\ c_{nm}^{\mathrm{dip}}&=\langle n|s_{x}|m\rangle\,| n\rangle\langle m|.\end{split} \tag{17}\]
Here \(|n\rangle\) and \(|m\rangle\) are the eigenstates of the Rabi Hamiltonian in Eq. (10).
Since in our theory the coefficients \(\gamma,\kappa\) are free parameters, we absorb the dimensional quantities \(\Phi_{0}/\sqrt{2}\,,\,2x_{10}\) into the definition of the spectral densities, such that
\[\frac{\omega_{c}|\Phi_{0}|^{2}}{2}J_{\mathrm{Ohm}}(\omega)\,,\,4\omega_{d}|x_ {10}|^{2}J_{\mathrm{rad}}(\omega)\longmapsto J_{\mathrm{Ohm}}(\omega),J_{ \mathrm{rad}}(\omega). \tag{18}\]
The full system master equation can be written as the sum of three Liouvillian superoperators
\[\partial_{t}\rho=\mathcal{L}_{H}(\rho)+\mathcal{L}_{c}(\rho)+\mathcal{L}_{ \mathrm{dip}}(\rho). \tag{19}\]
Here \(\mathcal{L}_{H}(\rho)=-i\left[H_{\mathrm{sys}},\rho\right]\) generates the coherent time evolution, while \(\mathcal{L}_{c}\) and \(\mathcal{L}_{\mathrm{dip}}\) are given by Eq. (13) with the respective transition rates
\[\begin{split}\Gamma_{nm}^{c}&=\gamma\frac{|\omega_{ mn}|}{\omega_{c}}|\,\langle n|a-a^{\dagger}|m\rangle\,|^{2},\\ \Gamma_{nm}^{\mathrm{dip}}&=\kappa\frac{|\omega_{mn }|^{3}}{\omega_{d}^{3}}|\,\langle n|s_{x}|m\rangle\,|^{2},\end{split} \tag{20}\]
where \(\omega_{mn}=\omega_{m}-\omega_{n}\) is the difference between the eigenfrequencies of the Rabi Hamiltonian in Eq. (10).
For the sake of simplicity, throughout the whole manuscript we focus only on the relevant dipole-cavity resonant case where \(\omega_{c}=\omega_{d}\).
### Zero temperature Liouvillian gap
To study the relaxation toward equilibrium of the system we consider the Liouvillian gap \(\lambda=\text{Re}[\lambda_{1}]\)[47; 48], obtained from the spectrum \(\{\lambda_{n}\}\), \(n=0,1,2\ldots\), of the total Liouvillian operator \(\mathcal{L}=\mathcal{L}_{H}+\mathcal{L}_{c}+\mathcal{L}_{\text{dip}}\) defined from Eq. (19) [49]. Here we consider the zero-temperature case \(T=0\) only, which is the relevant case for superconducting cavity QED setups [44]. The same picture holds also at finite temperature, provided that \(k_{b}T\lesssim\hbar\omega_{c},\hbar\omega_{d}\), where \(k_{b}\) is the Boltzmann constant. When the temperature grows larger, \(k_{b}T>\hbar\omega_{c}\), USC effects are pushed to much larger light-matter coupling values [50].
In Fig. 2(a) we show the Liouvillian gap as a function of the light-matter coupling and the dipole asymmetry, \(\lambda(g,\epsilon)\).
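A single point of this phase diagram can be reproduced along the lines above: diagonalize the Rabi Hamiltonian, build the \(T=0\) jump operators of Eq. (17) with the rates of Eq. (20), and read the gap off the Liouvillian spectrum. A hedged QuTiP sketch, in which the cutoffs `Nf`, `n_lev` are our choices (convergence in both should be checked) and QuTiP's `matrix_element` is assumed to accept ket arguments; we truncate to the lowest `n_lev` eigenstates so that undamped high-lying levels do not contaminate the spectrum:

```python
import numpy as np
from qutip import Qobj, basis, destroy, qeye, sigmax, sigmaz, tensor, liouvillian

Nf, n_lev = 30, 12                      # Fock cutoff / number of levels kept
wc, wd, g, eps = 1.0, 1.0, 1.0, 0.5     # one point of the (g, eps) diagram
gamma, kappa = 0.05, 0.20               # kappa = 4 gamma, as in Fig. 2

a  = tensor(destroy(Nf), qeye(2))
sx = tensor(qeye(Nf), sigmax()) / 2
sz = tensor(qeye(Nf), sigmaz()) / 2
H  = wc * a.dag() * a + wd * sz + eps * sx + g * (a + a.dag()) * sx

evals, ekets = H.eigenstates()
Hd = Qobj(np.diag(evals[:n_lev]))       # H in its truncated eigenbasis
c_ops = []
for n in range(n_lev):
    for m in range(n + 1, n_lev):       # T = 0: only downward jumps m -> n
        w  = evals[m] - evals[n]
        Gc = gamma * (w / wc) * abs((a - a.dag()).matrix_element(ekets[n], ekets[m]))**2
        Gd = kappa * (w / wd)**3 * abs(sx.matrix_element(ekets[n], ekets[m]))**2
        c_ops.append(np.sqrt(Gc + Gd) * basis(n_lev, n) * basis(n_lev, m).dag())

lam = np.linalg.eigvals(liouvillian(Hd, c_ops).full())
print("Liouvillian gap:", np.sort(lam.real)[-2])   # Re[lambda_1] < 0
```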
At small light-matter coupling \(g\sim 0\), the effect of increasing \(\epsilon\) is to make the dipole an eigenstate of \(s_{x}\), decreasing the value of the matrix element in the dipole transition rate in Eq. (20). However, the vanishing matrix element is compensated by the increasing energy difference between the dipole levels, giving a larger contribution from the radiative spectral density of the bath, \(J_{\text{rad}}\sim\omega^{3}\). This can be seen by explicitly computing the dipole transition rate using the bare uncoupled dipole states in Eq. (4), for which we have
\[\begin{split}\Gamma^{\text{dip}}_{\text{LR}}&=\frac{\kappa}{4}\left(1+\frac{\epsilon^{2}}{\omega_{d}^{2}}\right)^{3/2}\cos^{2}\left(\tan^{-1}\left(\frac{\epsilon}{\omega_{d}}\right)\right)\\ &=\frac{\kappa}{4}\sqrt{1+\frac{\epsilon^{2}}{\omega_{d}^{2}}}.\end{split} \tag{21}\]
In the specific case \(\kappa=4\gamma\) the slowest time scale is set by the cavity losses, and so \(\lambda=-\gamma/2\). It is worth noticing that an Ohmic spectral density increases too slowly to compensate the effect of the vanishing matrix element of the dipole transition rate. As a result we would have \((\Gamma^{\text{dip}}_{\text{LR}})^{\text{Ohm}}=\kappa/(4\sqrt{1+\epsilon^{2}/\omega_{d}^{2}})\), so that the overall relaxation rate would decrease as a function of \(\epsilon\) at very weak coupling \(g/\omega_{c}\simeq 0\), to then increase again at slightly larger coupling.
In the USC regime, \(g/\omega_{c}\gg 1\), for small dipole asymmetry \(\epsilon\simeq 0\), the Liouvillian gap goes to zero monotonically with an exponential behaviour \(\lambda\sim-\exp[-g/\omega_{c}]\), as is clearly visible from Fig. 2(b). Increasing the dipole asymmetry \(\epsilon\), we observe the emergence of lobes where the Liouvillian gap approaches zero, \(\lambda\sim 0\), separated by narrow regions where relaxation is partially restored and \(\lambda\sim-\gamma/2\). This is shown in Fig. 2(c), where we fix \(g/\omega_{c}=3\) and plot \(\lambda\) as a function of \(\epsilon\). Quite surprisingly, these narrow gaps between the lobes appear only when \(\epsilon\simeq\omega_{c}\times k\), where \(k=1,2,3\ldots\) is an integer.
### Relaxation breakdown in the USC regime
The thermalization slow-down emerging from the spectral analysis of the Liouvillian \(\mathcal{L}\) can be understood as a combined effect of the spectral properties of the Rabi model in the USC regime and the dressing of the jump operators in Eq. (17) [51; 52]. Here we analyze in detail the symmetric case, \(\epsilon=0\), which will provide the basic tools to understand the whole phase diagram of Fig. 2(a).
We start by transforming the original Rabi Hamiltonian through the unitary transformation \(U_{\text{pol}}=\exp\left[g/\omega_{c}(a-a^{\dagger})s_{x}\right]\), obtaining the _Rabi polaron_ Hamiltonian (\(\hbar=1\))
\[\tilde{H}_{\text{Rabi}}=\omega_{c}a^{\dagger}a+\epsilon s_{x}+\frac{\omega_{ d}}{2}\left[\mathcal{D}(g/\omega_{c})\tilde{s}_{+}+\mathcal{D}^{\dagger}(g/ \omega_{c})\tilde{s}_{-}\right]. \tag{22}\]
Here \(\tilde{s}_{\pm}=s_{z}\pm is_{y}\) are the raising/lowering operators along the \(s_{x}\)-axis, while \(\mathcal{D}(g/\omega_{c})=\exp\left[g/\omega_{c}(a-a^{\dagger})\right]\) is the usual displacement operator.
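The equivalence between Eq. (10) and Eq. (22) is easy to check numerically by comparing excitation energies, which are insensitive to the constant term dropped in the transformation; a hedged QuTiP sketch with illustrative parameters of our choosing:

```python
import numpy as np
from qutip import destroy, qeye, sigmax, sigmay, sigmaz, tensor

Nf, wc, wd, eps, g = 60, 1.0, 1.0, 0.3, 2.0
a  = tensor(destroy(Nf), qeye(2))
sx = tensor(qeye(Nf), sigmax()) / 2
sy = tensor(qeye(Nf), sigmay()) / 2
sz = tensor(qeye(Nf), sigmaz()) / 2

H_rabi = wc * a.dag() * a + wd * sz + eps * sx + g * (a + a.dag()) * sx

# Eq. (22): D(g/wc) = exp[(g/wc)(a - a^dag)], s_tilde_pm = s_z +- i s_y
D = ((g / wc) * (a - a.dag())).expm()
H_pol = (wc * a.dag() * a + eps * sx
         + (wd / 2) * (D * (sz + 1j * sy) + D.dag() * (sz - 1j * sy)))

E1 = H_rabi.eigenenergies(); E2 = H_pol.eigenenergies()
print(np.max(np.abs((E1 - E1[0]) - (E2 - E2[0]))[:15]))   # ~0 up to truncation
```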
Since both cavity and dipole dissipative operators are unaffected by the polaron transformation \(U_{\text{pol}}\Phi U_{\text{pol}}^{\dagger}=\Phi,\ U_{\text{pol}}s_{x}U_{ \text{pol}}^{\dagger}=s_{x}\), the general master equation in Eq. (19) is still valid, with the only difference that the eigenstates \(|n\rangle,|m\rangle\) appearing in the jump operators in Eq. (17) are now replaced with the eigenstates of the Rabi polaron Hamiltonian in Eq. (22).
Figure 2: (a) Phase diagram of the Liouvillian gap \(\lambda\) as a function of the light-matter coupling \(g\) and the dipole asymmetry \(\epsilon\). Parameters: \(\gamma=\kappa/4=0.05\omega_{c}\), \(\omega_{d}=\omega_{c}\). (b) A cut of the phase diagram at \(\epsilon=0\) as a function of the light-matter coupling \(g\), in logscale. (c) A cut of the phase diagram at \(g/\omega_{c}=3\) as a function of the dipole asymmetry \(\epsilon\), in logscale.
As reported in [53] and detailed in Appendix C, the polaron Rabi Hamiltonian supports a generalized rotating-wave approximation (gRWA) and thus follows the structure of the Jaynes-Cummings model, with the approximate conservation of the polaron excitation number \(\hat{N}_{\mathrm{exc}}^{z}=a^{\dagger}a+s_{z}\). Its eigenstates are then given by the usual dressed states
\[\begin{split}|+,n\rangle&=\cos\frac{\theta_{n}}{2}| \downarrow,n\rangle+\sin\frac{\theta_{n}}{2}|\uparrow,n-1\rangle,\\ |-,n\rangle&=-\sin\frac{\theta_{n}}{2}|\downarrow,n \rangle+\cos\frac{\theta_{n}}{2}|\uparrow,n-1\rangle,\end{split} \tag{23}\]
where \(\theta_{n}\) is given in Appendix C. The ground-state of the system is simply the uncoupled vacuum state
\[|\mathrm{GS}\rangle=|\downarrow,0\rangle. \tag{24}\]
In order to appreciate the quality of this approximation, in Fig. 3(a) we compare the spectrum obtained from exact diagonalization (solid lines) and from the gRWA analytical formula reported in Appendix C (yellow dots), from which it is quite clear that the gRWA gives very good results.

Relaxation can then be understood from the dressed-state perspective [54], and in Appendix D we explicitly compute the transition rates in Eq. (20).
From the explicit expressions for the Hopfield coefficients \(\sin,\cos\) (given in Appendix C), we find that in the USC limit
\[\begin{split}\lim_{g/\omega_{c}\rightarrow\infty}\cos\frac{ \theta_{n}}{2}&=1\\ \lim_{g/\omega_{c}\rightarrow\infty}\sin\frac{\theta_{n}}{2}& =0,\end{split} \tag{25}\]
which is clearly shown in Fig. 3(b), where we plot the Hopfield coefficients analytically computed through the gRWA for the few lowest eigenstates. Using this observation together with the matrix elements computed in Appendix D we can build the transition rates in Eq. (20), arriving at the conclusion that the only non-negligible transitions are
\[\begin{split}\lim_{g/\omega_{c}\rightarrow\infty}\Gamma_{(+,n)( +,n-1)}^{c}&=\gamma\frac{\omega_{+,n}-\omega_{+,n-1}}{\omega_{c}} \approx\gamma\\ \lim_{g/\omega_{c}\rightarrow\infty}\Gamma_{(-,n)(-,n-1)}^{c}& =\gamma\frac{\omega_{-,n}-\omega_{-,n-1}}{\omega_{c}}\approx \gamma\\ \lim_{g/\omega_{c}\rightarrow\infty}\Gamma_{(-,n)(+,n-1)}^{ \mathrm{dip}}&=\kappa\left(\frac{\omega_{-,n}-\omega_{+,n-1}}{ \omega_{c}}\right)^{3}\approx 0.\end{split} \tag{26}\]
The fact that the dipole transition rate goes to zero \(\Gamma_{(-,n)(+,n-1)}^{\mathrm{dip}}\approx 0\) follows from the approximate degeneracy of the states \(|-,n\rangle,|+,n-1\rangle\) in the USC limit, for which \(\omega_{-,n}-\omega_{+,n-1}\approx 0\), while \(\omega_{+,n}-\omega_{+,n-1}\approx\omega_{-,n}-\omega_{-,n-1}\approx\omega_{c}\).
Here we realize that the USC relaxation breakdown observed in Fig. 2(a-b) affects the dipole only, while the cavity transition rates return to their bare uncoupled values when the USC regime is reached. The schematic representation of the remaining relaxation channels is shown in Fig. 3(c).
This result can be directly interpreted from a polaronic perspective: the USC cavity vacuum heavily dresses the dipole with virtual photons, which inhibits its ability to tunnel from one side of its double-well potential to the other. Because of the radiative nature of its dissipation mechanism (but the same holds in the Ohmic case), the dipole can lose energy only by moving between the two wells (i.e. tunneling), and the faster it moves the more strongly it dissipates. In this regime of heavy virtual-photon dressing, tunneling becomes extremely slow, and so does the dipole's rate of releasing energy into the bath.
## IV USC multi-photon resonant tunneling
In this section we explore in more detail the nature of the gaps between the relaxation-breakdown lobes in Fig. 2(a). In these narrow regions the system can relax as if it were almost unaffected by the USC suppression of tunneling described in the previous section. However, if we artificially remove the cavity dissipation, \(\gamma=0\), we see that these narrow gaps disappear. This suggests that the suppression of tunneling described above is still present for \(\epsilon\neq 0\), but that a new resonant mechanism appears, allowing the dipole to tunnel again by exchanging photons with the cavity. This effect is the cavity analog of resonant tunneling in electronic setups interacting with vibrational degrees of freedom, thus establishing a connection between USC cavity QED and Franck-Condon physics in molecular-electronic setups [36; 38; 39].

Figure 3: (a) Spectrum of the Rabi model as a function of the light-matter coupling \(g\) at fixed \(\epsilon=0\). The solid lines are the result of full diagonalization, and the colors red/blue are only meant to match the color code in (c). The yellow dots are given by the analytic Eq. (10) in Appendix C. (b) \(\cos\theta_{n}/2\), \(\sin\theta_{n}/2\) given by Eq. (11) for each \(n\) block as a function of the light-matter coupling \(g\). (c) Scheme of the relaxation mechanism. The cavity relaxes by jumping mainly between \(++\) or \(--\) dressed states, while the dipole relaxes mainly between \(+-\) states. In the USC limit the dipole does not relax anymore. Parameters: \(\epsilon=0\), \(\omega_{c}=\omega_{d}\).
### Diagonalization of the asymmetric Rabi Hamiltonian
As in the symmetric case \(\epsilon=0\), the asymmetric Rabi model, \(\epsilon\neq 0\), is also approximately block-diagonal, as a consequence of the general form of the displacement operators. However, here the situation is more complicated and we cannot find a unique formula that fits the whole spectrum for every \((\epsilon,g)\); we can only obtain analytic expressions valid near each resonance.
We start by noticing that Eq. (22) is written in a form that calls for the gRWA, provided that the system has an _asymmetric resonance_\(\epsilon\simeq\omega_{c}\times k\), with \(k=1,2,\ldots\). Differently from the usual Jaynes-Cummings model, and from the gRWA of the symmetric Rabi model, here we need to take the dipole basis as eigenstates of \(s_{x}\). Moreover, considering higher resonances at \(\epsilon=\omega_{c},2\omega_{c},3\omega_{c}\ldots\) is well motivated by the fact that the displacement operator contains all powers of the creation/annihilation operators, giving access to multi-photon processes with higher frequencies. This is indeed well visible considering the normal-order expansion [55]
\[\mathcal{D}(x)=e^{-x^{2}/2}\sum_{n,m=0}\frac{(xa^{\dagger})^{n}}{n!}\frac{(- xa)^{m}}{m!}. \tag{27}\]
From this expression it is also clear that the non-linear interaction term in the polaron Hamiltonian in Eq. (22) is exponentially suppressed by the factor \(\sim\omega_{d}e^{-g^{2}/(2\omega_{c}^{2})}\). As a consequence, when

\[\epsilon,\omega_{c}>\omega_{d}e^{-g^{2}/(2\omega_{c}^{2})} \tag{28}\]
the polaron light-matter interaction becomes perturbative, and we can adopt the gRWA. Notice that this corresponds to keeping only the terms \(n<m\) with \(\omega_{c}(m-n)\simeq\epsilon\) in Eq. (27), so, even if the interaction is perturbative, it is still multi-photon and thus highly non-linear.
The asymmetric polaron Rabi Hamiltonian can then be approximately diagonalized around each \(k\)-resonance by projecting it on the states \(\{|\leftarrow,n\rangle,|\rightarrow,n-k\rangle\}\) and the ground-state is simply given by \(|\text{GS}\rangle\approx|\leftarrow,0\rangle\).
The Hamiltonian can be then expressed succinctly in a matrix form, as the sum of \(2\times 2\) blocks
\[\begin{split}&\tilde{H}^{k}_{\text{Rabi}}\approx\sum_{n=1}^{ \infty}\frac{\omega_{c}k-\epsilon}{2}\sigma_{x}^{(n,k)}+\frac{\omega_{d}}{2} \mathcal{D}_{n\,n-k}\,\sigma_{z}^{(n,k)}\\ &+\frac{2\omega_{c}n-\omega_{c}k}{2}\mathds{1}_{(n,k)},\end{split} \tag{29}\]
where \(\sigma_{x,y,z}^{(n,k)}\) are the Pauli matrices for each \(n=1,2,\ldots\) block for the \(k\)-resonance, while
\[\mathcal{D}_{n\,n-k}=\frac{g^{k}}{\omega_{c}^{k}}e^{-\frac{g^{2}}{2\omega_{c} ^{2}}}L_{n-k}^{(k)}\left(g^{2}/\omega_{c}^{2}\right)\sqrt{\frac{(n-k)!}{n!}} \tag{30}\]
is the \(n,n-k\) matrix element of the displacement operator [55]. Here \(L_{m}^{(l)}(x)\) are the generalized Laguerre polynomials. The excited eigenfrequencies are then given by
\[\omega_{k,n,\pm}^{\text{R}}=\omega_{c}\left(n-\frac{k}{2}\right) \pm\frac{1}{2}\sqrt{\left(\omega_{c}k-\epsilon\right)^{2}+\omega_{d}^{2} \mathcal{D}_{n\,n-k}^{2}}. \tag{31}\]
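Eqs. (30)–(31) are straightforward to evaluate; a minimal sketch using SciPy's generalized Laguerre polynomials (the helper names and the default parameter values are ours):

```python
import numpy as np
from math import factorial
from scipy.special import eval_genlaguerre

def D_elem(n, k, x):
    # Matrix element D_{n, n-k} of Eq. (30), with x = g / omega_c
    return (x**k * np.exp(-x**2 / 2) * eval_genlaguerre(n - k, k, x**2)
            * np.sqrt(factorial(n - k) / factorial(n)))

def w_gRWA(k, n, sign, wc=1.0, wd=1.0, eps=1.0, g=3.0):
    # Excited eigenfrequencies of Eq. (31) near the k-th resonance
    return (wc * (n - k / 2) + sign * 0.5
            * np.sqrt((wc * k - eps)**2 + (wd * D_elem(n, k, g / wc))**2))

# First dressed doublet at the k = 1 resonance (eps = omega_c):
print(w_gRWA(1, 1, +1), w_gRWA(1, 1, -1))
```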
Since the displacement operator has nonzero diagonal matrix elements, \(\mathcal{D}_{nn}\neq 0\), one should consider a dipole basis composed of states oriented along \(\sim\cos\phi\,s_{x}+\sin\phi\,s_{z}\), with a certain angle \(\phi\) given by \(\mathcal{D}_{nn}\). Including these corrections makes the analytical formulae in general quite complicated; a simple expression survives only for the ground state, which is
\[\omega_{0,0}^{\text{R}}=-\frac{\sqrt{\epsilon^{2}+\omega_{d}^{2}e^{-g^{2}/ \omega_{c}^{2}}}}{2}. \tag{32}\]
However, in the USC regime \(\phi\sim 0\) is a small angle and we can thus neglect it, proceeding with the simple \(s_{x}\) picture developed above.
In Fig. 4 we compare the exact spectrum to the one obtained from the gRWA at each resonant point. In the USC limit, \(g/\omega_{c}\gg 1\), the agreement is very good.
Figure 4: Spectrum of the Rabi model as a function of the dipole asymmetry \(\epsilon\) for various light-matter couplings \(g/\omega_{c}=0.1,1,2.5,3.5\). The solid lines are the result of exact diagonalization while the yellow dots are given by the analytical formula in Eq. (31). Parameters: \(\omega_{d}=\omega_{c}\).

The eigenstates are now given in terms of a multi-photon version of the \(s_{x}\)-polarized Jaynes-Cummings dressed states, fully characterized by the Hopfield coefficients \(\cos\theta_{(k,n)}/2,\sin\theta_{(k,n)}/2\), generalizing the symmetric case in Appendix C. When the resonance condition \(\epsilon=\omega_{c}\times k\) is satisfied, the system eigenstates become
\[\begin{split}\ket{+_{(k,n)}}&=\frac{1}{\sqrt{2}}\left( \ket{\leftarrow,n}+\ket{\rightarrow,n-k}\right)\\ \ket{-_{(k,n)}}&=\frac{1}{\sqrt{2}}\left(\ket{ \leftarrow,n}-\ket{\rightarrow,n-k}\right),\end{split} \tag{33}\]
and the ground state is \(\ket{\text{GS}}=\ket{\leftarrow,0}\).
Repeating the analysis on the matrix elements in Appendix D, we realize that the \(s_{x}\) operator can only connect dressed states of the same \((k,n)\) block, for which the only non-diagonal non-zero matrix element is
\[\bra{+_{(k,n)}}s_{x}\ket{-_{(k,n)}}=\cos\frac{\theta_{(k,n)}}{2}\sin\frac{ \theta_{(k,n)}}{2}. \tag{34}\]
Each block is disconnected from the others, and the ground state is disconnected from all other states. Therefore relaxation toward equilibrium is still suppressed by the USC even when \(\epsilon\neq 0\).
On the contrary, the cavity is still able to dissipate efficiently. So, when hitting a \(k\)-resonance, the dipole too can lose energy by exchanging \(k\) photons with the cavity, which are consequently flushed out. This resonant tunneling effect provides a relaxation channel for the dipole, as depicted in Fig. 5(a), and gives the proper explanation for the gaps between the lobes observed in Fig. 2(a).
We conclude this subsection by highlighting that, in the polaron frame, the USC open dynamics is mainly given by a polaronic version of the standard textbook dressed-state master equation dynamics [54].
### Multi-photon oscillations and cavity-mediated relaxation
Here we illustrate how the polaronic dressed-state picture emerges clearly in the full open dynamics. As a striking example we show that the system undergoes damped Rabi oscillations, as in traditional cavity QED systems described by the Jaynes-Cummings model. However, here, depending on the resonance condition, the Rabi oscillations involve multiple photons [56; 57] and must be interpreted as tunneling oscillations for the dipole.
From the block-Hamiltonian in Eq. (29) we can derive the Rabi frequency of the \(k\)-resonance multi-photon Rabi oscillations reading
\[\Omega_{(k,n)}=\omega_{d}\frac{g^{k}}{\omega_{c}^{k}}e^{-\frac{g^{2}}{2\omega_ {c}^{2}}}L_{n-k}^{(k)}\left(g^{2}/\omega_{c}^{2}\right)\sqrt{\frac{(n-k)!}{n!}}, \tag{35}\]
where \(n\geq k\) is the total number of photons involved.
Differently from usual Rabi oscillations in cavity QED, here the dipole oscillates between the right and left states of its asymmetric double-well potential, so we call them _tunneling oscillations_. The relevant quantity to follow is then \(\langle s_{x}\rangle(t)\) (in contrast to traditional Rabi oscillations, which are visible in \(\langle s_{z}\rangle(t)\), in standard notation).
We then numerically simulate \(\langle s_{x}\rangle(t)\) starting from the initial state \(|\psi_{0}\rangle=|\rightarrow,0\rangle\) (in the polaron frame). When \(\gamma<\Omega_{(k,k)}\), we observe a very good fit to the curve
\[\bra{s_{x}}(t)\approx e^{-\frac{k\gamma}{2}t}\frac{\cos\left[\Omega_{(k,k)}t \right]+1}{2}-\frac{1}{2}. \tag{36}\]
Notice that the overall decay rate is given by \(\sim k\times\gamma/2\), with a factor \(k\). This takes into account that the photon decay rate increases linearly with the number of photons involved, which in this case is precisely \(k\). In Fig. 5(b) we show the Rabi-oscillation data collapse for various resonant values \(\epsilon=\omega_{c},2\omega_{c},3\omega_{c},4\omega_{c}\). For each \(k\) the curve is plotted against its normalized time \(\tilde{t}=\Omega_{(k,k)}t/(2\pi)\) and is normalized to remove the exponential decay according to \(\langle\tilde{s}_{x}\rangle=e^{k\gamma t/2}(\langle s_{x}\rangle+1/2)-1/2\).
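For reference, at the parameters of Fig. 5(b) the frequencies \(\Omega_{(k,k)}\) of Eq. (35) reduce to \(\omega_{d}x^{k}e^{-x^{2}/2}/\sqrt{k!}\) with \(x=g/\omega_{c}\), since \(L_{0}^{(k)}=1\); a minimal sketch checking the underdamped condition \(\gamma<\Omega_{(k,k)}\):

```python
import numpy as np
from math import factorial

wd, wc, g, gamma = 1.0, 1.0, 3.0, 0.002   # parameters of Fig. 5(b)
x = g / wc

for k in range(1, 5):
    Omega = wd * x**k * np.exp(-x**2 / 2) / np.sqrt(factorial(k))  # Eq. (35), n = k
    print(k, Omega, "underdamped" if gamma < Omega else "overdamped")
```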
### Response functions and higher-order processes
Here we investigate how the spectral features analyzed so far manifest themselves through standard transmission measurements.
We start by considering a weak probe current entering the LC circuit and look for the transmitted current. With the help of linear response theory (see Appendix E), the current response is mainly given by the cavity structure factor
\[\mathcal{S}_{c}(\omega)=\frac{\hbar Z_{LC}}{2}\sum_{n,m}\frac{e^{-\hbar\omega _{n}/(k_{b}T)}}{\mathcal{Z}}\left|\bra{n}a-a^{\dagger}\ket{m}\right|^{2}\delta (\omega-\omega_{mn}). \tag{37}\]
Here \(\mathcal{Z}=\sum_{n}e^{-\hbar\omega_{n}/(k_{b}T)}\) is the thermal equilibrium partition function of the system. From this quantity it is possible to define the system impedance
\[Z_{\text{sys}}(\omega)=-\frac{i\omega\mathcal{S}_{c}(\omega)}{\hbar}, \tag{38}\]
Figure 5: (a) Schematic view of the resonant tunneling mechanism. In the USC regime, the dipole can switch well by exchanging \(k\) photons with the cavity. (b) Rabi-oscillation data collapse. Each curve is labelled by its resonant index \(k\) and represents \(\langle\tilde{s}_{x}\rangle=e^{k\gamma t/2}(\langle s_{x}\rangle+1/2)-1/2\). For each \(k\)-curve the time is normalized to its respective \(k\)-Rabi frequency, \(\Omega_{(k,k)}/(2\pi)\). In this way it is clear how well our analytical description fits the full numerics. Parameters: \(\omega_{d}=\omega_{c}\), \(g/\omega_{c}=3\), \(\gamma=\kappa/4=0.002\omega_{c}\).
and consequently the current transmission function
\[\frac{I_{\rm out}}{I_{\rm in}}=\mathcal{T}(\omega)=\frac{Q^{-1}}{Q^{-1}+Z_{\rm LC }/Z_{\rm sys}(\omega)}. \tag{39}\]
Here \(Q=\omega_{c}/\gamma\) is the LC cavity quality factor.
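A rough numerical rendering of Eqs. (37)–(39) is obtained by smearing the delta functions into Lorentzians of width \(\gamma\) (an ad hoc broadening choice of ours) and working in units \(\hbar=Z_{LC}=1\); a hedged QuTiP sketch:

```python
import numpy as np
from qutip import destroy, qeye, sigmax, sigmaz, tensor

Nf, n_lev = 30, 14
wc, wd, g, eps = 1.0, 1.0, 2.5, 0.5
gamma, kT = 0.05, 0.2                    # Q = wc/gamma; k_B T = 0.2 hbar wc

a  = tensor(destroy(Nf), qeye(2))
sx = tensor(qeye(Nf), sigmax()) / 2
sz = tensor(qeye(Nf), sigmaz()) / 2
H  = wc * a.dag() * a + wd * sz + eps * sx + g * (a + a.dag()) * sx

evals, ekets = H.eigenstates()
evals = evals - evals[0]
p = np.exp(-evals[:n_lev] / kT); p /= p.sum()   # thermal weights, Eq. (37)

ws = np.linspace(0.2, 2.0, 400)
S = np.zeros_like(ws)
for n in range(n_lev):
    for m in range(n_lev):
        amp = abs((a - a.dag()).matrix_element(ekets[n], ekets[m]))**2
        wmn = evals[m] - evals[n]
        # Eq. (37) with delta(w - w_mn) -> Lorentzian of width gamma
        S += 0.5 * p[n] * amp * (gamma / np.pi) / ((ws - wmn)**2 + gamma**2)

Zsys = -1j * ws * S                              # Eq. (38)
T = (gamma / wc) / (gamma / wc + 1.0 / Zsys)     # Eq. (39), Q^{-1} = gamma/wc
print(np.abs(T).max())
```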
In Fig. 6 we show the current transmission \(|\mathcal{T}(\omega)|\) as a function of the dipole asymmetry \(\epsilon\) and the probe frequency \(\omega\). To mimic experimental conditions, we consider a fixed temperature \(k_{b}T\simeq 0.2\hbar\omega_{c}\).
For coupling strengths up to \(g/\omega_{c}\lesssim 0.5\) the transmission spectrum exhibits the usual Jaynes-Cummings polaritonic (or dressed-state) behaviour. At \(\epsilon\neq 0\) the response mainly follows the bare LC circuit response, while the maximum hybridization is at \(\epsilon=0\), with maximum Rabi splitting between the upper and lower dressed-state branches. This is well visible in the first panel of Fig. 6.
For increasing coupling strength, as in the second panel of Fig. 6, where \(g/\omega_{c}=1\), the upper branch of the transmission spectrum starts to vanish exactly at \(\epsilon=0\), signalling that we are entering the USC regime.
At even larger couplings the transmission is drastically changed. This is well visible from the two lower panels in Fig. 6, where \(g/\omega_{c}=2.5,3\). In the region around \(\epsilon=0\) the transmission becomes much smaller, and the two branches related to the Jaynes-Cummings dressed states are gone. Instead, the \(k=1\) avoided crossing due to resonant tunneling is well visible around \(|\epsilon|/\omega_{c}=1\). The \(k>1\) higher resonances, on the contrary, are not well visible, since they are covered by the bare photon resonance between the lower levels \(n<k\). It is worth noticing that the ultrastrong-coupling spectral features shown here, and in particular the \(k=1\) resonance, are already visible in recent experiments with superconducting circuits [44; 58].
Another interesting quantity to probe the spectrum of the system is provided by the dipole structure factor
\[\mathcal{S}_{\rm dip}(\omega)=2\hbar Z_{\rm dip}\sum_{n,m}\frac{e^{-\hbar\omega _{n}/(k_{b}T)}}{\mathcal{Z}}\left|\langle n|s_{x}|m\rangle\right|^{2}\delta( \omega-\omega_{mn}). \tag{40}\]
Here we introduced the dipole impedance as \(Z_{\rm dip}=\hbar/q^{2}f_{10}\), and \(f_{10}=2m\omega_{0}|x_{10}|^{2}/\hbar\) is the oscillator strength of the two-level dipole transition.
From the linear response theory perspective, \(\mathcal{S}_{\rm dip}(\omega)\) quantifies the dipole radiation response to a direct drive of the dipole. Because of the considerations made in Sec. IV.1, it is clear that this quantity is strongly suppressed in the USC regime at low temperature. If we stick to the dressed-state picture alone, we should observe vanishing transitions for \(T\to 0\), due to the fact that the dipole matrix element between the ground state and the first block is zero
\[\langle\pm_{(k,1)}|s_{x}|\text{GS}\rangle=0. \tag{41}\]
This is thus a perfect framework in which to look for transitions beyond the dressed-state picture.
Similarly to the system circuit impedance we can define a dipole radiation impedance as
\[Z_{\rm rad}(\omega)=-\frac{i\omega\mathcal{S}_{\rm dip}(\omega)}{\hbar}. \tag{42}\]
In Fig. 7 we show the radiation impedance \(Z_{\rm rad}(\omega)\) in logscale, as a function of the dipole asymmetry \(\epsilon\) and the probe frequency \(\omega\) (with an artificial linewidth \(\gamma_{\mathcal{S}_{\rm dip}}\) to smear out the delta function in Eq. (40)). In contrast to the previous case of the cavity response, at frequencies \(\omega\sim\omega_{c}\) the dipole response is strongly suppressed in favour of higher-frequency transitions that emerge with a diamond-like pattern.
The dipole matrix elements giving the amplitude for these transitions are much weaker than the cavity-current matrix elements for the \(k\)-resonant transitions between dressed states, since they are given by beyond-gRWA corrections. They are the USC cavity equivalent of the vibronic transitions responsible for the Coulomb diamond structure in the Franck-Condon blockade voltage/current characteristic [36].
## V Cascaded relaxation in a multi-well dipole
Finally, we comment on the possibility of extending our results to the case of a multi-well dipole. The relaxation dynamics of this system is particularly interesting because it can be interpreted as a prototype of a transport problem through an extended system. Although a full treatment of cavity-modified relaxation or transport in an extended system is well beyond the scope of this paper, we can still use the concepts developed above to give an initial intuition, setting the basis for future investigations.
### The extended Dicke model
We model the multi-well dipole by generalizing the two-level approximation to \((N+1)\) levels, where each level represents a potential well. In this way, the dipole is simply described by spin-\(N/2\) operators \(S_{x,y,z}\), generalizing the spin description of Sec. II.1. In particular, the eigenstates of \(S_{x}\), \(|m_{x}\rangle\), are interpreted as localized states in the \(m_{x}\)-th well, while the \(S_{z}\) operator creates tunneling between them.
While the dissipations, and the master equation, are derived in the same way as before, just replacing \(s_{x,y,z}\mapsto S_{x,y,z}\) everywhere, the light-matter Hamiltonian is no longer given by the Rabi model of Eq. (10). Indeed, when taking the two-level approximation of Eq. (9) we discarded the \(x^{2}\)-term, which is only a constant within the two-level subspace, \(x^{2}\approx 4x_{10}^{2}s_{x}^{2}=x_{10}^{2}\mathds{1}\). For a multi-level dipole described by a spin-\(N/2\) system with \(N>1\) we have \(S_{x}^{2}\neq\mathds{1}/4\), and thus the correct cavity QED Hamiltonian within the \((N+1)\)-level subspace is given by the so-called _extended Dicke model_ (EDM) [9; 40]
\[H_{\text{EDM}}=\omega_{c}a^{\dagger}a+\omega_{d}S_{z}+g\left(a+a^{\dagger} \right)S_{x}+\frac{g^{2}}{\omega_{c}}S_{x}^{2}+\epsilon S_{x}. \tag{43}\]
Performing the polaron transformation in the same way as for the Rabi model, we arrive at the polaron EDM [9]:
\[\tilde{H}_{\text{EDM}}=\omega_{c}a^{\dagger}a+\epsilon S_{x}+\frac{\omega_{d}} {2}\left[\mathcal{D}(g/\omega_{c})\tilde{S}_{+}+\mathcal{D}^{\dagger}(g/\omega _{c})\tilde{S}_{-}\right], \tag{44}\]
where, again, \(\tilde{S}_{-}=S_{z}-iS_{y}\). In the USC regime \(g\gg\omega_{c}\), and for non-negligible asymmetry \(\epsilon\neq 0\), in the limit of large spin, \(N\gg 1\), we can use the Holstein-Primakoff approximation [59] on the \(S_{x}\)-direction, for which \(S_{x}\approx-N/2+b^{\dagger}b\), and \(\tilde{S}_{-}\approx\sqrt{N}b\). The polaron EDM can then be approximated by
\[\tilde{H}_{\text{EDM}}\approx\tilde{H}_{\text{EDM}}^{\text{HP}} =\omega_{c}a^{\dagger}a+\epsilon b^{\dagger}b+ \tag{45}\] \[+\frac{\omega_{d}\sqrt{N}}{2}\left[\mathcal{D}(g/\omega_{c})b^{ \dagger}+\mathcal{D}^{\dagger}(g/\omega_{c})b\right].\]
### Relaxation dynamics of the EDM
The considerations made for the Rabi model in Sec. IV.1 are still valid, in particular regarding the possibility of discarding the counter-rotating terms in the displacement operators and the suppression of tunneling. It is then clear that one can use the same dressed-state approach to diagonalize the polaron EDM as well. In particular, considering the relaxation from the initial state \(|0,m\rangle=(b^{\dagger})^{m}/\sqrt{m!}|0,0\rangle\), the resonant tunneling effect gives rise to a cavity-mediated cascaded dynamics toward the ground state, where, depending on the resonance condition \(\epsilon=k\times\omega_{c}\), \(n_{\text{ph}}\approx k\times m\) photons are released.
To have a more quantitative understanding we consider the limit of strong cavity dissipation with respect to the \(k\)-resonance splitting, \(\gamma\gg\Omega_{(k,k)}\). In this regime we can adiabatically eliminate the cavity in favor of an effective master equation for the dipole only [45]. Following the previous analysis of the cavity and dipole transition rates, we completely neglect the dipole dissipation, while we take as jump operator of the cavity its bare annihilation operator \(c=a\). Again, this is well motivated by the analysis performed above. Using the approximated form of the EDM in Eq. (45) and assuming that at each time the total density matrix of the system is \(\rho(t)\approx\rho_{d}(t)\otimes\rho_{c}^{\text{th}}\) (here \(\rho_{c}^{\text{th}}\) is the thermal density matrix of the bare cavity at temperature \(T\)) we have that
\[\begin{split}&\partial_{t}\rho_{d}=-i\left[\epsilon b^{\dagger}b, \rho_{d}\right]+\frac{\Gamma_{T}(\epsilon)}{2}\left(2b\rho_{d}b^{\dagger}- \left[b^{\dagger}b,\rho_{d}\right]_{+}\right)\\ &+\frac{\Gamma_{T}(-\epsilon)}{2}\left(2b^{\dagger}\rho_{d}b- \left[bb^{\dagger},\rho_{d}\right]_{+}\right).\end{split} \tag{46}\]
Similarly to non-linear optomechanics setups [60; 61], the cooling and heating rates are given by
\[\begin{split}&\Gamma_{T}(\omega)=\frac{\omega_{d}^{2}N}{2}\times \\ &\times\text{Re}\left[\int_{0}^{\infty}dt\left(\left\langle \mathcal{D}\left(t,x\right)\mathcal{D}^{\dagger}\left(x\right)\right\rangle- \left\langle\mathcal{D}(x)\right\rangle^{2}\right)e^{i\omega t}\right],\end{split} \tag{47}\]
where \(H_{c}=\omega_{c}a^{\dagger}a\), \(x=g/\omega_{c}\) and \(\mathcal{D}(t,x)=e^{iH_{c}t}\mathcal{D}\left(x\right)e^{-iH_{c}t}\). Since the average \(\langle\cdot\rangle\) is taken over the cavity thermal state \(\rho_{c}^{\text{th}}\), we can explicitly compute this quantity [60; 61; 50]
\[\begin{split}&\Gamma_{T}(\omega)=\frac{\omega_{d}^{2}N}{\gamma}e^{-x^{ 2}(1+2N_{T}(\omega_{c}))}\times\\ &\times\sum_{q,r\neq 0}\frac{x^{2r}N_{T}^{r}(\omega_{c})}{r!}\frac{x^{ 2q}(1+N_{T}(\omega_{c}))^{q}}{q!}\frac{\gamma^{2}/4}{\left(\omega-\omega_{c}(q- r)\right)^{2}+\frac{\gamma^{2}}{4}}.\end{split} \tag{48}\]
Here \(N_{T}(\omega_{c})=1/(e^{\hbar\omega_{c}/(k_{B}T)}-1)\) is the cavity thermal population. From this expression the multi-photon character of this cavity-assisted relaxation mechanism is particularly evident: the dipole can relax by emitting \(q\) photons into the cavity and, at the same time, can be re-excited by absorbing \(r\) photons from the cavity (if the temperature is non-zero, \(T>0\)).
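Eq. (48) is cheap to evaluate numerically. In the sketch below (names and parameter values are ours) we read the sum as running over \(q,r\geq 0\) with the elastic \((q,r)=(0,0)\) term excluded, consistently with the \(\langle\mathcal{D}\rangle^{2}\) subtraction in Eq. (47):

```python
import numpy as np
from math import factorial

def Gamma_T(w, x, wd=1.0, wc=1.0, N=1, gamma=0.05, kT=0.2, nmax=25):
    # Eq. (48): q-photon emission / r-photon absorption; x = g / omega_c
    NT = 1.0 / np.expm1(wc / kT)          # thermal photon number N_T(omega_c)
    out = 0.0
    for q in range(nmax):
        for r in range(nmax):
            if q == 0 and r == 0:
                continue                   # elastic term, removed by <D>^2
            out += (x**(2 * r) * NT**r / factorial(r)
                    * x**(2 * q) * (1 + NT)**q / factorial(q)
                    * (gamma**2 / 4) / ((w - wc * (q - r))**2 + gamma**2 / 4))
    return (wd**2 * N / gamma) * np.exp(-x**2 * (1 + 2 * NT)) * out

eps = np.linspace(0.2, 4.0, 500)
total = Gamma_T(eps, x=2.0) - Gamma_T(-eps, x=2.0)   # Gamma_T^tot of the text
print(eps[np.argmax(total)])   # peaks near the resonances eps = k * omega_c
```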
As in the standard theory of laser cooling, the total relaxation rate is given by \(\Gamma_{T}^{\rm tot}=\Gamma_{T}(\epsilon)-\Gamma_{T}(-\epsilon)\). At \(T=0\) and close to resonance, \(\epsilon\simeq\omega_{c}\times k\), the total relaxation rate is approximately given by \(\Gamma_{T=0}^{\rm tot}\approx\Omega_{(k,k)}^{2}N/\gamma\), which is the USC version of the Purcell effect. In Fig. 8 we show some examples of the total relaxation rate for \(T=0\) and for \(T>0\) at different coupling strengths. Interestingly, a larger temperature may help activate the higher \(k\)-resonances even in the non-USC regime, \(g/\omega_{c}\ll 1\), resembling the behaviour of optomechanical laser-cooling setups [62]. We highlight the fact that at weak coupling this description does not hold, since it is based on the assumption that dipole tunneling is suppressed by the USC regime. However, we find it interesting to show the total relaxation rate \(\Gamma_{T}^{\rm tot}\) also in this regime for completeness.
As anticipated at the beginning of this section, the relaxation dynamics of this multi-well setup can be seen as a way to study how incoherent transport is modified by the cavity. Staying at the single-particle level, we can interpret an excitation produced by the dipole operator \(b^{\dagger}\) as the particle moving one well up in energy, and the dipole ground state as the state where only the lowest energy well is occupied. Following this line of thought, we can say that the system has good transport properties if it rapidly thermalizes sufficiently close to its ground state. Since the saturation number of the steady state of Eq. (46), \(\left\langle b^{\dagger}b\right\rangle_{\rm ss}=\mathcal{N}_{0}\), around each \(k\)-resonance \(\epsilon=\omega_{c}\times k\) is the dipole thermal occupation
\[\mathcal{N}_{0}=\frac{\Gamma_{T}(-\epsilon)}{\Gamma_{T}(\epsilon)-\Gamma_{T}( -\epsilon)}\approx N_{T}(\omega_{c}\times k), \tag{49}\]
we also need the temperature \(T\) to be small enough that thermal photons cannot push the particle (the dipole excitation) to a higher energy level.
## VI Conclusion
In conclusion, we studied the relaxation properties of a simple (but paradigmatic) cavity QED setup in the ultrastrong coupling regime, where the cavity is provided by an LC resonant circuit and the matter by an asymmetric double-well dipole inside the capacitor of the LC circuit. By considering the LC circuit coupled to an Ohmic transmission line we introduced current dissipation for the cavity, while considering the coupling to radiating modes we introduced dissipation for the dipole. In this way the system is provided with two of the simplest and most intuitive relaxation channels. After having defined the basic framework, we derived a thermalizing master equation, valid at arbitrary light-matter coupling strengths and arbitrary dipole asymmetry. From the Liouvillian gap we obtained the slowest relaxation rate of the system, which we can also consider its thermalization rate. From this quantity it emerges clearly that the effect of the USC is to slow down the system's thermalization by an exponential suppression of the Liouvillian gap. However, for special values of the dipole asymmetry the standard relaxation is restored and the system can thermalize according to its bare relaxation rates.
In order to get more insight into this behaviour of the Liouvillian gap, we employed a generalized rotating-wave approximation (gRWA) [53], valid in the so-called polaron frame. Within this approximation we showed that it is possible to diagonalize the asymmetric Rabi model, obtaining analytical expressions for its spectrum and eigenstates, valid in the USC regime.
Specifically it emerges that in the polaron frame the eigenstates are given by a polaronic multi-photon version of the usual Jaynes-Cummings dressed states. Within this simple picture we were able to compute the relaxation rates in the USC regime analytically, explicitly showing the exponential slow-down of thermalization due to an effective suppression of the dipole tunnelling dynamics, while the cavity, remaining effectively uncoupled from the dipole, can still efficiently relax.
However when the dipole asymmetry is resonant with the cavity, the dipole dynamics can be revitalized. Here the dipole dynamics is dominated by cavity assisted tunnelling, where the dipole can resonantly tunnel from one well to the other by releasing multiple photons. Since photons can then relax out of the cavity, this process gives an effective relaxation channel also for the dipole.
After establishing a connection with the Franck-Condon physics of electronic transport through a molecular dot [15; 36], we commented on the possibility of extending this non-linear resonant process to a multi-well dipole. A simple toy model describing this situation is provided by the extended Dicke model, introduced originally to study multiple qubits ultrastrongly coupled to a single LC cavity [9; 40]. From this setup it is clear that the USC resonant tunnelling dynamics can also affect a multi-well system, giving rise to a resonant cascaded
multi-photon process. We argue that this cascaded effect could be observed in cavity-modified transport experiments with multiple electronic quantum dots, or in superconducting circuit devices with only minor modifications of already existing platforms [44, 15]. These findings could thus provide an interesting playground to study an implementation of cascaded-laser electronic devices operating in the USC regime in the GHz or THz range.
###### Acknowledgements.
We thank Gianluca Rastelli, Iacopo Carusotto, Peter Rabl, Alberto Biella, Fabrizio Minganti, Alberto Nardin and Gian Marcello Andolina for very helpful and insightful discussions. We acknowledge financial support from the Provincia Autonoma di Trento through the Q@TN initiative.
## Appendix A Linear damping
We consider a generic system, described by the Hamiltonian \(H_{\text{sys}}\), coupled to a bath of harmonic oscillators (which may represent the electromagnetic field outside of a cavity, or a resistance in a circuit):
\[H=H_{\text{sys}}+\sum_{k}\left[\frac{P_{k}^{2}}{2m_{k}}+\frac{1}{2}m_{k}\omega_ {k}^{2}\left(Y_{k}-\frac{c_{k}}{m_{k}\omega_{k}^{2}}X\right)^{2}\right]. \tag{21}\]
The equations of motion for a generic system operator \(A\) are given by
\[\partial_{t}A=-i\left[A,H_{\text{sys}}\right]+i\sum_{k}\frac{c_{k} }{2}\left(Y_{k}\left[A,X\right]+\left[A,X\right]Y_{k}\right)+ \tag{22}\] \[-i\sum_{k}\frac{c_{k}^{2}}{2m_{k}\omega_{k}^{2}}\left[A,X^{2}\right]\]
\[\partial_{t}Y_{k}=\frac{P_{k}}{m_{k}},\hskip 14.226378pt\partial_{t}P_{k}=-m _{k}\omega_{k}^{2}Y_{k}+c_{k}X. \tag{23}\]
The formal solution of the bath's equations is given by
\[Y_{k}=Y_{k}^{\rm hom}(t)+\frac{c_{k}}{m_{k}\omega_{k}}\int_{t_{0}}^{t}dt^{\prime}\sin(\omega_{k}(t-t^{\prime}))X(t^{\prime}), \tag{24}\]
where
\[Y_{k}^{\rm hom}(t)=Y_{k}(t_{0})\cos(\omega_{k}(t-t_{0}))+\frac{P_{k}(t_{0})}{m_{k}\omega_{k}}\sin(\omega_{k}(t-t_{0})). \tag{25}\]
Plugging this solution back into (22), and integrating by parts, we get
\[\partial_{t}A =-i\left[A,H_{\text{sys}}\right]+\frac{i}{2}\left(\xi(t)\left[A,X \right]+\left[A,X\right]\xi(t)\right) \tag{26}\] \[-\frac{i}{2}\left[\int_{t_{0}}^{t}K(t-t^{\prime})\partial_{t^{ \prime}}X(t^{\prime})dt^{\prime}\,,\,[A,X]\right]_{+}.\]
Here \([\cdot,\cdot]_{+}\) is the anti-commutator, and
\[\xi(t) =\sum_{k}c_{k}\left(Y_{k}^{\rm hom}(t)-\frac{c_{k}}{m_{k}\omega_{k}^{2}}X(t_{0})\cos(\omega_{k}(t-t_{0}))\right) \tag{27}\] \[K(t) =\sum_{k}\frac{c_{k}^{2}}{m_{k}\omega_{k}^{2}}\cos(\omega_{k}t),\]
are, respectively, the _quantum noise term_ and the _dissipative kernel_. We notice that the last term in (22) is exactly cancelled by the term proportional to \(K(t)\) coming out of the integration by parts. From the fluctuation-dissipation theorem we obtain the specific value of the quantum noise correlator [63]. In the high-temperature limit it reads
\[\frac{1}{2}\langle\left[\xi(t),\xi(t^{\prime})\right]_{+}\rangle\simeq 2k_{B}TK(t-t^{\prime}). \tag{28}\]
Using Eq. (24) together with the corresponding advanced solution, and considering \(K(\pm\infty)\simeq 0\), we get the in/out relation
\[Y^{out}=Y^{in}-\int_{-\infty}^{+\infty}K(t-t^{\prime})\dot{X}(t^{\prime})dt^{ \prime}, \tag{29}\]
where \(Y^{in}=\sum_{k}c_{k}Y_{k}^{\rm hom}(-\infty)\), \(Y^{out}=\sum_{k}c_{k}Y_{k}^{\rm hom}(+\infty)\).
A useful way to treat the dissipation without having all the details of the bath is to introduce the bath spectral density
\[J(\omega)=\frac{\pi}{2}\sum_{k}\frac{c_{k}^{2}}{m_{k}\omega_{k}}\delta(\omega- \omega_{k}), \tag{30}\]
and recast the dissipative kernel in the form
\[K(t)=\int_{0}^{\infty}\frac{d\omega}{\pi}\frac{J(\omega)}{\omega}\cos(\omega t). \tag{31}\]
Now all bath properties are encoded in the spectral density \(J(\omega)\).
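As a standard illustrative example (not specific to our setup), for an Ohmic spectral density \(J(\omega)=\gamma\,\omega\) the kernel becomes memoryless,

\[K(t)=\int_{0}^{\infty}\frac{d\omega}{\pi}\,\gamma\cos(\omega t)=\gamma\,\delta(t),\]

so the memory integral in (26) collapses to an instantaneous friction term proportional to \(\gamma\,\dot{X}(t)\), i.e. Markovian damping.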
## Appendix B Thermalizing master equation
We consider here the master equation suitable to study relaxation and thermalization processes in cavity QED in the ultrastrong coupling regime. For this purpose we follow the treatment used in [52]. We do not repeat the derivation here, but we stress that the physical assumptions are almost the same as those used in deriving the Langevin equation in Appendix A, with the further assumption that the coupling between the system and the bath is very small. In particular, the latter ensures that we can implement the rotating-wave approximation between the system and the bath, proceeding with the standard textbook derivation.
The crucial step here is to isolate the components of the system coupling operator \(X\) that rotate with positive (negative) frequencies. This can be done as follows: given a Hamiltonian \(H_{\text{sys}}\) and one (or multiple) system operator(s) \(X\), we express them in the system eigenbasis, \(X=\sum_{n,m}\bra{n}X\ket{m}\ket{n}\bra{m}\). The jump operators are then given by the set \(\{c_{nm}=\bra{n}X\ket{m}\,\ket{n}\bra{m}\ \text{such that}\ n<m\}\). In the Heisenberg picture these jump operators evolve with positive frequencies. This allows us to implement the rotating-wave approximation in the standard system-bath linear Hamiltonian in Eq. (10).
The master equation is then given by
\[\partial_{t}\rho=\mathcal{L}_{H}(\rho)+\mathcal{L}_{D}(\rho), \tag{12}\]
where the conservative time evolution is generated by
\[\mathcal{L}_{H}(\rho)=-i\left[H_{\text{sys}},\rho\right], \tag{13}\]
while dissipations are given by
\[\begin{split}&\mathcal{L}_{D}(\rho)=\sum_{n<m}\left[1+N_{T}(\omega _{mn})\right]\Gamma_{nm}D\left[\ket{n}\bra{m},\rho\right]+\\ &+\sum_{n<m}N_{T}(\omega_{mn})\Gamma_{nm}D\left[\ket{m}\bra{n}, \rho\right].\end{split} \tag{14}\]
Here
\[D\left[c,\rho\right]=c\,\rho\,c^{\dagger}-\frac{1}{2}\left[c^{\dagger}c\,,\, \rho\right]_{+} \tag{15}\]
is the usual dissipator super-operator [45], and
\[N_{T}(\omega)=\frac{1}{e^{\omega/(k_{B}T)}-1} \tag{16}\]
is the bosonic thermal population, where \(k_{B}\) is the Boltzmann constant. The thermalization rates are given by
\[\Gamma_{nm}=J(\ket{\omega_{mn}})\ket{\bra{n}X\ket{m}}^{2}. \tag{17}\]
Considering the thermal density matrix \(\rho_{\text{ss}}=e^{-H_{\text{sys}}/(k_{B}T)}/\mathcal{Z}\), where \(\mathcal{Z}=\text{Tr}[e^{-H_{\text{sys}}/(k_{B}T)}]\), one can easily prove that it is the steady state of the system.
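As an illustration of this construction, the following minimal numpy sketch builds the jump operators and rates of Eqs. (14)-(17) from a dense system Hamiltonian. The function name and the dense-matrix representation are our own choices, not part of Ref. [52].

```python
import numpy as np

def thermal_jump_operators(H_sys, X, J, T, k_B=1.0):
    """Sketch: jump operators c_nm = <n|X|m> |n><m| and rates of Appendix B.

    H_sys : (N, N) Hermitian system Hamiltonian (dense matrix)
    X     : (N, N) system operator coupled to the bath
    J     : callable bath spectral density J(omega)
    Returns a list of (c_nm, downward_rate, upward_rate) tuples.
    """
    w, V = np.linalg.eigh(H_sys)            # eigenenergies and eigenvectors
    X_nm = V.conj().T @ X @ V               # matrix elements <n|X|m>
    jumps = []
    for n in range(len(w)):
        for m in range(n + 1, len(w)):      # n < m, positive frequency w_mn
            w_mn = w[m] - w[n]
            Gamma = J(abs(w_mn)) * abs(X_nm[n, m]) ** 2   # Eq. (17)
            N_T = 1.0 / np.expm1(w_mn / (k_B * T))        # Eq. (16)
            c_nm = np.outer(V[:, n], V[:, m].conj())      # |n><m|
            jumps.append((c_nm, (1.0 + N_T) * Gamma, N_T * Gamma))
    return jumps
```

Each tuple then feeds the dissipator of Eq. (14): the downward rate multiplies \(D[\ket{n}\bra{m},\rho]\) and the upward rate multiplies \(D[\ket{m}\bra{n},\rho]\).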
## Appendix C Diagonalization of the resonant-symmetric Rabi Hamiltonian
In this section we perform the approximate diagonalization of the Rabi model in the regime where
\[\begin{split}&\omega_{d}\simeq\omega_{c}\\ &\epsilon=0.\end{split} \tag{18}\]
We first transform the original Rabi Hamiltonian through the unitary transformation \(U_{\text{pol}}=\exp\left[g/\omega_{c}(a-a^{\dagger})s_{x}\right]\), obtaining the Rabi polaron Hamiltonian in the form
\[\begin{split}\tilde{H}_{\text{Rabi}}&=\omega_{c}a ^{\dagger}a+\omega_{d}\Big{[}\cosh\left(g/\omega_{c}(a-a^{\dagger})\right)s_ {z}\\ &+i\sinh\left(g/\omega_{c}(a-a^{\dagger})\right)s_{y}\Big{]}. \end{split} \tag{19}\]
Here the \(\cosh\) and \(\sinh\) operators can be expressed in terms of the displacement operator \(\mathcal{D}(x)=\exp\left[x(a-a^{\dagger})\right]\).
In this frame one can then perform a generalized rotating-wave approximation [53], exploiting the fact that the Hamiltonian in this basis is approximately block diagonal. Each block is spanned by the states \(\{\ket{\uparrow,n-1},\ket{\downarrow,n}\}\), where \(n=1,2,\ldots\), and the ground state is given by the polaron vacuum state \(\ket{\downarrow,0}\).
This block-diagonal structure is ultimately linked to the matrix elements of the displacement operators in Eq. (19), which are known to be exponentially suppressed as \(\bra{n}\mathcal{D}(g/\omega_{c})\ket{m}\sim e^{-g^{2}/(2\omega_{c}^{2})}\) [55]. Moreover, because of the parity selection rules of the \(\cosh(g/\omega_{c}(a-a^{\dagger}))s_{z}\) and \(\sinh(g/\omega_{c}(a-a^{\dagger}))s_{y}\) operators, only second-nearest-neighbour blocks are coupled. Combining these two observations, we have that the most relevant transitions are within each block, for which we only need two matrix elements of the displacement operator per block:
\[\begin{split}&\mathcal{D}_{n\,n-1}=\frac{g}{\omega_{c}}\sqrt{\frac{(n-1)!}{n!}}e^{-\frac{g^{2}}{2\omega_{c}^{2}}}L_{n-1}^{(1)}\left(g^{2}/\omega_{c}^{2}\right),\\ &\mathcal{D}_{n\,n}=e^{-\frac{g^{2}}{2\omega_{c}^{2}}}L_{n}^{(0)}\left(g^{2}/\omega_{c}^{2}\right),\end{split} \tag{20}\]
where \(L_{n}^{(\alpha)}(x)\) are the generalized Laguerre polynomials. Notice that, since \(L_{n}^{(\alpha)}(0)=\binom{n+\alpha}{n}\), we have \(L_{n-1}^{(1)}(0)=n\), recovering the usual Jaynes-Cummings picture at weak coupling.
We can then rewrite the polaron Rabi Hamiltonian as a block-diagonal matrix, \(\tilde{H}_{\text{Rabi}}\approx\sum_{n}\tilde{H}_{\text{Rabi}}^{n}\), where each block reads
\[\tilde{H}_{\text{Rabi}}^{n}=\begin{pmatrix}A_{n}&C_{n}\\ C_{n}&B_{n}\end{pmatrix} \tag{21}\]
where
\[\begin{split} A_{n}&=\omega_{c}n-\frac{\omega_{d}e^{-g^{2}/(2 \omega_{c}^{2})}}{2}L_{n}^{(0)}\left(g^{2}/\omega_{c}^{2}\right),\\ B_{n}&=\omega_{c}(n-1)+\frac{\omega_{d}e^{-g^{2}/(2 \omega_{c}^{2})}}{2}L_{n-1}^{(0)}\left(g^{2}/\omega_{c}^{2}\right),\\ C_{n}&=\frac{g}{\omega_{c}}\frac{\omega_{d}e^{-g^{2}/(2 \omega_{c}^{2})}}{2}\sqrt{\frac{(n-1)!}{n!}}L_{n-1}^{(1)}\left(g^{2}/\omega_{ c}^{2}\right).\end{split} \tag{22}\]
The spectrum is
\[\omega_{\pm,n}=\frac{A_{n}+B_{n}}{2}\pm\sqrt{\frac{\left(A_{n}+B_{n}\right)^{2} }{4}+C_{n}^{2}-A_{n}B_{n}} \tag{23}\]
and the eigenstates are
\[\begin{split}&\ket{+,n}=\cos\frac{\theta_{n}}{2}\ket{\downarrow,n} +\sin\frac{\theta_{n}}{2}\ket{\uparrow,n-1},\\ &\ket{-,n}=-\sin\frac{\theta_{n}}{2}\ket{\downarrow,n}+\cos\frac{ \theta_{n}}{2}\ket{\uparrow,n-1},\end{split} \tag{24}\]
where
\[\begin{split}\cos\frac{\theta_{n}}{2}&=\pm\sqrt{\frac{1}{ 2}\left(1+\frac{A_{n}-B_{n}}{\sqrt{\left(A_{n}-B_{n}\right)^{2}+4C_{n}^{2}}} \right)}\\ \sin\frac{\theta_{n}}{2}&=\pm\sqrt{\frac{1}{2} \left(1-\frac{A_{n}-B_{n}}{\sqrt{\left(A_{n}-B_{n}\right)^{2}+4C_{n}^{2}}} \right)}\end{split} \tag{10}\]
This approximate solution of the symmetric Rabi model holds for arbitrary values of the coupling \(g\). However, its validity is restricted to the cases when \(\omega_{d}\lesssim\omega_{c}\) [53].
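For concreteness, the block spectrum of Eq. (23) can be evaluated numerically. The short sketch below is our own illustration (using SciPy's generalized Laguerre polynomials), and it uses the algebraically equivalent form \(\omega_{\pm,n}=(A_{n}+B_{n})/2\pm\sqrt{(A_{n}-B_{n})^{2}/4+C_{n}^{2}}\).

```python
import numpy as np
from scipy.special import eval_genlaguerre, factorial

def rabi_block_spectrum(n, g, wc, wd):
    """Eigenvalues of the n-th 2x2 block of the polaron Rabi Hamiltonian
    (Eqs. 21-23 of this appendix, sketch)."""
    x = (g / wc) ** 2
    damp = np.exp(-x / 2.0)                          # e^{-g^2/(2 wc^2)}
    A = wc * n - 0.5 * wd * damp * eval_genlaguerre(n, 0, x)
    B = wc * (n - 1) + 0.5 * wd * damp * eval_genlaguerre(n - 1, 0, x)
    C = (g / wc) * 0.5 * wd * damp \
        * np.sqrt(factorial(n - 1) / factorial(n)) \
        * eval_genlaguerre(n - 1, 1, x)
    disc = np.sqrt(0.25 * (A - B) ** 2 + C ** 2)
    return (A + B) / 2.0 + disc, (A + B) / 2.0 - disc
```

At \(g=0\) this reproduces the bare Jaynes-Cummings block energies, as expected from the limit \(L_{n}^{(\alpha)}(0)\) discussed above.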
## Appendix D Matrix element and transition rates of the symmetric Rabi model
As for standard dressed states, the allowed transitions occur only between states of neighbouring blocks, with \((n,n\pm 1)\) excitations, and the only relevant matrix elements contributing to the transition rates of the cavity are
\[\begin{split}\left\langle+,n|\left(a^{\dagger}-a\right)|-,n-1 \right\rangle&=\sqrt{n-1}\cos\frac{\theta_{n-1}}{2}\sin\frac{ \theta_{n}}{2}+\\ &-\sqrt{n}\cos\frac{\theta_{n}}{2}\sin\frac{\theta_{n-1}}{2},\end{split} \tag{11}\]
\[\begin{split}\left\langle-,n|\left(a^{\dagger}-a\right)|+,n-1 \right\rangle&=\sqrt{n-1}\sin\frac{\theta_{n-1}}{2}\cos\frac{ \theta_{n}}{2}+\\ &-\sqrt{n}\sin\frac{\theta_{n}}{2}\cos\frac{\theta_{n-1}}{2},\end{split} \tag{12}\]
\[\begin{split}\left\langle+,n|\left(a^{\dagger}-a\right)|+,n-1 \right\rangle&=\sqrt{n-1}\sin\frac{\theta_{n-1}}{2}\sin\frac{ \theta_{n}}{2}+\\ &+\sqrt{n}\cos\frac{\theta_{n}}{2}\cos\frac{\theta_{n-1}}{2}, \end{split} \tag{13}\]
\[\begin{split}\left\langle-,n|\left(a^{\dagger}-a\right)|-,n-1 \right\rangle&=\sqrt{n-1}\cos\frac{\theta_{n-1}}{2}\cos\frac{ \theta_{n}}{2}+\\ &+\sqrt{n}\sin\frac{\theta_{n}}{2}\sin\frac{\theta_{n-1}}{2}, \end{split} \tag{14}\]
and for the ground-state
\[\begin{split}\left\langle\downarrow,0|\left(a+a^{\dagger} \right)|+,1\right\rangle&=\cos\frac{\theta_{1}}{2},\\ \left\langle\downarrow,0|\left(a+a^{\dagger}\right)|-,1\right\rangle &=-\sin\frac{\theta_{1}}{2}.\end{split} \tag{15}\]
For the dipole we have a complementary situation
\[\begin{split}\left\langle+,n|s_{x}|-,n-1\right\rangle& =-\frac{1}{2}\sin\frac{\theta_{n-1}}{2}\sin\frac{\theta_{n}}{2}, \end{split} \tag{16}\]
\[\begin{split}\left\langle-,n|s_{x}|+,n-1\right\rangle& =\frac{1}{2}\cos\frac{\theta_{n-1}}{2}\cos\frac{\theta_{n}}{2}, \end{split} \tag{17}\]
\[\begin{split}\left\langle+,n|s_{x}|+,n-1\right\rangle& =\frac{1}{2}\cos\frac{\theta_{n-1}}{2}\sin\frac{\theta_{n}}{2}, \end{split} \tag{18}\]
\[\begin{split}\left\langle-,n|s_{x}|-,n-1\right\rangle& =-\frac{1}{2}\sin\frac{\theta_{n-1}}{2}\cos\frac{\theta_{n}}{2}, \end{split} \tag{19}\]
and the ground-state
\[\begin{split}\left\langle\downarrow,0|s_{x}|+,1\right\rangle& =\sin\frac{\theta_{1}}{2},\\ \left\langle\downarrow,0|s_{x}|-,1\right\rangle&=\cos \frac{\theta_{1}}{2}.\end{split} \tag{20}\]
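All of these matrix elements are fixed by the mixing angles \(\theta_{n}\) of Eq. (10). As a small self-contained check (our own sketch, with hypothetical function names), \(\theta_{n}\) follows from the block parameters via the standard two-level relation \(\cos\theta_{n}=(A_{n}-B_{n})/\sqrt{(A_{n}-B_{n})^{2}+4C_{n}^{2}}\):

```python
import numpy as np

def mixing_angle(A_n, B_n, C_n):
    """theta_n of Eq. (10), via atan2(2C, A - B)."""
    return np.arctan2(2.0 * C_n, A_n - B_n)

def ground_state_elements(theta_1):
    """Ground-state matrix elements of the cavity and dipole operators
    (the two ground-state equation blocks above)."""
    return {"cavity_to_plus": np.cos(theta_1 / 2),
            "cavity_to_minus": -np.sin(theta_1 / 2),
            "dipole_to_plus": np.sin(theta_1 / 2),
            "dipole_to_minus": np.cos(theta_1 / 2)}
```

Squaring these elements and multiplying by the respective bath spectral densities then yields the transition rates via Eq. (17).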
## Appendix E Linear response and absorption spectra
Exciting the cavity corresponds to injecting a current into the circuit, which can be interpreted as a parallel LC filter. We can then define a system circuit impedance \(Z_{\text{sys}}(\omega)\) for the LC circuit, which takes into account the presence of the dipole in the capacitor. By using the standard composition of circuit impedances, we can derive the input/output response relation for the current flowing in the resistance (the transmission line)
\[\frac{I_{out}}{I_{in}}=\frac{Z_{\text{sys}}(\omega)}{R+Z_{\text{sys}}(\omega)}, \tag{21}\]
where \(R=Z_{\text{LC}}Q\) is the Ohmic resistance of the transmission line coupled to the LC cavity, and \(Q=\omega_{c}/\gamma\) is the LC cavity quality factor. Fig. 1(c) shows the general scheme of our circuit approach.
The system impedance is defined by considering the relation between voltage and current flowing through the circuit, \(V=ZI\), which gives
\[Z_{\text{sys}}(\omega)=\frac{\left\langle\dot{\Phi}\right\rangle(\omega)}{I_{ \text{in}}(\omega)}. \tag{22}\]
When the input current is very small we can invoke linear response theory [64], for which we have
\[\chi_{VI}=\lim_{I_{\text{in}}\to 0}\frac{\left\langle\dot{\Phi}\right\rangle( \omega)}{I_{\text{in}}(\omega)}, \tag{23}\]
where \(\chi_{VI}\) is the voltage-current linear response function [64]. From here an operative definition of the system impedance follows as
\[Z_{\text{sys}}(\omega)=-i\omega\chi_{II}(\omega), \tag{24}\]
where
\[\chi_{II}=\lim_{I_{\text{in}}\to 0}\frac{\left\langle\Phi\right\rangle(\omega)}{I_{ \text{in}}(\omega)} \tag{25}\]
is the current-current linear response function.
The current-current linear response function can be calculated in many ways, but the simplest one is to use the cavity structure factor
\[\mathcal{S}_{c}(\omega)=\sum_{n,m}\frac{e^{-\hbar\omega_{n}/(k_{B}T)}}{\mathcal{Z}}\left|\langle n|\Phi|m\rangle\right|^{2}\delta(\omega-\omega_{mn}). \tag{50}\]
The system impedance is then given by \(Z_{\text{sys}}(\omega)=-i\omega\mathcal{S}_{c}(\omega)\).
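Numerically, \(\mathcal{S}_{c}(\omega)\) can be evaluated by replacing the delta functions with narrow Lorentzians. The sketch below is our own illustration (with an arbitrary broadening parameter `eta`, a numerical assumption) and returns \(Z_{\text{sys}}\) from a precomputed eigensystem.

```python
import numpy as np

def system_impedance(omega, w, Phi_nm, T, eta=1e-2, k_B=1.0):
    """Z_sys(omega) = -i omega S_c(omega) from eigenenergies w[n] and flux
    matrix elements Phi_nm = <n|Phi|m>, with Lorentzian-broadened deltas."""
    p = np.exp(-w / (k_B * T))
    p /= p.sum()                         # thermal weights e^{-w_n/(k_B T)}/Z
    S = np.zeros_like(omega, dtype=float)
    for n in range(len(w)):
        for m in range(len(w)):
            w_mn = w[m] - w[n]
            S += p[n] * abs(Phi_nm[n, m]) ** 2 \
                 * (eta / np.pi) / ((omega - w_mn) ** 2 + eta ** 2)
    return -1j * omega * S
```

Inserting the result into Eq. (21) then gives the transmission \(I_{out}/I_{in}\) measured at the Ohmic resistance.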
|
2302.12716 | Supervised Hierarchical Clustering using Graph Neural Networks for
Speaker Diarization | Conventional methods for speaker diarization involve windowing an audio file
into short segments to extract speaker embeddings, followed by an unsupervised
clustering of the embeddings. This multi-step approach generates speaker
assignments for each segment. In this paper, we propose a novel Supervised
HierArchical gRaph Clustering algorithm (SHARC) for speaker diarization where
we introduce a hierarchical structure using Graph Neural Network (GNN) to
perform supervised clustering. The supervision allows the model to update the
representations and directly improve the clustering performance, thus enabling
a single-step approach for diarization. In the proposed work, the input segment
embeddings are treated as nodes of a graph with the edge weights corresponding
to the similarity scores between the nodes. We also propose an approach to
jointly update the embedding extractor and the GNN model to perform end-to-end
speaker diarization (E2E-SHARC). During inference, the hierarchical clustering
is performed using node densities and edge existence probabilities to merge the
segments until convergence. In the diarization experiments, we illustrate that
the proposed E2E-SHARC approach achieves 53% and 44% relative improvements over
the baseline systems on benchmark datasets like AMI and Voxconverse,
respectively. | Prachi Singh, Amrit Kaul, Sriram Ganapathy | 2023-02-24T16:16:41Z | http://arxiv.org/abs/2302.12716v1 | # Supervised Hierarchical Clustering Using Graph Neural Networks for Speaker Diarization
###### Abstract
Conventional methods for speaker diarization involve windowing an audio file into short segments to extract speaker embeddings, followed by an unsupervised clustering of the embeddings. This multi-step approach generates speaker assignments for each segment. In this paper, we propose a novel Supervised HierArchical gRaph Clustering algorithm (SHARC) for speaker diarization where we introduce a hierarchical structure using Graph Neural Network (GNN) to perform supervised clustering. The supervision allows the model to update the representations and directly improve the clustering performance, thus enabling a single-step approach for diarization. In the proposed work, the input segment embeddings are treated as nodes of a graph with the edge weights corresponding to the similarity scores between the nodes. We also propose an approach to jointly update the embedding extractor and the GNN model to perform end-to-end speaker diarization (E2E-SHARC). During inference, the hierarchical clustering is performed using node densities and edge existence probabilities to merge the segments until convergence. In the diarization experiments, we illustrate that the proposed E2E-SHARC approach achieves \(53\%\) and \(44\%\) relative improvements over the baseline systems on benchmark datasets like AMI and Voxconverse, respectively.
Prachi Singh, Amrit Kaul, Sriram Ganapathy† (LEAP Lab, Electrical Engineering, Indian Institute of Science, Bangalore)
[email protected]
Supervised Hierarchical Clustering, Graph Neural Networks, Speaker Diarization.
Footnote †: This work was supported by grants from the British Telecom Research Center.
## 1 Introduction
Speaker Diarization (SD) is the task of segmenting an audio file based on speaker identity. The task has important applications in rich speech transcription for multi-speaker conversational audio like customer call center data, doctor-patient conversations and meeting data.
The conventional approach for the task of SD involves multiple steps. In the first step, the audio is windowed into short segments (1-2 s) and fed to a speaker embedding extractor. The speaker embedding extractors are deep neural networks trained for the speaker classification task. The output of the penultimate layer, called embeddings, provides a good initial speaker representation (for example, x-vectors) [1, 2]. In a subsequent step, these speaker embeddings are clustered based on similarity scores computed using methods like Probabilistic Linear Discriminant Analysis (PLDA) scoring [3, 4]. The most common clustering approach is agglomerative hierarchical clustering (AHC) [5], which merges two clusters at each time step based on similarity scores until the required number of clusters/speakers is attained. Other approaches involve spectral clustering (SC) [6], k-means clustering [7] and graph-based clustering [8, 9].
Recently, end-to-end neural diarization (EEND) [10, 11] approaches involving transformers have proved effective in handling overlaps. However, due to the difficulty in handling more than 4 speakers, a pairwise metric learning loss was proposed recently [12]. There have been recent efforts on clustering algorithms to improve the diarization performance over the conventional approach. A graph-based agglomerative clustering called path integral clustering, proposed by Zhang et al. [8], is shown to outperform other clustering approaches on the CALLHOME and AMI datasets [9]. Similarly, metric learning approaches were introduced in [6, 13] to improve the speaker similarity scores. In a recent work, Singh et al. [9, 14] introduced self-supervised metric learning using clustering output labels as pseudo-labels for model training.
Most of the previous approaches for diarization are trained to improve the similarity scores. However, they still use an unsupervised clustering algorithm to obtain the final labels. We hypothesize that this limits their performance, as they are not trained with clustering objectives. On the other hand, EEND models require a large amount of data and hundreds of hours of training. We propose a simple approach to SD which is not data intensive and can handle a large number of speakers (more than 7) during training and evaluation. The approach is called Supervised HierArchical gRaph Clustering algorithm (SHARC). Our work is inspired by Xing et al. [15], where a supervised learning approach to image clustering was proposed. We perform supervised representation learning and clustering jointly without requiring an external clustering algorithm. The major contributions are:
1. Introducing supervised hierarchical clustering using Graph Neural Networks (GNN) for diarization.
2. Developing the framework for joint representation learning and clustering using supervision.
3. Achieving state-of-the-art performance on two benchmark datasets.
## 2 Related Work and Background
This section highlights previous works on SD that represent a multi-speaker audio file in the form of a graph. We first introduce GNNs and their use in metric learning and supervised clustering in other domains. Then, we describe a variant of GNN, called GraphSAGE [16], which is used in our approach.
Wang et al. [17] proposed a GNN for metric learning. The inputs to the model are x-vectors/d-vectors and the PLDA similarity score. The output of the model is a probability score of whether two nodes are connected or not. The Graph Convolutional Network (GCN) [18], the most common variant of GNNs, is used in [19] for semi-supervised training using the clustering output as "pseudo-labels".
**GraphSAGE:** The GCN is inherently transductive and does not generalize to unseen nodes. The GraphSAGE [16], another variant of GNN, is a representation learning technique suitable for dynamic graphs. It can predict the embedding of a new node without requiring a re-training procedure. The GraphSAGE learns aggregator functions that can induce the embedding of a new node given its features and neighborhood. First, a graph is constructed using the embeddings as the nodes. The edges are connected using the similarity scores between the embeddings. Instead of training individual embeddings for each node, a function is learnt that generates embeddings by sampling and aggregating features from a node's local neighborhood. The aggregate function outputs a single neighborhood embedding by taking a weighted average of each neighbor's embedding.
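A minimal sketch of this mean-aggregation step (our own illustration, not the authors' code; all names are ours) is:

```python
import numpy as np

def sage_mean_step(h, neighbors, W_self, W_neigh):
    """One GraphSAGE layer with mean aggregation (sketch).
    h: (n, F) node features; neighbors[i]: indices of node i's neighbors."""
    out = np.zeros((h.shape[0], W_self.shape[1]))
    for i, nbrs in enumerate(neighbors):
        h_agg = h[list(nbrs)].mean(axis=0) if len(nbrs) else np.zeros(h.shape[1])
        out[i] = np.maximum(h[i] @ W_self + h_agg @ W_neigh, 0.0)  # ReLU
    return out
```

Because the layer is a function of a node's own features and its neighborhood, it applies unchanged to nodes unseen during training, which is exactly the inductive property exploited here.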
## 3 Proposed Approach
The Supervised HierArchical gRaph Clustering algorithm (SHARC) model is shown in Figure 1. It introduces a hierarchical structure in the GNN-based clustering. Figure 1(a) shows the training procedure using \(R\) audio recordings \(r\in\{1,2,..,R\}\), where \(r\) is the recording-id assigned to each recording in the dataset. It involves extracting short-segment embeddings such as x-vectors \(\boldsymbol{\mathcal{X}}=\{\boldsymbol{X}_{1},\boldsymbol{X}_{2},...,\boldsymbol{X}_{R}\}\) from an Extended Time Delay Neural Network (ETDNN) [1] for all recordings, where \(\boldsymbol{X}_{r}\in\mathcal{R}^{N_{r}\times F}\), \(N_{r}\) is the number of x-vectors for recording \(r\) and \(F\) is the dimension of the x-vectors. These are used to form graphs at different levels of the hierarchy, denoted as \(G=\{G_{1}^{0},G_{2}^{0},...,G_{1}^{1},...,G_{R}^{M_{r}}\}\), where \(G_{r}^{m}\) is the graph of recording \(r\) at level \(m\) and \(M_{r}\) is the maximum number of levels created for \(r\). The nodes of the graphs are obtained from \(\boldsymbol{\mathcal{X}}\), and edges are connected using \(k\)-nearest neighbors, with weights coming from the similarity matrices \(\boldsymbol{\mathcal{S}}^{m}=\{\boldsymbol{S}_{1}^{m},...,\boldsymbol{S}_{R}^{m}\}\) for level \(m\), where \(\boldsymbol{S}_{r}^{m}\in\mathcal{R}^{N_{r}^{m}\times N_{r}^{m}}\) and \(N_{r}^{m}\) is the number of nodes at level \(m\) for recording \(r\). The graphs are constructed at different clustering levels by merging the node features of each cluster and recomputing the similarity matrix, as discussed in Section 3.1. For training, the set of graphs \(G\) is fed to the GNN module in batches. The module comprises a GNN along with a feed-forward network to predict the edge weights \(\hat{E}_{m}\in\mathcal{R}^{N_{r}^{m}\times k}\) of all nodes with their k-nearest neighbors. The loss is computed using \(E^{q}\) (true edge weights) and \(\hat{E}_{m}\) (predicted edge weights). The details of the GNN scoring and loss computation are given in Section 3.4.
Figure 1(b) shows the inference block diagram. For a test recording \(t\), the x-vectors \(\boldsymbol{X}_{t}\) and \(\boldsymbol{S}_{t}\) are extracted and a graph \(G_{t}^{0}\) is constructed at level 0. It is then passed to the clustering module, which iteratively performs clustering using edge predictions from the GNN module, followed by merging the nodes of the same cluster and reconstructing the graph for the next level \(m\). This process stops if the graph has no edges (\(G^{m}=\{\phi\}\)) or the maximum allowed level \(M\) is attained. The algorithm outputs the cluster labels predicted for the nodes at the top level, propagating them down to the original embeddings. The process is summarized in Algorithm 1 and sketched below.
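The following compact sketch is our own paraphrase of Algorithm 1, with the PLDA scoring, GNN prediction, clustering and aggregation steps passed in as callables (hypothetical names, not the authors' implementation):

```python
import numpy as np

def sharc_inference(x, score_fn, gnn_fn, cluster_fn, aggregate_fn,
                    max_levels=15):
    """Hierarchical SHARC inference (sketch of Algorithm 1)."""
    h = np.concatenate([x, x], axis=1)      # level-0 features H^0 = [X ; X]
    labels = np.arange(len(x))              # each segment is its own cluster
    for _ in range(max_levels):
        s = score_fn(h)                     # similarity matrix S^m
        e_hat, d_hat = gnn_fn(h, s)         # predicted edge weights, densities
        assign = cluster_fn(s, e_hat, d_hat)         # Eqs. (1)-(2)
        if len(np.unique(assign)) == len(h):         # no merges: converged
            break
        labels = assign[labels]             # propagate to original segments
        h = aggregate_fn(h, assign, d_hat)  # features for level m+1, Eq. (3)
    return labels
```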
### Graph generation
In this step, a hierarchy of graphs, \(G_{r}^{m}=(V^{m},E_{m})\), is created using \(\boldsymbol{X}_{r}\) and \(\boldsymbol{S}_{r}^{m}\), where \(V^{m}=\{v_{1},v_{2},...,v_{n}\}\) is the set of nodes and \(E_{m}\) is the set of edges. Each graph consists of node representations \(H_{r}^{m}=\{h_{1}^{(m)},h_{2}^{(m)},...,h_{n}^{(m)}\}\in\mathcal{R}^{F^{\prime}\times n}\), where \(n=N_{r}^{m}\) is the number of nodes at level \(m\). \(E_{m}\) is obtained using \(\boldsymbol{S}_{r}^{m}\in[0,1]\), considering the \(k\)-nearest neighbors of each node in \(V^{m}\). At level \(m=0\), we consider each embedding as an individual cluster. Therefore, the node representations are given as \(H_{r}^{0}=[\boldsymbol{X}_{r};\boldsymbol{X}_{r}]\). For any level \(m>0\), the node representation is obtained by concatenating the identity feature and the average feature of the current cluster, as described in Section 3.3.
Figure 1: Block diagram of proposed SHARC method. The ETDNN model and GNN are the extended time delay network model for x-vector extraction and the graph neural network for score prediction. FFN stands for feed forward network. The left side (a) shows the training steps and the right side (b) shows the inference steps.
### GNN scoring and clustering
The node representations \(H^{m}\) at each level \(m\) are passed to the GNN scoring function \(\Phi\). It predicts the edge linkage probability (\(p_{ij}\)), which indicates the presence of an edge \((v_{i},v_{j})\in E_{m}\), along with the node density (\(\hat{d}_{i}\)), which measures how densely the node is connected with its neighbors. After GNN scoring, the clustering is performed. At each level of the hierarchy \(m\), it creates a candidate edge set \(\varepsilon(i)\) for the node \(v_{i}\), with edge connection threshold \(p_{\tau}\), as given below.
\[\varepsilon(i)=\{j|(v_{i},v_{j})\in E_{m},\quad\hat{d}_{i}\leq\hat{d}_{j} \quad\text{and}\quad p_{ij}\geq p_{\tau}\} \tag{1}\]
For any \(i\), if \(\varepsilon(i)\) is not empty, we pick \(j=\text{argmax}_{j\in\varepsilon(i)}\hat{e}_{ij}\) and add \((v_{i},v_{j})\) to \(E^{\prime}_{m}\), where \(\hat{e}_{ij}\) is the predicted edge weight, given as,
\[\hat{e}_{ij}=2p_{ij}-1\in[-1,1]\forall j\in N_{i}^{k} \tag{2}\]
Here, \(N_{i}^{k}\) denotes the k-nearest neighbors of node \(v_{i}\). After a full pass over every node, \(E^{\prime}_{m}\) forms a set of connected components \(C^{\prime}_{m}\), which serve as the designated clusters for the next level (\(m+1\)). The clustering stops when there are no connected components left in the graph.
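The following sketch (ours; variable names are hypothetical) implements one such clustering pass over the k-nearest-neighbor graph, using SciPy to extract the connected components:

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def cluster_step(knn_idx, e_hat, d_hat, p_tau):
    """One level of SHARC clustering (Eqs. 1-2, sketch).
    knn_idx: (n, k) neighbor indices; e_hat: (n, k) edge weights in [-1, 1];
    d_hat: (n,) node densities. Returns a cluster id per node."""
    n, k = knn_idx.shape
    p = (e_hat + 1.0) / 2.0                 # invert Eq. (2): back to p_ij
    rows, cols = [], []
    for i in range(n):
        cand = [c for c in range(k)
                if d_hat[i] <= d_hat[knn_idx[i, c]] and p[i, c] >= p_tau]
        if cand:                             # strongest admissible edge
            best = max(cand, key=lambda c: e_hat[i, c])
            rows.append(i)
            cols.append(knn_idx[i, best])
    adj = csr_matrix((np.ones(len(rows)), (rows, cols)), shape=(n, n))
    _, assign = connected_components(adj, directed=False)
    return assign
```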
### Feature aggregation
To obtain the node representations for the next level, \(H^{m+1}\), the connected components \(C^{\prime}_{m}\) obtained from the clustering, along with \(H^{m}\), are passed to an aggregation function \(\Psi\). The function \(\Psi\) concatenates the identity feature \(\tilde{h}_{i}^{(m+1)}\) and the average feature \(\bar{h}_{i}^{(m+1)}\) of each cluster \(i\) to obtain \(h_{i}^{(m+1)}=[\tilde{h}_{i}^{(m+1)};\bar{h}_{i}^{(m+1)}]\). The identity feature of node \(i\) at level \(m+1\) is the feature of the node which has the highest node density at level \(m\) in cluster \(i\). The average feature is computed by taking the average of all the identity features of that cluster from the previous level, given as,
\[\tilde{h}_{i}^{(m+1)}=\tilde{h}_{z_{i}}^{(m)};\qquad\bar{h}_{i}^{(m+1)}=\frac{1}{|c_{i}^{(m)}|}\sum_{j\in c_{i}^{(m)}}\tilde{h}_{j}^{(m)} \tag{3}\]
where \(z_{i}=\text{argmax}_{j\in c_{i}^{(m)}}\hat{d}_{j}^{(m)}\).
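A direct sketch of this aggregation (our own, with hypothetical names) reads:

```python
import numpy as np

def aggregate(h, assign, d_hat):
    """Feature aggregation Psi of Eq. (3), sketch. h stores [h_tilde ; h_bar]."""
    F = h.shape[1] // 2
    h_tilde = h[:, :F]                            # identity part of each node
    out = []
    for c in np.unique(assign):
        members = np.where(assign == c)[0]
        z = members[np.argmax(d_hat[members])]    # densest node in the cluster
        out.append(np.concatenate(
            [h_tilde[z],                          # identity feature
             h_tilde[members].mean(axis=0)]))     # average feature
    return np.stack(out)
```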
### GNN module architecture and training
The GNN scoring function \(\Phi\) is implemented as a learnable GNN module designed for supervised clustering. The module consists of one GraphSAGE [16] layer with \(F^{\prime}=2048\) neurons. Each graph \(G^{m}_{r}\), containing source and destination node pairs, is fed to the GNN module. It takes the node representations \(H^{m}\) and their edge connections as input and generates latent representations denoted as \(\hat{H}^{(m)}\in\mathcal{R}^{F^{\prime}\times n}\), with \(n\) being the number of embeddings at level \(m\). Each pair of latent embeddings is concatenated, \([\hat{h}_{i};\hat{h}_{j}]\), and passed to a three-layer fully connected feed-forward network of size \(\{1024,1024,2\}\), followed by a softmax activation, to generate the linkage probability \(p_{ij}\). The predicted node density is computed as:
\[\hat{d}_{i}=\frac{1}{k}\sum_{j\in N_{i}^{k}}\hat{e}_{ij}\mathbf{S}_{r}(i,j) \tag{4}\]
The ground truth density \(d_{i}\) is obtained using the ground truth edge weights \(e_{ij}^{g}=2q_{ij}-1\), with \(E^{q}\in\{-1,1\}^{N_{r}\times k}\), where \(q_{ij}=1\) if nodes \(v_{i}\) and \(v_{j}\) belong to the same cluster, and \(q_{ij}=0\) otherwise. A node with higher density is a better representative of the cluster than a node with lower density. Each node \(v_{i}\) has a cluster (speaker) label \(y_{i}\) in the training set, allowing the function to learn the clustering criterion from the data. The loss function for training is given as follows:
\[L=L_{conn}+L_{den} \tag{5}\]
where \(L_{conn}\) is the pairwise binary cross-entropy loss based on the linkage probabilities across all possible edges in \(E\), accumulated across all levels and recordings in a batch. \(L_{den}\) represents the mean squared error (MSE) loss between the ground truth node density \(d_{i}\) and the predicted node density \(\hat{d}_{i}\) \(\forall i\in\{1,...,|V|\}\), where \(|V|\) is the cardinality of \(V\).
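A PyTorch sketch of the scoring module and of this loss is given below. It is our own re-implementation outline, not the authors' code: the layer sizes follow the text, while the class/function names and the PyTorch Geometric choice are assumptions.

```python
import torch
from torch_geometric.nn import SAGEConv

class GNNScorer(torch.nn.Module):
    """Sketch of Phi: one GraphSAGE layer + a {1024, 1024, 2} feed-forward net."""
    def __init__(self, in_dim, hid=2048):
        super().__init__()
        self.sage = SAGEConv(in_dim, hid)
        self.ffn = torch.nn.Sequential(
            torch.nn.Linear(2 * hid, 1024), torch.nn.ReLU(),
            torch.nn.Linear(1024, 1024), torch.nn.ReLU(),
            torch.nn.Linear(1024, 2))

    def forward(self, h, edge_index, pairs):
        z = self.sage(h, edge_index)                       # latent H_hat
        zij = torch.cat([z[pairs[:, 0]], z[pairs[:, 1]]], dim=-1)
        return torch.softmax(self.ffn(zij), dim=-1)[:, 1]  # linkage prob p_ij

def sharc_loss(p_ij, q_ij, d_hat, d_true):
    """L = L_conn + L_den of Eq. (5), sketch."""
    l_conn = torch.nn.functional.binary_cross_entropy(p_ij, q_ij.float())
    l_den = torch.nn.functional.mse_loss(d_hat, d_true)
    return l_conn + l_den
```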
### E2E-SHARC
The SHARC model described in the previous section also allows the computation of gradients of the loss function w.r.t. the input x-vector embeddings. The computation of these gradients enables fine-tuning of the embedding extractor. We remove the classification layer from the 13-layer ETDNN model [20] and connect the \(11^{th}\) affine layer output to the SHARC model input. This model is trained using 40-D mel-spectrogram feature vectors and similarity matrices as input. The details of the ETDNN embedding extractor are described in Section 4.2. The training loss is the same as for the SHARC model (Equation 5). This approach is referred to as End-to-End Supervised HierArchical gRaph Clustering (E2E-SHARC).
## 4 Experiments
### Datasets
#### 4.1.1 AMI
* **Train, dev and eval sets**: The AMI dataset [21] contains meeting recordings from four different sites (Edinburgh, Idiap, TNO, Brno). It comprises training, development (dev) and evaluation (eval) sets consisting of 136, 18 and 16 recordings sampled at 16 kHz, respectively. The number of speakers per recording ranges from 3 to 5, and the duration of each recording from 20 to 60 minutes.
#### 4.1.2 Voxconverse
* **Train set**: The dataset used for training the Voxconverse model is simulated using VoxCeleb 1 and 2 [22, 23] and LibriSpeech [24] using the recipe from [10]. We simulated 5000 mixtures containing 2-5 speakers, with durations ranging from 150-440 s. This generates 1000 hrs of data with 6,023 speakers.
* **Voxconverse dev and eval sets**: This is an audio-visual diarization dataset [25] consisting of multi-speaker human speech recordings extracted from YouTube videos. It is divided into a development (dev) set and an evaluation (eval) set consisting of 216 and 232 recordings, respectively. The duration of a recording ranges from 22 to 1200 s. The number of speakers per recording varies from 1 to 21.
### Baseline system
The baseline method is the x-vector-clustering based approach followed in [14, 26]. First, the recording is divided into 1.5 s short segments with a 0.75 s shift. 40-D mel-spectrogram features are computed from each segment and passed to the ETDNN model [1] to extract 512-D x-vectors. The ETDNN model follows the big DNN architecture described in [20] and is trained on the VoxCeleb1 [22] and VoxCeleb2 [23] datasets for the speaker identification task, to discriminate among the 7,146 speakers. The whitening transform, length normalization and recording-level PCA are applied to the x-vectors as pre-processing steps to compute the PLDA similarity score matrix, and clustering is performed to generate speaker labels for each segment. For comparison, we have used the two most popular clustering approaches - AHC [5] and SC [28]. To perform AHC, the PLDA scores are used directly. For SC, we convert the scores into the [0,1] range by applying a sigmoid with temperature parameter \(\tau=10\) (best value obtained from experimentation).
### Training configuration
For training the SHARC model, we extract x-vectors with a window of duration 1.5 s and a shift of 0.75 s from single-speaker regions of the training set. The similarity score matrices, \(\mathbf{S}^{m}\), are obtained using the baseline PLDA models and are fed to the GNN module described in Section 3.4. The possible number of levels of each recording depends on the number of x-vectors (\(N_{r}\)) and the choice of \(k\).
To train the end-to-end SHARC model, the weights of the x-vector model are initialized with the pre-trained ETDNN model, while the SHARC model weights are initialized with those obtained from SHARC training. The input to the model is a 40-D mel-spectrogram computed over 1.5 s windows with a 0.75 s shift. To prevent overfitting of the embedding extractor, the pre-trained x-vectors are added to the embedding extractor output before being fed to the GNN.
### Choice of hyper-parameters
The SHARC model is trained with a Stochastic Gradient Descent (SGD) optimizer with a learning rate \(lr=0.01\) (for Voxconverse) and \(lr=0.001\) (for AMI) for 500 epochs. Similarly, the E2E-SHARC is also trained with an SGD optimizer. In this case, the learning rate is 1e-06 for the ETDNN model and 1e-03 for the SHARC model, trained for 20 epochs. The hyperparameters \(k,p_{\tau}\) are selected based on the best performance on the dev set for the eval set and vice versa. The maximum number of levels \(M\) is initially set to 15 to avoid infinite loops, but the algorithm converges at \(M\leq 3\). Table 1 shows the values of the hyperparameters obtained for the AMI and Voxconverse datasets.
## 5 Results
The proposed approaches are evaluated using the diarization error rate (DER) metric [26]. In our work, we use ground truth speech regions for performance comparison. The DERs are computed for two cases: the first considers overlaps without collar regions, and the second ignores overlaps and incorporates a tolerance collar of 0.25 s. Table 2 shows that the proposed SHARC model improves over the baseline systems, and the performance further improves with the E2E-SHARC model for both datasets. To incorporate temporal continuity, we applied a re-segmentation approach using Variational Bayes inference (VBx) [27], with the E2E-SHARC clustering labels as initialization, which further boosted the performance. As shown in Table 2, for the AMI SDM dataset, we obtain 15.6% and 52.6% relative improvements for the dev and eval sets, respectively, over the best baseline (PLDA-SC). Similarly, we achieve 39.6% and 44.4% relative improvements over the Voxconverse baseline (PLDA-SC) for the dev and eval sets, respectively.
Table 3 compares the performance of the proposed approach with state-of-the-art systems. The widely reported beamformed AMI multi-distant microphone (MDM) dataset, without TNO recordings, is used for benchmarking. The beamformed recordings are obtained using [33]. The proposed SHARC model has the lowest DER on the eval set compared to all previous state-of-the-art approaches. For the Voxconverse dataset, we compare with the challenge baseline and other published results. Here, E2E-SHARC with VBx shows the best results compared to previously published results.
## 6 Summary
We have proposed a supervised hierarchical clustering algorithm using graph neural networks for speaker diarization. The GNN module learns the edge linkages and node densities across all levels of the hierarchy. The proposed approach enables the learnt GNN module to perform clustering hierarchically based on a merging criterion which can handle a large number of speakers. The method is further extended to perform end-to-end diarization by jointly learning the embedding extractor and the GNN module. On challenging diarization datasets, we have illustrated the performance improvements obtained using the proposed approach.
\begin{table}
\begin{tabular}{|l|c|c|c|c|c|c|} \hline \multirow{2}{*}{Parameters} & \multicolumn{3}{c|}{AMI} & \multicolumn{3}{c|}{Voxconverse} \\ \cline{2-7} & Train & Dev & Eval & Train & Dev & Eval \\ \hline \(k\) & 60 & 60 & 60 & 60 & 30 & 30 \\ \(p_{\tau}\) & - & 0.0 & 0.0 & - & 0.5 & 0.8 \\ \(k^{*}\) & 30 & 50 & 50 & 60 & 30 & 30 \\ \(p_{\tau}^{*}\) & - & 0.0 & 0.0 & - & 0.9 & 0.8 \\ \hline \end{tabular}
\end{table}
Table 1: Choice of hyper-parameters for train, dev, eval split of AMI and Voxconverse datasets. The parameters \(k^{*}\) and \(p_{\tau}^{*}\) are used in E2E-SHARC training.
\begin{table}
\begin{tabular}{|l|c|c|c|c|} \hline
**AMI SDM System** & \multicolumn{2}{c|}{**with OVP + no COL**} & \multicolumn{2}{c|}{**w/out OVP + COL**} \\ \cline{2-5} & Dev. & Eval. & Dev. & Eval. \\ \hline x-vec + PLDA + AHC [26] & 24.50 & 29.51 & 7.61 & 14.59 \\ x-vec + PLDA + SC & 19.8 & 22.29 & 4.1 & 5.76 \\ x-vec + PLDA + SHARC & **19.71** & 21.44 & **3.91** & 4.88 \\ E2E-SHARC & 20.59 & **19.83** & 5.15 & **2.89** \\ \quad + VBx [27] & **19.35** & **19.82** & **3.46** & **2.73** \\ \hline \multicolumn{5}{|l|}{**Voxconverse System**} \\ \hline x-vec + PLDA + AHC [26] & 12.68 & 13.41 & 7.82 & 9.28 \\ x-vec + PLDA + SC & 10.78 & 14.02 & 6.52 & 9.92 \\ x-vec + PLDA + SHARC & 10.25 & 13.29 & 6.06 & 9.40 \\ E2E-SHARC & **9.90** & **11.68** & **5.68** & **7.65** \\ \quad + VBx [27] & **8.29** & **9.67** & **3.94** & **5.51** \\ \hline \end{tabular}
\end{table}
Table 2: DER (%) comparison on the AMI SDM and Voxconverse datasets with the baseline methods. OVP: overlap, COL: collar.
\begin{table}
\begin{tabular}{|l|l|l|} \hline
**AMI MDM System** & Dev. & Eval. \\ \hline x-vec(ResNet101)+AHC+VBx [29] & 2.78 & 3.09 \\ ECAPA-TDNN [30] & 3.66 & **3.01** \\ SelfSup-PLDA-PIC (+VBx) [14] & 5.38 (**2.18**) & 4.63 (3.27) \\ SHARC (+VBx) & 3.58 (3.72) & **2.29 (2.11)** \\ \hline **Voxconverse System** & Dev. & Eval. \\ \hline Voxconverse challenge [25] & 24.57 & \(-\) \\ VBx BUT system [31] & 4.36 & \(-\) \\ Wang et al. [32] & 4.41 & 5.82 \\ E2E-SHARC +VBx & **3.94** & **5.51** \\ \hline \end{tabular}
\end{table}
Table 3: DER (%, w/out overlap \(+\) with collar) comparison with state-of-the-art on AMI MDM (without TNO sets) and Voxconverse datasets.
## 7 Acknowledgements
The authors would like to thank Michael Free, Rohit Singh, Shakti Srivastava of British Telecom Research for their valuable inputs.
|
2308.07426 | A Survey on Point-of-Interest Recommendations Leveraging Heterogeneous
Data | Tourism is an important application domain for recommender systems. In this
domain, recommender systems are for example tasked with providing personalized
recommendations for transportation, accommodation, points-of-interest (POIs),
etc. Among these tasks, in particular the problem of recommending POIs that are
of likely interest to individual tourists has gained growing attention in
recent years. Providing POI recommendations to tourists can however be
especially challenging due to the variability of the user's context. With the
rapid development of the Web and today's multitude of online services, vast
amounts of data from various sources have become available, and these
heterogeneous data represent a huge potential to better address the challenges
of POI recommendation problems. In this work, we provide a survey of published
research on the problem of POI recommendation between 2021 and 2023. The
literature was surveyed to identify the information types, techniques and
evaluation methods employed. Based on the analysis, it was observed that the
current research tends to focus on a relatively narrow range of information
types and there is a significant potential in improving POI recommendation by
leveraging heterogeneous data. As the first information-centric survey on POI
recommendation research, this study serves as a reference for researchers
aiming to develop increasingly accurate, personalized and context-aware POI
recommender systems. | Zehui Wang, Wolfram Höpken, Dietmar Jannach | 2023-08-14T19:36:57Z | http://arxiv.org/abs/2308.07426v3 | # A Survey on Point-of-Interest Recommendations Leveraging Heterogeneous Data
###### Abstract
Tourism is an important application domain for recommender systems. In this domain, recommender systems are for example tasked with providing personalized recommendations for transportation, accommodation, points-of-interest (POIs), or tourism services. Among these tasks, in particular the problem of recommending POIs that are of likely interest to individual tourists has gained growing attention in recent years. Providing POI recommendations to tourists _during their trip_ can however be especially challenging due to the variability of the users' context. With the rapid development of the Web and today's multitude of online services, vast amounts of data from various sources have become available, and these heterogeneous data sources represent a huge potential to better address the challenges of in-trip POI recommendation problems. In this work, we provide a comprehensive survey of published research on POI recommendation between 2017 and 2022 from the perspective of heterogeneous data sources. Specifically, we investigate which types of data are used in the literature and which technical approaches and evaluation methods are predominant. Among other aspects, we find that today's research works often focus on a narrow range of data sources, leaving great potential for future works that better utilize heterogeneous data sources and diverse data types for improved in-trip recommendations.
**Keywords: Recommender Systems, Tourism, Point-of-Interest Recommendation, Heterogeneous Data**
## 1 Introduction
Tourism is the act of traveling for pleasure or business to places outside one's usual environment (Hamid et al, 2021). It includes a wide range of activities such as visiting tourist attractions, sightseeing, participating in cultural events and activities and exploring natural wonders. Based on the temporal sequence of tourism behavior, the tourism process can be divided into three distinct phases (Pearce, 2005): pre-trip, in-trip and post-trip, as depicted in Figure 1.
Among these travel phases, the in-trip phase presents a more complicated situation than the others because of the continuous changes in the contextual environment during the trip, which have a direct effect on tourists' travel behavior, such as selecting different transportation modes, adjusting their visiting times or visiting different tourist attractions. Among the various aspects of in-trip planning, selecting appropriate POIs can be a significant challenge for travelers, since _point-of-interest_ is a holistic concept that encompasses any place tourists can visit during their trip, including museums, parks, cinemas, art galleries, restaurants, coffee shops, shopping centers, etc. (Safavi et al, 2022). Filtering out relevant content from the vast amount of information about POIs available on the Internet can therefore be a time-consuming task for tourists.
In order to address these issues, information mechanisms are urgently needed in the tourism domain to assist users by making useful and effective suggestions from the plethora of available POI choices. Recommender Systems (RSs) are an established solution for this task due to their ability to provide personalized recommendations based on various travel purposes and individual preferences. As the primary recommendation task during the trip, offering POI recommendations faces significant challenges in providing up-to-date recommendations based on tourists' preferences and context (Wu et al, 2022). To achieve this, in-trip POI RSs require access to user-related data to understand users' needs and preferences. Therefore, it is crucial to collect and analyze all kinds of available data in the tourism domain to offer valuable recommendations of POIs to visit.
Figure 1: Temporal Phases of the Tourism Process
Tourists engage in various activities and interact with various elements throughout the duration of their trip. With the widespread use of smartphones and various online applications, a wealth of tourism-related information can be obtained from multiple data sources. These sources encompass descriptions and statistics related to tourism offers and marketing, as well as records of tourists' feedback on consumption of tourism products and services (Yochum et al, 2020). Such information constitutes a fertile ground for investigating tourists' preferences and behavior in the context of POI recommendation research. However, integrating the aforementioned data sources to address the recommendation challenges during trips poses significant difficulties due to their inherent heterogeneity, which manifests in a high variability in data types and formats (Wang, 2017).
These different types of data can exhibit heterogeneity from syntactic, conceptual, terminological, semiotic and other aspects due to the diverse demographic backgrounds (language, age, gender, etc.) of the data generators, the variety of data acquisition devices (mobile phones, computers, GPS devices, etc.) and the complexity of data types (text, images, videos, trajectories, etc.) (Jirkovsky and Obitko, 2014). This issue is particularly prominent for RSs in the tourism domain, where it is challenging to integrate different types of data to establish a holistic user model. Therefore, data integration techniques that can effectively analyze and explore comprehensive data have become more important in recent years (Abassi et al, 2022). However, it is important to note that despite the potential benefits, the integration of heterogeneous data as input to POI RSs is still not widely employed and lacks a systematic and comprehensive overview of its utilization in the existing literature.
To bridge this gap, the contributions of this work are as follows:
* We conduct a systematic literature review spanning from 2017 to 2022. The review provides a comprehensive analysis of today's state-of-the-art techniques, widely used data sources and popular evaluation metrics in the context of in-trip POI recommendations.
* We review the current utilization of heterogeneous data sources in the field of POI recommendation research. Our study offers valuable insights into the types of heterogeneous data that have been employed and sheds light on the integration techniques utilized in existing POI RSs.
* We identify and present potential research directions that can guide future work in the field of POI RSs. These research directions provide valuable insights and serve as a roadmap for researchers to explore novel approaches, address existing challenges, and advance the state-of-the-art in in-trip POI recommendation.
The rest of the paper is structured as follows: Section 2 provides background information and preliminaries about RSs in the POI recommendation domain; Section 3 describes the research methodology of our information-centric survey for in-trip POI RSs; Section 4 contains the main findings of this work; Section 5 outlines open gaps and future research opportunities; Section 6 concludes our analysis and offers remarks on future research directions.
## 2 Background and Preliminaries
In this section, we present an overview of the fundamental concepts associated with RSs and their specific application within the tourism domain. Furthermore, we provide a concise summary of past surveys conducted in the realm of POI recommendations, effectively elucidating the gaps and constraints inherent in the existing survey literature.
### Recommender Systems and Their Applications in Tourism
RSs have become ubiquitous in various application domains today, and their origins can be traced back to the early 1990s, when they were first applied in experimental settings for personal email and information filtering (Jannach et al, 2021). Since then, they have become a common feature of many online platforms, serving as tools for helping users discover content that may be of interest to them. With the continual progress and evolution of recommender systems, an array of diverse system types has emerged. The most widely used types of recommendation systems comprise (Ricci et al, 2022): Collaborative Filtering (CF) methods, which suggest items based on the similarity of users' past behaviors or preferences; Content-based (CB) methods, which suggest items similar to those that a user has previously liked or interacted with, based on the characteristics of the items themselves; and Hybrid methods, which integrate multiple techniques, including those mentioned above as well as demographic-based, knowledge-based, and community-based methods, to leverage the strengths of each approach and provide more accurate and diverse recommendations.
With the help of the aforementioned techniques, RSs have been extensively applied in various domains such as e-commerce (Salunke and Nichite, 2022), social media (Anandhan et al, 2022), entertainment (Schedl et al, 2022; Jayalakshmi et al, 2022), and education (Cui et al, 2018). As an important application, RSs in the tourism domain can recommend tourists the most appropriate transportation options (such as flights and trains), accommodations, POIs and other items that are necessary for their trip (Sarkar et al, 2022). Therefore, in the present era of information overload, there is an increasing demand for Tourism Recommender Systems (TRSs) to alleviate the time spent on information retrieval for travel. These systems are designed to reduce the time spent on retrieving travel information and can effectively assist users with a variety of tourism-related recommendations, as illustrated in Figure 2.
When examining the application of TRSs specifically in the domain of POI recommendations, it becomes apparent that data can be gathered throughout various stages of the trip, including tourism business transactions. Examples of such data are presented in Table 1 reflecting tourists' preferences for POIs and providing valuable information for building effective in-trip POI RSs, as discussed in (Hopken and Fuchs, 2022). Aside from these data collected from tourism business transactions at each stage of the trip, there are other types of data that can be utilized to generate personalized recommendations. These include tourists' demographic information and friendships from profiles on social media platforms (Kolahkaj et al, 2020; Cai et al, 2022), as well as basic information about tourism products and services, such as POI location, costs, facilities (Qomariyah and Kazakov, 2021), etc. Moreover, contextual
information about traffic and weather conditions during travel is being explored to build context-aware RSs as well (Zhu et al, 2018; Hossain et al, 2022). By leveraging these diverse data types, POI RSs can generate personalized recommendations that align with the individual preferences and needs of tourists throughout their entire trip. The integration and analysis of these data sources hold the potential to enhance the accuracy and relevance of POI recommendations, ultimately enriching the overall travel experience for tourists.
\begin{table}
\begin{tabular}{c l l} \hline \hline
**Phase** & **Data** & **Source** \\ \hline \multirow{3}{*}{Pre-trip} & Descriptions and marketing statistics of tourism products or services & Official websites, marketing networks, search engines, social media platforms \\ \cline{2-3} & Information search records & Search engines, travel websites, mobile apps/guides, social media platforms \\ \cline{2-3} & Booking or reservation records & Booking systems \\ \hline \multirow{4}{*}{In-trip} & Transportation trajectories and POI check-in records & Ticket systems, social media platforms \\ \cline{2-3} & Accommodation records & Accommodation providers, official statistics, mobile app usage \\ \cline{2-3} & Consumption records & Local ticket offices, tourists’ payment systems \\ \cline{2-3} & Feedback on tourism products and services & Online review sites, supplier-specific online feedback, survey systems, social media platforms \\ \hline \hline \end{tabular}
\end{table}
Table 1: Data Sources for POI Recommendations at Different Trip Phases from Tourists’ Business Transaction (Hopken and Fuchs, 2022)
Figure 2: Main Recommendation Tasks in Tourism
### Previous Reviews on Point-of-Interest Recommendation
With the increasing emergence of research on POI RSs in recent years, a plethora of models have been proposed to tackle the problem of providing personalized POI recommendations. Several review articles have highlighted the major findings and limitations from different perspectives.
From the viewpoint of the data that are used for POI recommendations, Yochum et al (2020) conducted a survey on linked open data in location-based recommendation systems in the tourism domain, providing a systematic review and mapping of linked open data in location-based RSs in the tourism domain, summarizing the research achievements between 2001 and 2018 and providing a distribution of the different categories of location-based recommendation applications using linked open data. This survey also suggests possible future research directions for the use of linked open data in location-based recommendations for tourism. Another survey, conducted by Sanchez and Bellogin (2022), delved into the domain of POI recommendation research spanning the period from 2011 to 2020, with a specific focus on the integration of data sourced from location-based social networks (LBSNs). The authors furnished an intricate analysis of diverse information sources, evaluation methodologies and algorithms within the context of POI recommendation and highlighted both the existing prospects and the challenges that continue to persist within this field. Additionally, a comprehensive analysis of the effect of contextual factors, including social, temporal, spatial, and categorical factors, on recommendation models was conducted by Rahmani et al (2022). Through an extensive survey of context-aware location recommendation, they quantitatively evaluated the impact of these contextual factors on POI recommendations using both existing and novel linear/non-linear models. The surveys conducted by Sanchez and Bellogin (2022), Rahmani et al (2022), and Yochum et al (2020) are closely related to our current study, as they have undertaken information-centric survey work in the domain of POI recommendation. In contrast to these works, our present survey is not limited to specific types of information, such as data from LBSNs or linked open data, but specifically analyzes the utilization of all kinds of data as input to POI recommender systems.
In light of the numerous POI recommendation techniques, a study by Liu et al (2017) provided an evaluation of twelve state-of-the-art POI recommendation models. This thorough evaluation yielded significant findings about how model performance varies with the data, the users and the modeling methods, which can aid in the better understanding and utilization of POI recommendation models in various scenarios. Due to the surge of research activities utilizing deep learning paradigms in the field of POI recommendations, a survey of major deep learning-based POI recommendation works has been compiled by Islam et al (2022). This survey categorized and critically analyzed recent POI recommendation works based on different deep learning paradigms, as well as relevant features such as problem formulations, proposed techniques, and used datasets. Ultimately, the survey may serve as a valuable resource for researchers and practitioners to gain insights into current trends and future research directions in the area of POI RSs.
In another survey on POI recommendation research, Werneck et al (2021) conducted a systematic overview of 74 relevant papers published from 2017 to 2019 and
proposed an extensible POI recommendation benchmark to address and identify limitations, including a prioritization of accuracy over other quality dimensions and a low intersection of metrics and datasets used to evaluate proposed solutions. In a subsequent work, Werneck et al (2022) developed a reproducibility framework based on Python software libraries and a Docker image to reproduce experimental evaluations on POI recommendations using different datasets, metrics, and baselines.
Despite the existence of prior surveys on POI recommendation research, a comprehensive, systematic and information-centric comparison that reflects the current state-of-the-art in the field is still lacking. Specifically, there is a need to investigate how recent research has utilized heterogeneous data sources and to provide an overview of the latest advancements in in-trip POI RSs. Such an overview should consider various aspects, including techniques, data and evaluations, which are the primary areas of focus of this work.
## 3 Research Methodology
This section outlines the research methodology adopted to conduct a systematic literature review and to gather relevant research papers for this study. The primary objective of this research is to offer a comprehensive overview of the most recent advancements in the realm of in-trip POI recommendations, specifically focusing on an information-centric perspective.
**Definition of Research Questions.** In order to achieve the stated objective, the following research questions (RQs) were formulated:
* **RQ1**: What is the current state of research on POI recommendations in terms of techniques, data and evaluation?
* **RQ2**: How are heterogeneous data currently being utilized in in-trip POI recommendation research?
* **RQ3**: What are the existing limitations and potential future directions for research and development of in-trip POI RSs?
**Search Strategy.** A systematic literature search was conducted in the DBLP database1 to retrieve English-language journal citations published between 2017 and 2022. This time frame was selected to focus on recent research and minimize overlap with previous surveys on tourism RSs. The search strategy involved various search queries related to in-trip RS, with a particular emphasis on terms related to the concept of POI.
Footnote 1: [https://dblp.org/](https://dblp.org/)
To ensure comprehensive coverage of the search results, common prefixes were incorporated, such as the term "recommend" to match both "recommender system" and "recommendation". Furthermore, synonymous terms for the keyword "POI" were included, such as "point-of-interest" and "attraction". The final search query was the following:
"_recommend_" AND ("_point-of-interest_" OR "_POI_" OR "_tour_" OR "_activity_" OR "_attraction_" OR "_event_" OR "_venue_")
Through this search strategy, our aim was to capture a wide range of relevant literature related to RSs, particularly in the context of POI recommendations.
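To make this retrieval step concrete, the following minimal sketch issues one query per POI-related term against DBLP's public publication-search API and unions the results. The endpoint and its `q`/`format`/`h` parameters belong to DBLP's search interface, while the filtering logic and result handling below are our own illustrative assumptions, not the exact script used for this survey:

```python
import requests

BASE_URL = "https://dblp.org/search/publ/api"
POI_TERMS = ["point-of-interest", "POI", "tour", "activity",
             "attraction", "event", "venue"]

def search_dblp(query: str, max_hits: int = 1000) -> list:
    """Return the `info` records DBLP reports for one query string."""
    resp = requests.get(BASE_URL, params={"q": query, "format": "json",
                                          "h": max_hits})
    resp.raise_for_status()
    hits = resp.json()["result"]["hits"].get("hit", [])
    return [h["info"] for h in hits]

def collect_candidates() -> dict:
    """Union the per-term result sets, deduplicated by DBLP key."""
    candidates = {}
    for term in POI_TERMS:
        for info in search_dblp(f"recommend {term}"):
            # Keep journal articles from 2017-2022, per the inclusion criteria.
            if (info.get("type") == "Journal Articles"
                    and 2017 <= int(info.get("year", 0)) <= 2022):
                candidates[info["key"]] = info
    return candidates

if __name__ == "__main__":
    papers = collect_candidates()
    print(f"{len(papers)} candidate journal papers retrieved")
```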
**Screening of Papers.** The inclusion criteria to select relevant papers for this study were defined as follows:
1. The paper is written in English.
2. The publication date of the paper falls between 2017 and 2022.
3. The journal in which the paper is published ranks in Q1 according to the Scimago Journal & Country Rank (SJR) for the publication year.
4. The paper is an original research article or review article related to general POI recommendation or the next-POI recommendation2.

Footnote 2: We recall that in next-POI recommendation settings, the sequence of the previous POI visit events matters, see also Figure 2.
Exclusion criteria were also defined to exclude papers that do not meet the specific requirements of this study. Papers fulfilling the following criteria were excluded:
1. Papers that focus on POI recommendations for specific cities, regions (such as urban or suburban areas), or specific populations (such as individuals with autism).
2. Papers that focus on very specific sub-problems such as the orienteering problem or data augmentation in the tourism domain.
3. Papers that propose only a theoretical framework without utilizing any dataset for experimentation.
**Paper Mapping.** To identify relevant studies for our review on POI recommendations, we conducted a systematic paper mapping process depicted in Figure 3. The process involved multiple steps to screen, filter, and extract key information from the retrieved papers (Page et al, 2021).
First, we formulated a comprehensive search query using relevant keywords related to POI recommendations (see above). This query was applied to the DBLP database with an additional filter based on the year of publication, type of paper, and journal ranking, resulting in an initial pool of 217 papers.
Next, we conducted a preliminary screening based on the titles and abstracts of these papers. We carefully reviewed each paper to assess its relevance to our research topic, resulting in the exclusion of papers that did not align with our focus. After this initial screening, 160 papers remained. Finally, a further refinement was done by excluding papers that did not specifically address POI recommendations, resulting in a final set of 117 eligible papers. We thoroughly read and analyzed these papers to extract and summarize the key information pertinent to our research objectives.
Overall, the paper mapping process ensured a rigorous and systematic approach in selecting relevant studies for our review. The chosen papers provide valuable insights into the current state-of-the-art of POI recommendation research, which will be presented in detail in the subsequent sections of this review.
## 4 A Landscape of POI Recommendation Research
This section provides an overview of the state-of-the-art research in in-trip POI recommendations from the perspectives of techniques, data and evaluations. Specifically, we will first present the results of a statistical analysis of research papers published in journals from 2017 to 2022, highlighting the trends and patterns in how researchers have approached technical advancements, data collection and evaluation metrics. Building on these findings, we will then delve deeper into how the use of heterogeneous data sources has been applied in the field of POI recommendations.
### Trends and Developments in POI Recommendations
Based on the methodology described in Section 3, a total of 117 original research papers and review articles published between 2017 and 2022 were collected, with their annual distribution depicted in Figure 4. The results indicate a consistent increase in the number of original research papers published in each successive year. Notably, there was a significant surge in the number of papers published in 2021 and 2022.
Figure 3: Mapping Process for Identifying Relevant Papers on POI Recommendations
These findings not only demonstrate a growing interest in in-trip POI recommendation research, but also highlight the necessity for continued exploration and evaluation of this field.
In the literature, we can generally distinguish between approaches that generate POI recommendations based on a given _set_ of previously visited POIs by a user, and approaches that target the _next-POI_ recommendation problem by also taking the _sequence_ of previously visited POIs into account. Next-POI recommendation therefore represents a specific form of the sequence-aware recommendation problem (Quadrana et al, 2018), which has attracted significant research interest in recent years.
The proportion of research dedicated to the general (sequence-agnostic) POI recommendation and the next-POI recommendation is depicted in Figure 5(a). While research on the general POI recommendation continues to be predominant, the proportion of original research concentrating on the next-POI recommendation has been gradually increasing over the years, as evidenced by the distribution of the two application domains presented in Figure 5(b). These trends suggest that there is a growing interest in exploring the potential of the next-POI recommendation, which may lead to further advancements in this field.
### Techniques and Approaches in POI Recommendations
Through the analysis of the collected research papers, it becomes evident that all three main families of recommendation approaches (collaborative filtering (CF), content-based filtering, and hybrid techniques) are used to build POI RSs. Among these techniques, CF stands out as the most extensively utilized approach, demonstrating its prominence in the field. Traditionally, CF methods are categorized as either memory-based or model-based, where nearest-neighbor methods are the most commonly used memory-based approaches, and where model-based approaches include all sorts of supervised machine learning techniques (Jannach et al, 2010; Nikolakopoulos et al, 2021).
Notably, the advent of neural networks has sparked considerable interest among researchers in recent years. Neural network architectures have been leveraged to train
Figure 4: Annual Distribution of Collected Papers on POI Recommendations
prediction models using the user-item matrix, facilitating the generation of personalized recommendations. As a result, the adoption of model-based collaborative filtering has witnessed a significant surge, with nearly half of the studies incorporating such techniques. The results of our analysis regarding the underlying technical approaches are shown in Figure 6.
Upon a more granular examination of the methodologies employed in the collected POI recommendation studies, we categorized these into research implementing traditional machine learning (ML) techniques, deep learning (DL) techniques, probabilistic methods and optimization techniques. To illustrate the evolution in the proportion of papers employing these distinct methodologies over time, Figure 7(a) presents the temporal trend in research based on the aforementioned methodologies. In the initial years encompassed by our study, traditional ML techniques such as matrix factorization (Cui et al, 2017; Baral and Li, 2018; Cai et al, 2018) and Bayesian Personalized Ranking (He et al, 2018; Li et al, 2019) were predominantly utilized for POI RSs. However, in recent years we have observed a notable surge in the use of DL techniques, which excel at pattern extraction and accurate predictions for POI recommendations, e.g., with the help of recurrent neural networks (RNN) (Chen et al, 2022; Hossain et al, 2022) or
Figure 5: Evolution of Application Focus in POI Recommendation Research
Figure 6: Techniques Employed in POI Recommender Systems
convolutional neural networks (CNN) (Sang et al, 2021). This upswing is indicative of the growing acknowledgment of the efficacy and flexibility of deep learning algorithms in dealing with intricate recommendation tasks. Furthermore, there were also some studies based on probabilistic methods such as kernel density estimation (Zhou et al, 2022; Huang et al, 2021), and optimization-based methods such as greedy algorithms (Werneck et al, 2021; Han and Yamana, 2019), but these were always in the minority.
As shown in Figure 7(b), another remarkable trend observable since 2021 is the rising adoption of graph-embedded approaches in both DL- and traditional ML-based studies within the domain of POI RSs. By leveraging graph structures, these methodologies aim to capture complex relationships and associations among tourists and POIs, thereby facilitating more comprehensive and context-aware recommendations (Dai et al, 2022; Hu et al, 2021; Christoforidis et al, 2021). This mounting interest in graph-embedded techniques signifies the increasing appreciation for exploiting inherent data structures and dynamics for crafting effective recommendations.
In the context of applying modern deep learning techniques to recommendation problems, it was observed that the excitement for deep learning may have led to some rushed evaluations and a partially limited level of reproducibility, which may have hampered the achievement of true progress to a certain extent (Ferrari Dacrema et al, 2021). To see if similar problems appear in POI recommendations, we paid particular attention to reproducibility aspects when reviewing the technical approaches.
The availability of source code in the surveyed studies is depicted in Figure 8. A notable observation is that a significant number of papers solely provided pseudocode, which presents a high-level representation of the algorithms or methodologies without offering the actual implementation details. An even larger portion of the papers included neither source code nor pseudocode, thus restricting the ability to replicate and validate the research findings. Conversely, only a small proportion of the papers (10%) provided the source code, which contributes to the reproducibility and transparency of the research process. The distribution of code availability in the domain of POI recommendation research reflects the varying levels of reproducibility. This underscores the significance of promoting the sharing of source code to enhance the
Figure 7: Trends in Utilization of Recommendation Methods in POI RSs
credibility and reproducibility of research outcomes. The provision of source code empowers other researchers to replicate, verify, and extend existing work, fostering an environment of openness and facilitating scientific progress in the field.
### Data Utilization in POI Recommendations
In our next analysis, we direct our attention towards the status of today's research in terms of data collection and usage in the context of POI recommendations, as the use of appropriate datasets plays a crucial role in the development and evaluation of POI RSs. A high-quality dataset can provide valuable insights into user preferences, behavior, and context, which are essential for designing effective recommendation algorithms. Figure 9 provides an overview of the different ways data is collected for POI recommendation research. We can observe that a majority of studies rely on published open data, which is a readily available and easily accessible source of information. A smaller proportion of studies employed self-collected data, either extracted from websites through crawlers or APIs or gathered via surveys. Finally, a few papers do not specify how they collected the data used in their studies. Overall, the analysis shows that open data is the most frequently used source of data in POI recommendation research, likely due to its convenience and abundance.
Compared to studies that rely on self-collected data, which are often limited in terms of publicly available information, the utilization of published open data in papers presents a more transparent and accessible source of data. Table 2 provides a concise overview of published open data utilized in the surveyed papers, while Figure 10 illustrates their utilization proportions in POI recommendation research. The findings show that more than half of the studies that utilize open public datasets make use of data from the Foursquare and Gowalla platforms. In contrast, the utilization of data from other platforms in the context of in-trip POI recommendations appears to be relatively limited. The data sources included in Figure 10 represent platforms that were utilized by at least two papers, thereby excluding platforms that were not extensively employed in the surveyed research. It is worth noting that these platforms
Figure 8: Availability of Code in POI Recommendation Research Papers
offer multiple versions of these datasets, varying in terms of time span, geographical coverage, and other relevant characteristics.
To investigate the varied data types employed as inputs in RSs, this study extends the taxonomy by Wu et al (2023). The taxonomy is expanded to classify pertinent data sources during the in-trip phase into four principal categories: tourist, POI, interaction, and context. By analyzing the collected papers, this study categorizes the utilized data types into these four categories and provides a summary of their respective attributes. The schematic representation of this classification is presented in Figure 11. Additionally, the relative proportions of the major data types used in POI recommendation research are presented in Figure 12.
Upon analyzing the utilization of data types in POI recommendation research, a noteworthy tendency towards a limited set of data types emerges. Over 50% of the surveyed papers leverage check-in data, incorporating both timestamp and geographic information, as a primary data source for their research on in-trip POI recommendations. Social relationship information, including friendship networks and social connections, is the second most commonly utilized data type, particularly in recent years where graph-based approaches have gained prominence (Cai et al, 2022;
Figure 10: Utilization Proportions of Open Datasets in POI Recommendation Research
Figure 9: Overview of Data Collection Approaches in POI Recommendation Research
Zhang et al, 2021; Christoforidis et al, 2021). POI profiles (e.g., category and characteristics of POIs), user feedback (e.g., ratings and reviews) and other information (e.g., POI visual content, weather context, POI popularity and tourists' demographic information) constitute smaller proportions of the utilized data types.
All mentioned information types possess the potential to effectively capture tourists' preferences for POI recommendations. Collectively, these information types can be classified into two categories: implicitly or explicitly provided information.
| Platform | Dataset | Description |
| --- | --- | --- |
| Foursquare | Global-scale Check-in Dataset | 33,278,683 global-scale check-ins by 266,909 users, covering 3,680,126 venues across 415 cities in 77 countries from April 2012 to September 2013 (Yang et al, 2016, 2015a) |
| Foursquare | NYC and Tokyo Check-in Dataset | 227,428 check-ins in New York City and 573,703 check-ins in Tokyo, spanning from 12 April 2012 to 16 February 2013 (Yang et al, 2015b) |
| Foursquare | WeePlaces Dataset | 7,658,368 check-ins generated by 15,799 users over 971,309 locations |
| Gowalla | | 6,442,890 check-ins and an undirected friendship network with 196,591 nodes and 950,327 edges between February 2009 and October 2010 (Cho et al, 2011) |
| Yelp | Yelp Open Dataset | 908,915 tips provided by 1,987,897 users and aggregated check-in information over time for each of the 131,930 businesses |
| Brightkite | | 4,491,143 check-ins and an undirected friendship network with 58,228 nodes and 214,078 edges between April 2008 and October 2010 (Cho et al, 2011) |
| Flickr | YFCC100M | 99.2 million photos and 0.8 million videos, spanning from 2004 until early 2014 (Thomee et al, 2016) |
| Flickr | Flickr User-POI Visits Dataset | A set of users and their visits to various POIs in 8 cities, determined based on YFCC100M Flickr photos (Lim et al, 2015, 2016) |
| TripAdvisor | TripAdvisor Dataset | Ratings for POIs in the South Tyrol region of Italy, tagged with contextual situations described by the conjunction of contextual conditions coming from type, month and year of the trip (Braunhofer and Ricci, 2016) |

Table 2: Overview of Published Open Data Utilized in POI Recommendation Research
Explicitly provided information refers to the data that directly reveals tourists' preferences, such as ratings, reviews, and feedback comments. This type of data provides a deliberate, unambiguous, and intentional quality assessment of user preferences, enabling the generation of recommendations that align with those preferences (Kordumova et al, 2010). Implicitly provided information is typically inferred from user behavior and interaction patterns, such as clickstream data, search queries, and consumption histories, which can provide valuable insights into tourists' preferences and interests as well (Jannach et al, 2018).
Explicitly and implicitly provided information offer distinct levels of expressivity regarding the user's preferences, and a combination of the two can lead to more accurate and effective recommendations (Jawaheer et al, 2010). However, in the surveyed papers, as shown in Figure 13(a), the majority of studies solely relied on a single type of information, either implicitly or explicitly provided. Only a small percentage of the
Figure 11: Taxonomy of Data Sources for POI Recommendations
papers simultaneously utilized both types of information, such as incorporating check-in data along with reviews or ratings (Liao et al, 2021; Abbasi-Moud et al, 2021; Pang et al, 2020).
An orthogonal question in the context of information on user preferences lies in their temporal dimension, i.e., we can distinguish between long-term and short-term preferences. Long-term preferences are inherent and relatively stable, such as preferred weather, activities and travel mode, which are influenced by the user's personal background, such as age, gender, education, and income (Bennett et al, 2012). Short-term preferences convey the user's tourism intention in a relatively short period and can be affected by transient events, such as impromptu short weekend trips or special personal occasions, like business travel. These preferences change more frequently and strongly compared to long-term preferences (Guo et al, 2019).
Both long-term and short-term preferences play pivotal roles in enabling POI RSs to deliver precise and dynamic tourism recommendations that align with the evolving context. However, after analyzing the collected studies, as depicted in Figure 13(b), it is evident that the majority of research primarily relies on
Figure 12: Information Type Utilization for In-Trip POI Recommendations
Figure 13: Analysis of Information Feedback Types and Preferences in POI Recommendation Research
tourists' historical data representing their long-term preferences for generating POI recommendations. The investigation of short-term preferences has received relatively limited attention. It is only in recent years, with the emergence of sequence-aware and session-based methods, that this situation has begun to change. Long Short-Term Memory (LSTM) based network architectures have, for example, been explored in the domain of POI recommendations, facilitating the consideration of both the long-term and short-term preferences of tourists, such as in Liu et al (2022); Huang et al (2021); Wang et al (2021).
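To illustrate this line of work, the sketch below shows a minimal LSTM-based next-POI model in PyTorch. It is not the architecture of any specific cited paper: it merely captures the common pattern of pairing a learned user embedding (long-term preference) with an LSTM encoding of the recent check-in sequence (short-term preference):

```python
import torch
import torch.nn as nn

class NextPOIRecommender(nn.Module):
    """Minimal sketch: a user embedding carries long-term taste, an LSTM
    over the recent check-in sequence carries short-term intent; their
    combination scores all candidate POIs for the next visit."""

    def __init__(self, n_users: int, n_pois: int, dim: int = 64):
        super().__init__()
        self.user_emb = nn.Embedding(n_users, dim)
        self.poi_emb = nn.Embedding(n_pois, dim)
        self.lstm = nn.LSTM(dim, dim, batch_first=True)
        self.score = nn.Linear(2 * dim, n_pois)

    def forward(self, user_ids, checkin_seqs):
        # checkin_seqs: (batch, seq_len) POI ids of recent visits
        seq = self.poi_emb(checkin_seqs)          # (B, L, D)
        _, (h_n, _) = self.lstm(seq)              # h_n: (1, B, D)
        short_term = h_n.squeeze(0)               # (B, D)
        long_term = self.user_emb(user_ids)       # (B, D)
        return self.score(torch.cat([long_term, short_term], dim=-1))

model = NextPOIRecommender(n_users=1000, n_pois=5000)
logits = model(torch.tensor([3, 7]), torch.randint(0, 5000, (2, 10)))
print(logits.shape)  # torch.Size([2, 5000])
```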
### Evaluation Metrics and Approaches in POI Recommendations
By leveraging learned preferences of tourists from the aforementioned data, POI RSs strive to provide diverse types of tourism-related recommendations. However, evaluating the quality of these recommendations poses a significant challenge. Evaluation methodologies for RSs can be categorized into offline evaluations, lab/user studies, and online A/B testing (Jannach, 2023). Offline evaluations involve training models on a training dataset and then evaluating their performance on a test dataset; lab/user studies typically encompass the invitation of a selected group of users to participate in an experiment, with subsequent collection and analysis of their feedback regarding the recommendations; online A/B testing entails randomly assigning users of a fielded system to groups using different recommendation algorithms and then comparing the user behaviors across these groups, such as click-through rates, purchase rates and user satisfaction levels to discern the effectiveness of the respective recommendation algorithms. Offline evaluation, in contrast to the other two evaluation methods, focuses on evaluating the effectiveness of recommendation algorithms using pre-collected historical data. Although this approach assumes that only evaluations present in the test set accurately define user preferences, it offers a quick and cost-effective means of evaluating the performance of a recommendation system (Ricci et al, 2021). Thus, offline evaluation remains essential for investigating specific aspects of recommendation algorithms (Jannach et al, 2021).
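To make the offline protocol concrete, the following sketch shows a per-user chronological train/test split of check-in data, a setup commonly used in the surveyed papers; the 80/20 ratio and the tuple format are illustrative assumptions:

```python
from collections import defaultdict

def temporal_split(checkins, test_ratio=0.2):
    """Per-user chronological split for offline POI evaluation: the earliest
    (1 - test_ratio) share of each user's check-ins forms the training set,
    the most recent check-ins form the test set.
    `checkins`: iterable of (user_id, poi_id, timestamp) tuples."""
    by_user = defaultdict(list)
    for user, poi, ts in checkins:
        by_user[user].append((ts, poi))
    train, test = [], []
    for user, visits in by_user.items():
        visits.sort()  # chronological order
        cut = max(1, int(len(visits) * (1 - test_ratio)))
        train += [(user, poi) for _, poi in visits[:cut]]
        test += [(user, poi) for _, poi in visits[cut:]]
    return train, test

tr, te = temporal_split([(1, 10, 5), (1, 11, 6), (1, 12, 7),
                         (2, 10, 1), (2, 13, 2)])
print(len(tr), len(te))
```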
Analyzing the collected papers, we found that the majority of research in the POI recommendation domain relies on offline evaluation. Offline evaluations enable the assessment of recommendation quality from various perspectives, such as relevance, diversity, novelty, and serendipity (Alhijawi et al, 2022). Nevertheless, it is important to note that, apart from a few studies (such as Han and Yamana (2019); Werneck et al (2021); Chen et al (2021); Rahmani et al (2022)), most of the research primarily focused on relevance (technically, through prediction accuracy) when evaluating recommendations. The evaluation metrics frequently employed for relevance measures are illustrated in Figure 14. Among these metrics, Recall and Precision measures represent a significant proportion, indicating their prominent role in evaluating recommendation relevance. In addition to recall and precision, other evaluation metrics such as Normalized Discounted Cumulative Gain (NDCG), F1-score, Accuracy, Mean Reciprocal Rank (MRR), Mean Average Precision (MAP), Root Mean Square Error (RMSE), Area Under the Curve (AUC), and Hit Ratio are also employed as complementary measures. These metrics contribute to the comprehensive evaluation
of the relevance of the predicted POI recommendations in comparison to the actual tourists' visit history.
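For reference, a minimal implementation of three of the most common relevance metrics is sketched below; the cut-off k and the binary-relevance assumption follow standard offline practice rather than any single surveyed paper:

```python
import math

def precision_recall_ndcg_at_k(recommended, relevant, k=10):
    """Top-k relevance metrics: `recommended` is a ranked list of POI ids,
    `relevant` the set of POIs the tourist actually visited in the test
    period. Relevance is binary (visited or not)."""
    top_k = recommended[:k]
    hits = [1 if poi in relevant else 0 for poi in top_k]
    precision = sum(hits) / k
    recall = sum(hits) / len(relevant) if relevant else 0.0
    # DCG discounts hits by their rank; IDCG is the best achievable DCG.
    dcg = sum(h / math.log2(i + 2) for i, h in enumerate(hits))
    ideal = sum(1 / math.log2(i + 2) for i in range(min(len(relevant), k)))
    ndcg = dcg / ideal if ideal > 0 else 0.0
    return precision, recall, ndcg

print(precision_recall_ndcg_at_k([5, 2, 9, 7], {2, 7, 42}, k=4))
```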
During the evaluation process of POI RSs, researchers often compare the performance of their systems against baseline approaches. In our analysis of the collected papers, we observed that a variety of baselines is used, indicating the absence of a universally accepted standard baseline for POI recommendation research. However, we identified several baselines that were used comparatively often (in more than 5 papers), which we summarize in Table 3. The findings indicate a prevailing reliance on non-machine learning methods as chosen baselines in the evaluation of POI RSs, highlighting their enduring significance in this research domain. In contrast, the utilization of alternative methods as baselines appears to be comparatively limited.
Contrary to the widespread use of offline evaluations in POI recommendation research, only a few studies have conducted lab/user studies to evaluate their POI RSs. For instance, Ji et al (2021) comprehensively measured model performance from multiple perspectives in an offline setting while also addressing the efficiency of online recommendations across two datasets involving 9,000 users. Similarly, Massimo and Ricci (2021) designed an online user study to gauge user-perceived novelty and appreciation of the recommendations. However, overall, the utilization of lab/user studies in POI recommendation research remains relatively limited. Moreover, the application of online A/B testing was not observed in the papers reviewed for this study.
## 5 Discussion
In this section, we further elaborate upon the results obtained in Section 4, specifically discussing the application of multiple data types and methods for integrating heterogeneous data in the field of POI recommendations. We shall further identify existing gaps in current research and explore potential future directions and research opportunities.
Figure 14: Evaluation Metrics for Relevance Measures in POI Recommendation Research
### Utilization of Multiple Data Types in POI Recommendations
In recent years, the availability of extensive and diverse data sources has provided an opportunity for researchers to enhance the recommendation process by capturing different aspects of user preferences, POI characteristics, and contextual information. However, looking at Figure 15(a), we observe that a substantial fraction of research, more than one third of the studied works, relies on only a single type of information in the recommendation process. Most commonly, such approaches are based on past user-item interaction databases. As a result, many approaches in the literature miss the opportunity to reach more accurate recommendations that may be obtained by considering other types of information. Notwithstanding the fact that the majority of studies strive to integrate multiple data types to decipher tourist preferences towards POIs, the diversity of data types employed remains notably limited. We represent the attributes or attribute combinations derived from tourist-POI interactions along the x-axis and the attributes describing tourists or POI characteristics along the y-axis, and use a heatmap to illustrate the number of collected POI recommendation papers that opted for different combinations of data types. This visualization is provided in Figure 15(b).
It is noteworthy that in the field of POI recommendation research, the primary data type combinations revolve around tourist-POI interactions and the social relationships of tourists or POI profiles. For instance, the number of papers leveraging
| Baseline | Description |
| --- | --- |
| GeoMF | Geographical modeling and weighted matrix factorization (Lian et al, 2014) |
| Rank-GeoFM | Ranking-based geographical factorization model (Li et al, 2015) |
| LRT | Location recommendation framework with temporal effects in terms of temporal regularization and temporal aggregation (Gao et al, 2013) |
| FPMC | Combination of common Markov chain and normal matrix factorization model (Rendle et al, 2010) |
| FPMC-LR | Personalized Markov chains in the check-in sequence and users' movement constraint (Cheng et al, 2013) |
| LORE | Sequential influence on location recommendations (Zhang et al, 2014) |
| USG | User preference, social influence and geographical influence (Ye et al, 2011) |
| GeoSoCa | Geographical correlations, social correlations and categorical correlations among users and POIs (Zhang and Chow, 2015) |
| LSTM | Recurrent network architecture in conjunction with an appropriate gradient-based learning algorithm (Hochreiter and Schmidhuber, 1997) |
| ST-RNN | Extended RNN that models local temporal and spatial contexts in each layer (Liu et al, 2016) |
| GE | Sequential effect, geographical influence, temporal cyclic effect and semantic effect by embedding relational graphs into a low dimensional space (Xie et al, 2016) |

Table 3: Baseline Approaches for Evaluation of POI Recommender Systems
check-in data in conjunction with a friendship graph (i.e., social relationships) or POI categories (i.e., POI profiles) approaches nearly half of the collected papers. Besides, the joint utilization of check-in and feedback data offers notable advantages, since feedback data provides explicit insights into user preferences and dislikes. By considering negative samples, the recommendation models can effectively account for user dislikes and improve the overall recommendation quality (Abbasi-Moud et al, 2021). Additionally, while some papers attempt to incorporate context, they generally focus on considering weather information of the POI's location (Hossain et al, 2022; Massimo and Ricci, 2021; Abbasi-Moud et al, 2021; Sun et al, 2019; Trattner et al, 2018), with only a minority of studies striving to integrate other contextual factors, such as time, travel constraints and traffic (Esmaeili et al, 2020; Braunhofer and Ricci, 2017). This tendency might be attributed to the limited types of data provided in the public datasets predominantly employed by the majority of papers, leading to a lack of information related to tourists, POIs, and contextual factors.
Moreover, even when leveraging existing datasets, current research in POI recommendations does not fully exploit the potential of these resources, and certain types of data remain underutilized. For instance, a mere two of the surveyed papers utilized POI visual information, i.e., image data from the public Flickr dataset.
Cui et al (2017) propose a method to use image data for POI recommendations, which is based on the extraction of user preferences from the implicit feedback encoded in their uploaded geotagged photos. A user-POI matrix is created, with its entries representing the preference score of a user on a POI. This score is calculated by counting the number of photos uploaded by a user that have geotags matching a POI. To construct the POI hypergraph, five types of low-level visual features are used to represent each photo, namely color histogram, color correlogram, edge direction histogram, wavelet texture and blockwise color moment. These features characterize photos from the different perspectives of color, shape, and texture.
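A minimal sketch of this preference-score construction is given below; the radius-based geotag-to-POI matching rule and the distance approximation are our assumptions, and the visual-feature extraction and hypergraph construction of the original method are omitted:

```python
import numpy as np

def build_user_poi_matrix(photos, pois, radius_km=0.1):
    """Sketch of the preference construction described above: each user's
    score for a POI counts their geotagged photos whose location matches the
    POI (here: falls within `radius_km`, an assumed matching rule).
    `photos`: list of (user_idx, lat, lon); `pois`: list of (lat, lon)."""
    n_users = max(u for u, _, _ in photos) + 1
    M = np.zeros((n_users, len(pois)))
    for user, lat, lon in photos:
        for j, (plat, plon) in enumerate(pois):
            # Equirectangular approximation, adequate at city scale.
            dx = 111.32 * (lat - plat)
            dy = 111.32 * np.cos(np.radians(plat)) * (lon - plon)
            if np.hypot(dx, dy) <= radius_km:
                M[user, j] += 1
    return M

M = build_user_poi_matrix([(0, 48.8584, 2.2945), (0, 48.8584, 2.2946)],
                          [(48.8584, 2.2945), (48.8606, 2.3376)])
print(M)  # user 0 has two photo matches for the first POI
```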
Sang et al (2021) propose a deep neural network model called LSVP for POI recommendations on sparse check-in data. The model leverages images to make recommendations, integrating visual content and sequential patterns for more accurate
Figure 15: Heterogeneous Data Utilized in POI Recommendation Research
recommendations, thus addressing the issue of data sparsity. In the LSVP model, the authors chose the VGG16 model as the CNN architecture to extract visual preferences from user-generated photos, and then long-term and short-term preferences are extracted from check-in sequences. Finally, an adaptive attention mechanism is used to balance all extracted user preferences.
Compared to the majority of studies that solely use check-in data for POI recommendations, the introduction of POI visual content significantly enhances the model's ability to learn tourist preferences, which can help to notably outperform some of the latest models (Sang et al, 2021). This example illustrates that there is considerable room for improvement for POI recommender system performance when utilizing underused data types in current databases.
### Integration of Heterogeneous Data in POI Recommendations
When confronted with varying types of data, the methodologies employed by current POI recommendation research to integrate these heterogeneous data warrant exploration within the context of this survey. From the research papers utilizing multiple data types, it is observed that the methods of integrating heterogeneous data vary according to the distinct recommendation approaches used in the respective studies.
In the realm of CF-based POI recommendation research, factorizing the user-item check-in matrix is a common approach. To utilize data types beyond check-ins, a prevalent method of integration involves constructing a unified objective function to fuse these heterogeneous data, which can manifest as regularization terms within the objective function to represent the influence of these factors. For instance, Xu et al (2021) proposed a novel multi-factor-based POI recommendation method that integrates tourist social relationships, tourist preferences, check-in time and geographical locations into a matrix factorization-based recommendation method. A distinct advantage of this approach is that it not only facilitates the integration of diverse data types but also retains interpretability to a certain extent.
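As an illustration, a generic objective of this kind can be written as follows (our own notation, not the exact formulation of Xu et al (2021)): the first term factorizes the observed check-in matrix, a social regularizer pulls the latent factors of befriended tourists together, a geographical regularizer smooths the latent factors of nearby POIs, and the last term is standard weight decay:

\[\min_{U,V}\sum_{(i,j)\in\Omega}\left(c_{ij}-u_{i}^{\top}v_{j}\right)^{2}+\alpha\sum_{i}\sum_{k\in\mathcal{F}(i)}\left\|u_{i}-u_{k}\right\|^{2}+\beta\sum_{j}\sum_{l\in\mathcal{N}(j)}w_{jl}\left\|v_{j}-v_{l}\right\|^{2}+\lambda\left(\left\|U\right\|_{F}^{2}+\left\|V\right\|_{F}^{2}\right)\]

where \(c_{ij}\) is the check-in count of tourist \(i\) at POI \(j\) over the observed pairs \(\Omega\), \(\mathcal{F}(i)\) is the friend set of tourist \(i\), \(\mathcal{N}(j)\) is the set of geographic neighbors of POI \(j\) with proximity weights \(w_{jl}\), and \(\alpha,\beta,\lambda\) control the trade-offs between the terms.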
With the increasing adoption of DL-based methods in the field of POI recommendations, an increasing number of studies are leveraging advanced techniques to effectively integrate information from diverse data sources, capturing the intricate relationships and patterns inherent in the data. For instance, Huang et al (2020) integrated different types of data using a dual-attention network (DAN-SNR) that captures both social and non-social influences in POI recommendations. The authors firstly transformed five types of information--user information (including social relationships), POI information, spatial information, temporal information and location information--into embedding vectors to obtain latent representations for each user. Subsequently, these embeddings are concatenated to derive a hidden representation for each check-in. Upon obtaining the hidden representation for each check-in, the authors employed a self-attention mechanism to simulate interactions between user check-ins, capturing social, sequential, temporal and spatial influences regardless of the type of the involved check-in information. Moreover, the self-attention mechanism can automatically measure behavioral relevance between check-ins and adjust attention weights accordingly to predict the next POI. The combination of these methods
allows deep learning approaches to effectively process and integrate different types of data, thereby enhancing the accuracy and efficiency of POI recommendations.
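The integration pattern described above can be sketched as follows; this is our simplification with four attribute embeddings, not the exact DAN-SNR architecture:

```python
import torch
import torch.nn as nn

class CheckinSelfAttention(nn.Module):
    """Sketch of the integration pattern described above: each check-in is
    represented by the concatenation of user, POI, time-slot and
    spatial-cell embeddings, and a self-attention layer models interactions
    across the check-in sequence."""

    def __init__(self, n_users, n_pois, n_times, n_cells, dim=32):
        super().__init__()
        self.embs = nn.ModuleList([
            nn.Embedding(n, dim) for n in (n_users, n_pois, n_times, n_cells)
        ])
        self.attn = nn.MultiheadAttention(4 * dim, num_heads=4,
                                          batch_first=True)

    def forward(self, users, pois, times, cells):
        # Each input: (batch, seq_len) integer ids for one check-in attribute.
        x = torch.cat([emb(t) for emb, t in
                       zip(self.embs, (users, pois, times, cells))], dim=-1)
        out, _ = self.attn(x, x, x)   # (batch, seq_len, 4 * dim)
        return out

model = CheckinSelfAttention(100, 500, 24, 64)
ids = lambda hi: torch.randint(0, hi, (2, 6))
print(model(ids(100), ids(500), ids(24), ids(64)).shape)
```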
Moreover, as depicted in Figure 7(b), graph-based methodologies have been garnering increasing interest within the realm of POI recommendation research. The integration of heterogeneous information through graph structures facilitates the exploration of intricate interdependencies between users, POIs, and contextual factors. Noteworthy studies in this domain, such as the one conducted by Zhang et al (2021), employed Graph Neural Networks (GNNs) to learn representations of users and POIs. GNNs are particularly adept at handling complex graph data by learning high-quality node representations. In this study, a Location-Based Social Network (LBSN) graph was constructed, wherein each user is interconnected with other users via social relations and with POIs via check-in activities. Subsequently, a latent representation of the target user nodes is generated by merging the outputs of social neighbor integration and POI neighbor integration via a neural network. A significant advantage of this model is its ability to automatically learn the weights of neighbors from the data, thereby modeling complex and multifaceted social relationships. This example underscores the potential of graph-based methodologies to enhance the accuracy and effectiveness of POI recommendations by incorporating and harnessing the rich information encapsulated within heterogeneous data sources.
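A minimal sketch of such neighbor integration is given below; it is our illustrative simplification, not the exact model of Zhang et al (2021), and it replaces the learned per-neighbor weighting with simple mean pooling:

```python
import torch
import torch.nn as nn

class LBSNAggregation(nn.Module):
    """Sketch of the neighbor-integration idea described above: a user's
    representation is refined by merging mean-pooled embeddings of social
    neighbors and of checked-in POIs with the user's own embedding."""

    def __init__(self, dim: int = 64):
        super().__init__()
        self.merge = nn.Linear(3 * dim, dim)

    def forward(self, user_vec, friend_vecs, poi_vecs):
        # user_vec: (D,); friend_vecs: (n_friends, D); poi_vecs: (n_pois, D)
        social = friend_vecs.mean(dim=0)
        places = poi_vecs.mean(dim=0)
        return torch.relu(self.merge(torch.cat([user_vec, social, places])))

layer = LBSNAggregation()
h = layer(torch.randn(64), torch.randn(5, 64), torch.randn(12, 64))
print(h.shape)  # torch.Size([64])
```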
### Open Gaps and Directions for Future Advancements in POI Recommendations
We identified the following research gaps within the current body of work and subsequently propose potential future trajectories and research opportunities to drive advancements in the domain of POI recommendations.
**Lack of exploration of diverse attributes from existing datasets.** Based on the findings presented in Subsection 4.3, it is evident that the majority of collected papers in the field of POI recommendations primarily rely on a limited set of attributes, predominantly derived from check-in data. While these attributes proved valuable in the recommendation process, there remains a lack of exploration concerning other important dimensions of data.
Addressing the observed lack of diverse attribute exploration in current studies, future research could seek to incorporate underutilized attributes, such as demographic user information and comprehensive POI profiles (including cost and facilities information). Additionally, the integration of a wider range of contextual information, such as time, location and traffic, can lead to more relevant and tailored recommendations, enhancing the overall user experience.
**Insufficient exploration of heterogeneous data.** Although the utilization of multiple data types to understand tourist preferences has become increasingly prevalent in current research, the constraints posed by today's datasets mean that heterogeneous data sources are still not explored comprehensively. A more comprehensive exploration and integration of such sources could lead to significantly improved POI recommendations. Future research could involve developing models that simultaneously consider
the diverse dimensions of tourists, POIs, the interaction between tourists and POIs and context, hence enhancing the accuracy and effectiveness of recommendations.
**Limited investigation of short-term preferences.** Although Figure 13(b) illustrates that some research efforts have started to focus on capturing the short-term preferences of tourists, particularly through methods such as LSTM-based approaches, the current level of exploration remains limited. Specifically, profound research on defining the temporal aspects of short-term preferences in the context of POI recommendations is lacking. Time-based considerations and the understanding of short-term preferences have not been extensively investigated within the field, indicating a research gap that requires further attention.
The exploration of temporal aspects of short-term preferences could be enhanced in future studies to better capture tourists' dynamic needs and interests. More profound research on robust methods, such as attention mechanisms, is needed to effectively define and understand short-term preferences within the context of POI recommendations.
**Underutilization of diverse evaluation metrics.** In current POI recommendation research, there is an overreliance on relevance measures, such as recall and precision, in offline evaluation, and an underutilization of diverse evaluation metrics, as emphasized in Subsection 4.4. For example, a general POI recommender system might recommend that a first-time visitor to Paris visit the Eiffel Tower, although the user most likely already knows about this popular attraction. Similarly, when making a next-POI recommendation, it might be predicted that the tourist will visit a nearby cafe based on their current location. While these recommendations might yield high accuracy in offline evaluations, their value to the user can be arguably low. Therefore, while relevance is of paramount importance, it is equally imperative to recognize the multidimensional nature of POI recommendations, along with the varying preferences and requirements of users. Moreover, the current use of comparably simple models that neglect unique tourist characteristics can engender misleading outcomes and spawn recommendations of limited utility.
Hence, there is a pressing need for future research to diversify evaluation metrics, such as novelty, diversity, serendipity and coverage, to capture the true value and usefulness of recommendations in the specific context of tourism.
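As a starting point, two such beyond-accuracy metrics can be computed as sketched below; the popularity-based novelty definition is one common choice among several, not a standard mandated by the surveyed papers:

```python
import math

def catalog_coverage(all_recommendations, n_pois):
    """Share of the POI catalog that appears in at least one user's list."""
    recommended = {poi for recs in all_recommendations for poi in recs}
    return len(recommended) / n_pois

def mean_novelty(recommendations, poi_popularity, n_users):
    """Mean self-information of recommended POIs: rarer POIs score higher.
    `poi_popularity[p]` = number of users who visited POI p (assumed given)."""
    scores = [-math.log2(poi_popularity[p] / n_users)
              for p in recommendations]
    return sum(scores) / len(scores)

recs = [[1, 2], [2, 3]]
print(catalog_coverage(recs, n_pois=10))                    # 0.3
print(mean_novelty([1, 3], {1: 50, 2: 500, 3: 5}, n_users=1000))
```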
**Limited reproducibility and transparency.** The current landscape of POI recommendation research underscores a crucial need for enhancing reproducibility and transparency, as can be observed from Figure 8. Open access to source code allows for the verification and extension of existing methodologies, fostering an environment of open science that is conducive to continual innovation and progress in POI recommendation research. The limited availability of source code in a significant number of papers impedes the replication and validation of research findings, potentially hindering the realization of true advancements in the field.
Ensuring reproducibility not only strengthens the validity and credibility of research but also encourages collaboration and knowledge sharing among researchers. Therefore, it is critical for future research to place a stronger emphasis on ensuring
the availability of source code, complete with relevant documentation, to facilitate the reproducibility of studies.
## 6 Summary
In this work, we have conducted a comprehensive analysis of the current state of research on in-trip POI recommendations. By addressing the research questions outlined at the beginning of the study, we have initially gained insights into the techniques, data utilization and evaluation metrics prevalent in this field. This has facilitated a thorough understanding of the latest research trends in this domain over the past five years. Subsequently, we have focused our attention on the utilization of heterogeneous data within the realm of POI recommendations, discussing exemplars pertaining to the diversity of data types and methods for integrating heterogeneous data. Finally, based on the results derived from this survey, we have identified existing open gaps and proposed potential future research directions.
As the first data-centric survey on POI recommendation research, this study serves as a valuable reference for researchers. It provides a foundation for the development of increasingly accurate, personalized, and context-aware RSs, thereby more effectively catering to the nuanced needs and preferences of tourists.
|
2303.05143 | ESCL: Equivariant Self-Contrastive Learning for Sentence Representations | Previous contrastive learning methods for sentence representations often
focus on insensitive transformations to produce positive pairs, but neglect the
role of sensitive transformations that are harmful to semantic representations.
Therefore, we propose an Equivariant Self-Contrastive Learning (ESCL) method to
make full use of sensitive transformations, which encourages the learned
representations to be sensitive to certain types of transformations with an
additional equivariant learning task. Meanwhile, in order to improve
practicability and generality, ESCL simplifies the implementations of
traditional equivariant contrastive methods to share model parameters from the
perspective of multi-task learning. We evaluate our ESCL on semantic textual
similarity tasks. The proposed method achieves better results while using fewer
learning parameters compared to previous methods. | Jie Liu, Yixuan Liu, Xue Han, Chao Deng, Junlan Feng | 2023-03-09T09:52:28Z | http://arxiv.org/abs/2303.05143v1 | # ESCL: Equivariant Self-Contrastive Learning for Sentence Representations
###### Abstract
Previous contrastive learning methods for sentence representations often focus on insensitive transformations to produce positive pairs, but neglect the role of sensitive transformations that are harmful to semantic representations. Therefore, we propose an Equivariant Self-Contrastive Learning (ESCL) method to make full use of sensitive transformations, which encourages the learned representations to be sensitive to certain types of transformations with an additional equivariant learning task. Meanwhile, in order to improve practicability and generality, ESCL simplifies the implementations of traditional equivariant contrastive methods to share model parameters from the perspective of multi-task learning. We evaluate our ESCL on semantic textual similarity tasks. The proposed method achieves better results while using fewer learning parameters compared to previous methods.
Jie Liu\({}^{1\dagger}\), Yixuan Liu\({}^{1,2\dagger}\), Xue Han\({}^{1}\), Chao Deng\({}^{1}\), Junlan Feng\({}^{1*}\)

\({}^{1}\)JIUTIAN Team, China Mobile Research

\({}^{2}\)Beijing University of Posts and Telecommunications

Index Terms: Natural Language Processing, Representation Learning, Pre-trained Language Models, Contrastive Learning

Footnote \(\dagger\): The first two authors contributed equally.

Footnote \(*\): Junlan Feng is the corresponding author.
## 1 Introduction
Sentence representation is a fundamental task in the field of natural language processing, which has been well studied in the previous literature [1, 2, 3, 4]. In practice, sentence embeddings are widely used in numerous downstream tasks, such as text summarization [5], machine translation [6] and recommendations [7]. Recently, some studies found that fine-tuning Pre-trained Language Models (PLMs) [8] with contrastive learning helps to learn sentence embeddings [9, 10, 11, 12]. Typically, contrastive learning methods construct positive pairs through data augmentations while treating other unrelated samples as negative instances, and then improve the representation space of PLMs based on InfoNCE loss [13]. Existing contrastive learning methods treat data augmentation modules as insensitive transformations that cannot affect the semantic representation (e.g., image blurring, low-dropout-based augmentation), but ignore the role of sensitive transformations that are harmful to the semantic representation [14] (e.g., image rotations and word deletions). That is, sentence representations learned through fine-tuning PLMs with a contrastive learning strategy should be sensitive to certain types of transformations.
Based on the idea of contrastive learning, SimCSE [15] simplifies its implementation by only using standard dropout as an implicit data augmentation. In this work, inspired by SimCSE and equivariant self-supervised learning methods [14, 16], we propose an Equivariant Self-Contrastive Learning (ESCL) method that relies only on dropout-based data augmentation to improve the expressiveness of sentence representations. Following SimCSE, the proposed ESCL uses the dropout-based data augmentation with low dropout rate as insensitive transformation to build an invariant task (similar to the main task in multi-task learning [17]). In the framework of equivariant self-supervised learning [14], we construct the equivariant task (similar to the auxiliary task) using high dropout rate and the proposed Relative Difference (RD) loss. From the view of multi-task learning, we analyze equivariant self-supervised learning in the hope of making it more practical and providing researchers with a new perspective.
## 2 Related Work
Most of the contextualized neural embedding methods are based on PLMs and show great promise. However, their sentence representations cannot achieve satisfactory performance on downstream tasks.
Some recent studies use a contrastive learning strategy to fine-tune PLMs to get better sentence embeddings. DeCLUTR [18] adopted a span sampling method in the same document to get anchor spans and positive spans. The self-guided contrastive framework [9] cloned BERT into two copies to get multiple views of the same sample. ConSERT [19] verified the effectiveness of multiple text augmentation strategies. SimCSE [15] used only standard dropout in PLMs twice as implicit data augmentations. SNCSE [4] proposed soft negative samples and a bidirectional margin loss to distinguish and decouple textual similarity and semantic similarity.
More recently, to make full use of the previously ignored sensitive transformations, E-SSL [14] added an additional
task to the contrastive learning framework to make the learned embeddings more expressive in the field of computer vision. Subsequently, DiffCSE [16] applied this idea to sentence representations. However, DiffCSE employs an additional generator to produce augmented samples and a discriminator to build the equivariant task, which not only makes the computation more expensive, but also leads to more complex model structures and more training parameters. Compared to E-SSL and DiffCSE, our ESCL is more efficient since it does not need additional data augmentation modules and encoders, and only uses dropout-based data augmentations to construct the invariant and equivariant tasks.
## 3 Methodology
### General Contrastive Learning Framework
In a typical contrastive learning method [13], the training objective is designed to obtain effective representation by pulling similar samples closer while pushing the unrelated samples apart.
SimCSE assumes a minibatch of \(N\) samples \(\mathcal{D}=\{x_{i}\}_{i=1}^{N}\), where \(x_{i}\) denotes the \(i\)-th input sentence. SimCSE passes \(x_{i}\) to BERT with the same low dropout rate twice to get two sentence embeddings \(h_{i}\) and \(h_{i}^{+}\), which is equivalent to using two different sub-encoders from original BERT. That is, unsupervised SimCSE is an implicit parameter-shared dual-encoder framework. As shown in Fig. 1, the embeddings of positive pair for the given sentence \(x_{i}\) can be obtained by:
\[h_{i}=f_{\theta}(x_{i},r_{\text{low}},m_{i}),\;\;h_{i}^{+}=f_{\theta}(x_{i},r_{ \text{low}},m_{i}^{+}) \tag{1}\]
where \(\theta\) are the training parameters of encoder \(f\), \(m_{i}\) and \(m_{i}^{+}\) denote different dropout masks for the low dropout rate \(r_{\text{low}}\). The InfoNCE loss for input sentence \(x_{i}\) in a mini-batch \(\mathcal{D}\) can be formulated as follows:
\[\mathcal{L}_{\text{InfoNCE}}=-\log\frac{e^{\text{sim}\left(h_{i},h_{i}^{+} \right)/\tau}}{\sum_{j=1}^{N}e^{\text{sim}\left(h_{i},h_{j}^{+}\right)/\tau}} \tag{2}\]
where \(\tau\) is a temperature hyperparameter and \(\text{sim}(\cdot,\cdot)\) is the cosine similarity. The training objective treats other \(N-1\) augmented samples within a minibatch as negative samples and aims to distinguish positive samples from negative ones, even if the difference of the two is small. In other words, the hard negative samples play an important role in InfoNCE loss.
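For concreteness, a minimal PyTorch sketch of this objective is given below, assuming the two views in Eq. 1 have already been produced by two stochastic forward passes of the encoder; the temperature value follows SimCSE's common setting:

```python
import torch
import torch.nn.functional as F

def info_nce_loss(h, h_pos, tau: float = 0.05):
    """Eq. 2 for a whole minibatch: h and h_pos are (N, D) embeddings of the
    same sentences under two independent dropout masks. Row i of the N x N
    cosine-similarity matrix treats column i as the positive and the
    remaining N - 1 columns as negatives."""
    h = F.normalize(h, dim=-1)
    h_pos = F.normalize(h_pos, dim=-1)
    sim = h @ h_pos.T / tau                  # (N, N) scaled cosine similarities
    labels = torch.arange(h.size(0))         # positives lie on the diagonal
    return F.cross_entropy(sim, labels)

loss = info_nce_loss(torch.randn(8, 768), torch.randn(8, 768))
print(loss.item())
```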
### Equivariant Self-Contrastive Learning
More recently, E-SSL [14] proposed a general equivariant self-supervised learning framework, which discussed and verified the importance of previously neglected sensitive transformations for representation learning in the field of computer vision. Let \(T_{g}\) denote the transformation from a group \(G\), \(T_{g}^{\prime}\) denote an induced group transformation, \(f\) be the encoder used to obtain representations, and \(x\) be an input sample. The property of equivariance can be described as:
\[f(T_{g}(x))=T_{g}^{\prime}(f(x)) \tag{3}\]
We can construct a training objective to make \(T_{g}^{\prime}\) not the identity for some types of transformations (e.g., image rotations), while it can keep the identity for some other transformations (e.g., image blurring).
In equivariant self-supervised learning, we usually need to construct an equivariant task. E-SSL directly adopts data augmentation to get the augmented samples, which have different semantics from the original samples. DiffCSE uses an additional generator to produce augmented sentences and an additional discriminator encoder with new training parameters to build the equivariant task. In contrast to these above methods, for the sake of efficiency, we use the above encoder \(f\) with high dropout rate to accomplish sensitive transformation to get the embedding \(h_{i}^{-}\). As shown in Fig. 1, \(h_{i}^{-}\) can be obtained by:
\[h_{i}^{-}=f_{\theta}(x_{i},r_{\text{high}},m_{i}^{-}) \tag{4}\]
where \(r_{\text{high}}\) is a high dropout rate and \(m_{i}^{-}\) denotes its dropout mask. That is to say, we construct the equivariant task using only dropout-based data augmentation with a high dropout rate. With no need for the additional data augmentation module [14, 16] and discriminator [16] to construct the equivariant task, our ESCL can simplify the model structure and reduce the scale of training parameters.
Based on the property of equivariant self-supervised learning and inspired by SNCSE [4], we design a Relative Difference (RD) loss for sensitive transformations denoted by \(\mathcal{L}_{\text{RD}}\), which aims to learn the relative difference between positive and negative samples. The RD loss is defined as:
\[\mathcal{L}_{\text{RD}}=\sum_{h_{i}^{\prime}\in\left\{h_{i},h_{i}^{+}\right\}} e^{\text{sim}\left(h_{i}^{\prime},h_{i}^{-}\right)-\text{sim}\left(h_{i},h_{i}^{+}\right)} \tag{5}\]
Figure 1: Schematic illustration of the proposed method.
The relative difference loss function \(\mathcal{L}_{\mathrm{RD}}\) encourages the cosine distance \(Dist_{\mathrm{neg}}\) between the negative pair (\(h_{i}^{\prime}\) and \(h_{i}^{-}\)) to be much larger than the cosine distance \(Dist_{\mathrm{pos}}\) between the positive pair (\(h_{i}\) and \(h_{i}^{+}\)). This training objective is based on the property of equivariant contrastive learning and helps the learned sentence embeddings be sensitive to certain types of transformations that are harmful to the semantic representation.
As mentioned above, we can get the final loss function \(\mathcal{L}_{\mathrm{ESCL}}\) which consists of two training objectives:
\[\mathcal{L}_{\mathrm{ESCL}}=\mathcal{L}_{\mathrm{InfoNCE}}+\lambda\cdot \mathcal{L}_{\mathrm{RD}} \tag{6}\]
where \(\lambda\) is a hyperparameter to control the trade-off between these two loss functions. All the training procedures of our ESCL are described as above and illustrated in Fig. 1.
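A minimal sketch of the joint objective (ours; it reuses `info_nce_loss` and the three-view embeddings sketched above, and averages Eq. 5 over the batch):

```python
import torch
import torch.nn.functional as F

def rd_loss(h, h_pos, h_neg):
    # Eq. (5): for h' in {h, h_pos}, push sim(h', h_neg) well below sim(h, h_pos).
    sim_pos = F.cosine_similarity(h, h_pos, dim=-1)
    loss = (torch.exp(F.cosine_similarity(h, h_neg, dim=-1) - sim_pos)
            + torch.exp(F.cosine_similarity(h_pos, h_neg, dim=-1) - sim_pos))
    return loss.mean()

def escl_loss(h, h_pos, h_neg, lam=2.5e-3, tau=0.05):
    # Eq. (6): invariant (InfoNCE) term plus lambda-weighted equivariant (RD) term.
    return info_nce_loss(h, h_pos, tau) + lam * rd_loss(h, h_pos, h_neg)
```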
In the inference stage, we discard the equivariant task and use only the encoder \(f\) to produce sentence embeddings.
Another advantage is that the structure of ESCL resembles the hard parameter sharing framework of multi-task learning in deep neural networks [17], in which different tasks share training parameters and can thereby promote each other during training. Although the invariant task and the equivariant task do not exactly meet the requirements of multi-task learning, the similarity in framework makes many studies of multi-task learning useful for equivariant contrastive learning. We hope that, from the view of multi-task learning, we can provide a new research perspective for equivariant contrastive learning.
**Why does the relative difference loss work?** To further understand the role of \(\mathcal{L}_{\mathrm{RD}}\), we analyze and compare InfoNCE loss and RD loss. Firstly, InfoNCE loss in Eq. 2 can be formulated in another way:
\[\mathcal{L}_{\mathrm{InfoNCE}}=\log(1+\frac{\sum_{j=1,j\neq i}^{N}e^{\mathrm{ sim}(h_{i},h_{j}^{+})/\tau}}{e^{\mathrm{sim}\left(h_{i},h_{i}^{+}\right)/\tau}}) \tag{7}\]
It is clear that the cosine distance \(Dist_{\mathrm{pos}}\) should become smaller, while the distance \(Dist_{\mathrm{neg}}^{\prime}\) between \(h_{i}\) and \(h_{j}^{+}\) should become larger. However, the InfoNCE loss may cause some problems: (i) the negative samples come from the same batch, so there may be some false negative samples, which weakens the InfoNCE loss; (ii) there is no explicit comparison between \(Dist_{\mathrm{pos}}\) and \(Dist_{\mathrm{neg}}^{\prime}\). Compared to the InfoNCE loss, the RD loss in Eq. 5 explicitly encourages \(Dist_{\mathrm{neg}}\) to be greater than \(Dist_{\mathrm{pos}}\), and the embeddings of the negative samples come from BERT with high dropout to ensure their quality. Therefore, the RD loss enables BERT to make full use of the sensitive transformations to obtain better sentence embeddings.
## 4 Experiments and Analysis
### Experimental Setup
In our experiments, we implement our ESCL based on the PyTorch implementations of SimCSE [15] and DiffCSE [16]. Following the setting of DiffCSE, we use BERT\({}_{\mathrm{base}}\) (uncased) [8] to initialize the sentence encoder \(f\) at the training stage. Unless otherwise mentioned, the rest of the hyperparameters in our ESCL are the same as in DiffCSE [16]. We use Spearman's correlation \(\rho\) to measure the performance of the learned sentence embeddings; it is a non-parametric measure of rank correlation and can be formulated as:
\[\rho(\mu,\nu)=\frac{\sum_{k=1}^{n}(\mu_{k}-\bar{\mu})(\nu_{k}-\bar{\nu})}{ \sqrt{\sum_{k=1}^{n}(\mu_{k}-\bar{\mu})^{2}\sum_{k=1}^{n}(\nu_{k}-\bar{\nu})^ {2}}} \tag{8}\]
where \(\mu\) and \(\nu\) are the two sets of (rank) variables, \(n\) is the sample size, \(\mu_{k}\) and \(\nu_{k}\) denote their \(k\)-th values, and \(\bar{\mu}\) and \(\bar{\nu}\) denote their mean values.
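As a sanity check, Eq. 8 is the Pearson correlation formula; Spearman's \(\rho\) results from applying it to the rank transforms of the raw scores. A direct NumPy transcription (ours; in practice `scipy.stats.spearmanr` performs the ranking and this computation in one call):

```python
import numpy as np

def correlation(mu, nu):
    # Eq. (8); for Spearman's rho, mu and nu are the ranks of the
    # predicted and gold similarity scores.
    mu, nu = np.asarray(mu, dtype=float), np.asarray(nu, dtype=float)
    mu_c, nu_c = mu - mu.mean(), nu - nu.mean()
    return (mu_c * nu_c).sum() / np.sqrt((mu_c ** 2).sum() * (nu_c ** 2).sum())
```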
For the additional hyperparameters in our ESCL, we set \(r_{\mathrm{low}}\) to \(0.1\), \(r_{\mathrm{high}}\) to \(0.45\), and \(\lambda\) to \(2.5\times 10^{-3}\). We compare the results of using different values of \(r_{\mathrm{high}}\) for the equivariant learning task in Sec. 4.4. In subsequent sections, we report the performance of our ESCL averaged over \(10\) different random seeds to reduce statistical errors.
### The Datasets
We use the SentEval [20] toolkit to evaluate ESCL on 7 semantic textual similarity (STS) tasks, which include STS 2012-2016 [21], STS Benchmark [22] and SICK-Relatedness [23]. It is worth mentioning that no STS training datasets are used at the training stage and all the experiments on STS are fully unsupervised, which means all the embeddings are fixed once they are trained. In our evaluation, we follow the way Sentence-BERT [2] uses the development data; SimCSE and DiffCSE adopt the same strategy.
### Main Results and Analysis
**Baselines.** We compare our ESCL to previous state-of-the-art methods on STS tasks, including averaged GloVe [24] embeddings, averaged first-and-last-layer BERT [8] embeddings, SimCSE [15], DiffCSE [16], and the post-processing method BERT-flow [25]. Tab. 1 shows all the related results on the 7 STS tasks for the different methods based on BERT\({}_{\mathrm{base}}\).
Footnote 1: Additionally, we repeated all the experiments based on RoBERTa\({}_{\mathrm{base}}\), which also confirmed the effectiveness of ESCL.
Firstly, compared to the static embedding method GloVe, our ESCL achieves a significant performance improvement on all STS datasets, which demonstrates the effectiveness of contextualized neural embedding methods based on PLMs combined with a contrastive learning strategy.
Compared to the contextualized neural embedding methods based on PLMs (BERT, BERT-flow and SimCSE), our ESCL method still achieves consistent performance gains. As mentioned above, the original BERT is not suitable for directly producing sentence embeddings. BERT-flow is a post-processing method that adjusts the anisotropic distribution of sentence embeddings through normalizing flows; limited by the flow-based model on top of the PLM, it yields a relatively small performance improvement. Specifically, although SimCSE is also a method for fine-tuning BERT based on a contrastive learning strategy, ESCL outperforms it on STS tasks by about \(2\%\) in Spearman's correlation.
Finally, the most important comparison is between DiffCSE and our ESCL. DiffCSE is an equivariant contrastive learning method that uses an additional generator to produce augmented samples and a discriminator to construct the equivariant task for sensitive transformations. Our \(\text{ESCL}_{\text{cls-before-pooler}}\) improves upon DiffCSE\({}_{\text{cls-before-pooler}}\) from \(76.16\%\) to \(77.46\%\). These experimental results fully validate our analysis of building the equivariant task in Sec. 3.2.
### Ablation Studies
In this section, we present a series of ablation experiments in Tab. 2 to support the design of our ESCL. The following variants are considered: (i) the dropout rate \(r_{\text{high}}\) in Eq. 4 used for the augmented sentence embeddings; (ii) the loss function of the equivariant task.
The high dropout rate \(r_{\text{high}}\) is a hyperparameter that affects the quality of the embedding \(h_{i}^{-}\) used to build the equivariant task. To further understand the role of \(r_{\text{high}}\) in Eq. 4, we try different values in Tab. 2 and observe that the augmented embedding \(h_{i}^{-}\) from BERT with a high dropout rate plays an important role in the equivariant task, and that this way of building augmented embeddings for sensitive transformations is effective. We therefore set \(r_{\text{high}}\) to \(0.45\) in all experiments for our ESCL.
Then, we replace the RD loss with a simple Cosine Similarity (CosSim) loss \(\sum_{h_{i}^{\prime}\in\left\{h_{i},h_{i}^{+}\right\}}e^{\text{sim}(h_{i}^{\prime},h_{i}^{-})}\) to verify the role of the RD loss in the equivariant task. The CosSim loss aims to make the cosine distance \(Dist_{\text{neg}}\) between the negative pair (\(h_{i}^{\prime}\) and \(h_{i}^{-}\)) larger, but it does not capture the relative difference between \(Dist_{\text{pos}}\) and \(Dist_{\text{neg}}\). As shown in Tab. 2, even though the CosSim loss uses the same \(h_{i}^{-}\) as augmented sentence embeddings, performance degrades by \(3.65\%\) on the development set of STS-B. This comparison shows that the sentence representations learned with the RD loss are sensitive to the difference between the original and augmented samples, and that the relative difference between \(Dist_{\text{pos}}\) and \(Dist_{\text{neg}}\) is conducive to improving the sentence representations of PLMs.
## 5 Discussion and Conclusions
We introduce ESCL, an equivariant self-contrastive learning method that improves the sentence representations of BERT, which relies only on standard dropout-based augmentations. Firstly, different dropout rates are used to build invariant and equivariant tasks. Subsequently, the relative difference loss for the equivariant task is proposed to jointly optimize sentence representations. Finally, we provide researchers with a new multi-task learning perspective to analyze and study equivariant contrastive learning. We believe that our ESCL can provide a new framework to implement equivariant self-supervised learning to get better sentence embeddings.
| **Model** | **STS12** | **STS13** | **STS14** | **STS15** | **STS16** | **STS-B** | **SICK-R** | **Avg.** |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| GloVe embeddings (avg.)\({}^{\dagger}\) | 55.14 | 70.66 | 59.73 | 68.25 | 63.66 | 58.02 | 53.76 | 61.32 |
| \(\text{BERT}_{\text{base}}\) (first-last avg.)\({}^{\dagger}\) | 39.70 | 59.38 | 49.67 | 66.03 | 66.19 | 53.87 | 62.06 | 56.70 |
| \(\text{BERT}_{\text{base-flow}}\)\({}^{\dagger}\) | 58.40 | 67.10 | 60.85 | 75.16 | 71.22 | 68.66 | 64.47 | 66.55 |
| \(*\) SimCSE\({}_{\text{cls}}\) (reproduce) | **68.21** | 81.32 | 73.72 | 80.25 | 76.03 | 75.54 | 71.06 | 75.16 |
| \(*\) DiffCSE\({}_{\text{cls}}\) (reproduce) | 66.42 | 81.60 | 73.46 | **82.29** | 78.00 | 77.22 | 70.29 | 75.61 |
| ESCL\({}_{\text{cls}}\) | 66.67 | **82.66** | **74.03** | 82.24 | **79.78** | **79.49** | **72.46** | **77.19** |
| \(*\) SimCSE\({}_{\text{cls-before-pooler}}\) (reproduce) | 68.06 | 81.56 | 73.95 | 80.84 | 76.56 | 75.79 | 71.43 | 75.46 |
| \(*\) DiffCSE\({}_{\text{cls-before-pooler}}\) (reproduce) | 67.21 | 81.84 | 74.06 | 82.62 | 78.97 | 77.57 | 70.82 | 76.16 |
| ESCL\({}_{\text{cls-before-pooler}}\) | **70.06** | **82.64** | **74.14** | **82.67** | **80.14** | **80.14** | **72.44** | **77.46** |

Table 1: Performance of sentence representations on the semantic textual similarity (STS) test sets (Spearman's correlation) for different methods based on BERT\({}_{\mathrm{base}}\). \({}^{\dagger}\) marks results taken from DiffCSE; \(*\) marks results reproduced with the default setup of the original implementations of SimCSE and DiffCSE.
| **Setup** | \(r_{\text{high}}=0.35\) | \(r_{\text{high}}=0.40\) | \(r_{\text{high}}=0.45\) | \(r_{\text{high}}=0.50\) | CosSim loss |
| --- | --- | --- | --- | --- | --- |
| **STS-B** | 82.01 | 83.74 | **83.94** | 83.45 | 80.29 |

Table 2: Development set results on STS-B with different dropout rates \(r_{\text{high}}\) and with the CosSim loss in the equivariant task.
2301.13158 | Selection principles and proofs from the Book | I provide simplified proofs for each of the following fundamental theorems
regarding selection principles:
1. The Quasinormal Convergence Theorem, due to the author and Zdomskyy,
asserting that a certain, important property of the space of continuous
functions on a space is actually preserved by Borel images of that space.
2. The Scheepers Diagram Last Theorem, due to Peng, completing all provable
implications in the diagram.
3. The Menger Game Theorem, due to Telgarsky, determining when Bob has a
winning strategy in the game version of Menger's covering property.
4. A lower bound on the additivity of Rothberger's covering property, due to
Carlson.
The simplified proofs lead to several new results. | Boaz Tsaban | 2023-01-30T18:36:46Z | http://arxiv.org/abs/2301.13158v1 | # Selection principles and proofs from the book
###### Abstract.
I provide simplified proofs for each of the following fundamental theorems regarding selection principles:
1. The Quasinormal Convergence Theorem, due to the author and Zdomskyy, asserting that a certain, important property of the space of continuous functions on a space is actually preserved by Borel images of that space.
2. The Scheepers Diagram Last Theorem, due to Peng, completing all provable implications in the diagram.
3. The Menger Game Theorem, due to Telgarsky, determining when Bob has a winning strategy in the game version of Menger's covering property.
4. A lower bound on the additivity of Rothberger's covering property, due to Carlson.
The simplified proofs lead to several new results.
2020 Mathematics Subject Classification: Primary: 37F20; Secondary 26A03, 03E75
## 1. Introduction
The study of _selection principles_ unifies notions and studies originating from dimension theory (Menger and Hurewicz), measure theory (Borel), convergence properties (Csaszar-Laczkovich), and function spaces (Gerlits-Nagy and Arhangel'skii), notions analyzed and developed in numerous studies of later mathematicians, especially since the 1996 paper of Just, Miller, Scheepers and Szeptycki [9]. The unified notions include, among others, many classic types of special sets of real numbers, local properties in function spaces, and more recent types of convergence properties.
Selective topological covering properties form the kernel of selection principles. These covering properties are related via the Scheepers Diagram (Figure 1). This is a diagram
Figure 1. The Scheepers Diagram
of covering properties and implications among them. The properties in this diagram are obtained as follows.
For families \(\mathrm{A}\) and \(\mathrm{B}\) of sets, let \(\mathsf{S}_{1}(\mathrm{A},\mathrm{B})\) be the statement: For each sequence of elements of the family \(\mathrm{A}\), we can pick one element from each sequence member, and obtain an element of the family \(\mathrm{B}\). When \(\mathrm{A}=\mathrm{B}=\mathrm{O}(X)\), the family of open covers of a topological space \(X\), we obtain _Rothberger's property_ (1941), the topological version of Borel's strong measure zero. We say that a space \(X\) satisfies \(\mathsf{S}_{1}(\mathrm{O},\mathrm{O})\) if the assertion \(\mathsf{S}_{1}(\mathrm{O}(X),\mathrm{O}(X))\) holds, and similarly for the other selective properties.
The hypothesis \(\mathsf{S}_{\mathrm{fin}}(\mathrm{A},\mathrm{B})\) is obtained by replacing _one_ by _finitely many_ in the above definition. The property \(\mathsf{S}_{\mathrm{fin}}(\mathrm{O},\mathrm{O})\) is, by an observation of Hurewicz (1925), equivalent to Menger's basis property, a dimension-type property. The property \(\mathsf{U}_{\mathrm{fin}}(\mathrm{A},\mathrm{B})\) is obtained by further allowing us to take the _unions_ of the selected finite subsets--this matters for some types of covers. For technical reasons, this property does not consider all covers of type \(\mathrm{A}\), but only those that have no finite subcover.
A cover of a space is an _\(\omega\)-cover_ if no member of the cover covers the entire space, but every finite subset of the space is covered by some member of the cover. For a space \(X\), \(\Omega(X)\) is the family of open \(\omega\)-covers of the space. A _point-cofinite cover_ is an infinite cover where every point of the space belongs to all but finitely many members of the cover. \(\Gamma(X)\) is the family of open point-cofinite covers of the space \(X\).
Applying the mentioned selection principles to the cover types \(\mathrm{O}\), \(\Omega\) and \(\Gamma\), we obtain additional important properties, such as Hurewicz's property \(\mathsf{U}_{\mathrm{fin}}(\mathrm{O},\Gamma)\) (1925). We also obtain the Gerlits-Nagy \(\gamma\)-property \(\mathsf{S}_{1}(\Omega,\Gamma)\) (1982), characterizing the Frechet-Urysohn property of the function space \(\mathrm{C}_{\mathrm{p}}(X)\) of continuous real-valued functions, with the topology of pointwise convergence: A topological space is _Frechet-Urysohn_ if every point in the closure of a set is actually a limit of a sequence in the set. This duality between the spaces \(X\) and \(\mathrm{C}_{\mathrm{p}}(X)\) also translates various tightness and convergence properties of the space \(\mathrm{C}_{\mathrm{p}}(X)\)--discovered earlier by Arhangel'skii, Bukovsky, Sakai, and others--to the selective covering properties \(\mathsf{S}_{\mathrm{fin}}(\Omega,\Omega)\), \(\mathsf{S}_{1}(\Gamma,\Gamma)\), and \(\mathsf{S}_{1}(\Omega,\Omega)\). In Section 2, we provide a surprisingly simple proof of one of the most important results of this type. While the result itself does not involve selective covering properties explicitly, its proof does that extensively.
A topological space is _Lindelof_ if every open cover has a countable subcover. For example, all sets of real numbers are Lindelof. Since all selection principles concern countable sequences, the theory mainly deals with Lindelof spaces. For Lindelof spaces, the Scheepers Diagram is the result of a classification of all properties thus introduced; each property is equivalent to one in the diagram [9]. It was long open whether any additional implication could be established among the properties in the diagram. In Section 3 we deal with the recent, surprising solution of this problem.
Menger's covering property \(\mathsf{S}_{\mathrm{fin}}(\mathrm{O},\mathrm{O})\) is the oldest, most general, and most applied property in the Scheepers Diagram. Initially, Menger conjectured his property to coincide with \(\sigma\)-compactness. While this turned out false [20], the game version of this property does provide a characterization of \(\sigma\)-compactness. A very transparent proof of this deep result is presented in Section 4.
In Section 5 we consider a connection to combinatorial set theory. We show that a nontrivial lower bound on the additivity of Rothberger's property follows easily from basic knowledge on selection principles.
_The Book_ is a popular myth by Paul Erdos: A transfinite book containing the most simple proofs for all theorems. I would like to believe that the proofs presented here are similar to ones from the Book...or from some of its preliminary drafts, at any rate.
## 2. The Quasinormal Convergence Theorem
By _real set_ we mean a topological space where every open set is a countable union of clopen sets. Such are, for example, totally disconnected subsets of the real line and, in particular, subsets of the Cantor space \(\{0,1\}^{\mathbb{N}}\). In general, every perfectly normal space with any of the properties considered in this section is a real set.
Let \(X\) be a real set. A sequence of real-valued functions \(f_{1},f_{2},\dots\) on \(X\) converges _quasinormally_ to a real-valued function \(f\) if there are positive real numbers \(\epsilon_{1},\epsilon_{2},\dots\) converging to \(0\) such that for each point \(x\in X\), we have
\[|f_{n}(x)-f(x)|\leq\epsilon_{n}\]
for all but finitely many \(n\). Quasinormal convergence generalizes uniform convergence.
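For illustration (our example, not from the source), let \(X:=C\setminus\{1\}\), where \(C\subseteq[0,1]\) is the Cantor set, and let \(f_{n}(x):=x^{n}\). Since the set \(C\) contains points arbitrarily close to \(1\), we have \(\sup_{x\in X}f_{n}(x)=1\) for each \(n\), and thus the convergence \(f_{n}\to 0\) is not uniform. It is, however, quasinormal, witnessed by \(\epsilon_{n}:=1/n\): for each fixed point \(x\in X\), we have

\[\frac{f_{n}(x)}{\epsilon_{n}}=nx^{n}\to 0,\]

and thus \(|f_{n}(x)|\leq\epsilon_{n}\) for all but finitely many \(n\).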
A real set \(X\) is a _QN space_ if every sequence of continuous real-valued functions on \(X\) that converges to \(0\) pointwise, converges to \(0\) quasinormally. Equivalently, convergence in the space \(\mathrm{C}_{\mathrm{p}}(X)\) is quasinormal.
QN spaces were studied intensively, e.g., by Bukovsky, Reclaw, Repicky, Scheepers, Nowik, Sakai, and Hales [21, and references therein]. This and other properties of similar type are preserved by continuous images, and all experience prior to the paper of the author and Zdomskyy [21] suggested that they are not preserved by Borel images. Thus, the following theorem [21, Theorem 9] came as a surprise.
The Baire space \(\mathbb{N}^{\mathbb{N}}\) is quasiordered by the relation \(\leq^{*}\) of eventual dominance: \(f\leq^{*}g\) if \(f(n)\leq g(n)\) for all but finitely many \(n\). A subset \(Y\) of the Baire space \(\mathbb{N}^{\mathbb{N}}\) is _bounded_ if it is bounded with respect to eventual dominance.
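For example (a standard observation), every countable set \(\{\,f_{k}:k\in\mathbb{N}\,\}\subseteq\mathbb{N}^{\mathbb{N}}\) is bounded: the diagonal function

\[g(n):=\max\{f_{1}(n),\dots,f_{n}(n)\}\]

satisfies \(f_{k}(n)\leq g(n)\) for all \(n\geq k\), and thus \(f_{k}\leq^{*}g\) for all \(k\). Unbounded subsets of the Baire space are, therefore, uncountable.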
**Theorem 2.1** (Quasinormal Convergence Theorem).: _The following assertions are equivalent for real sets \(X\):_
1. _The set_ \(X\) _is a QN space._
2. _Every Borel image of the set_ \(X\) _in the Baire space_ \(\mathbb{N}^{\mathbb{N}}\) _is bounded._
The second property in the theorem is well known and straightforward to apply: The most natural transformations needed in proofs regarding these notions are always easily seen to be Borel. Consequently, the theorem had a dramatic impact on the study of QN spaces: First, many of the previous sophisticated arguments could be replaced by straightforward ones. Second, many properties that were hitherto considered separately turned out provably equivalent. Consequently, this theorem settled all problems concerning these properties [21].
The original proof of the Quasinormal Convergence Theorem is long and involved, and some of its parts are difficult to follow. A more natural proof was later published by Bukovsky and Supina [5, SS4]. Inspired by a paper of Gerlits and Nagy [7], I have discovered the following surprisingly simple proof. All needed proof ingredients were already available at the time the Quasinormal Convergence Theorem was established. The following lemma provides the key to the proof.
For a space \(X\), let \(\mathrm{PF}(X)\) be the collection of countably infinite point-finite families of open sets in \(X\).
**Lemma 2.2**.: _Let \(X\) be a topological space. The following assertions are equivalent:_
1. _Every Borel image of the space_ \(X\) _in the Baire space_ \(\mathbb{N}^{\mathbb{N}}\) _is bounded._
2. _The space_ \(X\) _satisfies_ \(\mathsf{S}_{1}(\mathrm{PF},\mathrm{PF})\)_._
Proof.: Let \(\mathrm{F}(X)\) (respectively, \(\mathrm{B}(X)\)) be the family of countable closed (respectively, Borel) covers of the set \(X\), and \(\mathrm{F}_{\Gamma}(X)\) (respectively, \(\mathrm{B}_{\Gamma}(X)\)) be the family of infinite closed (respectively, Borel) point-cofinite covers of the set \(X\). The properties (1), \(\mathsf{U}_{\mathrm{fin}}(\mathrm{B},\mathrm{B}_{\Gamma})\), and \(\mathsf{S}_{1}(\mathrm{B}_{\Gamma},\mathrm{B}_{\Gamma})\) are equivalent [17, Theorem 1]. For a family \(\mathcal{U}\) of open sets, we have \(\mathcal{U}\in\mathrm{PF}(X)\) if and only if
\[\{\,U^{\mathrm{c}}:U\in\mathcal{U}\,\}\in\mathrm{F}_{\Gamma}(X).\]
It follows that \(\mathsf{S}_{1}(\mathrm{PF},\mathrm{PF})=\mathsf{S}_{1}(\mathrm{F}_{\Gamma}, \mathrm{F}_{\Gamma})\).
\((1)\Rightarrow(2)\): Clearly, \(\mathsf{S}_{1}(\mathrm{B}_{\Gamma},\mathrm{B}_{\Gamma})\) implies \(\mathsf{S}_{1}(\mathrm{F}_{\Gamma},\mathrm{F}_{\Gamma})\).
\((2)\Rightarrow(1)\): A theorem of Bukovsky-Reclaw-Repicky [4, Corollary 5.3] asserts that
\[\mathsf{U}_{\mathrm{fin}}(\mathrm{F},\mathrm{F}_{\Gamma})=\mathsf{U}_{\mathrm{ fin}}(\mathrm{B},\mathrm{B}_{\Gamma}).\]
The usual argument [16, Proposition 11] shows that \(\mathsf{S}_{1}(\mathrm{F}_{\Gamma},\mathrm{F}_{\Gamma})\) implies \(\mathsf{U}_{\mathrm{fin}}(\mathrm{F},\mathrm{F}_{\Gamma})\): If \(\{\,C_{n}:n\in\mathbb{N}\}\in\mathrm{F}(X)\) and there is no finite subcover, then \(\{\,\bigcup_{k=1}^{n}C_{k}:n\in\mathbb{N}\}\in\mathrm{F}_{\Gamma}(X)\).
A topological space \(Y\) has Arhangel'skii's property \(\alpha_{1}\) if for every sequence \(s_{1},s_{2},\dots\) of sequences converging to the same point, there is a sequence \(s\) such that the sets \(\mathrm{im}(s_{n})\setminus\mathrm{im}(s)\) are finite for all natural numbers \(n\). This property is defined by properties of sets (images of sequences) rather than sequences. Fix a bijection \(\varphi\colon\mathbb{N}\to\mathbb{N}\times\mathbb{N}\). For sequences \(s_{1},s_{2},\dots\), with \(s_{n}=(s_{(n,1)},s_{(n,2)},\dots)\) for each \(n\), define
\[\bigsqcup_{n=1}^{\infty}s_{n}:=(s_{\varphi(1)},s_{\varphi(2)},\dots).\]
Since convergence of a sequence does not depend on the order of its elements, it does not matter, for our purposes, which bijection \(\varphi\) is used. A sequence \(\bigsqcup_{n=1}^{\infty}s_{n}\) converges to a point \(p\) if and only if each sequence \(s_{n}\) converges to \(p\), and for each neighborhood \(U\) of \(p\), we have \(\mathrm{im}(s_{n})\subseteq U\) for all but finitely many \(n\).
**Lemma 2.3**.: _Let \(Y\) be an \(\alpha_{1}\) space. For every sequence \(s_{1},s_{2},\dots\) of sequences in the space \(Y\) converging to the same point \(p\), there are tails \(t_{n}\) of \(s_{n}\), for \(n\in\mathbb{N}\), such that the sequence \(\bigsqcup_{n=1}^{\infty}t_{n}\) converges to \(p\)._
Proof.: There is a sequence \(s\) such that the sets \(\mathrm{im}(s_{n})\setminus\mathrm{im}(s)\) are finite for all natural numbers \(n\). By moving to a subsequence, we may assume that \(\mathrm{im}(s)\subseteq\bigcup_{n=1}^{\infty}\mathrm{im}(s_{n})\). Suppose that \(s=(a_{1},a_{2},\dots)\). For each natural number \(n\), since the sequence \(s_{n}\) converges to the point \(p\), every element other than \(p\) may appear in the sequence \(s_{n}\) only finitely often. Thus, there is a tail \(t_{n}\) of the sequence \(s_{n}\) such that
\[\mathrm{im}(t_{n})\subseteq\{\,a_{k}:k\geq n\,\}\cup\{p\}.\]
Let \(U\) be a neighborhood of \(p\). There is a natural number \(N\) such that
\[\mathrm{im}(t_{n})\subseteq\{\,a_{k}:k\geq n\,\}\cup\{p\}\subseteq U\]
for all natural numbers \(n\geq N\). Thus, the direct sum \(t:=\bigsqcup_{n=1}^{\infty}t_{n}\) converges to the point \(p\).
Sakai [13, Theorem 3.7] and Bukovsky-Hales [3, Theorem 11] proved that a real set \(X\) is a QN space if, and only if, the space \(\mathrm{C}_{\mathrm{p}}(X)\) is an \(\alpha_{1}\) space. Thus, the Quasinormal Convergence Theorem can be stated, and proved, as follows.
**Theorem 2.4**.: _The following assertions are equivalent for real sets \(X\):_
1. _The space_ \(C_{p}(X)\) _is an_ \(\alpha_{1}\) _space._
2. _Every Borel image of the set_ \(X\) _in the Baire space_ \(\mathbb{N}^{\mathbb{N}}\) _is bounded._
Proof.: (2) \(\Rightarrow\) (1): This is the straightforward implication. For completeness, we reproduce its proof [21, Theorem 9].
Let \(s_{1},s_{2},\dots\) be sequences in the space \(\mathrm{C}_{p}(X)\) that converge to a function \(f\in\mathrm{C}_{p}(X)\). For each natural number \(n\), suppose that
\[s_{n}=(f_{1}^{n},f_{2}^{n},f_{3}^{n},\dots).\]
Define a Borel function \(\Psi\colon X\to\mathbb{N}^{\mathbb{N}}\) by
\[\Psi(x)(n):=\min\bigg{\{}\,k:(\forall m\geq k)\,\,|f_{m}^{n}(x)-f(x)|\leq\frac {1}{n}\,\bigg{\}}.\]
Let \(g\in\mathbb{N}^{\mathbb{N}}\) be a \(\leq^{*}\)-bound for the image \(\Psi[X]\). Then the sequence
\[\bigsqcup_{n=1}^{\infty}(f_{g(n)}^{n},f_{g(n)+1}^{n},f_{g(n)+2}^{n},\dots)\]
converges to the function \(f\).
(1) \(\Rightarrow\) (2): This is the main implication. By Lemma 2.2, it suffices to prove that the set \(X\) satisfies \(\mathsf{S}_{1}(\mathrm{PF},\mathrm{PF})\). Let \(\mathcal{U}_{1},\mathcal{U}_{2},\dots\in\mathrm{PF}(X)\). By thinning out the point-finite families, we may assume that they are pairwise disjoint [16, Lemma 4]. For each set \(U\in\bigcup_{n=1}^{\infty}\mathcal{U}_{n}\), let \(\mathcal{C}_{U}\) be a countable family of pairwise disjoint clopen sets with \(\bigcup\mathcal{C}_{U}=U\) (disjointness can be arranged by replacing each clopen set with its part outside the union of its predecessors). For each natural number \(n\), let
\[\mathcal{V}_{n}:=\bigcup_{U\in\mathcal{U}_{n}}\mathcal{C}_{U}.\]
Every set \(C\in\mathcal{V}_{n}\) is contained in at most finitely many sets \(U\in\mathcal{U}_{n}\). Thus, the family \(\mathcal{V}_{n}\) is infinite; and it is point-finite, since the family \(\mathcal{U}_{n}\) is point-finite and the members of each family \(\mathcal{C}_{U}\) are pairwise disjoint. Let \(s_{n}\) be a bijective enumeration of the family
\[\{\,\chi_{V}:V\in\mathcal{V}_{n}\,\}.\]
The sequence \(s_{n}\) is in \(\mathrm{C}_{p}(X)\), and it converges to the constant function \(0\).
As the space \(\mathrm{C}_{p}(X)\) is \(\alpha_{1}\), by Lemma 2.3 there are tails \(t_{n}\) of the sequences \(s_{n}\), for \(n\in\mathbb{N}\), such that the sequence \(s:=\bigsqcup_{n=1}^{\infty}t_{n}\) converges to \(0\). For each natural number \(n\), pick a set \(U_{n}\in\mathcal{U}_{n}\) with \(\mathcal{C}_{U_{n}}\subseteq\mathrm{im}(t_{n})\). Then the family \(\{\,U_{n}:n\in\mathbb{N}\}\) is infinite and point-finite.
For a set \(X\subseteq\{0,1\}^{\mathbb{N}}\), Gerlits and Nagy (and, independently, Nyikos) define a space \(\mathrm{T}(X)\) as follows. Let \(\{0,1\}^{*}\) denote the set of finite sequences of elements of the set \(\{0,1\}\). Let \(X\subseteq\{0,1\}^{\mathbb{N}}\). For each point \(x\in X\), let \(A_{x}\subseteq\{0,1\}^{*}\) be the set of initial segments of the point \(x\). Let \(X\cup\{0,1\}^{*}\) be the topological space where the points of the set \(\{0,1\}^{*}\) are isolated, and for each point \(x\in X\), a neighborhood base of \(x\) is given by the sets \(\{x\}\cup B\), where \(B\) is a cofinite subset of the set \(A_{x}\). Let \(\mathrm{T}(X)\) be the one-point compactification of this space, and \(\infty\) be the compactifying point.
Gerlits and Nagy prove that if a set \(X\subseteq\{0,1\}^{\mathbb{N}}\) is a Sierpinski set, then the space \(\mathrm{T}(X)\) is \(\alpha_{1}\), and that if the space \(\mathrm{T}(X)\) is \(\alpha_{1}\), then the set \(X\) is a \(\sigma\)-set [7, Theorem 4]. The following theorem unifies these results and improves upon them. Indeed, every Borel image of a Sierpinski set in the Baire space is bounded, and every set with bounded Borel images in the Baire space is a \(\sigma\)-set [17, and references therein].
**Theorem 2.5**.: _Let \(X\subseteq\{0,1\}^{\mathbb{N}}\). The following assertions are equivalent:_
1. _The space_ \(\mathrm{T}(X)\) _is an_ \(\alpha_{1}\) _space._
2. _Every Borel image of the set_ \(X\) _in the Baire space_ \(\mathbb{N}^{\mathbb{N}}\) _is bounded._
Proof.: For a finite sequence \(s\in\{0,1\}^{*}\), let \([s]\) be the basic clopen subset of the Cantor space \(\{0,1\}^{\mathbb{N}}\) consisting of all functions extending \(s\). Every open set in the space \(\{0,1\}^{\mathbb{N}}\) is a disjoint union of basic clopen sets. A sequence \(a_{1},a_{2},\dots\) in the set \(\{0,1\}^{*}\) converges to \(\infty\) in the space \(\mathrm{T}(X)\) if, and only if, the set \(\{\,[a_{n}]:n\in\mathbb{N}\}\) is point-finite in the space \(X\)[7]. The argument in the proof of Theorem 2.4 applies.
## 3. The Scheepers Diagram Last Theorem
The implications in the Scheepers Diagram (Figure 1) were all rather straightforward to establish, and almost all other potential implications were ruled out by counterexamples [9]. Only two problems remained open: Does \(\mathsf{U}_{\mathrm{fin}}(\mathrm{O},\Omega)\) imply \(\mathsf{S}_{\mathrm{fin}}(\Gamma,\Omega)\)? And if not, does \(\mathsf{U}_{\mathrm{fin}}(\mathrm{O},\Gamma)\) imply \(\mathsf{S}_{\mathrm{fin}}(\Gamma,\Omega)\)? [9, Problems 1 and 2]. For nearly three decades it was expected that the remaining two potential implications were refutable. Only when Peng came up with an entirely new method for refuting implications among selective covering properties [12] were these problems resolved. But not in the expected way: Having proved that \(\mathsf{U}_{\mathrm{fin}}(\mathrm{O},\Omega)\) does not imply \(\mathsf{S}_{\mathrm{fin}}(\Gamma,\Omega)\), Peng tried to refute the last remaining potential implication. And he failed. His close examination of the failure suggested a path for _proving_ the last potential implication [12, Theorem 23]. Peng's results establish the final form of the Scheepers Diagram (Figure 2).
Peng's proof of the last implication is somewhat involved. The proof given below identifies the heart of Peng's argument, and replaces the other parts with simple, quotable observations about selective covering properties.
Let \(k\) be a natural number. A cover of a space is a \(k\)_-cover_ if no member of the cover covers the entire space, but every \(k\)-element subset of the space is covered by some member of the cover. Thus, a cover is an \(\omega\)-cover if and only if it is a \(k\)-cover for all natural numbers \(k\). For a space \(X\) and a natural number \(k\), let \(\mathrm{O}_{k}(X)\) be the family of open \(k\)-covers of the space.
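For a concrete example (ours, not from the source): in the discrete space \(\mathbb{N}\), the family \([\mathbb{N}]^{k}\) of all \(k\)-element subsets of \(\mathbb{N}\) is an open \(k\)-cover, since every \(k\)-element set is covered by itself and no member covers the whole space; but it is not a \((k+1)\)-cover, since no \(k\)-element set contains \(k+1\) points. Conversely, every \((k+1)\)-cover of an infinite space is a \(k\)-cover, so the families \(\mathrm{O}_{k}(X)\) decrease with \(k\), and \(\Omega(X)=\bigcap_{k=1}^{\infty}\mathrm{O}_{k}(X)\).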
**Lemma 3.1**.: _Let \(\Pi\) be a selection principle, and \(\mathrm{A}\) a type of open covers. The following assertions are equivalent:_
1. \(\Pi(\mathrm{A},\Omega)\)_._
2. _For each natural number_ \(k\) _we have_ \(\Pi(\mathrm{A},\mathrm{O}_{k})\)_._
Figure 2. The Final Scheepers Diagram
Proof.: \((1)\Rightarrow(2)\): Obvious.
\((2)\Rightarrow(1)\): Let \(\mathcal{U}_{1},\mathcal{U}_{2},\dots\) be a sequence in \(\mathrm{A}\). Split the sequence into infinitely many disjoint subsequences. For each natural number \(k\), apply \(\Pi(\mathrm{A},\mathrm{O}_{k})\) to the \(k\)th subsequence, to obtain a \(k\)-cover \(\mathcal{V}_{k}\). Then \(\bigcup_{k=1}^{\infty}\mathcal{V}_{k}\) is an \(\omega\)-cover, as required by the property \(\Pi(\mathrm{A},\Omega)\).
**Theorem 3.2** (Peng [12, Theorem 23]).: _The Hurewicz property \(\mathsf{U}_{\mathrm{fin}}(\mathrm{O},\Gamma)\) implies \(\mathsf{S}_{\mathrm{fin}}(\Gamma,\Omega)\)._
Proof.: Let \(X\) be a Hurewicz space. By Lemma 3.1, it suffices to prove that \(\mathsf{S}_{\mathrm{fin}}(\Gamma,\mathrm{O}_{k})\) holds for all natural numbers \(k\). Fix a natural number \(k\).
Let \(\mathcal{U}_{1},\mathcal{U}_{2},\dots\) be a sequence in \(\Gamma(X)\). By moving to countably infinite subcovers, we may enumerate
\[\mathcal{U}_{n}=\{\,U_{m}^{n}:m\in\mathbb{N}\,\}\]
for each \(n\). For each \(n\) and \(m\), we may replace the set \(U_{m}^{n}\) with the smaller set
\[U_{m}^{1}\cap U_{m}^{2}\cap\dots\cap U_{m}^{n},\]
so that we may assume that
\[U_{m}^{1}\supseteq U_{m}^{2}\supseteq U_{m}^{3}\supseteq\dots\]
for all natural numbers \(m\). The refined covers \(\mathcal{U}_{n}\) remain in \(\Gamma(X)\).
Let \(g_{0}(m):=m\) for all \(m\). We define, by induction, increasing functions \(g_{1},\dots,g_{k}\in\mathbb{N}^{\mathbb{N}}\). Let \(l<k\) and assume that the function \(g_{l}\) is defined. For natural numbers \(n\), \(m\) and \(i\), let
\[V_{i}^{l,n} := \bigcap_{m=i}^{g_{l}(i)}U_{m}^{n};\] \[W_{m}^{l,n} := \bigcup\{\,V_{i}^{l,n}:n\leq i,g_{l}(i)\leq m\,\}.\]
For each \(l\) and \(n\), the sets \(W_{m}^{l,n}\) are increasing with \(m\), and cover the space \(X\). By the Hurewicz property, there is an increasing function \(g_{l+1}\in\mathbb{N}^{\mathbb{N}}\) such that
\[\{\,W_{g_{l+1}(n)}^{l,n}:n\in\mathbb{N}\}\in\Gamma(X).\]
This completes the inductive construction.
We will show that
\[\{\,U_{m}^{n}:n\in\mathbb{N},m\leq g_{k}(n)\,\}\in\mathrm{O}_{k}(X).\]
Let \(x_{1},\dots,x_{k}\in X\). Since \(\{\,W_{g_{l+1}(n)}^{l,n}:n\in\mathbb{N}\}\in\Gamma(X)\) for all \(l=0,\dots,k-1\), there is a natural number \(N\) with
\[x_{1},\dots,x_{k}\in W_{g_{l+1}(n)}^{l,n}\]
for all \(l=0,\dots,k-1\) and all \(n\geq N\). Fix a number \(n_{0}\geq N\).
Since \(x_{1}\in W_{g_{k}(n_{0})}^{k-1,n_{0}}\), there is \(n_{1}\) with \(n_{0}\leq n_{1},g_{k-1}(n_{1})\leq g_{k}(n_{0})\) and
\[x_{1}\in V_{n_{1}}^{k-1,n_{0}}=\bigcap_{m=n_{1}}^{g_{k-1}(n_{1})}U_{m}^{n_{0}}.\]
Since \(x_{2}\in W_{g_{k-1}(n_{1})}^{k-2,n_{1}}\), there is \(n_{2}\) with \(n_{1}\leq n_{2},g_{k-2}(n_{2})\leq g_{k-1}(n_{1})\) and
\[x_{2}\in V_{n_{2}}^{k-2,n_{1}}=\bigcap_{m=n_{2}}^{g_{k-2}(n_{2})}U_{m}^{n_{1}} \subseteq\bigcap_{m=n_{2}}^{g_{k-2}(n_{2})}U_{m}^{n_{0}}.\]
Since \(x_{3}\in W^{k-3,n_{2}}_{g_{k-2}(n_{2})}\), there is \(n_{3}\) with \(n_{2}\leq n_{3},g_{k-3}(n_{3})\leq g_{k-2}(n_{2})\) and
\[x_{3}\in V^{k-3,n_{2}}_{n_{3}}=\bigcap_{m=n_{3}}^{g_{k-3}(n_{3})}U^{n_{2}}_{m} \subseteq\bigcap_{m=n_{3}}^{g_{k-3}(n_{3})}U^{n_{0}}_{m}.\]
\[\vdots\]
Since \(x_{k}\in W^{0,n_{k-1}}_{g_{1}(n_{k-1})}\), there is \(n_{k}\) with \(n_{k-1}\leq n_{k}=g_{0}(n_{k})\leq g_{1}(n_{k-1})\) and
\[x_{k}\in V^{0,n_{k-1}}_{n_{k}}=U^{n_{k-1}}_{n_{k}}\subseteq U^{n_{0}}_{n_{k}}.\]
It follows that \(x_{1},\ldots,x_{k}\in U^{n_{0}}_{n_{k}}\), and \(n_{k}\leq g_{k}(n_{0})\).
The proof of Theorem 3.2 establishes a stronger result. To this end, we need the following definitions and lemma. An infinite cover of a space \(X\) is _\(\omega\)-groupable_ (respectively, \(k\)_-groupable_, for a natural number \(k\)) if there is a partition of the cover into finite parts such that for each finite (respectively, \(k\)-element) set \(F\subseteq X\) and all but finitely many parts \(\mathcal{P}\) of the partition, there is a set \(U\in\mathcal{P}\) with \(F\subseteq U\)[10]. Let \(\Omega^{\operatorname{gp}}(X)\) (respectively, \(\operatorname{O}^{\operatorname{gp}}_{k}(X)\)) be the family of open \(\omega\)-groupable (respectively, \(k\)-groupable) covers of the space \(X\).
**Lemma 3.3**.: _Let \(\Pi\) be a selection principle, and \(\operatorname{A}\) a type of open covers. The following assertions are equivalent:_
1. \(\Pi(\operatorname{A},\Omega^{\operatorname{gp}})\)_;_
2. _For each natural number_ \(k\) _we have_ \(\Pi(\operatorname{A},\operatorname{O}^{\operatorname{gp}}_{k})\)_._
Proof.: The proof is similar to that of Lemma 3.1, once we observe that if \(\{\,U_{n}:n\in\mathbb{N}\}\) is a \(k\)-groupable cover for all \(k\), then it is \(\omega\)-groupable. This follows easily from the fact that for each countable family \(\{\,\mathcal{P}_{k}:k\in\mathbb{N}\,\}\) of partitions of \(\mathbb{N}\) into finite sets, there is a partition \(\mathcal{P}\) of \(\mathbb{N}\) into finite sets that is eventually coarser than all of the given partitions, that is, such that for each \(k\), all but finitely many members of the partition \(\mathcal{P}\) contain a member of the partition \(\mathcal{P}_{k}\).
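For completeness, here is one way to obtain such a partition (our sketch). Recursively choose natural numbers \(n_{1}<n_{2}<\cdots\), starting with \(n_{1}:=1\), such that every member of the partitions \(\mathcal{P}_{1},\dots,\mathcal{P}_{j}\) that meets the set \(\{1,\dots,n_{j}\}\) is contained in \(\{1,\dots,n_{j+1}-1\}\); this is possible, since only finitely many members of each partition meet a given finite set. Let

\[\mathcal{P}:=\{[1,n_{2})\}\cup\{\,[n_{2i},n_{2i+2}):i\in\mathbb{N}\,\}.\]

Fix \(k\), and let \(i\geq k\). The member \(Q\) of the partition \(\mathcal{P}_{k}\) containing the number \(n_{2i+1}\) does not meet \(\{1,\dots,n_{2i}\}\) (otherwise, it would be contained in \(\{1,\dots,n_{2i+1}-1\}\)), and it is contained in \(\{1,\dots,n_{2i+2}-1\}\). Thus, \(Q\subseteq[n_{2i},n_{2i+2})\), and all but finitely many members of the partition \(\mathcal{P}\) contain a member of the partition \(\mathcal{P}_{k}\).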
Kocinac and Scheepers proved that if all finite powers of a space \(X\) are Hurewicz, then every open \(\omega\)-cover of the space is \(\omega\)-groupable. Together with Peng's Theorem 3.2, we have that if _all finite powers_ of a space \(X\) are Hurewicz, then the space satisfies \(\mathsf{S}_{\operatorname{fin}}(\Gamma,\Omega^{\operatorname{gp}})\). The following theorem shows that the assumption on the finite powers is not needed.
In the theorem, we also mention \(\mathsf{S}_{\operatorname{fin}}(\Gamma,\Lambda^{\operatorname{gp}})\). An open cover is _large_ if each point is in infinitely many members of the cover. Let \(\Lambda(X)\) be the family of large open covers of the space \(X\). An open cover \(\mathcal{U}\) is in \(\Lambda^{\operatorname{gp}}(X)\)[10] (also denoted \(\operatorname{\boldsymbol{\exists}}(\Gamma)\), depending on the context [14]) if there is a partition of the cover into finite parts such that for each point \(x\in X\) and all but finitely many parts \(\mathcal{P}\) of the partition, we have \(x\in\bigcup\mathcal{P}\).
**Theorem 3.4**.: _The following assertions are equivalent:_
1. \(\mathsf{U}_{\operatorname{fin}}(\operatorname{O},\Gamma)\)_,_
2. \(\mathsf{S}_{\operatorname{fin}}(\Gamma,\Omega^{\operatorname{gp}})\)_; and_
3. \(\mathsf{S}_{\operatorname{fin}}(\Gamma,\Lambda^{\operatorname{gp}})\)_._
Proof.: \((1)\Rightarrow(2)\): The proof of Peng's Theorem 3.2, as written above, shows, for a prescribed number \(k\), that for each \(k\)-element set \(F\) there is a natural number \(N\) such that for each \(n\geq N\) there is a member of the finite set \(\mathcal{F}_{n}=\{\,U^{n}_{m}:m\leq g_{k}(n)\,\}\) that contains the set \(F\). By thinning out the point-cofinite covers, we may assume that they are pairwise disjoint [16,
Lemma 4], and consequently so are the finite sets \(\mathcal{F}_{n}\). Thus, \(\bigcup_{n=1}^{\infty}\mathcal{F}_{n}\in\mathrm{O}_{k}^{\mathrm{gp}}\). This proves \(\mathsf{S}_{\mathrm{fin}}(\Gamma,\mathrm{O}_{k}^{\mathrm{gp}})\) for all \(k\). Apply Lemma 3.3.
\((2)\Rightarrow(3)\): \(\Omega^{\mathrm{gp}}\subseteq\Lambda^{\mathrm{gp}}\).
\((3)\Rightarrow(1)\): This implication is standard and should be known. For completeness, we provide a proof. Assume that the space \(X\) satisfies \(\mathsf{S}_{\mathrm{fin}}(\Gamma,\Lambda^{\mathrm{gp}})\). It suffices to prove that it satisfies \(\mathsf{U}_{\mathrm{fin}}(\Gamma,\Gamma)\). Given a sequence \(\mathcal{U}_{1},\mathcal{U}_{2},\dots\) in \(\Gamma(X)\), we may (as in the proof of Theorem 3.2) assume that the covers get finer with \(n\). Apply \(\mathsf{S}_{\mathrm{fin}}(\Gamma,\Lambda^{\mathrm{gp}})\) to obtain a cover \(\mathcal{U}\in\Lambda^{\mathrm{gp}}\), with parts \(\mathcal{P}_{n}\) (for \(n\in\mathbb{N}\)) witnessing that.
Let \(\mathcal{F}_{1}\subseteq\mathcal{U}_{1}\) be a finite set refined by \(\mathcal{P}_{1}\). Let \(n_{2}\) be minimal with \(\mathcal{P}_{n_{2}}\subseteq\bigcup_{n=2}^{\infty}\mathcal{U}_{n}\), and \(\mathcal{F}_{2}\subseteq\mathcal{U}_{2}\) be a finite set refined by \(\mathcal{P}_{n_{2}}\). Let \(n_{3}\) be minimal with \(\mathcal{P}_{n_{3}}\subseteq\bigcup_{n=3}^{\infty}\mathcal{U}_{n}\), and \(\mathcal{F}_{3}\subseteq\mathcal{U}_{3}\) be a finite set refined by \(\mathcal{P}_{n_{3}}\). Continuing in this manner, we obtain finite sets \(\mathcal{F}_{n}\subseteq\mathcal{U}_{n}\) for \(n\in\mathbb{N}\), with \(\{\,\bigcup\mathcal{F}_{n}:n\in\mathbb{N}\}\in\Gamma(X)\).
Kocinac and Scheepers proved that \(\mathsf{U}_{\mathrm{fin}}(\mathrm{O},\Gamma)=\mathsf{S}_{\mathrm{fin}}( \Omega,\Lambda^{\mathrm{gp}})=\mathsf{S}_{\mathrm{fin}}(\Lambda,\Lambda^{ \mathrm{gp}})\)[10, Theorem 14]. However, \(\mathsf{U}_{\mathrm{fin}}(\mathrm{O},\Gamma)\neq\mathsf{S}_{\mathrm{fin}}( \Omega,\Omega^{\mathrm{gp}})\): The latter property is equivalent to satisfying \(\mathsf{U}_{\mathrm{fin}}(\mathrm{O},\Gamma)\) in all finite powers [10, Theorem 16], a property strictly stronger than \(\mathsf{U}_{\mathrm{fin}}(\mathrm{O},\Gamma)\)[9, Theorem 2.12].
## 4. When Bob has a winning strategy in the Menger game
Menger [11] conjectured that his property \(\mathsf{S}_{\mathrm{fin}}(\mathrm{O},\mathrm{O})\) implies \(\sigma\)-compactness. While his conjecture turned out false [20, and references therein], a closely related assertion is true. The _Menger game_[8], \(\mathsf{G}_{\mathrm{fin}}(\mathrm{O},\mathrm{O})\), is the game associated to Menger's property \(\mathsf{S}_{\mathrm{fin}}(\mathrm{O},\mathrm{O})\). It is played on a topological space \(X\), and has an inning per each natural number \(n\). In each inning, Alice picks an open cover \(\mathcal{U}_{n}\) of the space, and Bob chooses a finite set \(\mathcal{F}_{n}\subseteq\mathcal{U}_{n}\). Bob wins if \(\bigcup_{n=1}^{\infty}\mathcal{F}_{n}\) is a cover of the space, and otherwise Alice wins. Telgarsky [18] proved that if Bob has a winning strategy in the Menger game played on a metric space, then the space is \(\sigma\)-compact.
Scheepers [15, Theorem 1] provided a direct proof of Telgarsky's Theorem, using the notion of H-closed sets. We will eliminate the notion of H-closed sets and the closure operations from Scheepers's proof, and obtain a more transparent proof. As a bonus, the separation hypotheses on the space are eliminated.
A subset \(K\) of a topological space \(X\) is _relatively compact_ if every open cover \(\mathcal{U}\) of the entire space \(X\) has a finite subcover of the set \(K\). A set \(K\) is relatively compact if and only if its closure is compact.
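For example (our illustration), the open interval \((0,1)\) is relatively compact in the space \(\mathbb{R}\): every open cover of \(\mathbb{R}\) has a finite subcover of the compact set \([0,1]\), and in particular of \((0,1)\). As a space in itself, however, \((0,1)\) is not compact.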
**Lemma 4.1**.: _Let \(\kappa\) be a cardinal number. If a space \(X\) is a union of at most \(\kappa\) relatively compact sets, then it is the union of at most \(\kappa\) compact sets._
Proof.: If \(X=\bigcup_{\alpha<\kappa}K_{\alpha}\), then \(X=\bigcup_{\alpha<\kappa}\overline{K_{\alpha}}\).
For a basis \(\mathcal{B}\) for the topology of a space \(X\), let \(\mathrm{O}_{\mathcal{B}}(X)\) be the family of subsets of \(\mathcal{B}\) that cover the space \(X\).
**Lemma 4.2**.: _Let \(X\) be a topological space with a basis \(\mathcal{B}\), and \(\sigma\) be a function on the family \(\mathrm{O}_{\mathcal{B}}(X)\) such that for each cover \(\mathcal{U}\in\mathrm{O}_{\mathcal{B}}(X)\), \(\sigma(\mathcal{U})\) is a finite subset of \(\mathcal{U}\). Then the set_
\[K:=\bigcap_{\mathcal{U}\in\mathrm{O}_{\mathcal{B}}(X)}\bigcup\sigma(\mathcal{U})\]
_is relatively compact._
Proof.: Let \(\mathcal{U}\) be an open cover of \(X\). Let \(\mathcal{V}\in\operatorname{O}_{\mathcal{B}}(X)\) be a cover that refines the cover \(\mathcal{U}\). Then \(K\subseteq\bigcup\sigma(\mathcal{V})\), and there is a finite set \(\mathcal{F}\subseteq\mathcal{U}\) with \(\bigcup\sigma(\mathcal{V})\subseteq\bigcup\mathcal{F}\).
Scheepers [15, Theorem 1] proves the following theorem for metric spaces. If Bob has a winning strategy in the Menger game played on \(X\), then the space \(X\) is Menger and, in particular, Lindelof. If \(X\) is, in addition, metric, then the space is second countable.
**Theorem 4.3**.: _Let \(X\) be a second countable topological space. If Bob has a winning strategy in the Menger game \(\mathsf{G}_{\mathrm{fin}}(\operatorname{O},\operatorname{O})\) played on \(X\), then the space \(X\) is \(\sigma\)-compact._
Proof.: We follow steps of Scheepers's proof, removing what is not necessary. Let \(\sigma\) be a winning strategy for Bob. Fix a countable base \(\mathcal{B}\) for the topology of the space \(X\). Let \(\mathbb{N}^{*}\) be the set of finite sequences of natural numbers. We consider all possible games where Alice chooses her covers from the family \(\operatorname{O}_{\mathcal{B}}(X)\).
Since the base \(\mathcal{B}\) is countable, the family \(\{\,\sigma(\mathcal{U}):\mathcal{U}\in\operatorname{O}_{\mathcal{B}}\,\}\) (the possible first responses of Bob) is countable, too. Choose elements \(\mathcal{U}_{1},\mathcal{U}_{2},\ldots\in\operatorname{O}_{\mathcal{B}}\) with
\[\{\,\sigma(\mathcal{U}_{n}):n\in\mathbb{N}\}=\{\,\sigma(\mathcal{U}):\mathcal{ U}\in\operatorname{O}_{\mathcal{B}}\,\}.\]
By induction, for a given natural number \(n\) and each sequence \(s\in\mathbb{N}^{n}\), the family
\[\{\,\sigma(\mathcal{U}_{s_{1}},\mathcal{U}_{s_{1},s_{2}},\ldots,\mathcal{U}_{s },\mathcal{U}):\mathcal{U}\in\operatorname{O}_{\mathcal{B}}\,\}\]
is countable. Choose elements \(\mathcal{U}_{s,1},\mathcal{U}_{s,2},\ldots\in\operatorname{O}_{\mathcal{B}}\) with
\[\{\,\sigma(\mathcal{U}_{s_{1}},\ldots,\mathcal{U}_{s},\mathcal{U}_{s,n}):n\in \mathbb{N}\}=\{\,\sigma(\mathcal{U}_{s_{1}},\ldots,\mathcal{U}_{s},\mathcal{ U}):\mathcal{U}\in\operatorname{O}_{\mathcal{B}}\,\}.\]
This completes our inductive construction.
By Lemma 4.2, for each sequence \(s\in\mathbb{N}^{*}\), the set
\[K_{s}:=\bigcap_{n=1}^{\infty}\bigcup\sigma(\mathcal{U}_{s_{1}},\ldots, \mathcal{U}_{s},\mathcal{U}_{s,n})\]
is relatively compact. By Lemma 4.1, it remains to see that \(X=\bigcup_{s\in\mathbb{N}^{*}}K_{s}\).
Assume that some element \(x\in X\) is not in \(\bigcup_{s\in\mathbb{N}^{*}}K_{s}\).
1. Since \(x\notin K_{()}\), there is \(m_{1}\) with \(x\notin\bigcup\sigma(\mathcal{U}_{m_{1}})\).
2. Since \(x\notin K_{m_{1}}\), there is \(m_{2}\) with \(x\notin\bigcup\sigma(\mathcal{U}_{m_{1},m_{2}})\).
3. Since \(x\notin K_{m_{1},m_{2}}\), there is \(m_{3}\) with \(x\notin\bigcup\sigma(\mathcal{U}_{m_{1},m_{2},m_{3}})\).
4. Etc.
Then the play
\[\mathcal{U}_{m_{1}},\sigma(\mathcal{U}_{m_{1}}),\mathcal{U}_{m_{1},m_{2}}, \sigma(\mathcal{U}_{m_{1},m_{2}}),\mathcal{U}_{m_{1},m_{2},m_{3}},\sigma( \mathcal{U}_{m_{1},m_{2},m_{3}}),\ldots\]
is lost by Bob; a contradiction.
Let \(\alpha\) be an ordinal number. The transfinite Menger game \(\mathsf{G}_{\mathrm{fin}}^{\alpha}(\operatorname{O},\operatorname{O})\) is defined as the ordinary Menger game, with the only difference that now there is an inning per each ordinal number \(\beta<\alpha\). Clearly, if \(\alpha_{1}<\alpha_{2}\) and Bob has a winning strategy in the \(\alpha_{1}\)-Menger game, then Bob has a winning strategy in the \(\alpha_{2}\)-Menger game: He can use a winning strategy in the first \(\alpha_{1}\) innings, and then play arbitrarily. Thus, the following theorem is stronger than Theorem 4.3, and has no assumption on the topological space \(X\).
The _weight_ of a topological space is the minimal cardinality of a base for its topology.
**Theorem 4.4**.: _Let \(X\) be a topological space of weight \(\kappa\). Bob has a winning strategy in the game \(\mathsf{G}_{\mathrm{fin}}^{\kappa}(\operatorname{O},\operatorname{O})\) if and only if the space \(X\) is a union of at most \(\kappa\) compact sets._
Proof.: \((\Rightarrow)\) The proof is identical to that of Theorem 4.3, only that here we begin with a base of cardinality \(\kappa\).
\((\Leftarrow)\) In the \(\alpha\)-th inning, Bob covers the \(\alpha\)-th compact set.
**Corollary 4.5**.: _Let \(X\) be a topological space of weight \(\kappa\). If Bob has a winning strategy in the Menger game, then the space \(X\) is a union of at most \(\kappa\) compact sets. _
The converse of Corollary 4.5 is false: The discrete space of cardinality \(\kappa\) has weight \(\kappa\), and it is a union of \(\kappa\) compact sets (singletons). This space is not Lindelof, and thus not Menger, so Bob has no winning strategy in the Menger game played on this space.
For a space \(X\), let \(\overline{\mathrm{O}}(X)\) be the family of all families \(\mathcal{U}\) of open sets with \(\bigcup_{U\in\mathcal{U}}\overline{U}=X\). A space \(X\) is _almost Menger_ if it satisfies \(\mathsf{S}_{\mathrm{fin}}(\mathrm{O},\overline{\mathrm{O}})\). For regular spaces, almost Menger is equivalent to Menger [1], and similarly for the other notions considered below. Thus, the remainder of this section is mainly relevant for nonregular spaces. The _almost Menger game_ is the game associated to the property \(\mathsf{S}_{\mathrm{fin}}(\mathrm{O},\overline{\mathrm{O}})\). The corresponding notion of _almost Lindelof_ is classic, and so is the notion of an _almost compact_ space: A space \(K\) is almost compact if every open cover of \(K\) has a finite subset \(\mathcal{F}\) with dense union. This notion appears in the literature under various names. For Hausdorff spaces, it is known to be equivalent to _Hausdorff closed_, that is, being closed in all Hausdorff superspaces.
A set \(K\) in a space \(X\) is _relatively_ almost compact if every open cover of the space \(X\) has a finite subset \(\mathcal{F}\) with \(K\subseteq\bigcup_{U\in\mathcal{F}}\overline{U}\). The standard proof that every almost compact space is \(H\)-closed shows that every relatively almost compact set in a Hausdorff space is closed in that space. However, no separation hypothesis is needed in the following theorem. Since the proof of the following theorem is provided by the first part of Scheepers's argument, we attribute the theorem to Scheepers.
**Theorem 4.6** (Scheepers).: _Let \(X\) be a second countable topological space. The following assertions are equivalent:_
1. _Bob has a winning strategy in the game_ \(\mathsf{G}_{\mathrm{fin}}(\mathrm{O},\overline{\mathrm{O}})\) _played on_ \(X\)_._
2. _The space_ \(X\) _is a countable union of relatively almost compact sets._
Proof.: \((2)\Rightarrow(1)\): This is easy.
\((1)\Rightarrow(2)\): This is a part of the argument of Scheepers [15, Theorem 1]. We provide it, for completion and verification.
Repeat the inductive construction of Theorem 4.3 verbatim. Having completed it, define for each sequence \(s\in\mathbb{N}^{*}\):
\[K_{s}:=\bigcap_{n=1}^{\infty}\bigcup_{U\in\sigma(\mathcal{U}_{s_{1}},\ldots, \mathcal{U}_{s},\mathcal{U}_{s,n})}\overline{U}.\]
The set \(K_{s}\) is relatively almost compact. It remains to see that \(X=\bigcup_{s\in\mathbb{N}^{*}}K_{s}\).
Assume that some element \(x\in X\) is not in \(\bigcup_{s\in\mathbb{N}^{*}}K_{s}\).
1. Since \(x\notin K_{()}\), there is \(m_{1}\) with \(x\notin\bigcup_{U\in\sigma(\mathcal{U}_{m_{1}})}\overline{U}\).
2. Since \(x\notin K_{m_{1}}\), there is \(m_{2}\) with \(x\notin\bigcup_{U\in\sigma(\mathcal{U}_{m_{1},m_{2}})}\overline{U}\).
3. Since \(x\notin K_{m_{1},m_{2}}\), there is \(m_{3}\) with \(x\notin\bigcup_{U\in\sigma(\mathcal{U}_{m_{1},m_{2},m_{3}})}\overline{U}\).
4. Etc.
Then the play
\[\mathcal{U}_{m_{1}},\sigma(\mathcal{U}_{m_{1}}),\mathcal{U}_{m_{1},m_{2}}, \sigma(\mathcal{U}_{m_{1},m_{2}}),\mathcal{U}_{m_{1},m_{2},m_{3}},\sigma( \mathcal{U}_{m_{1},m_{2},m_{3}}),\ldots\]
is lost by Bob; a contradiction.
The assertions analogous to the more general Theorem 4.4 and Corollary 4.5 also hold. Theorem 4.6 answers a question of Babinkostova, Pansera and Scheepers [1, Question 26(2)] in the case of second countable spaces.
## 5. The additivity of Rothberger's property
Let \(\operatorname{add}(\mathcal{N})\) be the minimal cardinality of a family \(F\subseteq\mathbb{N}^{\mathbb{N}}\) such that there is no function \(S\colon\mathbb{N}\to[\mathbb{N}]^{<\infty}\) with \(|S(n)|\leq n\) for all \(n\), such that for each function \(f\in F\) we have \(f(n)\in S(n)\) for all but finitely many \(n\).
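For example (a standard observation, not from the source): every countable family \(\{\,f_{k}:k\in\mathbb{N}\,\}\subseteq\mathbb{N}^{\mathbb{N}}\) is captured by the function

\[S(n):=\{f_{1}(n),\dots,f_{n}(n)\},\]

since \(|S(n)|\leq n\) for all \(n\), and \(f_{k}(n)\in S(n)\) for all \(n\geq k\). In particular, the cardinal number \(\operatorname{add}(\mathcal{N})\) is uncountable.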
The notation \(\operatorname{add}(\mathcal{N})\) is explained by a result of Bartoszynski and Judah [2, Theorem 2.11]: The cardinal number \(\operatorname{add}(\mathcal{N})\) is the minimal cardinality of a family of Lebesgue null sets of real numbers whose union is not Lebesgue null. In general, the _additivity_ of a property is the minimal cardinality of a family of sets satisfying the property whose union does not satisfy it. The following theorem is attributed to Carlson by Bartoszynski and Judah [2, Theorem 2.9].
**Theorem 5.1** (Carlson).: _Let \(\kappa<\operatorname{add}(\mathcal{N})\). If a Lindelof space \(X\) is a union of at most \(\kappa\) spaces satisfying \(\mathsf{S}_{1}(\operatorname{O},\operatorname{O})\), then the space \(X\) satisfies \(\mathsf{S}_{1}(\operatorname{O},\operatorname{O})\). That is, for Lindelof spaces, \(\operatorname{add}(\mathcal{N})\leq\operatorname{add}(\mathsf{S}_{1}(\operatorname{O},\operatorname{O}))\)._
This theorem is an easy consequence of a simple, basic fact concerning selection principles. We need the following lemmata.
Let \(\operatorname{A}\) and \(\operatorname{B}\) be types of open covers. A topological space \(X\) satisfies \(\mathsf{S}_{n}(\operatorname{A},\operatorname{B})\) if for all \(\mathcal{U}_{1},\mathcal{U}_{2},\dots\in\operatorname{A}(X)\), there are finite sets \(\mathcal{F}_{1}\subseteq\mathcal{U}_{1},\mathcal{F}_{2}\subseteq\mathcal{U}_{2},\dots\) such that \(|\mathcal{F}_{n}|\leq n\) for all \(n\), and \(\bigcup_{n=1}^{\infty}\mathcal{F}_{n}\in\operatorname{B}(X)\).
Garcia-Ferreira and Tamariz-Mascarua [6, Lemma 3.12] established the following observation in the case \(\operatorname{A}=\operatorname{O}\).
**Lemma 5.2** ([20, Theorem A.1]).: _Let \(\operatorname{A}\) be a type of countable covers such that every pair of covers of type \(\operatorname{A}\) has a joint refinement of type \(\operatorname{A}\). Then \(\mathsf{S}_{n}(\operatorname{A},\operatorname{O})=\mathsf{S}_{1}( \operatorname{A},\operatorname{O})\)._
**Lemma 5.3** (Folklore).: _If a space \(X\) satisfies \(\mathsf{S}_{1}(\operatorname{A},\operatorname{O})\), then for each sequence \(\mathcal{U}_{1},\mathcal{U}_{2},\dots\) in \(\operatorname{A}(X)\) there are elements \(U_{1}\in\mathcal{U}_{1},U_{2}\in\mathcal{U}_{2},\dots\) such that for each point \(x\in X\), we have \(x\in U_{n}\) for infinitely many \(n\)._
Proof.: As usual, we split the sequence of covers into infinitely many disjoint subsequences, and apply the property \(\mathsf{S}_{1}(\operatorname{A},\operatorname{O})\) to each subsequence separately.
**Theorem 5.4**.: _Let \(\operatorname{A}\) be a type of countable covers such that every pair of covers of type \(\operatorname{A}\) has a joint refinement of type \(\operatorname{A}\). Then \(\operatorname{add}(\mathcal{N})\leq\operatorname{add}(\mathsf{S}_{1}( \operatorname{A},\operatorname{O}))\)._
Proof.: Let \(\kappa<\operatorname{add}(\mathcal{N})\) and \(X=\bigcup_{\alpha<\kappa}X_{\alpha}\), where each space \(X_{\alpha}\) satisfies \(\mathsf{S}_{1}(\operatorname{A},\operatorname{O})\). By Lemma 5.2, it suffices to show that the space \(X\) satisfies \(\mathsf{S}_{n}(\operatorname{A},\operatorname{O})\).
Let \(\mathcal{U}_{n}=\{U_{m}^{n}:m\in\mathbb{N}\}\in\mathrm{A}(X)\), for \(n\in\mathbb{N}\). For each ordinal number \(\alpha<\kappa\), as the space \(X_{\alpha}\) satisfies \(\mathsf{S}_{1}(\mathrm{A},\mathrm{O})\), Lemma 5.3 provides a function \(f_{\alpha}\in\mathbb{N}^{\mathbb{N}}\) such that for each point \(x\in X_{\alpha}\) we have
\[x\in U_{f_{\alpha}(n)}^{n}\]
for infinitely many \(n\).
There is a function \(S\colon\mathbb{N}\to[\mathbb{N}]^{<\infty}\) with \(|S(n)|\leq n\) for all \(n\), such that for each \(\alpha<\kappa\) we have
\[f_{\alpha}(n)\in S(n)\]
for all but finitely many \(n\). Then \(\bigcup_{n=1}^{\infty}\{U_{m}^{n}:m\in S(n)\,\}\in\operatorname{O}(X)\).
Judging by an extensive survey on the topic [19], the result in the second item below seems to be new.
**Corollary 5.5**.: __
1. _For Lindelof spaces,_ \(\operatorname{add}(\mathcal{N})\leq\operatorname{add}(\mathsf{S}_{1}(\mathrm{O},\mathrm{O}))\)_._
2. \(\operatorname{add}(\mathcal{N})\leq\operatorname{add}(\mathsf{S}_{1}(\Gamma, \mathrm{O}))\)_._
Proof.: (1) Since the spaces are Lindelof, we may restrict attention to countable covers, and the assumptions of Theorem 5.4 hold.
(2) A countably infinite subset of a point-cofinite cover is also a point-cofinite cover. Thus, we may restrict attention to countable point-cofinite covers. It is well-known that every pair of point-cofinite covers has a joint refinement that is a point-cofinite cover. Indeed, let \(\mathcal{U}\) and \(\mathcal{V}\) be countable point-cofinite covers. Enumerate them \(\mathcal{U}=\{\,U_{n}:n\in\mathbb{N}\}\) and \(\mathcal{V}=\{\,V_{n}:n\in\mathbb{N}\}\). Then \(\{\,U_{n}\cap V_{n}:n\in\mathbb{N}\}\in\Gamma(X)\). Theorem 5.4 applies.
We can extract additional information from this proof method. A topological space \(X\) satisfies \(\mathsf{U}_{n}(\Gamma,\Gamma)\) [20] if for all \(\mathcal{U}_{1},\mathcal{U}_{2},\dots\in\Gamma(X)\), there are finite sets \(\mathcal{F}_{1}\subseteq\mathcal{U}_{1},\mathcal{F}_{2}\subseteq\mathcal{U}_{2},\dots\) such that \(|\mathcal{F}_{n}|\leq n\) for all \(n\), and \(\{\,\bigcup\mathcal{F}_{n}:n\in\mathbb{N}\}\in\Gamma(X)\). This property is strictly between \(\mathsf{S}_{1}(\Gamma,\Gamma)\) and \(\mathsf{U}_{\mathrm{fin}}(\mathrm{O},\Gamma)\) [20, Theorems 3.3 and 3.8].
**Theorem 5.6**.: \(\operatorname{add}(\mathcal{N})\leq\operatorname{add}(\mathsf{U}_{n}(\Gamma, \Gamma))\)_._
Proof.: Let \(\kappa<\operatorname{add}(\mathcal{N})\) and \(X=\bigcup_{\alpha<\kappa}X_{\alpha}\), where each space \(X_{\alpha}\) satisfies \(\mathsf{U}_{n}(\Gamma,\Gamma)\). It suffices to show that the space \(X\) satisfies \(\mathsf{U}_{n^{2}}(\Gamma,\Gamma)\), where the cardinality of the \(n\)-th selected finite set is at most \(n^{2}\)[20, Lemma 3.2].
Let \(\mathcal{U}_{n}=\{U_{m}^{n}:m\in\mathbb{N}\}\in\Gamma(X)\), for \(n\in\mathbb{N}\). For each \(\alpha<\kappa\), as the space \(X_{\alpha}\) satisfies \(\mathsf{U}_{n}(\Gamma,\Gamma)\), there is a function \(S_{\alpha}\in\prod_{n}[\mathbb{N}]^{\leq n}\) such that for each point \(x\in X_{\alpha}\) we have
\[x\in\bigcup_{m\in S_{\alpha}(n)}U_{m}^{n}\]
for infinitely many \(n\).
Since \(\kappa<\operatorname{add}(\mathcal{N})\), applying the slalom characterization of \(\operatorname{add}(\mathcal{N})\) in the countable product \(\prod_{n}[\mathbb{N}]^{\leq n}\), there is a function \(S\) with \(S(n)\subseteq[\mathbb{N}]^{\leq n}\) and \(|S(n)|\leq n\) for all \(n\), such that for each \(\alpha<\kappa\) we have
\[S_{\alpha}(n)\in S(n)\]
for all but finitely many \(n\). For each natural number \(n\), let \(F_{n}:=\bigcup S(n)\). Then \(|F_{n}|\leq n^{2}\) for all \(n\), and \(\{\,\bigcup_{m\in F_{n}}U_{m}^{n}:n\in\mathbb{N}\}\in\Gamma(X)\).
**Acknowledgments.** This is the first paper that I write after a long, challenging period. I thank all those who helped and encouraged me throughout, and supported my return to normal track afterwards. Above all, I thank my wife, Adina, for her faith, support, and patience.
|
2304.01382 | PoseMatcher: One-shot 6D Object Pose Estimation by Deep Feature Matching | Estimating the pose of an unseen object is the goal of the challenging
one-shot pose estimation task. Previous methods have heavily relied on feature
matching with great success. However, these methods are often inefficient and
limited by their reliance on pre-trained models that have not been designed
specifically for pose estimation. In this paper we propose PoseMatcher, an
accurate model free one-shot object pose estimator that overcomes these
limitations. We create a new training pipeline for object to image matching
based on a three-view system: a query with a positive and negative templates.
This simple yet effective approach emulates test time scenarios by cheaply
constructing an approximation of the full object point cloud during training.
To enable PoseMatcher to attend to distinct input modalities, an image and a
pointcloud, we introduce IO-Layer, a new attention layer that efficiently
accommodates self and cross attention between the inputs. Moreover, we propose
a pruning strategy where we iteratively remove redundant regions of the target
object to further reduce the complexity and noise of the network while
maintaining accuracy. Finally we redesign commonly used pose refinement
strategies, zoom and 2D offset refinements, and adapt them to the one-shot
paradigm. We outperform all prior one-shot pose estimation methods on the
Linemod and YCB-V datasets as well as achieve results rivaling recent
instance-level methods. The source code and models are available at
https://github.com/PedroCastro/PoseMatcher. | Pedro Castro, Tae-Kyun Kim | 2023-04-03T21:14:59Z | http://arxiv.org/abs/2304.01382v1 | # PoseMatcher: One-shot 6D Object Pose Estimation by Deep Feature Matching
###### Abstract
Estimating the pose of an unseen object is the goal of the challenging one-shot pose estimation task. Previous methods have heavily relied on feature matching with great success. However, these methods are often inefficient and limited by their reliance on pre-trained models that have not been designed specifically for pose estimation. In this paper we propose **PoseMatcher**, an accurate model free one-shot object pose estimator that overcomes these limitations. We create a new training pipeline for object to image matching based on a three-view system: a query with a positive and negative templates. This simple yet effective approach emulates test time scenarios by cheaply constructing an approximation of the full object point cloud during training. To enable PoseMatcher to attend to distinct input modalities, an image and a pointcloud, we introduce IO-Layer, a new attention layer that efficiently accommodates self and cross attention between the inputs. Moreover, we propose a pruning strategy where we iteratively remove redundant regions of the target object to further reduce the complexity and noise of the network while maintaining accuracy. Finally we redesign commonly used pose refinement strategies, zoom and 2D offset refinements, and adapt them to the one-shot paradigm. We outperform all prior one-shot pose estimation methods on the Linemod and YCB-V datasets as well as achieve results rivaling recent instance-level methods. The source code and models are available at github.com/PedroCastro/PoseMatcher.
## 1 Introduction
Accurately retrieving the relative position and orientation of an object is the first step for any task that requires interaction with objects in the real world. Estimating the pose of an object is an indispensable step for robotic manipulation as well as in VR/AR applications. It is imperative for the pose estimation to be accurate and robust to external obstacles such as occlusion, illumination and symmetries. Existing methods excel at retrieving the pose of known objects to a very high accuracy standard [42, 7, 2]. Some methods can even produce pose estimates at high throughput [42, 4], be especially robust to symmetries [10], and even bridge synthetic-to-real domain gaps [41, 36]. However, most share a major limitation: the target objects must be known. This constraint is severely limiting as it necessitates model retraining for each object addition. Retraining new objects on existing models might lead to catastrophic forgetting if not handled properly [19]. Category level approaches [46, 43] try to generalize up to a category, where each target object can be obtained with a simple deformation of a canonical model and all objects share semantic keypoints (e.g., the handle of a mug or the cap of a bottle). Nonetheless, many of the same problems remain: the object's category must have been seen before and the target object must not lie outside the scope of the category for the estimation to be possible. In order to overcome these problems we must tackle object pose estimation in the one-shot paradigm.
By one-shot pose estimation, we refer to estimating the pose of a novel object based on information seen only at test time, without the object, or its category, being present in the training dataset. Older methods have some very hard constraints, such as needing masks and depth maps at test time [13], a full colored 3D model of the target object [22],
Figure 1: **Illustrative diagram of the PoseMatcher training pipeline.** For each training instance query we sample two templates. From these templates, we reconstruct a partial point cloud simulating the object point cloud at test time.
need to be fine-tuned on images of the same dataset due to difficulties in overcoming the domain gap [25], or rely on an extremely expensive test-time optimization through renderers [47, 21, 28]. More recently, feature matching methods have been shown to achieve impressive results. Particularly, OnePose [38] and OnePose++ [12] make use of existing state-of-the-art pre-trained descriptor extractors on top of which a pose estimation pipeline is built. However, by relying on a fixed pre-trained model to extract descriptive keypoints from templates, they fail to capture optimal object keypoint descriptions. Ideally, the template extraction model should be the same as the query model, and the two should be jointly optimized.
We rethink the approach to the problem and introduce a three-view pipeline that allows us to jointly train the template and query extractors. Just by redesigning the training pipeline, we can improve OnePose++ [12] without additional changes to its methodology. We also introduce a new efficient image to object attention layer which we call IO-Layer, which reduces the parameter count compared to the modules used by OnePose++ [12] and also separates the two input modalities, an image and a pointcloud, allowing the model to optimize weights specifically for either image or pointcloud keypoints. We also found that working with the full object at all stages of the feature matching is redundant and reduces pose estimation accuracy, which led us to introduce a novel iterative matching based pointcloud pruning. On top of these changes, we employ a 3D-refinement technique based on the zoom refinement used in instance-level pose estimation [22, 4, 20].
In summary our contributions are as follows:
* We redesign the training pipeline for one-shot object-to-image matching. Our three-view training approach allows us to train from scratch PoseMatcher, a pose estimation method leveraging detection free keypoint matching.
* We introduce a more efficient image to object attention layer we call IO-Layer, specifically built to accommodate the two input modalities, the image and the object pointcloud.
* We propose a layered pruning of the target object point cloud. We improve runtime by reducing the amount of _attended_ keypoints while reducing noisy matching, leading to an improvement in accuracy.
* We create a fine level 3D based refinement where we directly estimate relative 2D and depth positions, replacing the 2D keypoint refinement used by prior one-shot approaches.
## 2 Literature
**Fully supervised pose estimation.** Instance-level pose estimation is designed to support a single object. Due to this task's narrow scope, the accuracy of instance-level methods is becoming increasingly impressive. Early 6D pose estimation works focused on recovering the 2D position of specific keypoints, mainly the 3D bounding box of the target [33, 27, 39]. PVNet [30] found that choosing keypoints that lie within the object's silhouette would yield better results. This idea has become a mainstay among current keypoint based methods [15, 4]. An alternative to choosing a limited number of features was to estimate the coordinates of the surface of the object at each pixel [43, 48, 29]. GDR [42] and SO-Pose [7] approximated PnP through a small neural network, optimizing directly through pose errors. Direct pose estimation combined with a dense intermediate representation makes up most of the state-of-the-art methods as per the BOP challenge [14]. Other ideas have focused on designing textures as learnable features that can be more easily estimated [10, 16], and as a result, better correspondences are found. In order to decrease the need for real data, Sock _et al_. [36] proposed a differentiable pipeline where learning is done through comparison with a rendering of the estimation, which is obtained using keypoints, whereas Self6D [41] directly outputs the pose.
**One-shot pose estimation.** Until recent versions of the BOP challenge [14], Point Pair Features [13] held the top spot on its leaderboard. It relies on selecting geometrically relevant keypoints from an existing 3D model and matching these to a depth map. This method relies on capturing information with a depth sensor at inference time, which is not commonly available. Extending the prior instance-level approaches to category-level allows for object deformation and texture change within a narrow category [49, 9, 43]. The main idea from NOCS [43], although used for instance-level estimation by a wide range of works [42, 7, 48], was initially to predict the pose of slightly deformable objects within a category, as it was inspired by its human reconstruction contemporary DenseBody [1]. Recent advancements have been made that allow for training in the wild [9]. However, these are still limited by the similarity to objects of the same category. Our aim is to overcome this limitation.
Methods such as Pitteri _et al_. [31] and CorNet [32] rely on training with a subset of similar objects of the same dataset and/or scene, which introduces biases towards illumination, noise, background and shape. Recent attempts have shown researchers are keen on tackling one-shot pose estimation. OSOP [35] introduces a global template matching method where a given query is matched to the closest viewpoint from a database of pre-processed synthetic viewpoints, which requires the object model, an assumption we do not make. Gen6D [25] works in a similar fashion by matching a query to a small number of real annotated images and then refining the pose.
However, it is susceptible to poor pose initialization and poor 2D bounding box detections. The closest work to our own is OnePose++, an improved version of OnePose [38], where the feature extraction is replaced by the detector-free feature matcher LoFTR [37]. However, OnePose++ [12] adapts a pre-trained LoFTR to one-shot pose estimation without taking into consideration the different modalities of image and point cloud, self-occlusion redundancy, and the possibility of using 6D pose based refinement.
## 3 PoseMatcher
The goal of PoseMatcher is to establish matches between two sets of keypoints. We establish correspondences between the 2D keypoint features extracted from the query image \(\mathcal{I}^{Q}\) and the object point cloud \(\{\mathcal{P}_{O}\}\). From those matches we can apply PnP and recover the full 6D pose \(\zeta\). A diagram of PoseMatcher can be seen in Fig. 2.
### Template Based Training
We adopt a new training methodology that allows us to design a one-shot object-image detector-free feature matching model from scratch, specifically aimed at the pose estimation task. We therefore remove the need for the pre-trained descriptor extractors used by OnePose and OnePose++ [38, 12].
However, a single viewpoint will result in a partial reconstruction of the object. At inference time, both the visible and occluded regions of the object will be represented in the template point-cloud. Therefore, a single template is not enough to emulate the conditions at test-time.
To address these limitations, we sample an additional template image we refer to as the negative template \(\mathcal{I}^{-}\). This template shares a low co-visibility area with the anchor image. The keypoints extracted from \(\mathcal{I}^{-}\) serve to generate a nearly complete reconstruction of the target object, with the visible sections being sampled from \(\mathcal{I}^{+}\) while the self-occluded ones come from \(\mathcal{I}^{-}\). This 2-template paradigm simulates the full object point cloud available at inference time.
In order to optimize through the matching task we apply the differentiable dual-softmax operator as proposed by LoFTR [37] and subsequently used by both OnePose and OnePose++ [38, 12].
In order to output matches, we start by using a local feature extractor, such as a ResNet [11], to extract coarse and fine level feature maps, \(\hat{\mathcal{F}}_{\mathcal{I}^{Q}}\in\mathbb{R}^{H\times W\times C}\) and \(\tilde{\mathcal{F}}_{\mathcal{I}^{Q}}\in\mathbb{R}^{H\times W\times\tilde{C}}\), from the query image.
Figure 2: **Diagram of PoseMatcher. At training time we sample from the Google Scanned Objects[8] dataset and from two opposing views templates we construct a partial point cloud. At test time, a novel unseen object is used instead. We extract local features from both the query and the templates. Using our novel IO-Layer, we compute dense correspondences between the image and the object. We then take the matches (yellow star) and further refine them using 2D and 3D techniques.**
We repeat the same process for both \(\mathcal{I}^{+}\) and \(\mathcal{I}^{-}\) and sample \(M\) points from both templates. We make sure that the template positions lie within the object's mask such that:
\[\begin{split} p^{+}&=\{p^{+}_{k}|\:k\in\mathcal{M}^{ +}\},\\ p^{-}&=\{p^{-}_{k}|\:k\in\mathcal{M}^{-}\},\end{split} \tag{1}\]
where \(\mathcal{M}\) refers to the segmentation mask. We backproject, using a depth map only available at training time, and generate a template point cloud \(\mathcal{P}_{\mathcal{T}}^{3D}\in\mathbb{R}^{2M\times 3}\) with its corresponding extracted coarse and fine features \(\hat{\mathcal{F}}_{\mathcal{T}}\in\mathbb{R}^{2M\times C}\) and \(\tilde{\mathcal{F}}_{\mathcal{T}}\in\mathbb{R}^{2M\times\tilde{C}}\), where \(\mathcal{T}\) refers to the combined positive and negative templates.
At this point, we perform dense matching between the coarse level query features \(\hat{\mathcal{F}}_{\mathcal{I}^{Q}}\) and the extracted point cloud features \(\hat{\mathcal{F}}_{\mathcal{T}}\). We now aim to globally match each pixel of the query image that contains the object to its respective point cloud keypoint. We make use of positional embeddings to encode positional information into each feature point. For 2D keypoints we use the fixed sinusoidal version used by DETR [3], while for 3D positional encoding we use a simple 3-layer MLP as used by SurfEmb [10] and OnePose++ [12].
We flatten both feature maps and apply self and cross attention layers following other feature matching papers [34, 37, 38, 12, 44] to generate more easily separable features on each set. Our objective is to construct a matrix \(\mathds{P}\in\mathbb{R}^{HW\times 2M}\) that reflects the level of confidence of the correspondence between \(\mathcal{P}^{2D}\) and \(\mathcal{P}^{3D}\). A score matrix \(\mathcal{S}\) is computed by measuring the cosine similarity between the two sets of transformed features in a contrastive manner. We apply dual-softmax [40] over \(\mathcal{S}\) to calculate the correspondence confidence matrix \(\mathds{P}\):
\[\mathds{P}(i,c)=\operatorname{softmax}(\mathcal{S}(i,\cdot)/\tau)_{c}\ \cdot\ \operatorname{softmax}(\mathcal{S}(\cdot,c)/\tau)_{i}, \tag{2}\]
with \(\tau\) being a temperature hyperparameter, and \(i\) and \(c\) being the indices of the flattened image pixels and of the point cloud, respectively. The image to object correspondences \(\mathcal{C}\) are established by choosing only those that exceed a certain level of confidence, represented by the threshold value \(\theta\), and meet the mutual nearest neighbor (MNN) constraint to remove false matches:
\[\mathcal{C}=\{(i,c)|\forall(i,c)\in MNN(\mathcal{P}^{2D}_{i},\mathcal{P}^{3D} _{c}),\mathds{P}_{i,c}\geq\theta\}. \tag{3}\]
The coarse loss \(\mathcal{L}_{c}\) used to optimize \(\mathcal{C}\) uses focal loss [24] as suggested by LoFTR [37].
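To make the coarse matching concrete, below is a minimal PyTorch-style sketch of the dual-softmax and MNN selection of Eqs. 2 and 3; the function name, tensor shapes, and the default values of \(\tau\) and \(\theta\) are illustrative assumptions rather than the released implementation.

```python
import torch

def coarse_match(feat_img, feat_obj, tau=0.1, theta=0.2):
    # feat_img: (HW, C) flattened query features; feat_obj: (2M, C) point features.
    feat_img = torch.nn.functional.normalize(feat_img, dim=-1)
    feat_obj = torch.nn.functional.normalize(feat_obj, dim=-1)
    sim = feat_img @ feat_obj.t()                # cosine-similarity score matrix S
    conf = (sim / tau).softmax(dim=1) * (sim / tau).softmax(dim=0)  # dual-softmax P
    # mutual nearest neighbours: keep (i, c) only if they select each other
    mnn = (conf == conf.max(dim=1, keepdim=True).values) \
        & (conf == conf.max(dim=0, keepdim=True).values)
    matches = (mnn & (conf >= theta)).nonzero(as_tuple=False)
    return matches, conf                         # correspondences C and matrix P
```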
### IO-Layer: An object-image attention layer
Image to image feature matching has seen its effectiveness increase with the introduction of self and cross attention mechanisms [44, 34, 37, 26]. These layers are particularly efficient for this task seeing as both inputs are from the same modality and share the same spatial structure. For this reason, self attention layers can share weights, and cross attention only requires swapping the query and template image, as shown in Fig. 3. OnePose++ [12] has suggested using the same approach for image to object matching; however, in this scenario we are working with two different modalities, a 2D image and a 3D point cloud.
We postulate that mixing modalities has an adverse effect on learning distinctive features. The positional encoding for each input is structurally different, which means that attention layers will have difficulty separating the features of each modality. OnePose++ [12] shows a sign of this problem when they observe that adding 3D positional encoding results in a small (\(<1\%\)) improvement. A simple ad-hoc solution would be to have modality specific weights; however, that would double the amount of parameters and subsequently the necessary amount of training data and compute. In standard self attention layers an input sequence is linearly projected into a query, key and value: \(Q_{i}\), \(K_{i}\), \(V_{i}\), stemming from the same input \(i\). For cross attention, one would linearly project \(Q_{i}\), \(K_{j}\), \(V_{j}\) where \(i\) and \(j\) are different inputs. Usually, in attention mechanisms, the encoded message is passed to the decoder and is not returned. However, since it is important for the image features to be distinct from the object's and vice-versa, the message must be passed bi-directionally, which requires a careful redesign.
We propose a simple layer, specially designed for **I**mage to **O**bject matching, which we call the **IO-Layer**. (\(Q_{i}\),\(K_{i}\),\(V_{i}\)) and (\(Q_{o}\),\(K_{o}\),\(V_{o}\)), the inputs to the attention mechanism for the image and the object respectively, are computed only once and used for both self and cross attention. Therefore cross attention is computed as \(softmax(Q_{i}K_{o}^{T}/\sqrt{d})V_{o}\) and \(softmax(Q_{o}K_{i}^{T}/\sqrt{d})V_{i}\) for image to object and object to image attention respectively, where we only have to project each sequence once. We modify self and cross attention accordingly to support Linear Attention [17]. Much like the layers used in LoFTR and OnePose++, these can be stacked together. We illustrate the IO-Layer in Fig. 3.
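A minimal sketch of this bi-directional update is given below. It assumes plain softmax attention instead of the Linear Attention [17] used here, single-head projections, and illustrative module names; the point it demonstrates is that projecting each modality once suffices to serve all four attention directions.

```python
import torch
import torch.nn as nn

class IOLayer(nn.Module):
    """Sketch of an image<->object attention layer: each modality has its
    own projection, computed once, and the resulting Q/K/V tensors serve
    both the self- and the bi-directional cross-attention."""
    def __init__(self, dim):
        super().__init__()
        self.qkv_img = nn.Linear(dim, 3 * dim)   # image-specific weights
        self.qkv_obj = nn.Linear(dim, 3 * dim)   # object-specific weights
        self.scale = dim ** -0.5

    @staticmethod
    def attend(q, k, v, scale):
        return ((q @ k.transpose(-2, -1)) * scale).softmax(dim=-1) @ v

    def forward(self, x_img, x_obj):             # (N_i, C), (N_o, C)
        qi, ki, vi = self.qkv_img(x_img).chunk(3, dim=-1)
        qo, ko, vo = self.qkv_obj(x_obj).chunk(3, dim=-1)
        x_img = x_img + self.attend(qi, ki, vi, self.scale)  # self (image)
        x_obj = x_obj + self.attend(qo, ko, vo, self.scale)  # self (object)
        x_img = x_img + self.attend(qi, ko, vo, self.scale)  # image -> object
        x_obj = x_obj + self.attend(qo, ki, vi, self.scale)  # object -> image
        return x_img, x_obj
```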
### Object Pruning
OnePose++ proposes sampling and using a point cloud template with over 15k keypoints per object. Performing attention over such a high number of keypoints is expensive even when using more efficient mechanisms such as Linear Attention [17]. The set of keypoints that are not visible in the query image should be quickly identifiable in early matching. Removing these keypoints from the point cloud allows for better separability of the features of the remaining visible keypoints, which leads to better matches. As a peripheral advantage, by removing a set of keypoints from the template point cloud, we reduce the complexity of the following attention layers. During pruning, we do not impose a hard threshold on the confidence of the matches but rather
we select the keypoints with the highest confidences of existing in the image. We extend the matching operation in Eq. 3, where we sum the keypoint confidences over every pixel and prune the lowest ones. We insert a pruning step after each IO-Layer. The amount of pruning is subject to ablation studies in Section 4.5.1.
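A sketch of this pruning step, assuming the confidence matrix \(\mathds{P}\) of Eq. 2 and the \(50\%\) keep ratio used in our experiments; the function name is illustrative.

```python
import torch

def prune_keypoints(conf, feats_obj, keep_ratio=0.5):
    """Sum per-keypoint confidences over all query pixels and keep only
    the highest-scoring fraction of the point cloud for the next IO-Layer."""
    scores = conf.sum(dim=0)                      # (2M,) evidence per keypoint
    k = max(1, int(keep_ratio * scores.numel()))
    idx = scores.topk(k).indices                  # indices of surviving keypoints
    return feats_obj[idx], idx
```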
### Pose Refinement
We add a fine level 2D-refinement post-processing step as described by LoFTR [37] and OnePose++[12]. For each match, we crop a window at the match location and perform simple matching w.r.t. the matched keypoints. We supervise it by minimizing the Euclidean distance between the center of the grid and the projected keypoints. More details about 2D refinement are available in LoFTR [37].
However, its impact is limited as we show in the experimental section. We believe this is due to the use of PnP following the keypoint position refinement. Prior 6D object pose literature has shown that estimating translation through PnP leads to poor results [23, 42]. An alternative approach, first introduced by CDPN [23], is to directly estimate the translation as the 2D offset of the object centroid as well as bounding box relative depth estimation. Directly applying this step to PoseMatcher cannot be done seeing as the objects are not known.
In order to perform translation estimation, we adopt translation refinement, which we will refer to as 3D-refinement, a strategy used by iterative methods [4, 20, 22]. Instead of estimating the translations directly, we estimate the translation errors given an initial pose. Intuitively, we estimate the 2D translation and _zoom_ necessary to align the initial pose to the target pose.
In order to generate the initial pose \(\zeta^{0}\), we perform PnP over the coarse matching keypoints. In the 2D-refinement step, the initial position of the matched keypoints are the image grid default positions. However, if we are performing a refinement over an initial pose, this alignment might be broken due to erroneous matching, pose estimation or even sub-pixel positioning. Therefore we project the matched keypoints using the initial pose and use those 2D locations as the center for the fine-sampling grid crops \(F^{i}_{crop}\). We perform self and cross attention over the grids and the corresponding 3D fine feature \(T^{i}_{f}\) using a single IO-Layer. We compute the 2D expectation and supervise its output using the same methodology as described for the 2D-refinement.
We propose a lightweight CNN similar to the one used for learned Patch-PnP in GDR-Net [42]. We build our pose representation as a pixel-wise map where at each coarse pixel we collect each matched keypoint refinement prediction (i.e. refinement direction). This CNN outputs the _3D-refinement_ in the form of \(\Delta T\) and \(\Delta Z\), the 2D location and zoom offsets respectively. Moreover, in order to supervise the refinement step, LoFTR [37] proposes computing the total variance of the matching heatmap in order to penalize more confident but erroneous estimations. We propose reusing this variance as a confidence measure for each keypoint refinement output, appending to each input its corresponding confidence.
We model the _zoom_ as a classification problem and discretize the possible zooms into \(K\) classes [2]. We output the estimated _zoom_ \(\delta_{z}\) as the expectation over all classes, together with the 2D translation \(\delta_{2d}\) necessary to align \(\zeta^{0}\) with \(\zeta^{GT}\), as follows:
\[\begin{cases}\zeta^{GT}_{z}=\zeta^{0}_{z}\cdot\delta_{z},\\ p(\zeta^{GT})=p(\zeta^{0})\;\cdot\;\delta_{2d},\end{cases} \tag{4}\]
where \(p(\zeta)=\mathcal{K}\,\zeta\,[0,0,0]^{T}\) is the 2D projection location of the object's centroid adjusted for the crop bounding box, and \(\mathcal{K}\) are the camera intrinsic parameters.
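The following sketch illustrates how the predicted \(\delta_{z}\) and \(\delta_{2d}\) could be applied to an initial translation under Eq. 4. The bin layout, the multiplicative 2D offset convention, and all names are assumptions for illustration, not the exact implementation.

```python
import torch

def apply_3d_refinement(t_init, zoom_logits, delta_2d, zoom_bins, K_cam):
    # t_init: (3,) initial translation; zoom_logits, zoom_bins: (K,); K_cam: (3, 3)
    probs = zoom_logits.softmax(dim=-1)           # zoom class probabilities
    delta_z = (probs * zoom_bins).sum()           # expected zoom factor
    tz = t_init[2] * delta_z                      # refined depth, Eq. (4)
    uv = (K_cam @ t_init) / t_init[2]             # projected centroid (u, v, 1)
    uv2 = uv[:2] * delta_2d                       # shifted 2D centroid, Eq. (4)
    ray = torch.linalg.inv(K_cam) @ torch.cat([uv2, torch.ones(1)])
    return torch.stack([ray[0] * tz, ray[1] * tz, tz])  # refined translation
```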
We supervise the _zoom_ and 2D location offset directly with a sparse loss:
\[\begin{cases}\mathcal{L}_{z}=||\delta_{z}-\varepsilon_{z}||_{1}\\ \mathcal{L}_{2d}=||\delta_{2d}-\varepsilon_{2d}||_{1}\end{cases} \tag{5}\]
Figure 3: **IO-Layer architecture. We improve on the attention modules used by LoFTR and OnePose++ [37, 38]. We have modality specific projections for image features embedded with 2D positional encoding and for object template features with its 3D positional encoding. We found that separating the two modalities improves the final pose accuracy.**
where \(\varepsilon\) is noise introduced at training time.
The full loss with our improvements becomes:
\[\mathcal{L}=\mathcal{L}_{c}+\mathcal{L}_{f}+\alpha\;(\mathcal{L}_{z}+\mathcal{L} _{2d}). \tag{6}\]
## 4 Experiments.
### Data preparation
In order to train PoseMatcher from scratch we must use a sufficiently large dataset that encompasses a large number of shapes and textures. We use the Google Scanned Objects [8] dataset with 1023 household objects. We use the renderings provided by [25] for fair comparison, where each object is rendered at 250 different viewpoints. We use an additional set of 500 objects from ShapeNet [5] with renderings provided by [9]. For each instance we find a close viewpoint, \(\mathcal{T}^{+}\), sampled within \([5^{\circ},25^{\circ}]\) of orientation difference to ensure high co-visibility while being sufficiently disparate. \(\mathcal{T}^{-}\) is sampled randomly from a set of the 5 farthest viewpoints from the query, which ensures enough data variety. We apply the standard color and noise augmentations [10, 42, 4] as well as bounding box _zoom-ins_ as proposed by CDPN [23]. We also perform object level augmentations by changing the object's canonical reference frame. Although only about 1500 objects are used, by providing strong augmentations we are able to avoid overfitting.
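A sketch of this viewpoint-based template selection, assuming a rotation matrix per rendered view; the uniform choice within each candidate pool is our assumption.

```python
import numpy as np

def rotation_angle_deg(R1, R2):
    # geodesic distance between two rotation matrices, in degrees
    cos = (np.trace(R1.T @ R2) - 1.0) / 2.0
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

def pick_templates(R_query, R_views, rng=np.random.default_rng()):
    """T+ is drawn from viewpoints 5-25 degrees away from the query, T-
    from the five farthest viewpoints; assumes the 5-25 degree band is
    non-empty for every query."""
    d = np.array([rotation_angle_deg(R_query, R) for R in R_views])
    pos_pool = np.flatnonzero((d >= 5.0) & (d <= 25.0))
    neg_pool = np.argsort(d)[-5:]
    return rng.choice(pos_pool), rng.choice(neg_pool)
```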
### Implementation Details.
For fair comparison with OnePose++, we use 3 IO-Layers. We set the contrastive temperature term \(\tau=0.1\). For 3D-refinement we set \(K=100\) and \(\alpha=100\). During training we sample \(M=2048\) points from each template for a total of \(4096\). For refinement, we use the top 512 matches with confidences of at least \(\theta=0.1\). If there are not enough matches during training, we pad the pointcloud with groundtruth matches. At test time, we sample 16k keypoints from the available training images. We prune \(50\%\) of the point cloud after each IO-Layer, down to 8k and 4k after the first two layers, respectively.
For all experiments, we use AdamW [18] with cosine learning rate decay, and we linearly warm up the learning rate for 5 epochs. We train PoseMatcher for 50 epochs with a batch size of 8 and an initial learning rate of 0.0001. At each epoch, we sample 10 different viewpoint queries for each of the 1500 objects.
### RGB vs Grayscale images
Interest point descriptor models are mostly used over grayscale images as it has been shown to improve generalization. OnePose and OnePose++ [38, 12] are applied to grayscale images because they make use of pre-trained descriptor models trained solely on grayscale, SuperPoint [6] and LoFTR respectively [37]. PoseMatcher does not rely on prior methods thus is not limited to grayscale. To provide a comprehensive analysis and ensure thoroughness we train our method on both RGB and grayscale input. Intuitively, RGB inputs should provide more information and allow PoseMatcher to better separate textured regions of an object as well as from the background. Surprisingly, we found that using RGB data improves our method. Grayscale only reaches an ADD-(S) accuracy of \(84.1\%\) on Linemod while RGB inputs reach an accuracy \(87.5\%\). All our experiments and further ablations use RGB inputs.
### Evaluation Results
Our results on Linemod and the YCB-V datasets use the standard 2D bounding boxes provided by the BOP challenge [14].
\begin{table}
\begin{tabular}{c|c|c|c|c|c|c|c|c|c} \hline Type & \multicolumn{3}{c|}{Fully Supervised} & \multicolumn{3}{c|}{Self-Supervised} & \multicolumn{3}{c}{One-Shot} \\ \hline Method & PVNet [30] & GDR [42] & SO-Pose [7] & Self6D [41] & Sock _et al_. [36] & Gen6D [25] & OnePose [38] & OnePose++ [12] & **PoseMatcher** \\ \hline \hline Ape & 43.6 & 85.9 & - & 38.9 & 37.6 & - & 11.8 & 31.2 & **59.2** \\ \hline Benchvise & 99.9 & 99.8 & - & 75.2 & 78.6 & 62.1 & 92.6 & 97.3 & **98.1** \\ \hline Camera & 86.9 & 96.5 & - & 36.9 & 65.6 & 45.6 & 88.1 & 88.0 & **93.4** \\ \hline Can & 95.5 & 99.3 & - & 65.6 & 65.6 & - & 77.2 & 89.8 & **96.0** \\ \hline Cat & 79.3 & 93.0 & - & 57.9 & 52.5 & 40.9 & 47.9 & 70.4 & **88.0** \\ \hline Driller & 96.4 & 100. & - & 67.0 & 48.8 & 48.8 & 74.5 & 92.5 & **98.4** \\ \hline Duck & 52.6 & 65.3 & - & 19.6 & 35.1 & 16.2 & 34.2 & 42.3 & **54.1** \\ \hline Eggbox* & 99.2 & 99.9 & - & 99.0 & 89.2 & - & 71.3 & 99.7 & **97.8** \\ \hline Glue* & 95.7 & 98.1 & - & 94.1 & 64.5 & - & 37.5 & 48.0 & **91.5** \\ \hline Holepuncher & 81.9 & 73.4 & - & 16.2 & 41.5 & - & 54.9 & 69.7 & **73.4** \\ \hline Iron & 98.9 & 86.9 & - & 77.9 & 80.9 & - & 89.2 & 97.4 & **97.9** \\ \hline Lamp & 99.3 & 99.6 & - & 98.2 & 70.7 & - & 87.6 & 97.8 & **98.1** \\ \hline Phone & 92.4 & 86.3 & - & 50.1 & 60.5 & - & 60.6 & 76.0 & **92.1** \\ \hline \hline Average & 86.3 & 91.0 & 96.0 & 58.9 & 60.6 & - & 63.6 & 76.9 & **87.5** \\ \hline \end{tabular}
\end{table}
Table 1: **Comparison study on Linemod. We present the results for ADD(-S) metric and compare them to state of the art. While Linemod is close to saturated for fully supervised methods, one-shot pose estimation is still challenging. PoseMatcher achieves the best results for all objects for the one-shot category, surpassing self-supervised methods and close to fully supervised methods. Best results for one-shot are bolded. \({}^{*}\) denotes symmetric objects.**
Note that these detectors are trained using synthetic versions of the target datasets, which is done for a fair comparison to other methods that rely on the same detections. To measure the performance of PoseMatcher, we employ the same metrics used by prior methods. The ADD(-S) metric considers a pose correct if the average distance to the groundtruth falls below a threshold, usually \(10\%\) of the object's diameter, with a slightly modified version for symmetric objects [45]. We also measure the accuracy of the predicted translation under a range of thresholds to support our 3D-refinement step.
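For reference, a NumPy sketch of the ADD(-S) computation described above; the function signature is illustrative.

```python
import numpy as np

def add_metric(pts, R_gt, t_gt, R_pred, t_pred, symmetric=False):
    """Mean distance between model points under the ground-truth and the
    predicted poses; for symmetric objects the nearest transformed point
    is used instead (ADD-S). A pose counts as correct when this value is
    below 10% of the object diameter."""
    p_gt = pts @ R_gt.T + t_gt
    p_pr = pts @ R_pred.T + t_pred
    if not symmetric:
        return np.linalg.norm(p_gt - p_pr, axis=1).mean()
    # ADD-S: distance to the closest predicted point (O(N^2), fine for a sketch)
    d = np.linalg.norm(p_gt[:, None, :] - p_pr[None, :, :], axis=-1)
    return d.min(axis=1).mean()
```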
We outperform every existing one-shot method by a significant margin, especially on more complicated objects such as the Ape or the Cat. We have a large advantage over DeepIM [22], which needs a strong pose initialization from PoseCNN [45], whereas PoseMatcher does not require any type of pose initialization or the 3D mesh model. It is interesting to observe that we also outperform PVNet [30], an instance-level pose estimator.
### Ablation Studies
Our new training pipeline elevates the results of OnePose++ [12]. By learning meaningful descriptors specifically trained for pose estimation instead of pretrained ones, we are able to increase the quality of the feature matching, thereby improving the pose accuracy by \(5\%\). This confirms that carefully designing a training pipeline can yield better results.
The IO-Layer, our novel attention based layer, yields an improvement of \(2\%\) on both datasets. While it does not increase runtime performance when compared to OnePose++ [38], we can see an improvement stemming from the use of specialized modality weights.
Replacing 2D based refinement with pose specific 3D-refinement improved our results significantly. We found improvements on both the Linemod and YCB-V datasets. The improvement cannot be fully observed by measuring the ADD-(S) alone, since the average threshold for Linemod is \(1.5cm\). We provide further information for Linemod in Fig. 5, where the error curves for each object in Linemod can be seen. The 3D-refinement has a much larger impact on accuracies at lower thresholds than at higher ones. An addition like 3D-refinement is invaluable for applications that require high precision.
We found 2D-refinement to result in no significant improvements. This step does not correct significant matching errors, and PnP seems to overcome small imperfections stemming from resolution errors. PnP is also subject to bias in translation estimation, as observed by prior works [42, 23]. Our results on 3D-refinement show a large increase in translation accuracy when we use 3D-refinement over 2D-refinement, which is also reflected in ADD-(S).
#### 4.5.1 Object pruning ablations
As discussed before, pruning an object provides two advantages. Firstly, it eliminates self-occluded regions of the object, which removes possible noise during feature matching. Secondly, it reduces the complexity of the attention mechanism by decreasing the number of keypoints that need to be _attended_, which leads to faster runtime. However, pruning a large number of keypoints leads to poorer results as it excludes essential keypoints. We perform inference time ablation studies to help us determine the optimal pruning percentage: we plot the results for percentages ranging from \(10\%\) to \(90\%\) after each IO-Layer in Fig. 4 and measure the ADD-(S) accuracy on Linemod. If a small number of keypoints is removed, the performance is not highly affected. However, we see a small drop at \(75\%\) and a large drop when pruning \(90\%\) of the existing pointcloud. The latter might be due to the number of remaining keypoints, which then number just 150 points. Although increasing the number of pruned keypoints leads to higher throughput, the peak accuracy is at \(50\%\).
\begin{table}
\begin{tabular}{c c c c c|c|c} \hline \hline Three-View & IO-Layer & Pruning & 2D-Ref & 3D-Ref & Grayscale & Linemod & YCB-V \\ \hline & & & ✓ & & 76.9 & - \\ \hline ✓ & & & ✓ & & 81.1 & 22.1 \\ ✓ & & & ✓ & & 81.4 & 22.5 \\ \hline ✓ & ✓ & & ✓ & & 83.4 & 24.1 \\ ✓ & ✓ & ✓ & ✓ & & 83.4 & 24.2 \\ ✓ & ✓ & ✓ & ✓ & & 83.9 & 25.8 \\ ✓ & ✓ & ✓ & & ✓ & **87.5** & **31.3** \\ ✓ & ✓ & ✓ & & ✓ & ✓ & 84.1 & 27.4 \\ \hline \hline \end{tabular}
\end{table}
Table 2: **Ablation studies for each component.** We present the results on Linemod and YCB-V measured by ADD-(S). The first line shows the results gathered from OnePose++ [12]. The models used for the second and third rows are identical to the OnePose++ [12] model trained with our three-view based pipeline.
Figure 4: **Pruning ablations.** With this ablation, we show that PoseMatcher is not reliant on a large number of keypoints. Although significant decreases in performance occur when using 8 keypoints, we see small differences for larger amounts, with the optimal number of keypoints being 64 for both datasets.
#### 4.5.2 Number of Templates
For all our experiments on Linemod, we have been using all the available training images in order to sample template keypoints. Due to our training pipeline, PoseMatcher is robust to incomplete and noisy templates. We perform an ablation study over the number of templates needed. Since each object in Linemod contains a different number of training images, we present the percentage of training images used. To select which images to use, we sample from the available viewpoints using furthest point sampling [30] w.r.t. orientations in order to cover the widest range of viewpoints. PoseMatcher almost achieves the same level of accuracy as OnePose++ [12] using only \(75\%\) of the available training images.
## 5 Conclusions
We proposed PoseMatcher, a novel model free one-shot pose estimator based on deep feature matching. Given a sequence of template images, we can reconstruct a feature point cloud and extract matches from a query image. In order to avoid using pre-trained descriptor models, we introduce a new training pipeline that allows us to train from scratch. With this simple addition, we show that we can improve OnePose++ [12] without any additional changes. We build on top of OnePose++ by designing a new attention layer IO-Layer, designed specifically for image to object matching. Additionally, we propose improvements to the pipeline including object pruning and 3D based refinement.
**Limitations.** Unfortunately we found that PoseMatcher has limited domain adaptation capability. If there is a large domain gap between the template domain (e.g. synthetic renderings) and the queries, PoseMatcher is unable to correctly match keypoints, even coarsely. We specifically find this on YCB-V, where each scene contains different levels of sensor noise and illumination. We also found symmetric objects particularly difficult, even though object pruning did show improvements.
Figure 5: **Difference between refinement operations.** We can see that for all objects in Linemod, the 3D-refinement has a big impact on lower thresholds.
Figure 6: **Visualization of matching keypoints.** Here we show PoseMatcher output \(\mathcal{C}\), where we assigned a normalized coordinate of the highest confidence keypoint to the corresponding pixel. The last two rows are examples from the YCB-V dataset.
\begin{table}
\begin{tabular}{c|c|c|c|c|c|c|c|c|c|c|c|c|c|c} \hline \(\%\) & Ape & Bvise & Cam & Can & Cat & Driller & Duck & Eggbox & Glue & Holep & Iron & Lamp & Phone & Avg \\ \hline
10 & 21 & 15.1 & 8.8 & 0.9 & 1.9 & 6.4 & 0.2 & 25.5 & 0.0 & 12.4 & 2.1 & 12.4 & 6.8 \\
25 & 25 & 29.4 & 31.5 & 12.4 & 11.4 & 24.1 & 4.2 & 45.1 & 8.3 & 0.4 & 24.6 & 29.1 & 15.7 \\
50 & 12 & 54.4 & 35.6 & 19.7 & 19.3 & 34.7 & 49.6 & 49.9 & 5.7 & 34.1 & 21.8 & 57.3 & 33.1 \\
75 & 39.8 & 78.9 & 81.2 & 76.6 & 87.3 & 66.1 & 27.4 & 92.1 & 72.1 & 46.8 & 87.1 & 79.8 & 88.5 & 60.6 \\
100 & 52.2 & 94.1 & 95.4 & 96.0 & 88.0 & 96.4 & 54.1 & 97.3 & 91.5 & 73.4 & 97.9 & 98.1 & 92.1 & 67.3 \\ \hline \end{tabular}
\end{table}
Table 3: **Number of templates.** We present the results on Linemod for the \(\%\) of training images used. Each object has around 180 images; therefore, \(10\%\) corresponds to only 18 templates.
2310.12507 | Multi-granularity Backprojection Transformer for Remote Sensing Image
Super-Resolution | Backprojection networks have achieved promising super-resolution performance
for natural images but have not been well explored in the remote sensing image
super-resolution (RSISR) field due to the high computation costs. In this
paper, we propose a Multi-granularity Backprojection Transformer termed MBT for
RSISR. MBT incorporates the backprojection learning strategy into a Transformer
framework. It consists of Scale-aware Backprojection-based Transformer Layers
(SPTLs) for scale-aware low-resolution feature learning and Context-aware
Backprojection-based Transformer Blocks (CPTBs) for hierarchical feature
learning. A backprojection-based reconstruction module (PRM) is also introduced
to enhance the hierarchical features for image reconstruction. MBT stands out
by efficiently learning low-resolution features without excessive modules for
high-resolution processing, resulting in lower computational resources.
Experiment results on UCMerced and AID datasets demonstrate that MBT obtains
state-of-the-art results compared to other leading methods. | Jinglei Hao, Wukai Li, Binglu Wang, Shunzhou Wang, Yuting Lu, Ning Li, Yongqiang Zhao | 2023-10-19T06:17:04Z | http://arxiv.org/abs/2310.12507v1 | # Multi-granularity Backprojection Transformer for Remote Sensing Image Super-Resolution
###### Abstract
Backprojection networks have achieved promising super-resolution performance for natural images but have not been well explored in the remote sensing image super-resolution (RSISR) field due to the high computation costs. In this paper, we propose a Multi-granularity Backprojection Transformer termed MBT for RSISR. MBT incorporates the backprojection learning strategy into a Transformer framework. It consists of Scale-aware Backprojection-based Transformer Layers (SPTLs) for scale-aware low-resolution feature learning and Context-aware Backprojection-based Transformer Blocks (CPTBs) for hierarchical feature learning. A backprojection-based reconstruction module (PRM) is also introduced to enhance the hierarchical features for image reconstruction. MBT stands out by efficiently learning low-resolution features without excessive modules for high-resolution processing, resulting in lower computational costs. Experimental results on the UCMerced and AID datasets demonstrate that MBT obtains state-of-the-art results compared to other leading methods.
Transformer, backprojection, remote sensing image super-resolution, multi-scale features.
## I Introduction
Remote Sensing Image Super-Resolution (RSISR) is a classical image processing task that aims to reconstruct high-resolution remote sensing images from the low-resolution input. It can be used in many remote sensing interpretation tasks like object detection [1, 2, 3] and scene recognition [4, 5]. Thus, many researchers spare no effort to improve the final RSISR performance.
With the renaissance of neural networks, deep learning-based methods have dominated the RSISR field. Mainstream RSISR networks rely on feedforward structures for feature extraction, with the primary emphasis on designing more rational learning modules. For example, Lei et al. [6] proposed LGCNet to learn residuals between low-resolution and high-resolution images. Lei et al. [7] introduced a novel technique that leverages internal recursion of single-scale and cross-scale information. Wang et al. [8] developed a U-shaped structure that aims to further exploit and learn feature relationships among multi-scale remote sensing images. However, feedforward learning offers only limited contextual information and thus encounters challenges in capturing complex textures, recovering high-frequency details, and modeling intricate inter-pixel relationships.
Different from feedforward learning, DBPN [9] ingeniously integrates the backprojection learning into network modules, creating both upsampling and downsampling backprojection blocks. The backprojection blocks incorporate feedback connections, facilitating information propagation within the network. This capability enables the network to delve deeper into the dependencies between low-resolution and high-resolution images. The subsequent work HDPN [10] further improved the backprojection block, using two \(1\times 1\) convolutional layers inside the block to fine-tune LR and HR features. However, the cascading backprojection blocks entail substantial parameter learning for upsampled high-resolution features, leading to a considerable parameter burden. As a result, backprojection structures have not been well explored in the RSISR field in recent years.
To this end, inspired by the success of Transformer for super-resolution [11, 12], we design a Transformer-based RSISR method termed Multi-granularity Backprojection Transformer (MBT). Specifically, we employ the backprojection learning strategy to learn low-resolution feature representations at different granularities. Firstly, we design the scale-aware backprojection-based Transformer Layer (SPTL), which utilizes pyramid pooling and backprojection mechanisms to learn scale-aware low-resolution features. Based on SPTLs, we construct the context-aware backprojection-based Transformer block (CPTB) for efficient hierarchical feature learning in the network. We organize multiple CPTBs in a cascaded manner to learn the comprehensive low-resolution feature representation of remote sensing images. Moreover, we propose a backprojection-based reconstruction module (PRM) that adopts the backprojection design to learn the differences between high and low-resolution image features, enhancing the hierarchical features for final super-resolution reconstruction. With the above-proposed components, MBT stands out among other backprojection networks as it does not employ excessive modules to process high-resolution features. Therefore, MBT does not consume excessive computational resources. We conduct experiments on commonly-used remote sensing image super-resolution datasets. Our method achieves the best performance in terms of qualitative and quantitative results compared to other state-of-the-art RSISR methods.
To summarize, the contributions of this paper are three-fold:
* We develop an SPTL, which can obtain scale-aware low-resolution features in an effective way.
* We propose a CPTB, which can generate comprehensive hierarchical features for complex scene high-resolution image reconstruction.
* We construct an MBT, which achieves state-of-the-art results on commonly-used RSISR datasets.
The rest of this paper is organized as follows. We will first review the related works in Section II. Then, the details of our proposed method are illustrated in Section III. Experiment results and analysis are introduced in Section IV, and we will give a conclusion of this paper in Section V.
## II Related Work
### _Natural Image Super-Resolution_
Over the past years, through further exploration of end-to-end feature learning, many works in super-resolution reconstruction have further improved reconstruction quality. Most of these works focus on more sophisticated and efficient structural designs. For instance, Zhang et al. [13, 14] employed densely connected residual blocks to enhance the deep feature learning capacity of networks. Shi et al. [15] introduced a more efficient upsampling module, allowing the network to perform feature learning on low-resolution images, reducing the parameter burden. This approach is still widely used in various networks. At the same time, generative models have made significant progress in the field of image super-resolution. To achieve better perceptual quality in image reconstruction, several studies have incorporated GAN models to recover more realistic texture details, such as SRGAN [16] and ESRGAN [17]. Gao et al. [18] introduced diffusion models into the super-resolution domain for continuous image super-resolution reconstruction. Yao et al. [19] realized high-resolution image generation with a sense of realism at arbitrary scales using flow-based super-resolution models.
### _Optical Remote Sensing Image Super-Resolution_
With the proliferation of neural networks in the realm of super-resolution, numerous endeavors have begun employing neural networks for remote sensing image reconstruction. Lei et al. [6] proposed LGCNet to learn residuals between low-resolution and high-resolution images. Haut et al. [20], in their residual-based network design, integrated visual attention mechanisms to focus the remote sensing image super-resolution process on deep features demanding greater computational resources. In pursuit of further elevating the quality of remote sensing image reconstruction, Li et al. [21] introduced the SRAGAN network using a GAN model, concurrently applying local and global attention mechanisms to capture features across different scales. In recent years, certain works have gradually shifted focus towards designing more rational structures to explicitly or implicitly capture multi-scale features beneficial for high-resolution reconstruction in remote sensing images. HSENet [7] effectively exploits internal recursion of single-scale and cross-scale information by employing multi-level feature extraction. Wang et al. [8] devised a U-shaped structure to further mine and learn feature relationships between multi-scale remote sensing images, enhancing global feature representation through attention based on hybrid convolutions.
### _Residual Back-Projection for Image Super-Resolution_
The feed-forward architecture of deep super-resolution networks achieves promising results in the image super-resolution field. However, the mutual dependency between low-resolution and high-resolution images has not been effectively explored. The iterative back-projection algorithm [22] stands as one of the early SR algorithms. It can iteratively calculate the reconstruction error and then fuse it back to adjust the strength of the HR image, aiding in better understanding the image context and enhancing reconstruction quality. In the rapidly evolving field of deep learning, some researchers have focused on exploring how to incorporate traditional back-projection methods into structural designs to harness their full potential for feedback learning in the context of super-resolution. Haris et al. [9] introduced back-projection into the SR network architecture, proposing the DBPN network that focuses on employing multiple upsampling and downsampling stages to directly enhance SR features, iteratively learning LR and HR feature maps for feedback residual. Liu et al. [10] further improved DBPN, presenting an enhanced back-projection block and implementing bottom-up and top-down feature learning using an HourGlass structure. Additionally, in another work in the same year, Liu et al. [23] integrated attention into the back-projection module, achieving efficient single-image super-resolution and designing an improved refined back projection block to further enhance SR performance in the final reconstruction process. In recent works, the back-projection block has also been employed to alleviate feature loss caused by the upsampling and downsampling processes. RefSR-VAE [24] employs multiple back-projection modules instead of conventional PixelShuffle operations to obtain the final upsampled reconstructed images.
### _Transformer for Image Super-Resolution_
For low-level visual tasks like super-resolution reconstruction, the inherent computational complexities of the Transformer bring substantial computation burdens. Hence, Liang et al. [11] introduced the Swin Transformer [25] into the design of super-resolution networks, proposing an efficient SwinIR that combines the advantages of CNNs and employs a low-burden sliding window for global dependency modeling, achieving impressive results. However, the window attention design for local modeling restricts the powerful global modeling capability of the Transformer. It is evident that increasing the window size can further exploit the Transformer's long-range modeling ability to enhance the quality of reconstructed images, but it also introduces a significant computational burden. To address this issue, Zhang et al. [26] conducted a more refined design of the SwinIR architecture tailored to low-level visual tasks, proposing ELAN by using windows of different sizes and implementing window attention with larger windows. Chen et al. [27] proposed an Overlapping Cross Attention Module to enhance the interaction of cross-window information, effectively activating more pixels for local feature
reconstruction. Zhou et al. [28] introduced Permuted Self-Attention, striking a balance between channel and spatial information in self-attention. By sacrificing channel dimensions in the self-attention computation, this method enables super-resolution networks to enjoy the benefits of large-window self-attention. In addition to addressing the computational burden, it is noteworthy that in SR tasks, Transformers exhibit a tendency to overly emphasize the learning of low-frequency features, which is detrimental to achieving finer reconstruction results. Chen et al. [27] addressed this by introducing a CNN branch for the learning of local features and enhancing high-frequency features. Furthermore, Li et al. [29] designed a more refined architecture to synergize the learning capabilities of CNNs and Transformers, fully leveraging their respective strengths.
## III Methodology
In this section, we introduce the details of MBT. First, we provide a brief overview of the network's overall architecture. Subsequently, in Sections III-B, III-C, and III-D, we delve into the details of the core components of MBT, which include three back-projection structures designed for feature enhancement at different granularities.
### _Overview_
The overall framework of MBT is shown in Fig.1. It is composed of three distinct components: a shallow feature extraction module, a deep feature extraction module, and a backprojection-based reconstruction module. Specifically, for the low-resolution input image \(\mathbf{I}_{LR}\in\mathbb{R}^{H\times W\times 3}\), MBT initially employs a \(3\times 3\) convolutional layer to extract shallow features \(\mathbf{F}_{0}\in\mathbb{R}^{H\times W\times C}\), where \(C\) represents the channel dimension. Then, deep feature extraction \(\mathcal{F}_{\text{EXT}}\) is performed through a series of context-aware backprojection-based transformer blocks (CPTBs), followed by obtaining the deep features \(\mathbf{H}^{N}\in\mathbb{R}^{H\times W\times C}\) using a \(3\times 3\) convolutional layer applied to the last CPTB's output. Subsequently, we employ a global residual connection to ease training complexity and obtain the initial super-resolved output \(\mathbf{\hat{I}}_{SR}\in\mathbb{R}^{(r\times H)\times(r\times W)\times C}\) through the reconstruction layer \(\mathcal{F}_{\text{REC}}\) as
\[\mathbf{\hat{I}}_{SR}=\mathcal{F}_{\text{Bilinear}}(\mathbf{I}_{LR})+\mathcal{F}_{\text{REC}}(\mathbf{H}^{N}). \tag{1}\]
Here, \(r\) represents the upsampling factor. Lastly, the \(\mathbf{\hat{I}}_{SR}\) is fed into the backprojection-based reconstruction module (PRM) to obtain the final enhanced super-resolved output \(\mathbf{I}_{SR}\in\mathbb{R}^{(r\times H)\times(r\times W)\times C}\).
Among them, CPTB and PRM are the main components of MBT. Moreover, the scale-aware backprojection-based Transformer layer (SPTL) constructs the CPTB. Thus, for the rest of this section, we will first introduce the details of SPTL. Then, CPTB is illustrated. Finally, the details of PRM are presented.
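A structural sketch of this pipeline is shown below; the block internals are reduced to plain convolutions as stand-ins for the CPTBs, and the interpolation used for the global residual is assumed to be bilinear. All names are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MBTSketch(nn.Module):
    """Sketch of the pipeline of Eq. (1): shallow conv, N CPTB stand-ins,
    a 3x3 conv, PixelShuffle reconstruction, and an interpolated global
    residual; the PRM enhancement stage is omitted here."""
    def __init__(self, C=64, N=4, r=4):
        super().__init__()
        self.head = nn.Conv2d(3, C, 3, padding=1)
        self.cptbs = nn.ModuleList([nn.Conv2d(C, C, 3, padding=1) for _ in range(N)])
        self.tail = nn.Conv2d(C, C, 3, padding=1)
        self.rec = nn.Sequential(nn.Conv2d(C, 3 * r * r, 3, padding=1),
                                 nn.PixelShuffle(r))
        self.r = r

    def forward(self, lr):
        h = self.head(lr)                         # shallow features F_0
        for blk in self.cptbs:
            h = blk(h)                            # deep feature extraction
        h = self.tail(h)                          # deep features H^N
        up = F.interpolate(lr, scale_factor=self.r, mode='bilinear',
                           align_corners=False)   # global residual branch
        return up + self.rec(h)                   # initial SR output, Eq. (1)
```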
### _Scale-aware backprojection based Transformer Layer_
Extracting multi-scale information in an image can effectively improve the performance of RSISR methods. Therefore, inspired by the design of Pyramid Pooling Transformer [30],
Fig. 1: The overall architecture of the proposed MBT framework.
Fig. 2: Illustration of Pyramid Pooling Self-Attention layer.
we propose a multi-scale self-attention operation called Pyramid Pooling Self-attention (PPSA), as shown in Fig.2. Specifically, the given input features \(\mathbf{X}\in\mathbb{R}^{H\times W\times C}\) undergo downsampling at different scales through three different scaling pooling blocks1. Considering that using only average pooling for feature extraction would lead to overly flat reconstructed images with poor contour feature recovery, we use a combination of maximum pooling and average pooling operations as the pooling blocks to process the given feature \(\mathbf{X}\) as:
Footnote 1: For the sake of simplicity, we omitted the index number of PPSA.
\[\mathbf{P}^{i}=\mathcal{F}^{i}_{\text{avg}}(\mathbf{X})+\mathcal{F}^{i}_{\text{max}}( \mathbf{X}), \tag{2}\]
where \(\mathbf{P}^{i}\) denotes the generated scale-specific features and \(i\) indexes the pooling blocks. Here, we opt to use three scales for multi-scale feature extraction, with downsampling ratios of \(\times 2\), \(\times 4\), and \(\times 8\).
After that, the generated features are concatenated to obtain the multi-scale features \(\mathbf{P}\), and the \(\mathbf{K}\) and \(\mathbf{V}\) values are obtained from \(\mathbf{P}\) through a linear mapping. Subsequently, a standard self-attention calculation \(\mathcal{F}_{\text{SA}}\) is performed with the \(\mathbf{Q}\) values obtained by mapping the input features \(\mathbf{X}\), and the output is further processed by a \(1\times 1\) convolution layer for subsequent feature processing as
\[\mathbf{\bar{X}}=\mathcal{F}_{1\times 1}(\mathcal{F}_{\text{SA}}(\mathbf{Q},\mathbf{K}, \mathbf{V})). \tag{3}\]
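As an illustration of Eqs. (2)-(3), a possible PyTorch sketch of PPSA is given below. The class name `PPSASketch` and implementation choices (using `nn.MultiheadAttention` for \(\mathcal{F}_{\text{SA}}\), and the assumption that \(H\) and \(W\) are divisible by 8) are ours and may differ from the authors' code.

```python
# A sketch of Pyramid Pooling Self-Attention (PPSA): Eq. (2) pools X with
# avg+max pooling at ratios x2/x4/x8, the pooled maps are flattened and
# concatenated into P, and K, V are taken from P while Q comes from X (Eq. (3)).
import torch
import torch.nn as nn
import torch.nn.functional as F


class PPSASketch(nn.Module):
    def __init__(self, dim=96, heads=4):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.kv = nn.Linear(dim, 2 * dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.proj = nn.Conv2d(dim, dim, 1)             # the 1x1 conv in Eq. (3)

    def forward(self, x):                              # x: (B, C, H, W)
        b, c, h, w = x.shape
        pooled = []
        for r in (2, 4, 8):                            # three pooling blocks, Eq. (2)
            p = F.avg_pool2d(x, r) + F.max_pool2d(x, r)
            pooled.append(p.flatten(2).transpose(1, 2))    # (B, HW/r^2, C)
        p = torch.cat(pooled, dim=1)                   # multi-scale tokens P
        q = self.q(x.flatten(2).transpose(1, 2))       # queries from X
        k, v = self.kv(p).chunk(2, dim=-1)
        out, _ = self.attn(q, k, v)                    # standard self-attention F_SA
        out = out.transpose(1, 2).reshape(b, c, h, w)
        return self.proj(out)
```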
Moreover, previous studies have indicated that Transformers, which excel at modeling long-range dependencies [29], tend to focus on learning low-frequency features, in contrast to convolutional layers, which excel at local feature modeling. Building upon the powerful multi-scale feature learning capability of PPSA and the strong local modeling capability of convolutional layers, we therefore introduce the first granularity of backprojection learning in MBT, i.e., SPAL, as shown in Fig. 3.
SPAL consists of two parts: a backprojection-enhanced PPSA and a feed-forward network. Firstly, the given feature \(\mathbf{F}\in\mathbb{R}^{H\times W\times C}\) is preprocessed through Layer Normalization (LN) to obtain \(\mathbf{F}_{\text{LN}}\)2. Then, a \(1\times 1\) convolutional layer is applied to map the channel dimension of the feature to \(C_{1}\) in order to enhance its representational capacity, resulting in \(\mathbf{F}_{\text{u}}=\mathcal{F}_{1\times 1}(\mathbf{F}_{\text{LN}})\). Subsequently, the feature is split into two parts along the channel dimension, \(\mathbf{F}_{c}\in\mathbb{R}^{H\times W\times\frac{C_{1}}{2}}\) and \(\mathbf{F}_{p}\in\mathbb{R}^{H\times W\times\frac{C_{1}}{2}}\). \(\mathbf{F}_{p}\) undergoes multi-scale learning via PPSA \(\mathcal{F}_{\text{PPSA}}\), which serves as the main branch, producing the global feature \(\mathbf{\bar{F}}_{p}\) as
Footnote 2: For the sake of simplicity, we omitted the index number of PPSA.
\[\mathbf{\bar{F}}_{\text{p}}=\mathcal{F}_{\text{PPSA}}(\mathbf{F}_{\text{p}}). \tag{4}\]
On the other hand, \(\mathbf{F}_{c}\) is processed through a channel attention block \(\mathcal{F}_{\text{CAB}}\) following HAT [27] to obtain high-frequency local features for feedback supplementation as
\[\mathbf{\bar{F}}_{\text{c}}=\mathcal{F}_{\text{CAB}}(\mathbf{F}_{\text{c}}). \tag{5}\]
\(\mathbf{\bar{F}}_{p}\) is then subtracted from \(\mathbf{\bar{F}}_{c}\), and the result is enhanced through a \(1\times 1\) convolutional layer to obtain the feedback error features \(\mathbf{F}_{e}\) as
\[\mathbf{F}_{\text{e}}=\mathcal{F}_{1\times 1}(\mathbf{\bar{F}}_{\text{c}}-\mathbf{\bar{F}}_{ \text{p}}). \tag{6}\]
By adding \(\mathbf{F}_{e}\) to \(\mathbf{\bar{F}}_{p}\), adaptively adjusted through a \(1\times 1\) convolution layer, we obtain the enhanced feature \(\mathbf{\bar{F}}\) in a backprojection manner as
\[\mathbf{\bar{F}}=\mathcal{F}_{1\times 1}(\mathbf{\bar{F}}_{p})+\mathbf{F}_{e}. \tag{7}\]
Finally, a residual operation is applied by adding the input feature \(\mathbf{F}\) to \(\mathbf{\bar{F}}\) to alleviate training difficulties, resulting in the backprojection-enhanced feature \(\mathbf{\hat{F}}\):
\[\mathbf{\hat{F}}=\mathbf{\bar{F}}+\mathbf{F}. \tag{8}\]
The feed-forward network consists of two linear layers with an activation function between them to capture higher-level features and contextual relationships as
\[\mathbf{H}=\text{FFN}(\text{LN}(\mathbf{\hat{F}}))+\mathbf{\hat{F}}. \tag{9}\]
Unlike the Back-Projection Block, the backprojection structure we design introduces minimal additional parameters and computational burden while preserving the original feedback learning. We achieve this by subtracting the rich multi-scale low-frequency features learned by the PPSA branch from the high-frequency features enhanced by the channel attention block (CAB) branch, and feeding the reinforced features back to the PPSA branch. This combination enables the network to learn from features at different levels, enhancing feature richness and ultimately yielding superior reconstruction results.
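A hedged sketch of the SPAL computation in Eqs. (4)-(9) follows, reusing `PPSASketch` from above; `CABSketch` is a simple stand-in for the channel attention block of HAT [27], not its exact architecture, and `hidden` plays the role of \(C_{1}\).

```python
# A sketch of SPAL, Eqs. (4)-(9); CABSketch approximates the CAB of HAT.
import torch
import torch.nn as nn


class CABSketch(nn.Module):                     # simple channel attention stand-in
    def __init__(self, dim):
        super().__init__()
        self.body = nn.Sequential(nn.Conv2d(dim, dim, 3, padding=1), nn.GELU(),
                                  nn.Conv2d(dim, dim, 3, padding=1))
        self.gate = nn.Sequential(nn.AdaptiveAvgPool2d(1),
                                  nn.Conv2d(dim, dim, 1), nn.Sigmoid())

    def forward(self, x):
        y = self.body(x)
        return y * self.gate(y)


class SPALSketch(nn.Module):
    def __init__(self, dim=96, hidden=96):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.up = nn.Conv2d(dim, hidden, 1)            # lift to C_1 channels
        self.ppsa = PPSASketch(hidden // 2)
        self.cab = CABSketch(hidden // 2)
        self.conv_e = nn.Conv2d(hidden // 2, dim, 1)   # Eq. (6)
        self.conv_p = nn.Conv2d(hidden // 2, dim, 1)   # Eq. (7)
        self.norm2 = nn.LayerNorm(dim)
        self.ffn = nn.Sequential(nn.Linear(dim, 2 * dim), nn.GELU(),
                                 nn.Linear(2 * dim, dim))

    def forward(self, f):                              # f: (B, C, H, W)
        b, c, h, w = f.shape
        tokens = f.flatten(2).transpose(1, 2)
        f_ln = self.norm1(tokens).transpose(1, 2).reshape(b, c, h, w)
        f_c, f_p = self.up(f_ln).chunk(2, dim=1)       # channel split
        fp = self.ppsa(f_p)                            # Eq. (4): global branch
        fc = self.cab(f_c)                             # Eq. (5): local branch
        err = self.conv_e(fc - fp)                     # Eq. (6): feedback error
        f_hat = self.conv_p(fp) + err + f              # Eqs. (7)-(8)
        t = f_hat.flatten(2).transpose(1, 2)
        t = self.ffn(self.norm2(t)) + t                # Eq. (9)
        return t.transpose(1, 2).reshape(b, c, h, w)
```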
### _Context-aware Backprojection based Transformer Block_
In the SR network, cascaded residual blocks have been widely adopted for deep feature extraction [14, 16, 31]. The primary advantage of this design is that, through hierarchical
Fig. 3: Architectures of our proposed scale-aware backprojection-based self-attention layer.
feature processing, the network gains a better understanding of the image's structure and content, leading to a significant improvement in reconstruction quality and richer detail representation. However, each residual block is typically connected to the next through a straightforward cascading mechanism, which fails to effectively propagate the rich features from one residual block to the next. Such a connection scheme results in inefficient feature propagation in deep network layers, thereby impeding the achievement of more precise reconstruction results. Furthermore, due to the absence of a dedicated feature transmission mechanism, this straightforward cascading approach struggles to handle complex feature relationships, particularly when dealing with images containing intricate features, such as remote sensing images, thereby imposing certain limitations on performance. To this end, we incorporate the backprojection learning from SPAL into the context-aware backprojection-based transformer blocks (CPTBs) to enhance feature interaction between CPTBs. This constitutes the second granularity of backprojection learning proposed in MBT, as shown in Fig. 1.
Each CPTB consists of \(N\) cascaded SPALs and a \(3\times 3\) convolutional layer used for feature aggregation. By introducing the backprojection feature enhancement structure for each CPTB, we effectively model the feature relationships between the previous CPTB and the current CPTB, optimizing the feature propagation process between CPTBs, and allowing the network to more fully transmit and utilize deep features.
Given the feature \(\mathbf{H}^{n-1}\in\mathbb{R}^{H\times W\times C}\) generated by the \((n-1)\)-th CPTB, a \(1\times 1\) convolutional layer is first applied to reduce the computational and parameter burden within the CPTB by mapping the channel dimension of \(\mathbf{H}^{n-1}\) to \(C_{2}\), resulting in the initial feature \(\mathbf{H}^{n}_{\text{init}}\in\mathbb{R}^{H\times W\times C_{2}}\). After that, a channel split operation is performed and \(\mathbf{H}^{n}_{\text{init}}\) is split into \(\mathbf{H}^{n}_{\text{p}}\in\mathbb{R}^{H\times W\times\frac{C_{2}}{2}}\) and \(\mathbf{H}^{n}_{\text{c}}\in\mathbb{R}^{H\times W\times\frac{C_{2}}{2}}\). \(\mathbf{H}^{n}_{\text{p}}\) is processed with \(N\) cascaded SPALs and a \(3\times 3\) convolutional layer for feature aggregation as
\[\mathbf{\bar{H}}^{n}_{\text{p}}=\mathcal{F}_{3\times 3}(\mathcal{F}^{N}_{\text{SPAL} }(\mathcal{F}^{N-1}_{\text{SPAL}}(\cdots(\mathbf{H}^{n}_{\text{p}})\cdots))). \tag{10}\]
Meanwhile, for \(\mathbf{H}^{n}_{\text{c}}\), a channel attention block is likewise used to enhance the feature as
\[\mathbf{\bar{H}}^{n}_{\text{c}}=\mathcal{F}_{\text{CAB}}(\mathbf{H}^{n}_{\text{c}}), \tag{11}\]
which is consistent with the design in SPAL. After that, we obtain the differential features with powerful representational capabilities as
\[\mathbf{H}^{n}_{\text{e}}=\mathbf{\bar{H}}^{n}_{\text{p}}-\mathbf{\bar{H}}^{n}_{\text{c}}. \tag{12}\]
The results will be further enhanced with a residual connection to obtain the complementary information as
\[\mathbf{\bar{H}}^{n}=\mathcal{F}_{1\times 1}(\mathbf{H}^{n}_{\text{e}})+\mathcal{F}_{1\times 1}(\mathbf{\bar{H}}^{n}_{\text{p}}). \tag{13}\]
Finally, the output of the \(n\)-th CPTB can be obtained as
\[\mathbf{H}^{n}=\mathbf{\bar{H}}^{n}+\mathbf{H}^{n-1}. \tag{14}\]
By introducing CPTB, MBT can comprehensively capture subtle details within features and correlations between features, thus extracting deeper features that are more favorable for feature reconstruction.
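Following Eqs. (10)-(14), one possible sketch of a single CPTB is shown below, reusing the `SPALSketch` and `CABSketch` classes defined above; the widths follow the \(C_{2}\) setting reported later in Section IV-B and are otherwise illustrative.

```python
# A sketch of one CPTB, Eqs. (10)-(14).
import torch.nn as nn


class CPTBSketch(nn.Module):
    def __init__(self, dim=96, c2=64, num_spal=6):
        super().__init__()
        self.reduce = nn.Conv2d(dim, c2, 1)                 # map H^{n-1} to C_2 channels
        self.spals = nn.Sequential(*[SPALSketch(c2 // 2) for _ in range(num_spal)])
        self.agg = nn.Conv2d(c2 // 2, c2 // 2, 3, padding=1)  # Eq. (10) aggregation
        self.cab = CABSketch(c2 // 2)                       # Eq. (11)
        self.conv_e = nn.Conv2d(c2 // 2, dim, 1)            # Eq. (13), error path
        self.conv_p = nn.Conv2d(c2 // 2, dim, 1)            # Eq. (13), main path

    def forward(self, h_prev):                              # h_prev: (B, C, H, W)
        h_init = self.reduce(h_prev)
        h_p, h_c = h_init.chunk(2, dim=1)                   # channel split
        hp = self.agg(self.spals(h_p))                      # Eq. (10)
        hc = self.cab(h_c)                                  # Eq. (11)
        err = hp - hc                                       # Eq. (12)
        h_bar = self.conv_e(err) + self.conv_p(hp)          # Eq. (13)
        return h_bar + h_prev                               # Eq. (14)
```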
### _Backprojection-based Reconstruction Module_
Mainstream super-resolution networks [20, 32, 7, 33] directly upsample the low-resolution features to obtain high-resolution images. However, obtaining a reconstructed image solely through upsampling operations, without further enhancing feature representation capability, increases the training difficulty and restricts the improvement in super-resolution performance, especially when larger magnification factors are required. To this end, inspired by [23], we introduce the third granularity of backprojection learning, referred to as the backprojection-based reconstruction module, and apply it after the reconstruction layer, as illustrated in Fig. 1.
Specifically, we use bilinear interpolation to obtain an estimated low-resolution image \(\mathbf{\hat{I}}_{LR}\) from \(\mathbf{\hat{I}}_{SR}\). Next, we subtract the input LR image \(\mathbf{I}_{LR}\) from the estimated low-resolution image \(\mathbf{\hat{I}}_{LR}\) to obtain feedback information, and the result is further processed with two \(1\times 1\) convolution layers \(\mathcal{F}_{1\times 1}\) as:
\[\mathbf{\hat{F}}_{LR}=\mathcal{F}_{1\times 1}(\mathcal{F}_{1\times 1}(\mathbf{\hat{I}}_{LR}-\mathbf{I}_{LR})). \tag{15}\]
Finally, we upsample the enhanced features \(\mathbf{\hat{F}}_{LR}\) using bilinear interpolation and combine them with the estimated SR image \(\mathbf{\hat{I}}_{SR}\) to obtain the final SR image as
\[\mathbf{I}_{SR}=\mathcal{F}_{\text{BINR}}(\mathbf{\hat{F}}_{LR})+\mathbf{\hat{I}}_{SR}. \tag{16}\]
Leveraging a backprojection-based reconstruction module, we further enhance the quality of SR results while avoiding excessive parameter and computational overhead.
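The PRM of Eqs. (15)-(16) admits a particularly compact sketch; the two-layer `refine` head below is our reading of the two \(1\times 1\) convolutions and is an assumption about the exact layer widths.

```python
# A sketch of the backprojection-based reconstruction module, Eqs. (15)-(16).
import torch.nn as nn
import torch.nn.functional as F


class PRMSketch(nn.Module):
    def __init__(self):
        super().__init__()
        self.refine = nn.Sequential(nn.Conv2d(3, 3, 1), nn.Conv2d(3, 3, 1))

    def forward(self, sr_init, lr, scale):
        lr_est = F.interpolate(sr_init, scale_factor=1.0 / scale,
                               mode="bilinear", align_corners=False)  # estimated I_LR
        f_lr = self.refine(lr_est - lr)                               # Eq. (15)
        return F.interpolate(f_lr, scale_factor=scale, mode="bilinear",
                             align_corners=False) + sr_init           # Eq. (16)
```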
## IV Experiment
### _Datasets_
Our experiments were conducted on two remote sensing datasets, namely, the UCMerced dataset and the AID dataset. Details of the two datasets are given as follows.
1. _UCMerced dataset_[39]_. Comprising 21 classes3 of remote sensing landscapes, this dataset consists of 100 images per category, each with a spatial resolution of 0.3 meters per pixel and dimensions of 256 \(\times\) 256 pixels. In alignment with prior research [8] and [7], we have evenly divided the dataset into two well-balanced subsets, each containing 1050 samples. Within the training set, 10\(\%\) of the samples are reserved for validation purposes.
Footnote 3: All 21 classes: 1-Agricultural, 2-Airplane, 3-Baseballdiamond, 4-Beach, 5-Buildings, 6-Chaparral, 7-Denseresidential, 8-Forest, 9-Freeway, 10-Golfcourse, 11-Harbor, 12-Intersection, 13-Mediumresidential, 14-Mobilehomepark, 15-Overpass, 16-Parkinglot, 17-River, 18-Runway, 19-Sparseresidential, 20-Storagetanks, and 21-Tenniscourt.
2. _AID dataset_[40]_. Incorporating a total of 10,000 images, this dataset encompasses 30 classes4 of remote sensing
scenes. All images maintain a consistent resolution of 600 \(\times\) 600 pixels, with a spatial resolution reaching up to 0.5 meters per pixel. In line with the methodology outlined in [8], we partitioned the dataset into training and test sets. Specifically, 80\(\%\) of the dataset was randomly selected for the training set. Within this training set, we curated validation sets by extracting five images per class. The remaining 20\(\%\) of the images were set aside for use as the test set.
### _Implementation Details_
In this paper, we explore scale factors of \(\times 2\), \(\times 3\), and \(\times 4\), and the upsampling blocks in the reconstruction part are modified according to the specific scale factor. In the training phase, the Exponential Moving Average (EMA) strategy is employed to stabilize training. \(64\times 64\) image patches are randomly cropped from LR images, and their corresponding ground-truth references are extracted from the HR images according to the scale factor. Additionally, we augment the training images by randomly rotating them by 90\({}^{\circ}\), 180\({}^{\circ}\), and 270\({}^{\circ}\) and performing horizontal flips. We ultimately set the number of CPTBs to 3, with 6 SPALs in each CPTB. The number of channels is set to 96. The number of attention heads in the Pyramid Pooling Self-Attention layer is set to 4. The values of \(C_{1}\) and \(C_{2}\) are set to 96 and 64, respectively. Furthermore, bilinear interpolation is used in the backprojection-based reconstruction module for downsampling and upsampling.
We use the Adam optimizer [41] to train our model with \(\beta_{1}=0.9\), \(\beta_{2}=0.99\), and \(\varepsilon=10^{-8}\). The initial learning rate is set to \(2\times 10^{-4}\), and the mini-batch size is 4. The total number of training epochs is 700, with the learning rate halved at epoch 600. MBT is implemented in PyTorch [42], and all experiments are conducted on a single NVIDIA GeForce RTX 4090 graphics card.
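For reference, the stated optimisation setup corresponds to the following PyTorch configuration; the use of `MultiStepLR`, stepped once per epoch, is our assumption about how the halving at epoch 600 is realised.

```python
# A sketch of the training configuration described above.
import torch

model = MBTSketch()
optimizer = torch.optim.Adam(model.parameters(), lr=2e-4,
                             betas=(0.9, 0.99), eps=1e-8)
# halve the learning rate once, late in the 700-epoch schedule
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer,
                                                 milestones=[600], gamma=0.5)
for epoch in range(700):
    # ... one training epoch over 64x64 LR patches ...
    scheduler.step()
```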
### _Comparisons with Other Methods_
**Quantitative Results.** We compare MBT with other leading RSISR methods on the UCMerced and AID datasets for \(\times 2\), \(\times 3\), and \(\times 4\) SR. The detailed comparisons are shown in Table I. As seen, MBT obtains the best performance in all experiment settings. Compared with CNN-based methods (e.g., SRCNN [34], VDSR [35], DCM [20], LGCNet [6], HSENet [7], SRDD [36], and FENet [37]), MBT obtains the best results. This can be attributed to the long-range dependency learning of the Transformer structure. Compared with Transformer-based methods (e.g., TransENet [12] and Omnisr [38]), MBT also achieves the best performance on both datasets under different up-scale factor settings, which further demonstrates the effectiveness of our proposed multi-granularity design.
Moreover, we also explore the performance of the above methods for different remote sensing classes. The comparison
Fig. 4: Visual comparisons for \(\times\)3 and \(\times\)4 SR on UCMerced datasets.
results are shown in Table II. As seen, MBT outperforms the other methods by significant margins. Specifically, for the class _Buildings_ (#5) and the class _Parkinglot_ (#16), MBT obtains gains of 0.46 dB and 0.52 dB in PSNR over the second-place method HSENet [7], which demonstrates the effectiveness of MBT.
**Qualitative Results.** Fig. 4 and Fig. 5 show the visual results of the different methods. As seen, MBT generates the best-quality visual results, with clear edges and textures. This is attributed to the multi-scale feature learning in SPAL and the feature enhancement via backprojection learning. The visual results demonstrate the effectiveness of MBT.
Fig. 5: Visual comparisons for \(\times 3\) and \(\times 4\) SR on AID datasets.
**Computational Analysis.** We also compare the network parameters and FLOPs with other RSISR methods. As shown in Table III, the parameters and FLOPs of MBT are smaller than those of HSENet [7]. Compared with TransENet [12], MBT has only about 8% of its parameters yet obtains a 0.26 dB PSNR improvement. MBT thus achieves a favorable trade-off between performance and model complexity.
### _Ablation Studies_
**Efficacy of MBT.** We explore the main components of MBT and develop seven model variants (shown in Table IV). Our baseline model (the 1st row) removes the backprojection learning structures from CPTB and SPAL, maintaining the parameter count by computing the CAB in parallel with the main pathway using adjusted channel compression ratios; the PRM is also excluded. As seen, compared to the baseline model, the individual use of SPAL (the 2nd row), CPTB (the 3rd row), and PRM (the 4th row) all achieve performance improvements. Using a combination of two modules also results in performance gains compared to using a single module. In particular, the model variant that adopts SPAL and CPTB (the 6th row) achieves a performance improvement of 0.22 dB in terms of PSNR compared to the baseline model, thanks to the effectiveness and rationality of the SPAL and CPTB designs. The full model (the 8th row) obtains the best results, further demonstrating the rationality and effectiveness of the MBT design.
**Hyper-Parameter Settings.** We explore the performance of different numbers of CPTBs in MBT, SPALs in each CPTB, and the channel dimension of MBT. The detailed results are reported in Table V. As seen, as the number of CPTBs increases, the performance of MBT deteriorates slightly while the number of parameters continually increases. The reason is that too many CPTBs increase the complexity of the model, making it difficult to reach optimal performance through training. Considering the trade-off between the number of parameters and the performance, we set the number of CPTBs to 3.
Then, we explore the number of SPALs in each CPTB. As seen, the performance of MBT improves as the number of SPALs increases, achieving the best results when the number equals 6. Thus, the number of SPALs is set to 6.
Finally, the channel dimension of MBT is also explored. As reported in Table V, MBT achieves the best results when the channel number is set to 96. Considering the trade-off between model complexity and final performance, we set the number of channels to 96.
## V Conclusion
In this paper, we propose a backprojection-style Transformer termed MBT for RSISR. Specifically, we propose a scale-aware backprojection-based self-attention layer (SPAL) as the basic feature extraction layer for obtaining multi-scale image features. With SPALs, we construct the Transformer block named CPTB for efficient hierarchical feature learning. Moreover, we develop a backprojection-based reconstruction module (PRM) to generate comprehensive reconstruction features for the final high-resolution image reconstruction. Based on the above components, MBT achieves promising results compared to other state-of-the-art methods on commonly used RSISR datasets. However, other image super-resolution tasks, such as arbitrary-scale super-resolution and super-resolution with complicated degradation models, are not explored in this paper, although they are also important for remote sensing applications. We will extend MBT to these tasks in future work.
|
2301.04424 | Riemannian Geometry and Molecular Similarity II: Kähler Quantization | Shape-similarity between molecules is a tool used by chemists for virtual
screening, with the goal of reducing the cost and duration of drug discovery
campaigns. This paper reports an entirely novel shape descriptor as an
alternative to the previously described RGMolSA descriptors
\cite{cole2022riemannian}, derived from the theory of Riemannian geometry and
K\"ahler quantization (KQMolSA). The treatment of a molecule as a series of
intersecting spheres allows us to obtain the explicit \textit{Riemannian
metric} which captures the geometry of the surface, which can in turn be used
to calculate a Hermitian matrix $\mathbb{M}$ as a directly comparable surface
representation. The potential utility of this method is demonstrated using a
series of PDE5 inhibitors considered to have similar shape. The method shows
promise in its capability to handle different conformers, and compares well to
existing shape similarity methods. The code and data used to produce the
results are available at: \url{https://github.com/RPirie96/KQMolSA}. | Daniel J. Cole, Stuart J. Hall, Thomas Murphy, Rachael Pirie | 2023-01-11T12:03:23Z | http://arxiv.org/abs/2301.04424v1 | # Riemannian Geometry and Molecular Similarity II: Kahler Quantization
###### Abstract
Shape-similarity between molecules is a tool used by chemists for virtual screening, with the goal of reducing the cost and duration of drug discovery campaigns. This paper reports an entirely novel shape descriptor as an alternative to the previously described RGMolSA descriptors [1], derived from the theory of Riemannian geometry and Kahler quantization (KQMolSA). The treatment of a molecule as a series of intersecting spheres allows us to obtain the explicit _Riemannian metric_ which captures the geometry of the surface, which can in turn be used to calculate a Hermitian matrix \(\mathbb{M}\) as a directly comparable surface representation. The potential utility of this method is demonstrated using a series of PDE5 inhibitors considered to have similar shape. The method shows promise in its capability to handle different conformers, and compares well to existing shape similarity methods. The code and data used to produce the results are available at: [https://github.com/RPirie96/KQMolSA](https://github.com/RPirie96/KQMolSA).
keywords: Riemannian Geometry, Kahler Quantization, Molecular Shape, Ligand-Based Virtual Screening
## 1 Introduction and Summary of Part I
The concept that shared biological activity exists between similar molecules is used widely in drug discovery [2]. Molecules with known activity can be used as templates to screen large databases for other potential hits. This is more efficient and allows coverage of a greater area of chemical space than is possible with experimental screening alone [3]. Estimating similarity between molecules based on their 3D shape has gained popularity due to the requirement for protein-drug shape complementarity to enable strong binding. However no fixed notion of shape exists. Instead, comparison relies on mathematical approximation of the molecule's shape based on its volume, distribution of atomic distances or surface (most commonly treated as the van der Waals or solvent accessible surface) [4].
In the accompanying paper [1], the RGMolSA method was presented. The descriptor developed there approximates the shape of the molecular surface using a simple nine-element vector containing the surface area and an approximation to the first eight non-zero eigenvalues of the ordinary Laplacian. The descriptor can be viewed as an approximation to the _Riemannian metric_, the underlying mathematical object that describes the shape of a surface. In this paper we present an entirely different method of approximating the Riemannian metric by using ideas from the theory of _Kahler quantization_; we call this method Kahler quantization for Molecular Surface Approximation (KQMolSA). The theory was originally developed by mathematicians and string theorists in order to give explicit representations of the shapes of 4-dimensional objects (Calabi-Yau manifolds) that appear in physical theories (see [5]
for the paper that pioneered its use as a numerical technique). In a nutshell, a function called the Kahler potential is associated to the metric. We then compute something analogous to a Taylor expansion of this function with the coefficients being stored in a Hermitian matrix. While the matrices themselves do depend upon the precise position and parameterisation of the molecular surface in three-dimensional space \(\mathbb{R}^{3}\), the dependence is easy to calculate. Hence we can perform our calculations in the 'quantized' space of Hermitian matrices and assign a distance between the shapes of two molecular surfaces this way. The final distance is independent of the position of the molecules and the choices made in their parameterisations.
### Summary of Previous Work
As in the accompanying paper [1], our approach begins by treating the molecule as a series of intersecting spheres, with their radii given by the van der Waals radii of the constituent atoms. The surface is assumed to have a genus of zero, so any rings (e.g. benzene) are replaced with a single sphere of radius 2.25 A to facilitate this. The molecular structure is then defined by the number of spheres \(N\) (with each ring counted as a single sphere, and excluding any hydrogen atoms), the centres \(c_{i}\) and radii \(r_{i}\) for each sphere and the adjacency matrix \(T\) describing intersection of spheres, where
\[T_{ij}=\left\{\begin{array}{ll}1&\mbox{if spheres $i$ and $j$ intersect}\\ 0&\mbox{otherwise (or $i=j$)}.\end{array}\right.\]
The surface area \(\mathcal{A}\) of the molecule is calculated as the area of each sphere minus the "missing parts" where two spheres intersect:
\[\mathcal{A}=2\pi\sum_{i}\left(2r_{i}^{2}-\left(r_{i}\sum_{j}T_{ij}|r_{i}- \lambda_{ij}|\right)\right). \tag{1}\]
This value is used to re-scale each of the starting constructs such that the surface area of the molecule is equal to that of a unit sphere (or \(4\pi\)) to address the observation that Riemannian geometry treats two objects which differ only in size as having equivalent shape. This re-scaling is accounted for in the final descriptors with some weighting so as not to dominate the similarity calculation.
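For concreteness, Equation (1) can be evaluated directly from the initial data. In the NumPy sketch below we read \(\lambda_{ij}\) as the distance from the centre \(c_{i}\) to the plane of the circle in which spheres \(i\) and \(j\) intersect; this interpretation of the companion paper's construction is an assumption.

```python
# A NumPy sketch of Equation (1); lambda_ij is our reading of [1].
import numpy as np


def surface_area(centres, radii, T):
    """centres: (N, 3) array; radii: (N,) array; T: (N, N) 0/1 adjacency."""
    area = 0.0
    for i in range(len(radii)):
        caps = 0.0
        for j in range(len(radii)):
            if T[i, j]:
                d = np.linalg.norm(centres[i] - centres[j])
                lam = (d ** 2 + radii[i] ** 2 - radii[j] ** 2) / (2.0 * d)
                caps += abs(radii[i] - lam)   # spherical cap lost to sphere j
        area += 2.0 * np.pi * (2.0 * radii[i] ** 2 - radii[i] * caps)
    return area
```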
From the initial data, a map is constructed to 'unwrap' the surface onto the complex plane \(\mathbb{C}\) in a process we refer to as piecewise stereographic projection. This requires an atom, which we refer to as the base sphere, to be selected as a starting point from which to construct our map. This is taken to be the atom closest to the centre of mass, found by first computing the centroid of the molecule and then taking the atom with the smallest Euclidean distance from this point. The _Riemannian metric_ \(g=\Phi_{ps}^{*}(g_{Euc})\) induced by the mapping \(\Phi_{ps}:\mathbb{C}\rightarrow\mathcal{S}\subset\mathbb{R}^{3}\) takes the form
\[g=\left\{\begin{array}{ll}\frac{4r_{B}^{2}}{(1+|z|^{2})^{2}}(dx^{2}+dy^{2})&\mbox{if $z\in\mathcal{C}$},\\ \frac{C_{1}}{(|z-A_{1}|^{2}+B_{1})^{2}}(dx^{2}+dy^{2})&\mbox{if $z\in\mathbb{D}(a_{1},R_{1})$},\\ \frac{C_{2}}{(|z-A_{2}|^{2}+B_{2})^{2}}(dx^{2}+dy^{2})&\mbox{if $z\in\mathbb{D}(a_{2},R_{2})$},\\ \vdots&\vdots\\ \frac{C_{N-1}}{(|z-A_{N-1}|^{2}+B_{N-1})^{2}}(dx^{2}+dy^{2})&\mbox{if $z\in\mathbb{D}(a_{N-1},R_{N-1})$},\end{array}\right. \tag{2}\]
where \(r_{B}\) is the radius of the base sphere and
\[\mathcal{C}=\mathbb{C}\backslash\mathbb{D}(a_{1},R_{1})\cup\mathbb{D}(a_{2},R _{2})\cup\cdots\cup\mathbb{D}(a_{N-1},R_{N-1}),\]
is the complement of the discs \(\mathbb{D}(a_{1},R_{1}),\ldots,\mathbb{D}(a_{N-1},R_{N-1})\) which corresponds to the points in the base sphere.
The RGMolSA descriptor uses the explicit form of the Riemannian metric provided by piecewise stereographic projection to approximate the low-lying eigenfunctions of the Laplacian \(\Delta\). In [1], we compared the RGMolSA descriptor for Sildenafil, Vardenafil and Tadalafil, a series of PDE5 inhibitors that are known to occupy a similar volume in the binding pocket of their target protein, and thus have similar shape (Figure 1) [6]. Vardenafil is a classic example of a "me-too" drug, where only a few small modifications have been made to the structure of Sildenafil. As these are both highly similar chemically, they would be expected to have close to the same shape. Tadalafil on the other hand is chemically quite different from the other two, but inspection of the molecules in the pocket of PDE5 reveals they occupy a similar binding pose, and thus would also be expected to have similar shape. In this article, for ease of comparison with the previous work [1], we again use these three molecules as the basis for investigating the new shape descriptor.
While RGMolSA was found to give a good description of shape, it has a possible deficiency due to the dependence of the results on the choice of base sphere, which in turn determines the trial functions for calculating the integrals used to construct the descriptor. The geometry of the surface near the base sphere is well described, but for atoms further away a greater number of eigenvalues would be needed for accurate description of the surface. This problem is greater for larger molecules and can lead to the introduction of numerical errors when the molecule is large enough. We handled such errors by ignoring any contributions from regions with numerical radii less than \(10^{-9}\); however, this forces a somewhat artificial 'locality' upon the shape descriptor meaning that it probably only accurately captures the shape near to the base sphere.
In the following section we outline the theory underpinning the KQMolSA descriptors, that again uses the Riemannian metric to approximate the geometry of the surface. The resulting descriptors lie in the manifold \(GL(N,\mathbb{C})/U(N)\) to give a global descriptor of molecular geometry with reduced dependence on the starting position. Figure 2 summarises the steps in computing these, using Sildenafil as an example. While the descriptor itself does depend upon the choices made and the position of the surface within \(\mathbb{R}^{3}\), this is easily accounted for within the space \(GL(N,\mathbb{C})/U(N)\). This makes computing the 'distance' between the shape descriptors particularly straightforward.
## 2 The Mathematics of Kahler Quantization
### Overview of the Theory
We should say immediately that the theory of Kahler quantization is far too advanced to be able to detail in the current paper. For readers with sufficient mathematical background, a good account (and the original account of its use as a numerical technique) is given in [5]. An exposition, aimed at readers with a general scientific background, of the mathematical theory is currently being written by two of the authors [7].
The theory is concerned with the geometry of _complex manifolds_ (shapes that locally look like \(\mathbb{C}^{n}\)); any surface that sits in \(\mathbb{R}^{3}\) is a complex manifold as it locally looks like a copy of the complex numbers \(\mathbb{C}\) (i.e. \(n=1\)). More concretely, we will be concerned with the surfaces that are topologically equivalent to \(\mathbb{S}^{2}\); in the language of complex manifolds, the sphere is often referred to as the _Riemann Sphere_ and denoted \(\mathbb{CP}^{1}\). The restriction on the topology of the surface is justified by the fact that chemists do not expect any activity in the centre of rings occurring in most
Figure 1: PDE5 inhibitors Sildenafil, Vardenafil and Tadalafil of known shape similarity. Tadalafil (different chemical structure, similar shape) is an example of a scaffold hop from the first-in-class drug Sildenafil, and offers greatly improved performance, while Vardenafil (a “me-too” follow-up drug) only offers minor improvements.
drug-like molecules. The exceptions to this are macrocyclic molecules (those with large rings of more than 12 atoms) where genuine activity occurs in the centre of the ring. Such molecules are therefore excluded from comparison by both methods proposed.
The natural class of functions to work with when dealing with complex manifolds are those that are complex differentiable, often called _holomorphic_ functions. We consider a general complex manifold \(X\); unfortunately, if the manifold \(X\) is compact, the only holomorphic functions \(f:X\to\mathbb{C}\) are constant. Thus we cannot hope to understand \(X\) simply by studying the holomorphic functions on \(X\). A generalisation of the notion of a holomorphic function is that of a section of a holomorphic line bundle \(L\) with base \(X\). For readers familiar with the theory, a function is a section of the trivial bundle. A line bundle is positive if there is a Hermitian metric \(h\) on \(L\) with positive curvature. A foundational result of Kodaira [8] says that if the line bundle \(L\) is positive then, for large enough \(k\), the tensor power \(L^{k}\) has a lot of holomorphic sections. In fact, the space of all such sections, denoted \(H^{0}(L^{k})\), is a complex vector space whose dimension has order \(O(k^{n})\) as \(k\to\infty\).
The curvature of a positively curved Hermitian metric \(h\) gives rise to an object called a _Kahler form_, \(\omega\), which in turn gives rise to a Riemannian metric \(g\) (the mathematical object being used in [1] to describe shape). It turns out that the set of all positively curved Hermitian metrics on a line bundle \(L\) can be identified with the set of all real-valued functions \(\varphi:X\to\mathbb{R}\) that satisfy, in some local coordinate \(z\), the \(\partial\bar{\partial}\)-equation
\[\sqrt{-1}\partial\bar{\partial}\varphi=\omega-\omega_{0}\]
where \(\omega\) is the Kahler form of the metric and \(\omega_{0}\) is a fixed reference Kahler form. We will give more detail on the differential operators \(\partial\) and \(\bar{\partial}\) in Section 2.3; in particular, we will explain that in the molecular surface setting, the \(\partial\bar{\partial}\)-equation is really just the familiar Poisson equation in the plane. The function \(\varphi\) is called a _Kahler potential_ for \(\omega\). The associated potential is not unique but any two differ by a constant; this does not affect the metric which is constructed by taking two derivatives of the potential. However, we will see that the addition of a constant to a potential will have the affect of scaling the Hermitian matrix we produce as a shape descriptor by a positive real number and we will be required to find the 'optimal' rescaling in our distance calculation.
To summarise, what we have for a positive Hermitian line bundle \((L,h)\to X\) are:
* a Kahler form \(\omega\) and a Kahler potential \(\varphi:X\to\mathbb{R}\),
* a complex vector space \(H^{0}(L^{k})\).
Figure 2: Key steps involved in the computation of the KQMolSA surface descriptor for Sildenafil (a PDE5 inhibitor).
What Kahler quantization amounts to is relating the geometry described by the Kahler potentials (an infinite dimensional space of functions) to the finite dimensional complex vector space \(H^{0}(L^{k})\). This theme occurs throughout numerical analysis and shape description, for example in the theories of Fourier analysis, spherical harmonics, Taylor series, all of which produce a finite-dimensional vector space out of some infinite-dimensional set of functions.
### Quantization and Tian's Theorem
The data \((L,h)\to X\) allows for a natural \(\mathcal{L}^{2}\)-inner product on the vector space of sections \(H^{0}(L^{k})\). Given sections \(s_{1},s_{2}\in H^{0}(L^{k})\), we compute
\[\langle s_{1},s_{2}\rangle:=\int_{X}h_{k}(s_{1},s_{2})\frac{\omega^{n}}{n!},\]
where \(h_{k}\) is the Hermitian metric induced on \(L^{k}\) by \(h\), and \(\omega^{n}/n!\) is the volume element produced by the Kahler form. It is this inner product that is the quantization of the data \((L,h)\to X\). The space of all (Hermitian) inner products on a complex \(N\)-dimensional vector space can be thought of as \(GL(N;\mathbb{C})/U(N)\). This is a negatively curved symmetric space and has a natural notion of distance on it; it is this distance that we will use to measure shape similarity (see Section 2.5).
To recover the geometry defined by \((L,h)\to X\) from the quantization, we choose a basis \(\{s_{j}\}\) of the vector space \(H^{0}(L^{k})\) which gives rise to the matrix representation of the inner product
\[\mathbb{M}_{ij}:=\langle s_{i},s_{j}\rangle.\]
If we let \(v\) be the vector of sections
\[v=\left(s_{1},s_{2},\dots s_{N}\right),\]
then we can define a Kahler potential (recalling that the sections are locally defined holomorphic functions) \(\widetilde{\varphi}\) by
\[\widetilde{\varphi}:=-\frac{1}{k}\log\left(v^{*}\mathbb{M}^{-1}v\right).\]
**Theorem 2.1** (Tian, [9]).: _Let \((X,L,h)\) be a complex manifold with holomorphic line bundle \(L\) and positively curved Hermitian metric \(h\) with curvature \(\omega\). If we produce another Kahler form_
\[\widetilde{\omega}=\omega_{0}+\sqrt{-1}\partial\bar{\partial}\tilde{\varphi},\]
_then_
\[\|\omega-\widetilde{\omega}\|_{C^{0}}=O(k^{-2}).\]
Paraphrasing this theorem, we can say any Kahler form coming from a Kahler potential \(\varphi\) can be well approximated by the Kahler form coming from the 'algebraic' function \(\widetilde{\varphi}\). If we pick local complex coordinates \(z_{1},z_{2},\dots,z_{n}\) then the term \(v^{*}\mathbb{M}^{-1}v\) is just a power series in the coordinates. In the case of a molecular surface, we will have something like a polynomial. This is the sense in which the function \(\widetilde{\varphi}\) is similar to a truncated Taylor series for the original function \(\varphi\). The theorem then says that this series really does converge.
Tian's Theorem is stated for smooth metrics (those where one can take an arbitrary number of derivatives of the Kahler potential \(\varphi\)); in practice (see Section 2.3), we will be working with metrics whose potentials are in \(C^{2}(X)\), that is, twice continuously differentiable. The theory of approximating such metrics algebraically has not been written down, but we will demonstrate that we get a method that does produce meaningful shape comparisons. We expect that, suitably adapted to this setting, something like Tian's Theorem is still true; for example, the case of potentials with lower regularity is discussed in [10].
### Implementation in Practice
As mentioned already, in practice we take \(X=\mathbb{CP}^{1}\) the Riemann sphere and the line bundle to be the anticanonical bundle \(K^{*}_{\mathbb{CP}^{1}}=\mathcal{O}(2)\). The Kahler form \(\omega\), can be explicitly constructed from the Riemannian metric \(g\), and in the coordinates furnished by the piecewise stereographic projection map \(\Phi_{ps}\), we can use the form of the metric (2) to get
\[\omega=F(z)\sqrt{-1}dz\wedge d\overline{z},\]
where \(F:\mathbb{C}\to\mathbb{R}_{+}\) is the'metric function' given by
\[F(z)=\left\{\begin{array}{cc}\frac{2r_{B}^{2}}{(1+|z|^{2})^{2}}&\quad\text{if }z \in\mathcal{C},\\ \frac{C_{1}}{(|z-A_{1}|^{2}+B_{1})^{2}}&\quad\text{if }z\in\mathbb{D}(a_{1},R_{1}), \\ \frac{C_{2}}{(|z-A_{2}|^{2}+B_{2})^{2}}&\quad\text{if }z\in\mathbb{D}(a_{2},R_{2}), \\ \vdots&\quad\vdots\\ \frac{C_{N-1}}{(|z-A_{N-1}|^{2}+B_{N-1})^{2}}&\quad\text{if }z\in\mathbb{D}(a_{N-1},R_{N-1}). \end{array}\right. \tag{3}\]
Note we have replaced, in the metric \(g\), the real symmetric 2-tensor \(dx^{2}+dy^{2}\) with the antisymmetric form \((\sqrt{-1}/2)dz\wedge d\bar{z}\), where \(dz=dx+\sqrt{-1}dy\) and \(d\bar{z}=dx-\sqrt{-1}dy\).
To find the Kahler potential \(\varphi:\mathbb{C}\to\mathbb{R}\), we solve the '\(\partial\overline{\partial}\)-equation'
\[\omega=\sqrt{-1}\partial\overline{\partial}\varphi.\]
If we consider the complex differential operators
\[\frac{\partial}{\partial z}=\frac{1}{2}\left(\frac{\partial}{\partial x}- \sqrt{-1}\frac{\partial}{\partial y}\right)\qquad\mathrm{and}\qquad\frac{ \partial}{\partial\bar{z}}=\frac{1}{2}\left(\frac{\partial}{\partial x}+\sqrt {-1}\frac{\partial}{\partial y}\right),\]
then the \(\partial\overline{\partial}\)-equation is equivalent to solving the Poisson equation
\[\frac{\partial^{2}\varphi}{\partial z\partial\overline{z}}=\frac{1}{4}\Delta _{Euc}\varphi=F,\]
where \(\Delta_{Euc}\) is the usual 2-dimensional Laplacian. We can solve the Poisson problem explicitly to find \(\varphi\). The solution can be thought of as having two parts: a 'local' part that is found by simply observing that
\[\frac{\partial^{2}}{\partial z\partial\overline{z}}\left(\frac{C\log(|z-A|^{ 2}+B)}{B}\right)=\frac{C}{(|z-A|^{2}+B)^{2}},\]
and a 'correction term', named thus as the term is needed to ensure the function is in \(\mathrm{C}^{2}(\mathbb{C})\). The correction term is a linear combination of functions of the form
\[\log(|\alpha z+\beta|^{2}),\]
where we get one term for each sphere. As each of the correction terms is a harmonic function, that is
\[\Delta\log(|\alpha z+\beta|^{2})=0,\]
the addition of the correction terms is still a solution of the Poisson equation. It would appear the correction terms are singular at the points \(z=-\beta/\alpha\); however, these points always lie outside the disc where the function takes this particular form. We record the form of the potential as a theorem and refer the reader to the appendix (Section 5) for a derivation of the solution.
**Theorem 2.2** (Form of Kahler potential).: _Let \(g\) be of the form Equation (2). In the region associated to the \(i^{th}\) sphere, the Kahler potential can be written_
\[\varphi(z)=\frac{C_{i}}{B_{i}}\log(|z-A_{i}|^{2}+B_{i})+\sum_{j=1}^{N}\mathcal{ K}_{ij}\log(|\alpha_{ij}z+\beta_{ij}|^{2}),\]
_where \(\mathcal{K}\in M^{N\times N}(\mathbb{R})\), and \(\alpha,\beta\in M^{N\times N}(\mathbb{C})\)._
The matrices \(\mathcal{K},\alpha,\) and \(\beta\) in the previous theorem are easily calculated from the geometric data associated to the molecule and so it is straightforward to describe the Kahler potential explicitly.
The space of global sections \(H^{0}(\mathcal{O}(2k))\cong\mathbb{C}^{2k+1}\) can be identified with the span of the functions
\[\langle 1,z,z^{2},\ldots,z^{2k}\rangle.\]
Thus the shape descriptor associated to the surface is the \((2k+1)\times(2k+1)\) Hermitian matrix \(\mathbb{M}\) where (considering indices that run from 0 to \(2k\))
\[\mathbb{M}_{ij}=\iint_{\mathbb{C}}z^{i}\overline{z}^{j}e^{-k\varphi}F(z) \sqrt{-1}dz\wedge d\overline{z}. \tag{4}\]
### Computing the Relevant Integrals
A naive numerical calculation of the integrals described by Equation (4) gives rise to two obvious problems: firstly, the domain of integration is unbounded (being the whole complex plane \(\mathbb{C}\)); secondly, the domains and values describing the metric and the Kahler potential \(\varphi\) could become so small that numerical instabilities start to dominate the contribution of the associated atom. The second problem has been discussed as a limitation in the approximation of the spectrum of the Laplacian [1]. In this paper, we exploit the fact that the automorphism group of \(\mathbb{CP}^{1}\) is the group of Mobius transformations, \(PSL(2,\mathbb{C})\); we can use elements of this group to ensure the coordinates we perform calculations in are always in a numerically controlled region (here we use a unit disc).
Put more concretely, let \(m\in\{1,2,\ldots,N\}\) index the \(m^{th}\) sphere making up the molecular surface, then there is an element \(\mathcal{T}_{m}\in PSL(2,\mathbb{C})\) that maps the unit disc
\[\mathbb{D}=\{z\in\mathbb{C}\mid|z|<1\},\]
onto the region \(\mathbb{D}(a_{m},R_{m})\) from Equation (2). We note that if the \(m^{th}\) sphere has level \(l\), then the pre-image of the regions corresponding to level \((l+1)\) spheres which intersect the \(m^{th}\) sphere will describe certain discs properly contained in \(\mathbb{D}\). Hence the contribution of the \(m^{th}\) sphere to the matrix described by Equation (4) is given by
\[\iint_{\mathbb{D}-\hat{D}}(\mathcal{T}_{m}(w))^{i}(\overline{\mathcal{T}_{m} (w)})^{j}e^{-k\varphi(\mathcal{T}_{m}(w))}F(\mathcal{T}_{m}(w))\ d\mathcal{T}_ {m}(w)\wedge d\overline{\mathcal{T}_{m}(w)}, \tag{5}\]
where \(\hat{D}\) represents the union of the discs corresponding to the next level spheres intersecting the \(m^{th}\) sphere. In practice, we account for these higher-level spheres by assigning the value \(0\) to the volume form \(F(\mathcal{T}_{m}(w))\ d\mathcal{T}_{m}(w)\wedge d\overline{\mathcal{T}_{m}(w)}\) whenever \(w\in\hat{D}\) (note this produces a jump discontinuity in the volume form). Numerical calculation of integrals of the form of Equation (5) is done by splitting into an angular and radial direction and then performing successive applications of the trapezium rule; we choose a radial step size corresponding to \(n_{r}=15\) integration points and an angular step size corresponding to taking \(n_{\theta}=10\) points. This seems to achieve a reasonable accuracy; for example, one can check the area integral for a given integration scheme. We have also determined that the distance between shape descriptors does not seem to be significantly changed by taking smaller step sizes (Section 3.1).
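A self-contained sketch of this quadrature scheme is given below; the integrand passed to `disc_integral` is illustrative, and in practice it would be the pulled-back integrand of Equation (5) with the volume form zeroed on \(\hat{D}\).

```python
# Nested trapezium rules on the unit disc in polar coordinates,
# with n_r = 15 radial and n_theta = 10 angular points as in the text.
import numpy as np


def _trap_weights(x):
    """Composite trapezium-rule weights for the sample points x."""
    w = np.empty_like(x)
    w[1:-1] = (x[2:] - x[:-2]) / 2.0
    w[0] = (x[1] - x[0]) / 2.0
    w[-1] = (x[-1] - x[-2]) / 2.0
    return w


def disc_integral(f, n_r=15, n_theta=10):
    """Approximate the integral of f(w) over the unit disc, w complex."""
    r = np.linspace(0.0, 1.0, n_r)
    theta = np.linspace(0.0, 2.0 * np.pi, n_theta)
    R, TH = np.meshgrid(r, theta, indexing="ij")
    vals = f(R * np.exp(1j * TH)) * R           # Jacobian: r dr dtheta
    return _trap_weights(r) @ vals @ _trap_weights(theta)


# sanity check: the area of the unit disc is pi
print(disc_integral(lambda w: np.ones(w.shape)))
```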
### Finding the Distance Between Shape Descriptors
Given two positive definite Hermitian matrices \(\mathbb{M}_{1},\mathbb{M}_{2}\), such as those generated by Equation (4), there are innumerable ways of defining a notion of distance between such matrices. With regards to the theory of Kahler quantization, it is natural to consider \(\mathbb{M}_{1},\mathbb{M}_{2}\) as two Hermitian inner products on the fixed complex vector space \(H^{0}(\mathcal{O}(2k))\). This space is naturally seen as the manifold \(GL(2k+1;\mathbb{C})/U(2k+1)\). An inner product is specified by declaring a particular basis to be orthonormal; any basis conjugate under the action of \(U(2k+1)\) defines the same inner product. This space has a natural distance on it; one characterisation of this distance is that shortest paths (geodesics) are given by one-parameter subgroups of \(GL(2k+1;\mathbb{C})\), that is by paths of matrices of the form \(\exp(tA)\) where \(A\) is some \((2k+1)\times(2k+1)\) complex matrix.
More explicitly, if \(\{v_{1},v_{2},\ldots,v_{2k+1}\}\) is a basis of \(H^{0}(\mathcal{O}(2k))\) such that both inner products are represented by diagonal matrices
\[\mathbb{M}_{1}=\mathrm{Diag}\left(e^{\lambda_{1}},e^{\lambda_{2}},\ldots,e^{ \lambda_{2k+1}}\right),\qquad\mathbb{M}_{2}=\mathrm{Diag}\left(e^{\mu_{1}},e^ {\mu_{2}},\ldots,e^{\mu_{2k+1}}\right),\]
then
\[d(\mathbb{M}_{1},\mathbb{M}_{2})=k^{-\frac{3}{2}}\sqrt{\sum_{i=1}^{2k+1}( \lambda_{i}-\mu_{i})^{2}}. \tag{6}\]
The factor of \(k^{-3/2}\) ensures that the distances stabilise as \(k\to\infty\) (see Theorem 1.1 in [11]). It will be useful to consider the following more compact form for the distance
\[d(\mathbb{M}_{1},\mathbb{M}_{2})=k^{-\frac{3}{2}}\sqrt{\sum_{i=1}^{2k+1}(\log (\eta_{i}))^{2}}, \tag{7}\]
where \(\{\eta_{i}\}\) are the eigenvalues of the matrix \(\mathbb{M}_{1}^{-1}\mathbb{M}_{2}\).
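Equation (7) translates directly into a few lines of NumPy; the helper below assumes \(\mathbb{M}_{1}\) and \(\mathbb{M}_{2}\) are numerically positive definite, so that the eigenvalues of \(\mathbb{M}_{1}^{-1}\mathbb{M}_{2}\) are real and positive.

```python
# A direct transcription of Eq. (7).
import numpy as np


def descriptor_distance(M1, M2, k):
    """Distance between two shape descriptors for a fixed parameterisation."""
    eta = np.linalg.eigvals(np.linalg.solve(M1, M2)).real
    return k ** (-1.5) * np.sqrt(np.sum(np.log(eta) ** 2))
```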
It is a well-known fact that the automorphism group of the Riemann sphere \(\mathbb{CP}^{1}\) is the group of Mobius transformations \(PSL(2,\mathbb{C})\). Roughly speaking, the subgroup \(PSU(2)\subset PSL(2,\mathbb{C})\) corresponds to rotations of the original surface and the remaining maps correspond to reparameterisations that preserve the complex structure. If \(\varpi\in PSL(2,\mathbb{C})\) is an automorphism of the form
\[\varpi(z)=\frac{\alpha z+\beta}{\gamma z+\delta},\]
then \(\varpi\) also acts on the vector space \(H^{0}(\mathcal{O}(2k))\). In representation theoretic terms, this action is the representation induced on \(\mathrm{Sym}_{2k}(\mathbb{C}^{2})\) by the standard representation of \(SL(2,\mathbb{C})\). If we denote the element of \(SL(2k+1,\mathbb{C})\) by \(\vartheta(\varpi)\) (see [12], Lemma 8) and the original shape descriptor computed in the \(z\)-coordinate by \(\mathbb{M}\), then the shape descriptor computed in the \(\varpi(z)\)-coordinate will be
\[(\vartheta(\varpi))^{*}\,\mathbb{M}\left(\vartheta(\varpi)\right).\]
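As a worked example (our own, for \(k=1\)): sections of \(\mathcal{O}(2)\) are spanned by \(1,z,z^{2}\) and transform as \(s(z)\mapsto(\gamma z+\delta)^{2}s(\varpi(z))\), which yields the explicit \(3\times 3\) matrix \(\vartheta(\varpi)\) below.

```python
# Our own worked example for k = 1: the induced action of a Mobius map
# w(z) = (a z + b)/(c z + d), with ad - bc = 1, on the basis (1, z, z^2).
import numpy as np


def induced_rep_k1(a, b, c, d):
    """theta(varpi) on H^0(O(2)); columns are the images of 1, z, z^2."""
    return np.array([
        [d * d,     b * d,         b * b],
        [2 * c * d, a * d + b * c, 2 * a * b],
        [c * c,     a * c,         a * a],
    ], dtype=complex)


def transform_descriptor(M, a, b, c, d):
    """The transformed descriptor theta^* M theta (up to index conventions)."""
    theta = induced_rep_k1(a, b, c, d)
    return theta.conj().T @ M @ theta
```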
As mentioned in Section 2, the fact that the Kahler potential is only defined up to the addition of a constant means we can also scale the Hermitian matrix \(\mathbb{M}\) by a positive constant. Hence our calculation of distance between two shape descriptors \(\mathbb{M}_{1}\) and \(\mathbb{M}_{2}\) becomes the concrete problem of minimising, over \((p,\vartheta)\in\mathbb{R}\times SL(2,\mathbb{C})\),
\[\zeta(p,\vartheta)=\sum_{i=1}^{2k+1}(\log(\eta_{i}))^{2},\]
where \(\{\eta_{i}\}\) are the eigenvalues of the matrix \(\mathbb{M}_{1}^{-1}e^{p}\left(\vartheta(\varpi)\right)^{*}\mathbb{M}_{2} \left(\vartheta(\varpi)\right)\).
It is easy to see that the value of \(p\) at a critical point of \(\zeta\) is independent of the element \(\vartheta\). Elementary calculus yields that the value of \(p\) is given by
\[p=-\frac{1}{2k+1}\sum_{i=1}^{2k+1}\log(\tilde{\eta}_{i}),\]
where \(\{\tilde{\eta}_{i}\}\) are the eigenvalues of the matrix \(\mathbb{M}_{1}^{-1}\mathbb{M}_{2}\). As the matrix \((\vartheta(\varpi))\) has unit determinant, the value of \(p\) does not depend up the \(SL(2,\mathbb{C})\) action on the Hermitian matrix \(\mathbb{M}_{2}\). We thus reduce the distance calculation to a minimisation over the six-dimensional Lie group \(SL(2,\mathbb{C})\).
Note that the distance between the shape descriptors given by Equation (6) is the distance between the molecular shapes _after_ they have been re-scaled to have area \(4\pi\). Hence the distance between two molecular surfaces \(\mathcal{S}_{1}\) and \(\mathcal{S}_{2}\) should include a component to reflect the difference in area between \(\mathcal{S}_{1}\) and \(\mathcal{S}_{2}\). As we are interested in producing a similarity score rather than a distance between two inputs, we do not take this point up further in the article. Our initial attempts at creating a similarity score are detailed in the subsequent section.
The remaining minimisation over \(SL(2,\mathbb{C})\) is done by parameterising a generic matrix by the \(6\) real variables \(x_{1},...x_{6}\) and taking
\[\varpi(x_{1},x_{2},\ldots,x_{6})=\left(\begin{array}{cc}x_{1}+\sqrt{-1}x_{2 }&x_{3}+\sqrt{-1}x_{4}\\ x_{5}+\sqrt{-1}x_{6}&*\end{array}\right),\]
where \(*\) is chosen to ensure \(\det(\varpi)=1\). To perform the minimisation, we use algorithms that do not require the input of a gradient vector, such as Nelder-Mead or Powell methods [13]. These are implemented using off-the-shelf packages in SciPy [14]. We found that for \(k=1\) there was very little difference between the results for either method; the minimisation algorithm converges to produce a robust distance value. For \(k=2\) the minimisation methods appear to be a little less stable and occasionally did not converge. One way around this was to use the element of \(SL(2,\mathbb{C})\) found by the \(k=1\) minimisation as the initial guess for the \(k=2\) step (otherwise the identity matrix was used). We anticipate that one might be able to improve this process; for example, by computing the gradient of the function to be minimised explicitly and then using this in an algorithm such as conjugate gradient descent.
One further consideration in implementing the distance measure between two matrices concerns shape descriptors for \(k>2\) (and for \(k=2\) in some cases), where numerical instability exists within the method. Occasionally matrices are produced that are not numerically positive definite and cannot be compared using the above approach. As Hermitian matrices that differ only by scale can be considered equivalent, such cases have been treated by scaling one matrix by a factor of 10, 100 or 1000 as needed in order to bring the eigenvalues into a numerically workable range in Python.
## 3 Initial Case Study: Phosphodiesterase 5 (PDE5) Inhibitors
### Tuning the Parameters \(\mathbf{n_{r}}\) and \(\mathbf{n_{\theta}}\)
To determine the effect of varying the parameters \(n_{r}\) and \(n_{\theta}\) (Section 2.4) on the quality of the shape descriptors produced, we considered three sets of parameters: \(n_{r}=200\) and \(n_{\theta}=100\); \(n_{r}=50\) and \(n_{\theta}=25\); \(n_{r}=15\) and \(n_{\theta}=10\). The distances produced between the descriptor for each set and the area returned during the computation of the relevant integrals (which should be \(\sim 12.57\) for an accurate descriptor, as constrained by the choice of scaling the surface area to \(4\pi\)) are reported here for Sildenafil (Table 1), Vardenafil (Table 2) and Tadalafil (Table 3).
As these distances are small in each case, there is no significant loss of quality when the number of points considered is reduced. The areas for both Sildenafil and Vardenafil are also close to 12.57, indicating high quality descriptors. The area for Tadalafil is overestimated slightly, however this is due to an issue with the replacement of the rings for motifs with a 5-membered ring between two other rings rather than the choice of \(n_{r}\) and \(n_{\theta}\). Similar results were observed for the consideration of \(k=2\). As the quality is unaffected, the minimum parameters of \(n_{r}=15\) and \(n_{\theta}=10\) were used in the final descriptors to increase the speed of calculation.
### Constructing a Similarity Score
In order to facilitate familiar comparison of molecules, we wish to construct a similarity score rather than simply taking the distance between two matrices. In chemoinformatics, this score typically takes a value between 0 (no similarity) and 1 (identical) [4]. To achieve this we take the inverse distance, and account for size by taking the ratio of two surface areas. Equation 8 gives the similarity score between two molecular surfaces \(\mathcal{S}_{1}\) and \(\mathcal{S}_{2}\),
\[score(\mathcal{S}_{1},\mathcal{S}_{2})=x(A_{min}/A_{max})+y\frac{1}{1+d( \mathbb{M}_{1},\mathbb{M}_{2})}, \tag{8}\]
where \(A_{min}\) is the smaller of the two surface areas, and \(A_{max}\) is the larger, in order to give a score bounded by 0 and 1. We therefore need to choose an appropriate set of weights \(x\) and \(y\) such that \(x+y=1\), and \(x<0.5\), to ensure the shape is the primary contributor to the score.
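Equation (8) is a one-line computation; the default weights in the sketch below anticipate the choice \(x=0.3\), \(y=0.7\) made later in this section.

```python
# Eq. (8) as a function of the two surface areas and the descriptor distance.
def similarity_score(area1, area2, distance, x=0.3, y=0.7):
    a_min, a_max = min(area1, area2), max(area1, area2)
    return x * (a_min / a_max) + y / (1.0 + distance)
```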
Table 4 gives the resulting similarity scores for pairwise comparison of the PDE5 inhibitors. In all three cases, the similarity increases with increasing contribution from the surface area term as expected. The increase for Sildenafil-Vardenafil is only small, while for Tadalafil there is a greater effect of including the area. Final weights of \(x=0.3\) and \(y=0.7\) were selected to balance the contribution of the surface area without it dominating over the shape contribution. The PDE5 inhibitors were selected for tuning due to their known similarity, however further refinement of
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline & **(200, 100)**, **area = 12.59** & **(50, 25)**, **area = 12.62** & **(15, 10)**, **area = 12.62** \\ \hline
**(200, 100)** & - & 0.032 & 0.032 \\ \hline
**(50, 25)** & 0.038 & - & 0.040 \\ \hline
**(15, 10)** & 0.038 & 0.040 & - \\ \hline \end{tabular}
\end{table}
Table 1: Computed distances between descriptors of Sildenafil generated using different values of \(n_{r}\) and \(n_{\theta}\) for \(k=1\). The area reported is that returned by the integration step.
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline & **(200, 100)**, **area = 14.32** & **(50, 25)**, **area = 14.37** & **(15, 10)**, **area = 14.37** \\ \hline
**(200, 100)** & - & 0.003 & 0.003 \\ \hline
**(50, 25)** & 0.003 & - & 0.001 \\ \hline
**(15, 10)** & 0.003 & 0.001 & - \\ \hline \end{tabular}
\end{table}
Table 2: Computed distances between descriptors of Vardenafil generated using different values of \(n_{r}\) and \(n_{\theta}\) for \(k=1\). The area reported is that returned by the integration step.
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline & **(200, 100)**, **area = 14.32** & **(50, 25)**, **area = 14.37** & **(15, 10)**, **area = 14.37** \\ \hline
**(200, 100)** & - & 0.003 & 0.003 \\ \hline
**(50, 25)** & 0.003 & - & 0.001 \\ \hline
**(15, 10)** & 0.003 & 0.001 & - \\ \hline \end{tabular}
\end{table}
Table 3: Computed distances between descriptors of Tadalafil generated using different values of \(n_{r}\) and \(n_{\theta}\) for \(k=1\). The area reported is that returned by the integration step.
these parameters with a larger set of examples may be required for full scale virtual screening.
### Investigating Variation in 3D Conformers
As discussed in the previous work [1], consideration of the different orientations a molecule can adopt (known as conformers) is important when using 3D shape descriptors. Conformers of the same molecule should theoretically have scores in the range \(0.7<score<1\), as high self-similarity is expected (scores above 0.7 are typically taken to indicate similarity in chemoinformatics), while retaining the ability to distinguish between them.
As with RGMolSA, two small sets of 10 conformers of the PDE5 inhibitors are used to investigate how KQMolSA treats different conformers. One set contains 10 random conformers, for which we would expect slightly more variance, while the other has 10 low-energy conformers, for which higher similarity is expected. Both sets were produced using the ETKDG algorithm [15] with energy optimisation using the MMFF94 force field [16], both implemented in RDKit [17], as sketched below. The minimum, maximum and average shape similarity, as well as the average RMSD (which compares conformers based on their atomic positions), for each set are given in Figure 3. The full set of RMSD and shape similarity comparisons are available in the **Supporting Data**.
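The conformer sets can be reproduced along the following lines with RDKit; the SMILES string and random seed below are placeholders, not the settings behind the reported sets.

```python
# ETKDG conformer generation with MMFF94 optimisation in RDKit.
from rdkit import Chem
from rdkit.Chem import AllChem

mol = Chem.AddHs(Chem.MolFromSmiles("CCO"))   # placeholder molecule
params = AllChem.ETKDGv3()
params.randomSeed = 42                        # illustrative seed
conf_ids = AllChem.EmbedMultipleConfs(mol, numConfs=10, params=params)
AllChem.MMFFOptimizeMoleculeConfs(mol, mmffVariant="MMFF94")
```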
The RMSD and shape similarity for each set are compared in the swarm plots shown in Figure 4. For \(k=1\), generally high similarity was observed, with some scores for the random conformers of Tadalafil falling slightly below 0.7. Greater variation is observed for \(k=2\), where some conformer pairs have scores below 0.6. This reduction in similarity is expected for \(k=2\), as the descriptors represent a more detailed approximation to the original surface than those for \(k=1\) and are hence more sensitive to differences in the geometry. However, the similarity scores obtained were on the whole lower than for RGMolSA, where the similarity between most conformer pairs is greater than 0.8 [1]. For the random sets, the similarity between conformers showed more variation than for RGMolSA, where clusters of similar conformers were observed. While KQMolSA does handle conformers well, RGMolSA appears to do a better job, owing to the insensitivity of the spectrum of the Laplace-Beltrami operator to surface deformation. For virtual screening, this treatment of conformers as similar negates the need for a pre-alignment step prior to shape similarity calculation, and may allow molecules that can deform to fit in the binding pocket to be identified as potential hits, where these would otherwise be classified as the wrong shape by methods that depend on atomic coordinates.
### Comparison to Existing Methods
The PDE5 inhibitor series was also used to investigate how well KQMolSA compares to the previous work, and to other open source shape similarity methods. Table 5 provides the shape-similarity scores observed between the PDE5 inhibitors for KQMolSA (for \(k=1\) and \(k=2\)), RGMolSA [1], USRCAT [18, 17], Shape-It [19] and MolSG [20]. A 2D representation, in the form of the 1024-bit Morgan fingerprint using radius 3, is also included. Each descriptor uses a similarity score between 0 (different) and 1 (identical).
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline
**x** & **y** & **Sildenafil-Vardenafil** & **Sildenafil-Tadalafil** & **Vardenafil-Tadalafil** \\ \hline
0 & 1 & 0.884 & 0.286 & 0.275 \\
0.1 & 0.9 & 0.892 & 0.340 & 0.328 \\
0.2 & 0.8 & 0.900 & 0.394 & 0.380 \\
0.3 & 0.7 & 0.908 & 0.449 & 0.432 \\
0.4 & 0.6 & 0.916 & 0.503 & 0.485 \\
0.5 & 0.5 & 0.924 & 0.557 & 0.537 \\ \hline \end{tabular}
\end{table}
Table 4: Similarity scores for the PDE5 inhibitors for surface area weight \(x\) (with corresponding distance weight \(y=1-x\)) ranging from 0 to 0.5.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|} \hline & **KQMolSA** (k=1) & **KQMolSA** (k=2) & **RGMolSA** & **USRCAT** & **Shape-It** & **MolSG** & **Morgan Fingerprint** \\ \hline Sildenafil-Vardenafil & 0.907 & 0.652 & 0.903 & 0.384 & 0.388 & 0.704 & 0.667 \\ \hline Sildenafil-Tadalafil & 0.449 & 0.482 & 0.809 & 0.269 & 0.278 & 0.746 & 0.201 \\ \hline Vardenafil-Tadalafil & 0.432 & 0.470 & 0.725 & 0.291 & 0.353 & 0.887 & 0.209 \\ \hline \end{tabular}
\end{table}
Table 5: Comparison of the work presented here (KQMolSA) to the previous work (RGMolSA) [1] and existing atomic-distance [18], atomic-centred [19] and molecular surface based [20] descriptors. In all cases the similarity scores given are bound by 0 (no similarity) and 1 (identical).
As discussed in the prequel to this paper, since Sildenafil and Vardenafil are close structural analogues they should display both high shape and fingerprint similarity. As Tadalafil is known to occupy a similar volume in PDE5 to the other inhibitors, we would also expect high shape similarity scores, but lower 2D similarity. One conformer of each molecule is considered for simplicity.
As for RGMolSA, Sildenafil and Vardenafil are scored as highly similar, with a score of 0.907 (\(k=1\)). However, Tadalafil is not scored as highly, and for KQMolSA would be classed as dissimilar if the typical threshold of 0.7 were used. Lower similarity is observed for \(k=2\), which is expected as discussed previously. The similarity score for \(k=2\) has a small dependence on the order of comparison (A compared to B yields a score which may differ at the second decimal place from B compared to A; Table 6). This is because the distance calculation involves a numerical minimisation procedure rather than an exact expression, but it has no practical implications for chemoinformatics applications. Both proposed methods (RGMolSA and KQMolSA) perform well in this simple study, with a higher predicted similarity for Sildenafil and Vardenafil than all the other 3D methods, and a more intuitive ordering of the relative similarity measures than MolSG. However, a full-scale benchmarking study will be required to verify their performance.
Figure 3: Overlay of the most and least shape-similar conformers of Sildenafil, Vardenafil and Tadalafil and the average shape similarity and RMSD for each set for (a) \(k=1\) and (b) \(k=2\). On average the conformers display a high degree of self-similarity despite the variance in atom-position similarity.
### Similarity to Potential Decoys
As for RGMolSA, we also wanted to check how the method handles molecules that should be classed as genuinely different from the PDE5 inhibitor molecules. We therefore present a comparison to four other molecules (Figure 5): Arginine (supplement), which has a lower molecular weight but a similar general shape (a long chain of spheres); Lymecycline (antibiotic), with a higher molecular weight and a four-ring motif potentially giving part of the molecule a similar shape to Sildenafil; Diflorasone (topical corticosteroid), which has a similar molecular weight and four rings, but a different therapeutic target/indication; and S-octylglutathione (oligopeptide), which again has a similar molecular weight, but no rings and the potential for similarity due to the branching in the centre of the molecule.
The results of this comparison are presented in Figure 6. Most of the scores obtained for both \(k=1\) and \(k=2\) fall significantly below the typical threshold of 0.7 for similarity, and as such these molecules would be classed as genuinely different and likely inactive against PDE5. The exception is the comparison between Tadalafil and Diflorasone, where a higher score of 0.74 (\(k=1\)) is obtained. Due to the similarity between their structures (both contain a motif of 4 fused rings), we would expect to see some similarity between the two. Inspection by eye of both the space filling model and surface of the two molecules also suggests they do have genuinely similar shapes (Figure 7). These were also classed as potentially similar by RGMolSA (similarity of 0.872).
## 4 Conclusion
We have outlined the theory underpinning an entirely novel shape descriptor,
\[\mathbb{M}_{ij}=\iint_{\mathbb{C}}z^{i}\overline{z}^{j}e^{-k\varphi}F(z) \sqrt{-1}dz\wedge d\overline{z}, \tag{9}\]
\begin{table}
\begin{tabular}{|l|c|c|c|} \hline & **Sildenafil** & **Vardenafil** & **Tadalafil** \\ \hline Sildenafil & - & 0.652 & 0.462 \\ \hline Vardenafil & 0.648 & - & 0.470 \\ \hline Tadalafil & 0.482 & 0.470 & - \\ \hline \end{tabular}
\end{table}
Table 6: Similarity scores for the PDE5 inhibitors for k=2 highlighting the dependence on the order of comparison.
Figure 4: Swarm plots of the RMSD (in Å) and shape similarity for our set of conformers highlight the general trend that different conformers are classed as having similar shape, despite significant variance in their atomic positions. Conformers with RMSD less than 1 Å are considered similar, while those over 3 Å have significant differences.
the \((2k+1)\times(2k+1)\) Hermitian matrix which captures the geometry of the molecular surface. The distance between two such matrix representations is then given as
\[d(\mathbb{M}_{1},\mathbb{M}_{2})=k^{-\frac{3}{2}}\sqrt{\sum_{i=1}^{2k+1}(\lambda _{i}-\mu_{i})^{2}}. \tag{10}\]
An overall similarity score of 1 for identical molecules and 0 for no similarity is then obtained as
\[score(\mathcal{S}_{1},\mathcal{S}_{2})=0.3(A_{min}/A_{max})+0.7\frac{1}{1+d( \mathbb{M}_{1},\mathbb{M}_{2})}. \tag{11}\]
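To make Equations (10) and (11) concrete, the following is a minimal sketch of the comparison step, assuming the two descriptors are supplied as Hermitian matrices of matching size together with their surface areas. Note that it applies the eigenvalue distance directly; the \(SL(2,\mathbb{C})\)-orbit minimisation used by the full method (discussed below) is omitted.

```python
import numpy as np

def descriptor_distance(M1, M2, k):
    """Eq. (10) applied directly to the eigenvalues of two Hermitian
    descriptors; the SL(2,C)-orbit minimisation of the full method
    is omitted here."""
    lam = np.sort(np.linalg.eigvalsh(M1))[::-1]
    mu = np.sort(np.linalg.eigvalsh(M2))[::-1]
    return k ** (-1.5) * np.sqrt(np.sum((lam - mu) ** 2))

def similarity_score(M1, M2, A1, A2, k):
    """Eq. (11): 0.3-weighted area ratio plus 0.7-weighted distance term."""
    d = descriptor_distance(M1, M2, k)
    return 0.3 * (min(A1, A2) / max(A1, A2)) + 0.7 / (1.0 + d)

# Toy usage with random (2k+1)x(2k+1) Hermitian matrices for k = 1
rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
M1 = A + A.conj().T
M2 = M1 + 0.01 * np.eye(3)   # small perturbation of M1
print(similarity_score(M1, M2, A1=14.32, A2=14.37, k=1))
```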
As with the previously reported work, the capabilities of KQMolSA were investigated using a series of PDE5 inhibitors known to have similar shape. The method handles conformers well, with similarity scores generally higher than \(0.7\). The scores obtained were higher for \(k=1\) than \(k=2\), which is expected as the greater detail leads to more sensitivity to changes in geometry. The insensitivity to deformation of the surface led to RGMolSA outperforming KQMolSA in this area. KQMolSA performs relatively well compared to existing methods, identifying Sildenafil and Vardenafil as highly similar, but assigning lower similarity scores to Tadalafil. This small study suggests that RGMolSA might still perform better, but a full retrospective benchmarking study is required to confirm this. Compared to RGMolSA, KQMolSA does have the advantage of a lower dependence on the choice of base sphere. There may therefore be some instances where the use of KQMolSA is more appropriate despite its seemingly poorer performance, for example in the consideration of long-chain molecules with few rings, where numerical errors are often observed for RGMolSA. Comparison to a set of potential decoy molecules yielded low scores for all except the comparison of Tadalafil to Diflorasone, which were also classed as similar by RGMolSA. Inspection by eye of both the space-filling and surface models of the molecules suggests that this assignment is reasonable, as they look similar in shape. Identification of such similarity evidences the potential for scaffold hopping by these methods.
Whilst the above tests suggest that the matrix \(\mathbb{M}\) does give a promising description of molecular shape, the method does have some drawbacks, primarily in the calculation of the distance between two descriptors. While the notion of the distance between two Hermitian inner products (represented by the matrices \(\mathbb{M}_{1}\) and \(\mathbb{M}_{2}\)) is well understood, the calculation of the distance between molecular surfaces requires minimising the distance to a point over an \(SL(2,\mathbb{C})\)-orbit. Despite the use of existing optimised minimisation algorithms, this process is still quite slow, depending on the extent of the required minimisation, and furthermore does not guarantee that the global minimum has been found. This step typically takes a few seconds per pair, compared to a near-instantaneous calculation for RGMolSA. Further refinement of this step would be required for use of the method in screening ultra-large chemical libraries as part of a drug discovery pipeline.
Figure 5: Chemical structures of potential decoy molecules.
Of course, there are many other ways of measuring the distance between two Hermitian matrices. One might hope that some form of machine learning, trained on an appropriate data set, might discern other useful geometries on the space of descriptors.
The method also suffers from numerical instability above \(k=2\) (and for \(k=2\) in a few instances), producing Hermitian matrices that are not numerically positive definite. As Hermitian matrices differing only by a scale factor can be considered equivalent, we have handled such cases by scaling one matrix by a factor of 10–1000 to bring the eigenvalues into the range of Python's numerical tolerance.
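A minimal sketch of this rescaling workaround is given below; the candidate scale factors are heuristic, and whether rescaling succeeds depends on where the tolerance check occurs in the downstream pipeline.

```python
import numpy as np

def rescale_descriptor(M, tol=1e-12, factors=(10.0, 100.0, 1000.0)):
    """Try scaling M until its eigenvalues clear the numerical tolerance;
    descriptors differing only by a scale factor are equivalent."""
    for factor in (1.0,) + factors:
        candidate = factor * np.asarray(M)
        if np.linalg.eigvalsh(candidate).min() > tol:
            return candidate
    raise ValueError("descriptor remains numerically indefinite after rescaling")
```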
Along with addressing these issues, both of the methods proposed could be further improved through the consideration of pharmacophoric features, such as aromatic rings and hydrogen bond donors and acceptors, alongside the shape. As these features are important for binding, this may lead to improved predictions compared to the consideration of shape alone. As for RGMolSA, there would also be scope to investigate the use of the Hermitian matrix descriptors produced by KQMolSA as feature descriptors in machine learning.
## 5 Appendix: Finding the Kähler potential
Before giving the proof of the form of the Kähler potential, we dispense with a small technical point. From the point of view of describing the Kähler form \(\omega\) via
\[\sqrt{-1}\partial\bar{\partial}\varphi=\omega,\]
the Kähler potential \(\varphi\) is only locally defined, and adding any function \(H\) satisfying \(\sqrt{-1}\partial\bar{\partial}H=0\)
will also define a Kähler potential for \(\omega\). In our setting, where the underlying complex manifold is \(\mathbb{CP}^{1}\) and we are using the standard coordinate \(z\), we can add any harmonic function \(H:\mathbb{C}\rightarrow\mathbb{R}\) to obtain a valid Kähler potential.
Figure 6: KQMolSA similarity (for \(k=1\) and \(k=2\)) of four ‘different’ molecules (blue) to the PDE5 inhibitor test series (red). The overlay of the structures was computed using Open3DAlign [21].
However, in Kähler Quantization, the potential \(\varphi\) actually describes a global object, the Hermitian metric \(h\) on the line bundle \(L\). This means that the functions
\[h(z^{j},z^{j})=e^{-k\varphi(z)}|z|^{2j},\]
are defined over the whole sphere \(\mathbb{C}\mathbb{P}^{1}\). In particular, they extend to functions over the point at infinity. For example, the round metric has Kähler potential \(\varphi=-2\log(|z|^{2}+1)\) and so, if we add a harmonic function \(H\), we require
\[\frac{|z|^{4k}}{(1+|z|^{2})^{2k}}e^{-kH}\]
to be bounded. The Liouville Theorem then implies \(H\) must be constant.
**Theorem 5.1** (Form of Kähler potential).: _Let \(\omega\) be a Kähler metric of the form given by Equation (3). If we denote the region corresponding to the \(i^{th}\) sphere as \(R_{i}\subset\mathbb{C}\), then the Kähler potential \(\varphi\), which satisfies \(\sqrt{-1}\partial\overline{\partial}\varphi=\omega\), is of the form_
\[\varphi(z)=\frac{C_{i}}{B_{i}}\log(|z-A_{i}|^{2}+B_{i})+\sum_{j=1}^{N}\mathcal{ K}_{ij}\log(|\alpha_{ij}z+\beta_{ij}|^{2}),\]
_where \(\mathcal{K}\in M^{N\times N}(\mathbb{R})\), and \(\alpha,\beta\in M^{N\times N}(\mathbb{C})\)._
Proof.: The proof is by induction on the number of spheres \(N\). For \(N=1\) the metric \(\omega\) is the round metric and we can take \(\mathcal{K}=0\). Adding a new sphere to the surface changes the metric by adding a new region \(R_{k}\) which is a disc where the metric takes the form
\[\omega(z)|_{R_{k}}=\frac{C_{k}}{(|z-A_{k}|^{2}+B_{k})^{2}}\sqrt{-1}dz\wedge d \overline{z}.\]
Figure 7: Comparison by eye of both the space filling model and the surface of Tadalafil and Diflorasone highlights their similarity.
We can map \(R_{k}\) to the unit disc about the origin by a Möbius transformation \(\mathcal{M}\) in such a way that, in the coordinate of the unit disc, the metric is given by
\[\widetilde{\omega}(w)=\left\{\begin{array}{cc}F(w)\sqrt{-1}dw\wedge d \overline{w}&\mathrm{if}\quad|w|>1,\\ \frac{\kappa}{(|w|^{2}+\varepsilon)^{2}}\sqrt{-1}dw\wedge d \overline{w}&\mathrm{if}\quad|w|\leq 1,\end{array}\right.\]
for some function \(F:\mathbb{C}\to\mathbb{R}\) and constants \(\kappa,\varepsilon\in\mathbb{R}\).
We solve the \(\bar{\partial}\)-equation using the Dolbeault method; for a compactly supported1 continuous function \(H:\mathbb{C}\to\mathbb{C}\),
Footnote 1: Our function is not compactly supported but we could cut off at an arbitrary radius to produce such a function.
\[\psi(w)=\frac{1}{2\pi\sqrt{-1}}\iint_{\mathbb{C}}\frac{H(p)}{p-w}dp\wedge d \overline{p},\]
solves \(\overline{\partial}\psi=H(w)d\overline{w}\). We split the integral according to the form of the metric and consider
\[\psi(w)=\frac{1}{2\pi\sqrt{-1}}\iint_{\mathbb{D}}\frac{\kappa}{(|p|^{2}+ \varepsilon)^{2}(p-w)}dp\wedge d\overline{p}+\frac{1}{2\pi\sqrt{-1}}\iint_{ \mathbb{C}\setminus\mathbb{D}}\frac{F(p)}{p-w}dp\wedge d\overline{p}.\]
To compute the first integral we use the Cauchy-Pompeiu integral formula and the fact that
\[\frac{\kappa}{(|p|^{2}+\varepsilon)^{2}}=\frac{\partial}{\partial\overline{p }}\left(\frac{(\kappa/\varepsilon)\overline{p}}{(|p|^{2}+\varepsilon)}\right),\]
to give
\[\frac{1}{2\pi\sqrt{-1}}\iint_{\mathbb{D}}\frac{\kappa}{(|p|^{2}+\varepsilon)^{ 2}(p-w)}dp\wedge d\overline{p}=\]
\[\left\{\begin{array}{cc}\left(\frac{(\kappa/\varepsilon)\overline{w}}{(|w|^ {2}+\varepsilon)}\right)-\frac{1}{2\pi\sqrt{-1}}\int_{\partial\mathbb{D}}\frac {(\kappa/\varepsilon)\overline{p}}{(|p|^{2}+\varepsilon)(p-w)}dp&\mathrm{if} \quad|w|<1,\\ -\frac{1}{2\pi\sqrt{-1}}\int_{\partial\mathbb{D}}\frac{(\kappa/ \varepsilon)\overline{p}}{(|p|^{2}+\varepsilon)(p-w)}dp&\mathrm{if}\quad|w|>1.\end{array}\right.\]
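For completeness, the \(\overline{\partial}\)-identity invoked above can be verified by direct differentiation:

\[\frac{\partial}{\partial\overline{p}}\left(\frac{(\kappa/\varepsilon)\overline{p}}{p\overline{p}+\varepsilon}\right)=\frac{\kappa}{\varepsilon}\cdot\frac{(p\overline{p}+\varepsilon)-\overline{p}\,p}{(p\overline{p}+\varepsilon)^{2}}=\frac{\kappa}{(|p|^{2}+\varepsilon)^{2}}.\]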
The contour integral
\[\frac{1}{2\pi\sqrt{-1}}\int_{\partial\mathbb{D}}\frac{(\kappa/\varepsilon)\overline{p}}{(|p|^{2}+\varepsilon)(p-w)}dp,\]
can be easily computed using the Cauchy Residue Formula and this yields
\[\frac{1}{2\pi\sqrt{-1}}\int_{\partial\mathbb{D}}\frac{(\kappa/\varepsilon) \overline{p}}{(|p|^{2}+\varepsilon)(p-w)}dp=\left\{\begin{array}{cc}0& \mathrm{if}\;|w|<1,\\ -\frac{(\kappa/\varepsilon)}{(1+\varepsilon)w}&\mathrm{if}\;|w|>1.\end{array}\right.\]
Finally, we arrive at
\[\frac{1}{2\pi\sqrt{-1}}\iint_{\mathbb{D}}\frac{\kappa}{(|p|^{2}+\varepsilon)^ {2}(p-w)}dp\wedge d\overline{p}=\left\{\begin{array}{cc}\left(\frac{( \kappa/\varepsilon)\overline{w}}{|w|^{2}+\varepsilon}\right)&\mathrm{if}\;|w|< 1,\\ \frac{(\kappa/\varepsilon)}{(1+\varepsilon)w}&\mathrm{if}\;|w|>1.\end{array}\right.\]
To compute the second integral, we again split the domain and consider
\[\frac{1}{2\pi\sqrt{-1}}\iint_{\mathbb{C}\setminus\mathbb{D}}\frac{F(p)}{p-w} dp\wedge d\overline{p}=\frac{1}{2\pi\sqrt{-1}}\iint_{\mathbb{C}}\frac{F(p)}{p-w}dp \wedge d\overline{p}-\frac{1}{2\pi\sqrt{-1}}\iint_{\mathbb{D}}\frac{F(p)}{p-w} dp\wedge d\overline{p}.\]
The integral
\[S(w)=\frac{1}{2\pi\sqrt{-1}}\iint_{\mathbb{C}}\frac{F(p)}{p-w}dp\wedge d \overline{p},\]
is a solution to
\[\frac{\partial S}{\partial\overline{w}}=F(w).\]
In the unit disc \(\mathbb{D}\), \(F\) has the form
\[F(w)=\frac{\tilde{\kappa}}{(|w|^{2}+\tilde{\varepsilon})^{2}},\]
where \(\tilde{\kappa}\) and \(\tilde{\varepsilon}\) are positive constants. Hence
\[\psi(w)=\left\{\begin{array}{cc}S(w)+\left(\frac{(\kappa/\varepsilon)}{|w|^{ 2}+\varepsilon}-\frac{(\tilde{\kappa}/\tilde{\varepsilon})}{|w|^{2}+\tilde{ \varepsilon}}\right)\overline{w}&\text{if}\quad|w|<1,\\ S(w)+\left(\frac{(\kappa/\varepsilon)}{1+\varepsilon}-\frac{(\tilde{\kappa}/ \tilde{\varepsilon})}{1+\tilde{\varepsilon}}\right)w^{-1}&\text{if}\quad|w|> 1,\end{array}\right.\]
solves \(dw\wedge\overline{\partial}\psi=\widetilde{\omega}(w)\).
If \(Q(w)\) is a Kähler potential for \(F(w)\sqrt{-1}dw\wedge d\overline{w}\) then
\[\widetilde{\varphi}(w)=\left\{\begin{array}{cc}Q(w)+(\kappa/\varepsilon) \log(|w|^{2}+\varepsilon)-(\tilde{\kappa}/\tilde{\varepsilon})\log(|w|^{2}+ \tilde{\varepsilon})-K&\text{if}\quad|w|<1,\\ Q(w)+\left(\frac{(\kappa/\varepsilon)}{1+\varepsilon}-\frac{(\tilde{\kappa}/ \tilde{\varepsilon})}{1+\tilde{\varepsilon}}\right)\log(|w|^{2})&\text{if} \quad|w|>1,\end{array}\right.\]
where
\[K=(\kappa/\varepsilon)\log(1+\varepsilon)-(\tilde{\kappa}/\tilde{\varepsilon} )\log(1+\tilde{\varepsilon}),\]
is a Kähler potential for \(\widetilde{\omega}\). Pulling back the function \(\widetilde{\varphi}\) via the Möbius transformation
\[\mathcal{M}(z)=\frac{\alpha z+\beta}{\gamma z+\delta}\]
we see
\[\varphi_{k}(z)=\left\{\begin{array}{cc}Q\left(\frac{\alpha z+\beta}{\gamma z +\delta}\right)+(\kappa/\varepsilon)\log\left(\left|\frac{\alpha z+\beta}{ \gamma z+\delta}\right|^{2}+\varepsilon\right)-K&\text{if}\quad z\in R_{k}\\ Q\left(\frac{\alpha z+\beta}{\gamma z+\delta}\right)+\left(\frac{(\kappa/ \varepsilon)}{1+\varepsilon}-\frac{(\tilde{\kappa}/\tilde{\varepsilon})}{1+ \tilde{\varepsilon}}\right)\log\left(\left|\frac{\alpha z+\beta}{\gamma z+ \delta}\right|^{2}\right)&\text{if}\quad z\not\in R_{k}\end{array}\right.\]
is a Kähler potential for the metric which is singular at the point \(z=-\delta/\gamma\). We can replace the \(Q\)-term by the corresponding function for the previous \(\varphi\) and then add the appropriate multiple of \(\log(|\gamma z+\delta|^{2})\) to produce a Kähler potential of the required form.
## 6 Acknowledgements
The authors acknowledge support from an EPSRC Doctoral Training Partnership studentship (grant EP/R51309X/1), the Alan Turing Institute Enrichment Scheme (R.P.), and a UKRI Future Leaders Fellowship (grant MR/T019654/1) (D.J.C.). S.J.H. would like to thank Dr R. L. Hall for his interest and for useful conversations about the project. T.M. would like to thank University of California, Irvine for their hospitality whilst some of the work on this paper was completed.
|
2308.06738 | Probabilistic Imputation for Time-series Classification with Missing
Data | Multivariate time series data for real-world applications typically contain a
significant amount of missing values. The dominant approach for classification
with such missing values is to impute them heuristically with specific values
(zero, mean, values of adjacent time-steps) or learnable parameters. However,
these simple strategies do not take the data generative process into account,
and more importantly, do not effectively capture the uncertainty in prediction
due to the multiple possibilities for the missing values. In this paper, we
propose a novel probabilistic framework for classification with multivariate
time series data with missing values. Our model consists of two parts; a deep
generative model for missing value imputation and a classifier. Extending the
existing deep generative models to better capture structures of time-series
data, our deep generative model part is trained to impute the missing values in
multiple plausible ways, effectively modeling the uncertainty of the
imputation. The classifier part takes the time series data along with the
imputed missing values and classifies signals, and is trained to capture the
predictive uncertainty due to the multiple possibilities of imputations.
Importantly, we show that na\"ively combining the generative model and the
classifier could result in trivial solutions where the generative model does
not produce meaningful imputations. To resolve this, we present a novel
regularization technique that can promote the model to produce useful
imputation values that help classification. Through extensive experiments on
real-world time series data with missing values, we demonstrate the
effectiveness of our method. | SeungHyun Kim, Hyunsu Kim, EungGu Yun, Hwangrae Lee, Jaehun Lee, Juho Lee | 2023-08-13T10:04:13Z | http://arxiv.org/abs/2308.06738v1 | # Probabilistic Imputation for Time-series Classification with Missing Data
###### Abstract
Multivariate time series data for real-world applications typically contain a significant amount of missing values. The dominant approach for classification with such missing values is to impute them heuristically with specific values (zero, mean, values of adjacent time-steps) or learnable parameters. However, these simple strategies do not take the data generative process into account, and more importantly, do not effectively capture the uncertainty in prediction due to the multiple possibilities for the missing values. In this paper, we propose a novel probabilistic framework for classification with multivariate time series data with missing values. Our model consists of two parts; a deep generative model for missing value imputation and a classifier. Extending the existing deep generative models to better capture structures of time-series data, our deep generative model part is trained to impute the missing values in multiple plausible ways, effectively modeling the uncertainty of the imputation. The classifier part takes the time series data along with the imputed missing values and classifies signals, and is trained to capture the predictive uncertainty due to the multiple possibilities of imputations. Importantly, we show that naively combining the generative model and the classifier could result in trivial solutions where the generative model does not produce meaningful imputations. To resolve this, we present a novel regularization technique that can promote the model to produce useful imputation values that help classification. Through extensive experiments on real-world time series data with missing values, we demonstrate the effectiveness of our method.
## 1 Introduction
Multivariate time-series data are universal; many real-world applications ranging from healthcare, stock markets, and weather forecasting take multivariate time-series data as inputs. Arguably the biggest challenge in dealing with such data is the presence of missing values, due to the fundamental difficulty of faithfully measuring data at all time steps. The degree of missingness is often severe; in some applications, more than 90% of the data are missing for some features. Therefore, developing an algorithm that can accurately and robustly perform predictions with missing data is considered an important problem to be tackled.
In this paper, we focus on the task of classification, where the primary goal is to classify given multivariate time-series data with missing values. Simply imputing the missing values with heuristically chosen values is considered a strong baseline that is often competitive with, or even better than, more sophisticated methods. For instance, one can fill all the missing values with zero, the mean of the data, or values from the previous time steps. GRU-D (Che et al., 2018) proposes a more elaborate imputation algorithm where the missing values are filled with a mixture of the data means and values from the previous time steps, with the mixing coefficients learned from the data. While these simple imputation-based methods work surprisingly well (Che et al., 2018; Du et al., 2022), they lack a fundamental mechanism to recover the missing values, especially the underlying generative process of the given time series data.
Dealing with missing data is deeply connected to handling uncertainties originating from the fact that there may be multiple plausible options for filling in the missing values, so it is natural to analyze them within a probabilistic framework. There is a rich literature on statistical analysis for missing data, where the primary goal is to understand how the observed and missing data are generated. In the seminal work of Little & Rubin (2002), three assumptions for the missing data generative process were introduced: Missing Completely At Random (MCAR), Missing At Random (MAR), and Missing Not At Random (MNAR). While MCAR or MAR simplifies the modeling and thus makes the inference easier, they may be unrealistic for real-world applications, because they assume that the missing mechanism is independent of the missing values (MAR) or
both missing and observed values (MCAR). MNAR, the most generic assumption, assumes that the missing mechanism depends on both missing and observed values, so a generative model based on the MNAR assumption should explicitly take the missing mechanism into account. Based on this framework, Mattei and Frellsen (2019) presented deep generative models for missing data under the MAR assumption, and this was later extended to MNAR in Ipsen et al. (2021). Combining a deep generative model and a classifier, Ipsen et al. (2022) proposed a hybrid model that can classify missing data with probabilistically imputed values generated under the MAR assumption.
Still, in our opinion, there is no satisfactory work combining probabilistic generative models for multivariate time-series data with missing values and classification models, so that the classifier could consider the uncertainty in filling in the missing values when making predictions. The aforementioned probabilistic frameworks are not designed for classification (Mattei and Frellsen, 2019; Ipsen et al., 2021), and more importantly, not tailored for time series data (Ipsen et al., 2022). A naive extension of Ipsen et al. (2022) for time series is likely to fail; putting the obvious difference between the static and time series data aside, the fundamental difficulty of learning the generative models for missing is that there are no explicit learning signals that could promote the model to generate "meaningful" missing values. Since we don't have ground truth for the missing values, in principle, the generative model can generate arbitrary values (e.g., zeros), and the combined classifier can still successfully classify time series data, which is a critical problem that is overlooked in the existing works.
To this end, we propose a hybrid model combining the deep generative models for multivariate time series data and the classification models for them. The generative part is built under the MNAR assumption and is designed to naturally encode the continuity of the multivariate time series data. The classifier then takes the missing values generated from the generative model to classify time-series, and unlike the algorithms based on heuristic imputations, it takes multiple feasible options for the missing values and computes predictions based on them. To tackle the difficulty in guiding the generative model to generate "meaningful" missing values, we introduce a novel regularization technique that deliberately erases _observed values_ during training. As a consequence, the classifier is forced to do classification based more on the generated missing values, so the generative model is encouraged to produce missing values that are more advantageous for the classification. Using the various real-world multivariate time series benchmarks with missing values, we demonstrate that our approach outperforms baselines both in terms of classification accuracy and uncertainty estimates.
## 2 Background
### Settings and Notations
Let \(\mathbf{x}=[x_{1},\ldots,x_{d}]^{\top}\in\mathbb{R}^{d}\) be a \(d\)-dimensional vector, along with the mask vector \(\mathbf{s}=[s_{1},\ldots,s_{d}]^{\top}\in\{0,1\}^{d}\), where \(s_{j}=1\) if \(x_{j}\) is observed and \(s_{j}=0\) otherwise. Given a mask \(\mathbf{s}\), we can split \(\mathbf{x}\) into the observed part \(\mathbf{x}^{\rm o}:=\{x_{j}\,|\,s_{j}=1\}\) and the missing part \(\mathbf{x}^{\rm m}:=\{x_{j}\,|\,s_{j}=0\}\). For a collection of data, the \(i^{\text{th}}\) instance is denoted as \(\mathbf{x}_{i}=[x_{i,1},\ldots,x_{i,d}]\), and \(\mathbf{s}_{i}\), \(\mathbf{x}_{i}^{\rm o}\), and \(\mathbf{x}_{i}^{\rm m}\) are defined similarly. For a multivariate time-series data, we denote the vector of \(t^{\text{th}}\) time step as \(\mathbf{x}_{t}=[x_{t,1},\ldots,x_{t,d}]\in\mathbb{R}^{d}\), and the corresponding mask as \(\mathbf{s}_{t}=[s_{t,1},\ldots,s_{t,d}]\). The \(t^{\text{th}}\) time step of \(i^{\text{th}}\) instance of a collection is denoted as \(\mathbf{x}_{t,i}\), which is split into \(\mathbf{x}_{t,i}^{\rm o}\) and \(\mathbf{x}_{t,i}^{\rm m}\) according to \(\mathbf{s}_{t,i}\).
Following Mattei and Frellsen (2019); Ipsen et al. (2021), we assume that the joint distribution of an input \(\mathbf{x}\) and a mask \(\mathbf{s}\) is factorized as \(p_{\mathbf{\theta},\mathbf{\psi}}(\mathbf{x},\mathbf{s})=p_{\mathbf{\theta}}(\mathbf{x})p_{\mathbf{\psi}}(\mathbf{s}|\mathbf{x})\). The conditional distribution \(p_{\mathbf{\psi}}(\mathbf{s}|\mathbf{x})\) plays an important role in describing the missing mechanism. Under the MCAR assumption, we have \(p(\mathbf{s}|\mathbf{x})=p(\mathbf{s})\); under MAR we have \(p_{\mathbf{\psi}}(\mathbf{s}|\mathbf{x})=p_{\mathbf{\psi}}(\mathbf{s}|\mathbf{x}^{\rm o})\); and under MNAR we have \(p_{\mathbf{\psi}}(\mathbf{s}|\mathbf{x})=p_{\mathbf{\psi}}(\mathbf{s}|\mathbf{x}^{\rm o},\mathbf{x}^{\rm m})\). The likelihood for the observed data \(\mathbf{x}^{\rm o}\) is thus computed as \(p_{\mathbf{\theta},\mathbf{\psi}}(\mathbf{x}^{\rm o},\mathbf{s})=\int p_{\mathbf{\theta},\mathbf{\psi}}(\mathbf{x},\mathbf{s})\mathrm{d}\mathbf{x}^{\rm m}\).
### Missing Data Importance-Weighted Autoencoder and its extensions
In this section, we briefly review the Missing data Importance-Weighted AutoEncoder (MIWAE) (Mattei and Frellsen, 2019), a deep generative model for missing data, and its extensions to MNAR and supervised settings. Similar to the variational autoencoder (VAE) (Kingma and Welling, 2014), MIWAE assumes that a data point \(\mathbf{x}\) is generated from a latent representation \(\mathbf{z}\), but we only observe \(\mathbf{x}^{\rm o}\) with \(\mathbf{s}\) generated from the missing model \(p_{\mathbf{\psi}}(\mathbf{s}|\mathbf{x})\). MIWAE assumes MAR, so we have \(p_{\mathbf{\psi}}(\mathbf{s}|\mathbf{x})=p_{\mathbf{\psi}}(\mathbf{s}|\mathbf{x}^{\rm o})\). The log-likelihood for \((\mathbf{x}^{\rm o},\mathbf{s})\) is then computed as
\[\log p_{\mathbf{\theta},\mathbf{\psi}}(\mathbf{x}^{\rm o},\mathbf{s})\] \[=\log p_{\mathbf{\psi}}(\mathbf{s}|\mathbf{x}^{\rm o})+\underbrace{\log\int p _{\mathbf{\theta}}(\mathbf{x}^{\rm o}|\mathbf{z})p_{\mathbf{\theta}}(\mathbf{z})\mathrm{d}\mathbf{z}}_{ =\log p_{\mathbf{\theta}}(\mathbf{x}^{\rm o})}. \tag{1}\]
For the missing data imputation, \(p_{\mathbf{\psi}}(\mathbf{s}|\mathbf{x}^{\rm o})\) is not necessary, so we choose to maximize only \(\log p_{\mathbf{\theta}}(\mathbf{x}^{\rm o})\). The integral is intractable, so we consider the Importance Weighted AutoEncoder (IWAE) lower bound (Burda et al., 2015) as a proxy loss,
\[\mathcal{L}^{(K)}_{\text{MIMAE}}(\mathbf{\theta},\mathbf{\phi}):=\mathbb{E}\bigg{[}\log \frac{1}{K}\sum_{k=1}^{K}\frac{p_{\mathbf{\theta}}(\mathbf{x}^{\rm o}|\mathbf{z}_{k})p_{ \mathbf{\theta}}(\mathbf{z}_{k})}{q_{\mathbf{\phi}}(\mathbf{z}_{k}|\mathbf{x}^{\rm o})}\bigg{]}. \tag{2}\]
Here, \(q_{\mathbf{\phi}}(\mathbf{z}_{k}|\mathbf{x}^{\rm o})\) for \(k=1,\ldots,K\) are i.i.d. copies of the variational distribution (encoder) \(q_{\mathbf{\phi}}(\mathbf{z}|\mathbf{x}^{\rm o})\) approximating the true posterior \(p_{\mathbf{\theta}}(\mathbf{z}|\mathbf{x}^{\rm o})\), and the expectation is w.r.t. \(\prod_{k=1}^{K}q_{\mathbf{\phi}}(\mathbf{z}_{k}|\mathbf{x}^{\rm o})\). \(K\) is the number of particles; the bound is non-decreasing in \(K\) and converges to the log-likelihood as \(K\rightarrow\infty\), that is, \(\mathcal{L}^{(1)}_{\text{MIWAE}}(\mathbf{\theta},\mathbf{\phi})\leq\mathcal{L}^{(2)}_{\text{MIWAE}}(\mathbf{\theta},\mathbf{\phi})\leq\cdots\rightarrow\log p_{\mathbf{\theta}}(\mathbf{x}^{\rm o})\).
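For intuition, a minimal sketch of how the bound in Eq. (2) is estimated in practice, assuming the per-particle log-weights have already been computed:

```python
import numpy as np
from scipy.special import logsumexp

def iwae_bound(log_w):
    """Monte Carlo estimate of Eq. (2): log (1/K) sum_k exp(log_w[k]),
    where log_w[k] = log p(x^o | z_k) + log p(z_k) - log q(z_k | x^o)
    for particles z_k ~ q(z | x^o). logsumexp keeps the estimate stable."""
    return logsumexp(log_w) - np.log(log_w.shape[0])

# Toy usage: the estimate is non-decreasing in K in expectation.
rng = np.random.default_rng(0)
log_w = rng.normal(loc=-5.0, scale=1.0, size=64)
print(iwae_bound(log_w[:1]), iwae_bound(log_w))
```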
Ipsen et al. (2021) presented not-MIWAE, an extension of MIWAE with MNAR assumption. The log-likelihood for \((\mathbf{x}^{\rm o},\mathbf{s})\) under the MNAR assumption is,
\[\log p_{\mathbf{\theta},\mathbf{\psi}}(\mathbf{x}^{\rm o},\mathbf{s}) =\log\int p_{\mathbf{\psi}}(\mathbf{s}|\mathbf{x}^{\rm o},\mathbf{x}^{\rm m})p_{ \mathbf{\theta}}(\mathbf{x}^{\rm o}|\mathbf{z})\] \[\times p_{\mathbf{\theta}}(\mathbf{x}^{\rm m}|\mathbf{z})p_{\mathbf{\theta}}(\bm {z})\mathrm{d}\mathbf{z}\mathrm{d}\mathbf{x}^{\rm m}, \tag{3}\]
where we assume that \((\mathbf{x}^{\rm o},\mathbf{x}^{\rm m})\) are independent given \(\mathbf{z}\). The corresponding IWAE lower-bound with the variational distribution \(q_{\mathbf{\phi}}(\mathbf{x}^{\rm m},\mathbf{z}|\mathbf{x}^{\rm o})=p_{\mathbf{\theta}}(\mathbf{x}^{ \rm m}|\mathbf{z})q_{\mathbf{\phi}}(\mathbf{z}|\mathbf{x}^{\rm o})\) is,
\[\mathcal{L}^{(K)}_{\text{notMIWAE}}(\mathbf{\theta},\mathbf{\psi},\mathbf{\phi}):=\mathbb{E}\bigg{[}\log\frac{1}{K}\sum_{k=1}^{K}\frac{p_{\mathbf{\psi}}(\mathbf{s}|\mathbf{x}^{\rm o},\mathbf{x}_{k}^{\rm m})p_{\mathbf{\theta}}(\mathbf{x}^{\rm o}|\mathbf{z}_{k})p_{\mathbf{\theta}}(\mathbf{z}_{k})}{q_{\mathbf{\phi}}(\mathbf{z}_{k}|\mathbf{x}^{\rm o})}\bigg{]}, \tag{4}\]
where the expectation is w.r.t. \(\prod_{k=1}^{K}p_{\mathbf{\theta}}(\mathbf{x}_{k}^{\rm m}|\mathbf{z}_{k})q_{\mathbf{\phi}}( \mathbf{z}_{k}|\mathbf{x}^{\rm o})\).
On the other hand, Ipsen et al. (2022) extended MIWAE to a supervised learning setting, where the goal is to learn the joint distribution of an observed input \(\mathbf{x}^{\rm o}\), a mask \(\mathbf{s}\), and corresponding label \(\mathbf{y}\),
\[\log p_{\mathbf{\theta},\mathbf{\psi},\mathbf{\lambda}}(\mathbf{y},\mathbf{x}^{\rm o},\mathbf{s})=\log p_{\mathbf{\psi}}(\mathbf{s}|\mathbf{x}^{\rm o})\] \[+\underbrace{\log\int p_{\mathbf{\lambda}}(\mathbf{y}|\mathbf{x}^{\rm o},\mathbf{x}^{\rm m})p_{\mathbf{\theta}}(\mathbf{x}^{\rm o}|\mathbf{z})p_{\mathbf{\theta}}(\mathbf{x}^{\rm m}|\mathbf{z})p_{\mathbf{\theta}}(\mathbf{z})\mathrm{d}\mathbf{x}^{\rm m}\mathrm{d}\mathbf{z}}_{=\log p_{\mathbf{\theta},\mathbf{\lambda}}(\mathbf{y},\mathbf{x}^{\rm o})} \tag{5}\]
The term \(p_{\mathbf{\psi}}(\mathbf{s}|\mathbf{x}^{\rm o})\) is irrelevant to the prediction for \(\mathbf{y}\), so we choose to maximize \(\log p_{\mathbf{\theta},\mathbf{\lambda}}(\mathbf{y},\mathbf{x}^{\rm o})\), which again can be lower-bounded by IWAE bound with the variational distribution \(q_{\mathbf{\phi}}(\mathbf{z},\mathbf{x}^{\rm m}|\mathbf{x}^{\rm o})=p_{\mathbf{\theta}}(\mathbf{x}^{ \rm m}|\mathbf{z})q_{\mathbf{\phi}}(\mathbf{z}|\mathbf{x}^{\rm o})\):
\[\mathcal{L}^{(K)}_{\text{supMIWAE}}(\mathbf{\theta},\mathbf{\lambda}, \mathbf{\phi})\] \[:=\mathbb{E}\bigg{[}\log\frac{1}{K}\sum_{k=1}^{K}\frac{p_{\mathbf{ \lambda}}(\mathbf{y}|\mathbf{x}^{\rm o},\mathbf{x}_{k}^{\rm m})p_{\mathbf{\theta}}(\mathbf{x}^{\rm o }|\mathbf{z}_{k})p(\mathbf{z}_{k})}{q_{\mathbf{\phi}}(\mathbf{z}_{k}|\mathbf{x}^{\rm o})}\bigg{]}, \tag{6}\]
where the expectation is w.r.t. \(\prod_{k=1}^{K}p_{\mathbf{\theta}}(\mathbf{x}_{k}^{\rm m}|\mathbf{z}_{k})q_{\mathbf{\phi}}( \mathbf{z}_{k}|\mathbf{x}^{\rm o})\).
### GRU for multivariate time series data and imputation methods
We briefly review the GRU (Cho et al., 2014) and its variants for time series classification with missing data, since they are common baselines. Given a multivariate time series \((\mathbf{x}_{t})_{t=1}^{T}\), a GRU takes the vector of one time step at a time and accumulates the information into a hidden state \(\mathbf{h}_{t}\). Specifically, the forward pass at the \(t^{\text{th}}\) time step takes \(\mathbf{x}_{t}\) and updates the hidden state \(\mathbf{h}_{t}\) as follows:
\[\mathbf{a}_{t} =\sigma(\mathbf{W}_{\mathbf{a}}\mathbf{x}_{t}+\mathbf{U}_{\mathbf{a}}\mathbf{h}_{t-1}+ \mathbf{b}_{\mathbf{a}}),\] \[\mathbf{r}_{t} =\sigma(\mathbf{W}_{\mathbf{r}}\mathbf{x}_{t}+\mathbf{U}_{\mathbf{r}}\mathbf{h}_{t-1}+ \mathbf{b}_{\mathbf{r}})\] \[\tilde{\mathbf{h}}_{t} =\tanh(\mathbf{W}\mathbf{x}_{t}+\mathbf{U}(\mathbf{r}_{t}\odot\mathbf{h}_{t-1})+\mathbf{b}),\] \[\mathbf{h}_{t} =(1-\mathbf{a}_{t})\odot\mathbf{h}_{t-1}+\mathbf{a}_{t}\odot\tilde{\mathbf{h}}_{t},\]
where \(\odot\) denotes the element-wise multiplication. We also review the heuristic imputation methods described in Che et al. (2018), which are common baselines for the related methods. Let \(\hat{x}_{t,j}\) denote the imputed value for \(x_{t,j}\).
* **GRU-zero**: a zero padding setting \(\hat{x}_{t,j}=s_{t,j}x_{t,j}\).
* **GRU-mean**: imputes the missing values as \(\hat{x}_{t,j}=s_{t,j}x_{t,j}+(1-s_{t,j})\bar{x}_{j}\), where \(\bar{x}_{j}=\sum_{i=1}^{n}\sum_{t=1}^{T}s_{t,i,j}x_{t,i,j}/\sum_{i=1}^{n}\sum_{t =1}^{T}s_{t,i,j}\) is the empirical mean of observed values for \(j^{\text{th}}\) feature of a given collection of time series data \(((\mathbf{x}_{t,i})_{t=1}^{T})_{i=1}^{n}\).
* **GRU-forward**: set \(\hat{x}_{t,j}=s_{t,j}x_{t,j}+(1-s_{t,j})x_{t^{\prime},j}\), where \(t^{\prime}\) is the last time when \(j^{\text{th}}\) feature was observed before \(t\).
* **GRU-simple**: along with the imputed vector \(\hat{\mathbf{x}}_{t}\) (obtained either by GRU-mean or GRU-forward), concatenate additional information. Che et al. (2018) proposed to concatenate 1) the mask \(\mathbf{s}_{t}\) and 2) the _time-interval_ \(\mathbf{\delta}_{t}\) recording the lengths of the intervals between observed values (see Che et al. (2018) for the precise definition). The concatenated vector \([\hat{\mathbf{x}}_{t},\mathbf{s}_{t},\mathbf{\delta}_{t}]\) is then fed into the GRU.
* **GRU-D**: introduces _learnable decay_ values for the input \(\mathbf{x}_{t}\) and hidden state \(\mathbf{h}_{t}\) as follows: \[\mathbf{\gamma}_{\mathbf{x}_{t}} =\exp(-\max(\mathbf{W}_{\mathbf{\gamma}_{\mathbf{x}}}\mathbf{\delta}_{t}+\mathbf{b}_{\mathbf{\gamma}_{\mathbf{x}}},\mathbf{0})),\] \[\mathbf{\gamma}_{\mathbf{h}_{t}} =\exp(-\max(\mathbf{W}_{\mathbf{\gamma}_{\mathbf{h}}}\mathbf{\delta}_{t}+\mathbf{b}_{\mathbf{\gamma}_{\mathbf{h}}},\mathbf{0})).\] Given a vector \(\mathbf{x}_{t}\) with mask \(\mathbf{s}_{t}\), GRU-D imputes the missing values by mixing the last observed value with the empirical mean according to the input decay, \[\hat{x}_{t,j}=s_{t,j}x_{t,j}+(1-s_{t,j})\big{(}\gamma_{\mathbf{x}_{t},j}x_{t^{\prime},j}+(1-\gamma_{\mathbf{x}_{t},j})\bar{x}_{j}\big{)}, \tag{7}\] where \(t^{\prime}\) is the last time the \(j^{\text{th}}\) feature was observed before \(t\); the hidden state is likewise decayed via \(\mathbf{\gamma}_{\mathbf{h}_{t}}\) before the GRU update. A sketch of these imputation strategies is given below.
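The following is a minimal NumPy sketch of the forward-fill and GRU-D-style imputation baselines above; the parameters `W` and `b` are random placeholders standing in for the learned decay weights.

```python
import numpy as np

def impute_forward(x, s):
    """GRU-forward: carry the last observed value forward (zeros before
    the first observation). x and s have shape (T, d); s[t, j] = 1 if observed."""
    x_hat = np.zeros_like(x, dtype=float)
    last = np.zeros(x.shape[1])
    for t in range(x.shape[0]):
        last = np.where(s[t] == 1, x[t], last)
        x_hat[t] = last
    return x_hat

def impute_grud(x, s, delta, W, b, x_mean):
    """GRU-D-style input imputation as in Eq. (7): mix the last observed
    value with the empirical feature mean via a learnable decay.
    delta is the (T, d) time-interval matrix; W, b are placeholder parameters."""
    gamma = np.exp(-np.maximum(delta @ W.T + b, 0.0))   # (T, d) input decay
    x_prev = impute_forward(x, s)                        # last observed values
    mixed = gamma * x_prev + (1.0 - gamma) * x_mean
    return np.where(s == 1, x, mixed)

# Toy usage
rng = np.random.default_rng(0)
T, d = 6, 3
x = rng.normal(size=(T, d))
s = rng.binomial(1, 0.6, size=(T, d))
delta = rng.uniform(0.0, 2.0, size=(T, d))
x_mean = (s * x).sum(0) / np.maximum(s.sum(0), 1)        # observed-only mean
x_hat = impute_grud(x, s, delta, rng.normal(size=(d, d)) * 0.1, np.zeros(d), x_mean)
```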
## 3 Methods
In this section, we describe our method, a probabilistic framework for multivariate time series data with missing values. Our method is an extension of supMIWAE to time series data under the MNAR assumption, but the actual implementation is not merely a naive composition of the existing models. In Section 3.1, we first present supnotMIWAE, an MNAR version of supMIWAE, with the encoder and decoder architectures designed for time series data with missing values. In Section 3.2, we show why sup(not)MIWAE may fail for data with missing values, and propose a novel regularization technique to prevent that.
### supnotMIWAE for multivariate time series data
Given a multivariate time series data \(\mathbf{x}_{1:T}:=(\mathbf{x}_{t})_{t=1}^{T}\) with observed \(\mathbf{x}_{1:T}^{\mathrm{o}}\), missing \(\mathbf{x}_{1:T}^{\mathrm{m}}\), a missing mask \(\mathbf{s}_{1:T}:=(\mathbf{s}_{t})_{t=1}^{T}\), and a label \(\mathbf{y}\), we assume the following state-space model with latent vectors \(\mathbf{z}_{1:T}:=(\mathbf{z}_{t})_{t=1}^{T}\).
\[p_{\mathbf{\theta},\mathbf{\psi},\mathbf{\lambda}}(\mathbf{y},\mathbf{x}_{1:T}^{ \mathrm{o}},\mathbf{s}_{1:T})\] \[=\int p_{\mathbf{\lambda}}(\mathbf{y}|\mathbf{x}_{1:T}^{\mathrm{o}},\mathbf{x}_{1 :T}^{\mathrm{m}})p_{\mathbf{\theta}}(\mathbf{x}_{1:T}^{\mathrm{o}}|\mathbf{z}_{1:T})p_{ \mathbf{\theta}}(\mathbf{x}_{1:T}^{\mathrm{m}}|\mathbf{z}_{1:T})\] \[\quad\times p_{\mathbf{\theta}}(\mathbf{z}_{1:T})p_{\mathbf{\psi}}(\mathbf{s}_{1 :T}|\mathbf{x}_{1:T})\mathrm{d}\mathbf{x}_{1:T}^{\mathrm{m}}\mathrm{d}\mathbf{z}_{1:T}. \tag{8}\]
Below we describe each component in more detail.
Prior \(p_{\mathbf{\theta}}(\mathbf{z}_{1:T})\).We assume a Gaussian process prior, as in Fortuin et al. (2020), for \(\mathbf{z}_{1:T}\) to encode temporal correlation in the latent space. Let \(\mathbf{z}_{1:T,j}=[z_{1,j},\dots,z_{T,j}]^{\top}\) be the vector collecting the \(j^{\mathrm{th}}\) dimension of the series \(\mathbf{z}_{1:T}\).
\[p_{\mathbf{\theta}}(\mathbf{z}_{1:T})=\prod_{j=1}^{d}\mathcal{N}(\mathbf{z}_{1:T,j}|\mathbf{0},\mathbf{K}), \tag{9}\]
where \(\mathbf{K}_{ij}=k(t_{i},t_{j})\) for \(i,j\in\{1,\dots,T\}\) and \(k\) is a kernel function. We use a Cauchy kernel for all experiments in this paper.
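For illustration, a minimal sketch of sampling from this prior; the exact Cauchy kernel parametrisation used in the experiments is an assumption here.

```python
import numpy as np

def cauchy_kernel(ts, sigma2=1.0, lengthscale=1.0):
    """Gram matrix with K[i, j] = sigma2 / (1 + (t_i - t_j)^2 / l^2); the
    exact parametrisation used in the experiments is an assumption."""
    diff = ts[:, None] - ts[None, :]
    return sigma2 / (1.0 + (diff / lengthscale) ** 2)

# Sample z_{1:T} from the GP prior of Eq. (9): each latent dimension is an
# independent Gaussian process over the T time steps.
T, d = 50, 4
K = cauchy_kernel(np.arange(T, dtype=float)) + 1e-6 * np.eye(T)  # jitter
L = np.linalg.cholesky(K)
z = L @ np.random.default_rng(0).normal(size=(T, d))             # (T, d)
```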
Decoders \(p_{\mathbf{\theta}}(\mathbf{x}_{1:T}^{\mathrm{o}}|\mathbf{z}_{1:T})\) and \(p_{\mathbf{\theta}}(\mathbf{x}_{1:T}^{\mathrm{m}}|\mathbf{z}_{1:T})\).The decoder for the observed \(p_{\mathbf{\theta}}(\mathbf{x}_{1:T}^{\mathrm{o}}|\mathbf{z}_{1:T})\) is defined in an autoregressive fashion,
\[p_{\mathbf{\theta}}(\mathbf{x}_{1:T}^{\mathrm{o}}|\mathbf{z}_{1:T})\] \[=\prod_{t=1}^{T}\mathcal{N}(\mathbf{x}_{t}^{\mathrm{o}}|\mathbf{\mu}_{ \text{dec}}(\mathbf{z}_{1:t}),\mathrm{diag}(\mathbf{\sigma}_{\text{dec}}^{2}(\mathbf{z}_{1 :t}))), \tag{10}\]
where \((\mathbf{\mu}_{\text{dec}}(\mathbf{z}_{1:t}),\mathbf{\sigma}_{\text{dec}}(\mathbf{z}_{1:t}))_{t=1}^{T}\) are defined with a transformer (Vaswani et al., 2017) with causal masking.
\[\mathbf{h}_{t}=\text{Transformer}_{\text{dec}}(\mathbf{z}_{1:t}),\] \[(\mathbf{\mu}_{\text{dec}}(\mathbf{z}_{1:t}),\mathbf{\sigma}_{\text{dec}}(\bm {z}_{1:t}))=\mathrm{MLP}_{\text{dec}}(\mathbf{h}_{t}). \tag{11}\]
In practice, this causal transformer layer is applied several times. The decoder for the missing values \(p_{\mathbf{\theta}}(\mathbf{x}_{1:T}^{\mathrm{m}}|\mathbf{z}_{1:T})\) is defined similarly. In our implementation, we actually let them share the same model generating both \(\mathbf{x}_{t}^{\mathrm{o}}\) and \(\mathbf{x}_{t}^{\mathrm{m}}\).
Missing model \(p_{\mathbf{\psi}}(\mathbf{s}_{1:T}|\mathbf{x}_{1:T})\).The missing model is simply assumed to be independent Bernoulli distributions over the time steps and features.
\[p_{\mathbf{\psi}}(\mathbf{s}_{1:T}|\mathbf{x}_{1:T})=\prod_{t=1}^{T}\prod_{j=1}^{d}\mathrm{ Bern}(s_{t,j}|\sigma_{\text{mis},t,j}(\mathbf{x}_{1:T})), \tag{12}\]
where \(\sigma_{\text{mis}}(\mathbf{x}_{1:T})\) is computed as
\[\sigma_{\text{mis}}(\mathbf{x}_{1:T})=\mathrm{MLP}_{\text{mis}}(\mathbf{x}_{1:T}). \tag{13}\]
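A minimal sketch of this independent Bernoulli missing model follows; a one-hidden-layer MLP stands in for \(\mathrm{MLP}_{\text{mis}}\), whose actual architecture is not specified here.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def missing_log_prob(x, s, W1, b1, W2, b2):
    """log p_psi(s | x) for the independent Bernoulli missing model of
    Eqs. (12)-(13). A one-hidden-layer MLP applied per time step stands in
    for MLP_mis; x and s have shape (T, d)."""
    h = np.tanh(x @ W1.T + b1)          # (T, hidden)
    p = sigmoid(h @ W2.T + b2)          # (T, d) per-element probabilities
    eps = 1e-8                          # numerical floor for the logs
    return float(np.sum(s * np.log(p + eps) + (1 - s) * np.log(1.0 - p + eps)))
```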
Classifier \(p_{\mathbf{\lambda}}(\mathbf{y}|\mathbf{x}_{1:T}^{\mathrm{o}},\mathbf{x}_{1:T}^{\mathrm{m}})\).We use a transformer-based model for the classifier. Given time-series data \(\mathbf{x}_{1:T}\) packing the observed values \(\mathbf{x}_{1:T}^{\mathrm{o}}\) and the imputed missing values generated from the decoders, we first process the data with a 1D CNN applied along the time axis to compute \(\mathbf{r}_{1:T}:=\mathrm{CNN}(\mathbf{x}_{1:T})\). Then we process \(\mathbf{r}_{1:T}\) with a Transformer block to compute an output \(\mathbf{h}_{T}\). The conditional distribution \(p_{\mathbf{\lambda}}(\mathbf{y}|\mathbf{x}_{1:T}^{\mathrm{o}},\mathbf{x}_{1:T}^{\mathrm{m}})\) is defined as
\[\mathrm{Categorical}(\mathbf{y}\,|\,\mathrm{Softmax}(\mathrm{Linear}_{\text{cls}}(\mathbf{h}_{T}))). \tag{14}\]
During the forward pass, the classifier takes the observed input \(\mathbf{x}_{1:T}^{\mathrm{o}}\) and the missing values _generated_ from the decoder \(p_{\mathbf{\theta}}(\mathbf{x}_{1:T}^{\mathrm{m}}|\mathbf{z}_{1:T})\). We find it beneficial to adopt the idea of GRU-D: instead of directly feeding the generated missing values \(\mathbf{x}_{1:T}^{\mathrm{m}}\) to the classifier, we feed the _decayed_ missing values as follows:
\[\hat{\mathbf{x}}_{1:T}:=(\mathbf{x}_{1:T}^{\mathrm{o}},\mathbf{x}_{1:T}^{ \mathrm{m}})\text{ where }\mathbf{x}_{1:T}^{\mathrm{m}}\sim p_{\mathbf{\theta}}(\mathbf{x}_{1:T}^{ \mathrm{m}}|\mathbf{z}_{1:T}),\] \[\hat{x}_{t,j}=\mathrm{Decay}(s_{t,j},x_{t,j},\gamma_{\hat{\mathbf{x} }_{t},j},x_{t^{\prime},j},\tilde{x}_{t,j}). \tag{15}\]
where \(\mathbf{\gamma}_{\hat{\mathbf{x}}_{t}}=\exp(-\max(\mathbf{0},\mathbf{W}_{\hat{\mathbf{x}}}\mathbf{\delta}_{t}+\mathbf{b}_{\hat{\mathbf{x}}}))\) is a learnable decay. We find this stabilizes the learning when the generated missing values \(\mathbf{x}_{1:T}^{\mathrm{m}}\) are inaccurate, for instance, in the early stage of learning. Note also the difference between (15) and the original GRU-D imputation (7). In GRU-D, the last observed values are mixed with the mean feature, while ours mixes them with the generated values.
Encoder \(q_{\mathbf{\phi}}(\mathbf{z}_{1:T}|\mathbf{x}_{1:T}^{\mathrm{o}})\).Given the generative model defined above, we introduce the variational distribution for \((\mathbf{x}_{1:T}^{\mathrm{m}},\mathbf{z}_{1:T})\) that factorizes as,
\[p_{\mathbf{\theta}}(\mathbf{x}_{1:T}^{\mathrm{m}}|\mathbf{z}_{1:T})q_{\mathbf{\phi}}(\mathbf{z}_{1:T} |\mathbf{x}_{1:T}^{\mathrm{o}}). \tag{16}\]
Here, the encoder \(q_{\mathbf{\phi}}(\mathbf{z}_{1:T}|\mathbf{x}_{1:T}^{\mathrm{o}})\) is defined as an autoregressive model as before,
\[q_{\mathbf{\phi}}(\mathbf{z}_{1:T}|\mathbf{x}_{1:T}^{\mathrm{o}})\] \[=\prod_{t=1}^{T}\mathcal{N}(\mathbf{z}_{t}|\mathbf{\mu}_{\text{enc}}(\mathbf{ x}_{1:t}^{\mathrm{o}}),\mathrm{diag}(\mathbf{\sigma}_{\text{enc}}^{2}(\mathbf{x}_{1:t}^{ \mathrm{o}}))). \tag{17}\]
Given a series of observed values \(\mathbf{x}_{1:T}^{\rm o}\), we first apply zero imputation for the missing values, that is, set \(x^{\prime}_{t,j}=x_{t,j}\) if \(s_{t,j}=1\) and \(x^{\prime}_{t,j}=0\) otherwise. Then we concatenate the missing indicators to \(\mathbf{x}^{\prime}_{1:T}\) and apply a 1D CNN along the time axis as \(\mathbf{r}_{1:T}=\mathrm{CNN}(\mathbf{x}^{\prime}_{1:T})\). Having computed \(\mathbf{r}_{1:T}\), similar to the decoder, we use a transformer with causal masking to compute
\[\mathbf{h}_{t}=\text{Transformer}_{\text{enc}}(\mathbf{r}_{1:t}),\] \[(\mathbf{\mu}_{\text{enc}}(\mathbf{x}_{1:t}^{\rm o}),\mathbf{\sigma}_{\text{ enc}}(\mathbf{x}_{1:t}^{\rm o}))=\mathrm{MLP}_{\text{enc}}(\mathbf{h}_{t}). \tag{18}\]
Objective.Having all the ingredients defined, the IWAE bound for supnotMIWAE is computed as follows:
\[\mathcal{L}^{(K)}(\mathbf{\lambda},\mathbf{\theta},\mathbf{\psi},\mathbf{\phi}):=\mathbb{E} \bigg{[}\log\frac{1}{K}\sum_{k=1}^{K}\omega_{k}\bigg{]}, \tag{19}\]
where the expectation is w.r.t. \(K\) copies of the variational distribution, \(\prod_{k=1}^{K}p_{\mathbf{\theta}}(\mathbf{x}_{k,1:T}^{\rm m}|\mathbf{z}_{k,1:T})q_{\mathbf{\phi}}(\mathbf{z}_{k,1:T}|\mathbf{x}_{1:T}^{\rm o})\), and \(\omega_{k}\) is the importance weight term defined as
\[\omega_{k} :=p_{\mathbf{\lambda}}(\mathbf{y}|\mathbf{x}_{1:T}^{\rm o},\mathbf{x}_{k,1:T}^{ \rm m})p_{\mathbf{\psi}}(\mathbf{s}_{1:T}|\mathbf{x}_{1:T}^{\rm o},\mathbf{x}_{k,1:T}^{\rm m})\] \[\times p_{\mathbf{\theta}}(\mathbf{x}_{1:T}^{\rm o}|\mathbf{z}_{k,1:T})p_{\bm {\theta}}(\mathbf{z}_{k,1:T})/q_{\mathbf{\phi}}(\mathbf{z}_{k,1:T}|\mathbf{x}_{1:T}^{\rm o}). \tag{20}\]
### ObsDropout: regularizing supnotMIWAE for better imputation
The problem with (19) is that there is no clear supervision for the missing values \(\mathbf{x}_{1:T}^{\rm m}\). Obviously, if we had an access to the missing values, the conditional probability \(p_{\mathbf{\theta}}(\mathbf{x}_{1:T}^{\rm m}|\mathbf{z}_{1:T})\) would guide the model to learn to correctly impute those missing values. Without such true values, we can only encourage the model to impute the missing values with some indirect criteria. In the objective (19), there are two terms that the model hinges on for this matter.
* The missing model \(p_{\mathbf{\psi}}(\mathbf{s}_{1:T}|\mathbf{x}_{1:T}^{\rm o},\mathbf{x}_{1:T}^{\rm m})\): this term encourages the model to reconstruct the missing mask \(s_{t}\) from the imputed value \(x_{t}^{\rm m}\), so in principle, the model should impute the missing values in a way that makes them distinguishable from the observed values. However, in general, the distributions of the observed and the missing values are not necessarily different, and more importantly, the model can easily cheat the objective. For instance, consider a trivial case where the model imputes all the missing values with zero. The conditional probability \(p_{\mathbf{\psi}}(\mathbf{s}_{1:T}|\mathbf{x}_{1:T}^{\rm o},\mathbf{x}_{1:T}^{\rm m})\) can still be maximized by setting \(\sigma_{\text{mis}}(x_{t,j})=0\) if \(x_{t,j}=0\) (provided there are not many observed values with \(x_{t,j}^{\rm o}=0\)).
* The classifier \(p_{\mathbf{\lambda}}(\mathbf{y}|\mathbf{x}_{1:T}^{\rm o},\mathbf{x}_{1:T}^{\rm m})\): this term expects the model to generate meaningful imputations so that they are helpful for the classification. However, as shown in prior works (Che et al., 2018), the classifier can achieve decent classification accuracy _without_ meaningful imputations; for instance, it will still be able to classify the signals while all the missing values are imputed with zeros. Hence, in the current form, there is no strong incentive for the model to learn non-trivial imputations that would bring a significant accuracy gain over zero imputation.
To summarize, a model trained with the objective (19) is not likely to generate realistic missing values. To resolve this, we could introduce a missing model \(p_{\mathbf{\psi}}(\mathbf{s}_{1:T}|\mathbf{x}_{1:T}^{\rm o},\mathbf{x}_{1:T}^{\rm m})\) much more elaborate than the simple i.i.d. model that we are using right now, but that may require some dataset-specific design. Instead, we present a simple regularization technique that can effectively enhance the quality of the imputed values.
Our idea is simple; when passing the observed inputs \(\mathbf{x}_{1:T}^{\rm o}\) and the imputed missing values \(\tilde{\mathbf{x}}_{1:T}^{\rm m}\) (i.e., imputed by (15)) to the classifier, _deliberately drop_ some portion of the observed inputs. Without dropping the observed inputs, the classifier may heavily rely on the observed inputs to do the classification, but if some of the observed inputs are dropped out during training, the classifier can focus more on the imputed missing values \(\tilde{\mathbf{x}}_{1:T}^{\rm m}\). As a result, the model is encouraged to generate more "useful" missing values that are beneficial for classification. More specifically, let \(\beta\) be a predefined dropout probability. Then we construct the imputed input \(\tilde{\mathbf{x}}_{t}\) to the classifier as follows:
\[\tilde{\mathbf{x}}_{1:T} :=(\mathbf{x}_{1:T}^{\rm o},\mathbf{x}_{1:T}^{\rm m})\text{ where }\mathbf{x}_{1:T}^{\rm m}\sim p_{\mathbf{\theta}}(\mathbf{x}_{1:T}^{\rm m}|\mathbf{z}_{1:T}),\] \[m_{t,j} \sim\mathrm{Bern}(1-\beta),\] \[\hat{x}_{t,j} :=\mathrm{Decay}(s_{t,j}m_{t,j},x_{t,j},\gamma_{\tilde{\mathbf{x}}_{ t,j}},x_{t^{\prime},j},\tilde{x}_{t,j}). \tag{21}\]
That is, when an observed \(x_{t,j}\) is dropped out, we put a generated value with the decay applied as in (15), so that the classifier could focus more on the values generated by the decoder as we intended. We call this idea _ObsDropout_, since we are dropping out the observed values during the training.
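A minimal sketch of the obsdropout construction in Eq. (21) follows. The exact form of the \(\mathrm{Decay}\) mixing (last observed value versus generated value) is our reading of Eq. (15).

```python
import numpy as np

def obs_dropout(x_obs, s, x_gen, x_prev, gamma, beta, rng):
    """ObsDropout as in Eq. (21): drop observed entries with probability
    beta so the classifier must rely on generated values. Dropped and
    missing entries receive a decayed mix of the last observed value
    (x_prev) and the generated value (x_gen). All arrays have shape (T, d)."""
    m = rng.binomial(1, 1.0 - beta, size=s.shape)  # m = 0 -> dropped
    keep = s * m                                   # effective observation mask
    decayed = gamma * x_prev + (1.0 - gamma) * x_gen
    return np.where(keep == 1, x_obs, decayed)

# Toy usage with beta = 0.2
rng = np.random.default_rng(0)
T, d = 5, 2
x = rng.normal(size=(T, d))
s = rng.binomial(1, 0.7, size=(T, d))
out = obs_dropout(x, s, x_gen=np.zeros((T, d)), x_prev=np.zeros((T, d)),
                  gamma=np.full((T, d), 0.5), beta=0.2, rng=rng)
```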
With the mask variables \(\mathbf{m}_{1:T}\) included, the likelihood is extended
\[p_{\mathbf{\theta},\mathbf{\psi},\mathbf{\lambda}}(\mathbf{y},\mathbf{x}_{1:T}^{\rm o },\mathbf{s}_{1:T})\] \[=\int p_{\mathbf{\lambda}}(\mathbf{y}|\mathbf{x}_{1:T}^{\rm o},\mathbf{x}_{1:T}^{ \rm m},\mathbf{m}_{1:T})p_{\beta}(\mathbf{m}_{1:T})\] \[\quad\times p_{\mathbf{\theta}}(\mathbf{x}_{1:T}^{\rm o}|\mathbf{z}_{1:T})p_{ \mathbf{\theta}}(\mathbf{x}_{1:T}^{\rm m}|\mathbf{z}_{1:T})\] \[\quad\times p_{\mathbf{\theta}}(\mathbf{z}_{1:T})p_{\mathbf{\psi}}(\mathbf{s}_{1:T} |\mathbf{x}_{1:T})\mathrm{d}\mathbf{x}_{1:T}^{\rm m}\mathrm{d}\mathbf{z}_{1:T}\mathrm{d}\bm {m}_{1:T}. \tag{22}\]
The corresponding IWAE objective is defined similarly to Eq. 19, with the expectation taken with respect to \(K\) copies of a variational distribution,
\(\prod_{k=1}^{K}p_{\mathbf{\theta}}(\mathbf{x}_{k,1:T}^{\text{m}}|\mathbf{z}_{k,1:T})q_{\mathbf{ \phi}}(\mathbf{z}_{k,1:T}|\mathbf{x}_{1:T}^{\text{o}})p_{\beta}(\mathbf{m}_{k,1:T})\) and the importance term is defined as
\[\omega_{k} :=p_{\mathbf{\lambda}}(\mathbf{y}|\mathbf{x}_{1:T}^{\text{o}},\mathbf{x}_{k,1:T}^{\text{m}},\mathbf{m}_{k,1:T})p_{\mathbf{\psi}}(\mathbf{s}_{1:T}|\mathbf{x}_{1:T}^{\text{o}},\mathbf{x}_{k,1:T}^{\text{m}})\] \[\times p_{\mathbf{\theta}}(\mathbf{x}_{1:T}^{\text{o}}|\mathbf{z}_{k,1:T})p_{\mathbf{\theta}}(\mathbf{z}_{k,1:T})/q_{\mathbf{\phi}}(\mathbf{z}_{k,1:T}|\mathbf{x}_{1:T}^{\text{o}}), \tag{23}\]
where \(p_{\beta}(\mathbf{m}_{1:T}):=\prod_{t=1}^{T}\prod_{j=1}^{d}\operatorname{Bern}(m_{t,j}|1-\beta)\).
### Prediction
Similar to supMIWAE, we exploit Self-Normalized Importance Sampling (SNIS) to approximate the predictive distribution for a new input \(\mathbf{x}_{1:T}^{\text{o}}\). With the model trained with obsdropout, we have
\[p(\mathbf{y}|\mathbf{x}_{1:T}^{\text{o}})\approx\frac{1}{S}\sum_{s=1}^{S}\sum_{k=1}^{ K}\bar{\zeta}_{k}^{(s)}, \tag{24}\]
where
\[(\mathbf{z}_{k,1:T}^{(s)},(\mathbf{x}^{\text{m}})_{k,1:T}^{(s)},\mathbf{m}_{k,1:T}^{(s)})\] \[\stackrel{{\text{i.i.d.}}}{{\sim}}q_{\mathbf{\phi}}(\mathbf{z}_{1:T}|\mathbf{x}_{1:T}^{\text{o}})p_{\mathbf{\theta}}(\mathbf{x}_{1:T}^{\text{m}}|\mathbf{z}_{1:T})p_{\beta}(\mathbf{m}_{1:T}), \tag{25}\] \[\zeta_{k}^{(s)} :=p_{\mathbf{\lambda}}(\mathbf{y}|\mathbf{x}_{1:T}^{\text{o}},(\mathbf{x}^{\text{m}})_{k,1:T}^{(s)},\mathbf{m}_{k,1:T}^{(s)})p_{\mathbf{\theta}}(\mathbf{x}_{1:T}^{\text{o}}|\mathbf{z}_{k,1:T}^{(s)})\] \[\times p_{\mathbf{\theta}}(\mathbf{z}_{k,1:T}^{(s)})/q_{\mathbf{\phi}}(\mathbf{z}_{k,1:T}^{(s)}|\mathbf{x}_{1:T}^{\text{o}}),\] (26) \[\bar{\zeta}_{k}^{(s)} :=\zeta_{k}^{(s)}/\sum_{\ell=1}^{K}\zeta_{\ell}^{(s)}. \tag{27}\]
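The sketch below shows one common way to organise this computation: the classifier term is kept outside the importance weights, so the self-normalised weights come from the generative factors only. It is a sketch of the SNIS estimate rather than a line-by-line transcription of Eqs. (24)–(27).

```python
import numpy as np
from scipy.special import logsumexp

def snis_predict(log_w, probs):
    """Sketch of the SNIS predictive estimate of Eq. (24).

    log_w: (S, K) log generative importance weights
           p(x^o | z_k) p(z_k) / q(z_k | x^o), per sample s and particle k.
    probs: (S, K, C) classifier probabilities p(y | x^o, x^m_k, m_k).
    Returns a (C,) estimate of p(y | x^o)."""
    log_w_bar = log_w - logsumexp(log_w, axis=1, keepdims=True)  # normalise over k
    w_bar = np.exp(log_w_bar)[..., None]                         # (S, K, 1)
    return (w_bar * probs).sum(axis=1).mean(axis=0)

# Toy usage: S = 4 samples, K = 8 particles, C = 3 classes
rng = np.random.default_rng(0)
log_w = rng.normal(size=(4, 8))
probs = rng.dirichlet(np.ones(3), size=(4, 8))
print(snis_predict(log_w, probs))  # sums to 1
```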
## 4 Related Work
There are two lines of literature closely related to our method. The first consists of work dealing with the problem of missing data imputation based on Deep Latent Variable Models (DLVMs). The other consists of work designing tailored neural network architectures for supervised learning on irregularly sampled time series data, which implies missingness once the data are arranged as a regular tensor.
DLVMs for missing data.Mattei and Frellsen (2019) proposed the MIWAE bound for training DLVMs in the presence of missing data under the MAR assumption. Ipsen et al. (2021) modified the MIWAE bound to suit the MNAR scenario. Ipsen et al. (2022) extended the MIWAE bound to the supervised learning task. This line of work provides a useful framework for training DLVMs under missingness. However, it is not directly applicable to time series data because it cannot model the temporal dependency within a series. There is previous work that makes DLVMs suitable for multivariate time series. For example, Fortuin et al. (2020) proposed a CNN-based VAE architecture with a Gaussian Process prior to encode the temporal correlation in the latent space. Rubanova et al. (2019) presented ODE-RNNs, which employ Neural Ordinary Differential Equations (Neural ODEs) to model the hidden state dynamics of RNNs. Shukla and Marlin (2021) developed an attention-based VAE architecture with probabilistic interpolation for irregularly sampled time series data.
Irregularly sampled time series classification.Researchers have developed deep neural network architectures customized to classify irregularly sampled time series data. Several architectures have shown competitive empirical performance in this task. Che et al. (2018) modified the architecture of the GRU, intending to perform supervised learning with sparse covariates, by introducing a learnable temporal decay mechanism for the input and hidden state of the GRU. This mechanism has been applied in further research. For example, Cao et al. (2018) employed temporal decay in the hidden states of their bidirectional-RNN-based model to capture the missing pattern of irregularly sampled time series. Shukla and Marlin (2019) presented a hybrid architecture of an interpolation network and a classifier. The interpolation network takes an irregularly sampled time series as input and returns a fully observed and regularly sampled representation of the original time series data. Shukla and
Figure 1: An overview of our model with obsdropout.
## 5 Experiments
In this section, we demonstrate our method on real-world multivariate time series data with missing values. We compare ours to the baselines on three datasets: MIMIC-III (Johnson et al., 2016), PhysioNet 2012 (Silva et al., 2012), and Human Activity Recognition (Anguita et al., 2013). The MIMIC-III and PhysioNet 2012 datasets contain Electronic Health Records of patients from Intensive Care Units (ICUs). The Human Activity Recognition dataset consists of the 3D coordinates of sensors mounted on people performing daily activities such as walking and sitting. See Appendix A for the details of the datasets. For all three datasets, we compare classification accuracy and uncertainty quantification performance in Section 5.1. We also compare the missing value imputation performance of our method to the baselines in Section 5.2.
For the baselines, we considered GRU classifiers combined with various imputation methods, as well as other deep-neural-network-based methods considered competitive in the literature. See Appendix A for a detailed description of the baselines. For the uncertainty quantification metrics, we compared cross-entropy (CE, equal to the negative log-likelihood), expected calibration error (ECE), and Brier score (BS). Please refer to Appendix A for a detailed description of the metrics.
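For reference, below are minimal NumPy sketches of ECE and the (multi-class) Brier score as used for evaluation; the equal-width binning with 15 bins is an assumption on our part.

```python
import numpy as np

def expected_calibration_error(probs, labels, n_bins=15):
    """ECE over equal-width bins of the maximum predicted probability."""
    conf = probs.max(axis=1)
    acc = (probs.argmax(axis=1) == labels).astype(float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (conf > lo) & (conf <= hi)
        if in_bin.any():  # weight each bin by its share of samples
            ece += in_bin.mean() * abs(acc[in_bin].mean() - conf[in_bin].mean())
    return ece

def brier_score(probs, labels):
    """Mean squared distance between probabilities and one-hot labels."""
    onehot = np.eye(probs.shape[1])[labels]
    return np.mean(np.sum((probs - onehot) ** 2, axis=1))
```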
### Classification Results
We summarize the classification results in Table 1, Table 2, and Table 3. In general, our method achieves the best performance among the competing methods both in terms of prediction accuracy and uncertainty quantification. In all classification experiments, our method beats other baseline methods by a wide margin in terms of predictive accuracy. Also, our model shows competitive results with respect to uncertainty metrics. Although SeFT (Horn et al., 2020) shows strong results in terms of uncertainty quantification in some experiments, this method shows inferior performance with respect to predictive accuracy. Since it is reasonable to compare uncertainty quantification between models with similar predictive performance, it can be said that our model shows the best performance among the baseline models in general. We also provide an ablation study for our model to see the effect of 1) obsdropout and 2) the MNAR assumption. The results clearly show that both components play important roles in our model. In all the experiments, obsdropout clearly yields a gain in predictive performance. Likewise, removing the MNAR assumption degrades performance.
time series such as time series that have many sudden spikes or periodicity. Since our model simultaneously employs the decay mechanism and a generative model, it is more flexible and able to cope with a wider variety of cases. In particular, the ablation study on the class supervision part \(p_{\mathbf{\lambda}}(\mathbf{y}|\mathbf{x}_{1:T})\), the obsdropout, and the MNAR assumption implies that the imputed values generated by our model, which was trained to better classify the signals, are more "realistic". Fig. 2 highlights the effect of using transformer-based encoders and decoders: the values imputed with those techniques form smoother trajectories and better capture the uncertainty in the intervals without observed values.
## 6 Conclusion
In this paper, we presented a novel probabilistic framework for multivariate time series classification with missing data. Under the MNAR assumption, we first developed a deep generative model suitable for generating missing values in multivariate time series data. We then identified an important drawback of naively combining deep generative models with classifiers and proposed a novel regularization technique, obsdropout, to circumvent it. Combining the MNAR assumption with the obsdropout regularization, the generative model produces more natural imputations, which in turn allow the classifier to perform more accurate and robust classification. Moreover, by using transformer layers in the internal modules, the model effectively captures the time series structure. Through experiments, we showed that it is possible to achieve high predictive performance and good uncertainty calibration at the same time in classification tasks with missing values, and we demonstrated that our method classifies real-world multivariate time series data more accurately and robustly than existing methods.
**Reproducibility statement** Please refer to Appendix A for full experimental details, including datasets, models, and evaluation metrics.
## Acknowledgement
This work was partially supported by Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No.2019-0-00075, Artificial Intelligence Graduate School Program (KAIST)), Artificial Intelligence Innovation Hub (No.2022-0-00713), and National Research Foundation of Korea (NRF) funded by the Ministry of Education (NRF2021M3E5D9025030).
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline & \multicolumn{2}{c}{PhysioNet 2012} & \multicolumn{2}{c}{MIMIC-III} & \multicolumn{2}{c}{Human Activity Recognition} \\ \cline{2-7} Method & MAE (\(\downarrow\)) & MRE (\(\downarrow\)) & MAE (\(\downarrow\)) & MRE (\(\downarrow\)) & MAE (\(\downarrow\)) & MRE (\(\downarrow\)) \\ \hline Mean & \(0.696\pm 0.001\) & \(1.000\pm 0.000\) & \(0.330\pm 0.002\) & \(1.000\pm 0.000\) & \(0.799\pm 0.001\) & \(1.000\pm 0.000\) \\ Forward & \(0.400\pm 0.004\) & \(0.576\pm 0.005\) & \(0.151\pm 0.002\) & \(0.459\pm 0.002\) & \(0.305\pm 0.005\) & \(\mathbf{0.373}\pm 0.007\) \\ GP-VAE & \(0.439\pm 0.010\) & \(0.630\pm 0.005\) & \(0.198\pm 0.012\) & \(0.601\pm 0.031\) & \(0.548\pm 0.008\) & \(0.684\pm 0.019\) \\ SAITS & \(0.653\pm 0.030\) & \(0.942\pm 0.013\) & \(0.341\pm 0.005\) & \(1.040\pm 0.087\) & \(0.834\pm 0.005\) & \(1.048\pm 0.013\) \\ \hline Ours & \(\mathbf{0.367}\pm 0.005\) & \(\mathbf{0.526}\pm 0.005\) & \(0.149\pm 0.002\) & \(0.451\pm 0.008\) & \(\mathbf{0.297}\pm 0.005\) & \(\mathbf{0.373}\pm 0.006\) \\ w/o supervision & \(0.376\pm 0.007\) & \(0.541\pm 0.009\) & \(\mathbf{0.148}\pm 0.002\) & \(\mathbf{0.449}\pm 0.006\) & \(0.298\pm 0.005\) & \(\mathbf{0.373}\pm 0.007\) \\ w/o obsdropout & \(0.377\pm 0.004\) & \(0.542\pm 0.005\) & \(0.152\pm 0.001\) & \(0.459\pm 0.002\) & \(0.299\pm 0.005\) & \(0.374\pm 0.006\) \\ w/o supervision \& MNAR & \(0.394\pm 0.003\) & \(0.570\pm 0.006\) & \(0.150\pm 0.002\) & \(0.457\pm 0.003\) & \(0.299\pm 0.005\) & \(0.374\pm 0.006\) \\ \hline \hline \end{tabular}
\end{table}
Table 4: Imputation performance on PhysioNet 2012, MIMIC-III and Human Activity Recognition dataset.
Figure 2: Plots of \(\mathbf{\mu}_{\text{dec}}(\mathbf{z}_{1:t}),\mathbf{\sigma}_{\text{dec}}^{2}(\mathbf{z}_{1:t})\). (**Left**) Our model with MLP encoder and MLP decoder. (**Right**) Our model trained with obsdropout with a rate of 0.4. Since the MLP architecture does not take the temporal association into account, it shows spiky imputation, while our model shows smooth imputation. Also, our model shows better performance in uncertainty quantification, since the learned variance of the decoder captures sudden spikes and considers the initial part of the time series more uncertain.
|
2307.10907 | The Role of Entropy and Reconstruction in Multi-View Self-Supervised
Learning | The mechanisms behind the success of multi-view self-supervised learning
(MVSSL) are not yet fully understood. Contrastive MVSSL methods have been
studied through the lens of InfoNCE, a lower bound of the Mutual Information
(MI). However, the relation between other MVSSL methods and MI remains unclear.
We consider a different lower bound on the MI consisting of an entropy and a
reconstruction term (ER), and analyze the main MVSSL families through its lens.
Through this ER bound, we show that clustering-based methods such as
DeepCluster and SwAV maximize the MI. We also re-interpret the mechanisms of
distillation-based approaches such as BYOL and DINO, showing that they
explicitly maximize the reconstruction term and implicitly encourage a stable
entropy, and we confirm this empirically. We show that replacing the objectives
of common MVSSL methods with this ER bound achieves competitive performance,
while making them stable when training with smaller batch sizes or smaller
exponential moving average (EMA) coefficients.
Github repo: https://github.com/apple/ml-entropy-reconstruction. | Borja Rodríguez-Gálvez, Arno Blaas, Pau Rodríguez, Adam Goliński, Xavier Suau, Jason Ramapuram, Dan Busbridge, Luca Zappella | 2023-07-20T14:29:51Z | http://arxiv.org/abs/2307.10907v2 | # The Role of Entropy and Reconstruction in
###### Abstract
The mechanisms behind the success of multi-view self-supervised learning (MVSSL) are not yet fully understood. Contrastive MVSSL methods have been studied through the lens of InfoNCE, a lower bound of the Mutual Information (MI). However, the relation between other MVSSL methods and MI remains unclear. We consider a different lower bound on the MI consisting of an entropy and a reconstruction term (ER), and analyze the main MVSSL families through its lens. Through this ER bound, we show that clustering-based methods such as DeepCluster and SwAV maximize the MI. We also re-interpret the mechanisms of distillation-based approaches such as BYOL and DINO, showing that they explicitly maximize the reconstruction term and implicitly encourage a stable entropy, and we confirm this empirically. We show that replacing the objectives of common MVSSL methods with this ER bound achieves competitive performance, while making them stable when training with smaller batch sizes or smaller exponential moving average (EMA) coefficients.
## 1 Introduction
Representation learning tackles the problem of learning lower dimensional representations of data which capture the data's semantic information. To achieve this, many representation learning methods aim to maximize the _mutual information_ (MI) between the input data and the learned representations (Linsker, 1988; Belghazi et al., 2018; Hjelm et al., 2019), while inducing biases in the model that steer the learned information to be semantically meaningful (Alemi et al., 2017; van den Oord et al., 2018; Velickovic et al., 2019). As such, MI has played a crucial role in understanding the performance of many representation learning methods (Tishby et al., 1999; Rodriguez Galvez et al., 2020; Goldfeld and Polyanskiy, 2020).
Recently, multi-view self-supervised learning (MVSSL), where the loss enforces the model to produce similar representations for different views of the same data, has proven to be a successful approach for representation learning (Bachman et al., 2019; Tian et al., 2020; He et al., 2020; Caron et al., 2021). The success of MVSSL has motivated the research of several families of MVSSL approaches, such as _contrastive_(Chen et al., 2020), _clustering_- (Caron et al., 2018), and _distillation_-based methods (Grill et al., 2020). However, the effort to understand all of them under a common umbrella lags behind the development of new methods. In this work, we aim to further our understanding of MVSSL methods by identifying any mechanisms contributing to maximizing MI, and to what extent they do so.
The connection of the contrastive MVSSL methods to MI maximization is well established through the InfoNCE bound (van den Oord et al., 2018; Poole et al., 2019), which, in the MVSSL context, lower bounds the MI between the learned representations of different views. Tian et al. (2020) and Tsai et al. (2020) argue that maximizing this MI is attractive as a representation learning target since, when the views are selected carefully, it extracts task-relevant and discards task-irrelevant information.
The interest in the MI perspective on representation learning, and MVSSL in particular, has been undermined following the work of Tschannen et al. (2020), whose key result is showing that maximizing MI alone is not sufficient for learning good representations. Yet, it is empirically evident that methods based on MI lower bound maximization are competitive with the state of the art, and Tschannen et al. (2020) note that "the performance of these methods depends strongly on the bias that is encoded not only in the encoders, but also on the actual form of the used MI estimators". In our opinion, their results strongly motivate further study
of the mechanisms by which, and to what extent, the MI maximization takes place in representation learning.
In this work, we center our analysis of MVSSL methods around the MI between the learned representations of different views \(Z_{1},Z_{2}\). The MI lower bound we focus on consists of an _entropy_ and a _reconstruction_ term (Gallager, 1968):
\[I(Z_{1};Z_{2})\geq\underbrace{H(Z_{2})}_{\text{Entropy}}+\underbrace{\mathbb{ E}[\log q_{Z_{2}|Z_{1}}(Z_{2})]}_{\text{Reconstruction term}}\coloneqq I_{\textsc{ER}}(Z_{1};Z_{2}),\]
where the \(\log q_{Z_{2}|Z_{1}}\) corresponds to a choice of a similarity function between representations used in MVSSL, e.g., a cosine similarity. We refer to this bound as ER, referring to the _Entropy_ and _Reconstruction_ terms. Focusing on this bound, rather than the InfoNCE, allows us to analyze a wide range of MVSSL methods through the lens of MI.
The work closest in spirit to ours is (Wang and Isola, 2020), which analyzes contrastive MVSSL methods through the lens of _alignment_ and _uniformity_, two metrics which they derive by formulating desiderata for the learned representations. While their motivation was, in light of the results of Tschannen et al. (2020), to offer an alternative interpretation of InfoNCE, other than as a lower bound on MI, we show that the metrics they define coincide with a specific instantiation of the ER MI bound we consider. We generalize their results through the use of the ER bound, which allows us to also analyze the clustering- and distillation-based MVSSL methods.
Our contributions in this work are the following:
* We review how, and to what extent, the major families of MVSSL methods (contrastive, clustering, and distillation-based) maximize MI via the use of the ER bound on MI. Specifically, we show that the clustering-based methods SwAV(Caron et al., 2020) and DeepCluster(Caron et al., 2018) maximize the ER bound and therefore the MI between representations of different views.
* We empirically show that simply substituting the loss function and instead optimizing ER in SimCLR(Chen et al., 2020), BYOL(Grill et al., 2020), and DINO(Caron et al., 2021) results in similar performance while improving resiliency with respect to training with smaller batch sizes or exponential moving average (EMA) coefficients. This is especially important for distillation methods such as BYOL or DINO, as they become resilient to batch size changes without any need for hyperparameter changes or gradient accumulation.
* Finally, we show that it is not necessary for distillation methods like BYOL to maximize entropy to achieve competitive results, although mechanisms such as the softmax centering in DINO and other related architectural constraints prevent the entropy collapse.
## 2 Background
Here, we introduce some notation, the multi-view self-supervised learning setting, and the relevant bounds on MI.
**Notation**\(X\) represents a random variable (RV) with probability mass function or density \(p_{X}\), and \(x\) is its realization. Expectations are denoted as \(\mathbb{E}[f(X)]=\mathbb{E}_{x\sim p_{X}}[f(x)]\). The conditional density for a fixed realization \(x\) is denoted as \(p_{Y|X=x}\). The density \(q_{Y|X}\) is not the real conditional density of \(Y\) given \(X\), but an auxiliary one that serves, e.g., as an optimization target. The mutual information is denoted as \(I(X;Y)\), the Shannon and the differential entropy are both denoted as \(H(X)\), and the Kullback-Leibler divergence between densities \(p\) and \(q\) is denoted as \(D_{\text{KL}}(p\|q)\). A sub-sequence of elements from \(a\) to \(b\) in a sequence \(x\) is denoted as \(x^{(a:b)}\), and all elements except \(x^{(i)}\) as \(x^{(\neq i)}\).
**Multi-view self-supervised learning** In MVSSL, for each data sample \(X\), we generate two (or more) views \(V_{b}\). These views are commonly obtained by using augmentations (Bachman et al., 2019; Tian et al., 2020; Chen et al., 2020; Caron et al., 2020; Zbontar et al., 2021), by leveraging multiple modalities (Radford et al., 2021), or natural views of data (Tian et al., 2020), e.g., multiple camera views of the same scene. Views \(V_{b}\) are chosen or engineered such that most of the semantic information remains unchanged with respect to the original data sample \(X\) and shared between the views (Tian et al., 2020). Each view is then passed through a neural network encoder \(f_{\theta}(\cdot)\) to produce representations \(R_{b}\) which are in turn projected via \(\pi_{\theta}(\cdot)\), usually a small MLP, into a lower dimensional space to yield \(Z_{b}\), where \(\theta\) are the learnable parameters. Typically, the intermediate representations \(R_{b}\) are used for downstream tasks and transfer learning, as that yields better performance than using \(Z_{b}\)(Chen et al., 2020; Bordes et al., 2023). The parameters \(\theta\) are learned by optimizing an objective which encourages the projections \(Z_{b}\) to be predictive of the other branches' outputs \(Z_{(\neq b)}\). This is commonly achieved by optimizing a _similarity_ score, such as the L2 distance. Most of the methods use two views and we will focus on this setting, without loss of generality.1 Since the processing of each view takes place separately and for some methods differs between views, we refer to those separate computation paths as _branches_. See Figure **1** for an illustrative diagram.
Footnote 1: When more than two views are considered, the objective decomposes into a sum of independent sub-objectives based on view pairs, see e.g., Tian et al. (2020) or Caron et al. (2018).
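To make the setting concrete, the following is a skeleton of the generic two-branch MVSSL computation just described (augment, encode with \(f_{\theta}\), project with \(\pi_{\theta}\)); the encoder, projector, and augmentation are toy placeholders rather than any specific method's architecture.

```python
import torch
import torch.nn as nn

class MVSSLBranch(nn.Module):
    """f_theta followed by pi_theta: view V -> representation R -> projection Z."""

    def __init__(self, dim_in=512, dim_r=256, dim_z=128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim_in, dim_r), nn.ReLU())  # f_theta
        self.projector = nn.Linear(dim_r, dim_z)                           # pi_theta

    def forward(self, v):
        r = self.encoder(v)    # R_b: used for downstream tasks
        z = self.projector(r)  # Z_b: used by the MVSSL objective
        return r, z

def two_views(x, noise=0.1):
    """Stand-in for the stochastic augmentations t producing V_1, V_2."""
    return x + noise * torch.randn_like(x), x + noise * torch.randn_like(x)

branch = MVSSLBranch()
x = torch.randn(64, 512)                      # a batch of data samples X
(r1, z1), (r2, z2) = map(branch, two_views(x))
```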
The three families of MVSSL considered in this work are _contrastive_, _clustering_- and _distillation_-based methods. Contrastive methods work by comparing the projections of the two views of the same datum (or _positive pairs_), with a set of projections of different data (or _negative pairs_). The different methods in this category are usually distinguished by
how they define the negative pairs. Most of these methods are derived either from the metric learning literature (Sohn, 2016) or the InfoNCE objective (van den Oord et al., 2018), which is a lower bound on the mutual information between the projections \(I(Z_{1};Z_{2})\). We discuss these methods in detail in Section 3.1. Clustering methods cluster the projections from one branch and use the resulting discrete cluster assignments as targets for the other branch by optimizing a cross-entropy loss (Caron et al., 2018, 2020; Asano et al., 2019). Distillation-based methods design the two branches asymmetrically, using one branch's projections as targets for the other (Grill et al., 2020; Chen and He, 2021; Caron et al., 2021). The two branches, referred to as _teacher_ and _student_, differ. Common differences include gradients being computed only by the student (stop-grad), teacher's parameters being set via an EMA of the student's, and an additional predictor network for the student.
**Mutual information lower bounds** Estimating MI is fundamentally difficult (McAllester and Stratos, 2020) and for gradient-based representation learning, it is common to rely on the gradients of a lower bound on MI without estimating MI directly (Poole et al., 2019). In this work, the core quantity of interest is the MI between MVSSL projections \(I(Z_{1};Z_{2})\). Two MI lower bounds that can be used to optimize this quantity are InfoNCE and ER.
InfoNCE(van den Oord et al., 2018; Poole et al., 2019) is a lower bound on MI. In MVSSL, the MI is between the projections \(Z_{1},Z_{2}\). It is estimated from a sequence of i.i.d. samples of pairs \((Z_{1}^{(1:k)},Z_{2}^{(1:k)})\) from the joint density \(p_{Z_{1},Z_{2}}\):
\[I_{\textsc{Ince}}(Z_{1};Z_{2})\coloneqq\frac{1}{k}\sum_{i=1}^{k}\mathbb{E} \Bigg{[}\log\frac{e^{f(Z_{1}^{(i)},Z_{2}^{(i)})}}{\frac{1}{k}\sum_{j=1}^{k}e^ {f(Z_{1}^{(i)},Z_{2}^{(j)})}}\Bigg{]}\,, \tag{1}\]
where \(f(\cdot,\cdot)\) is a function scoring similarity between vectors, e.g., cosine similarity. Many contrastive methods use it as a loss function in the original or slightly different forms depending on negative sample choice. We discuss the MI maximization in this class of methods in detail in Section 3.1.
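A minimal PyTorch sketch of this estimator, with \(f\) the temperature-scaled cosine similarity, is given below; the batch dimension plays the role of the \(k\) i.i.d. pairs.

```python
import math
import torch
import torch.nn.functional as F

def info_nce(z1, z2, tau=0.1):
    """Estimate I_NCE(Z1; Z2) from k paired projections of shape (k, d)."""
    # Pairwise scores f(z1_i, z2_j) = cos(z1_i, z2_j) / tau.
    sim = F.cosine_similarity(z1[:, None], z2[None, :], dim=-1) / tau  # (k, k)
    k = sim.shape[0]
    # (1/k) sum_i [ f(i, i) - log( (1/k) sum_j e^{f(i, j)} ) ].
    return (sim.diag() - torch.logsumexp(sim, dim=1)).mean() + math.log(k)

z1, z2 = torch.randn(256, 128), torch.randn(256, 128)
print(info_nce(z1, z2))  # never exceeds log k, per the bound
```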
The ER bound is a long standing result in information theory (Gallager, 1968). It can be derived by considering a tractable _reconstruction density_\(q_{Z_{2}|Z_{1}}\) that for MVSSL corresponds to a choice of a similarity function:
\[I(Z_{1};Z_{2})=\mathbb{E}\Bigg{[}\log\frac{q_{Z_{2}|Z_{1}}(Z_{2})}{p_{Z_{2}}(Z_{2})}\Bigg{]}+\underbrace{\mathbb{E}\Big{[}D_{\textsc{KL}}(p_{Z_{2}|Z_{1}}\|q_{Z_{2}|Z_{1}})\Big{]}}_{\geq 0}\geq H(Z_{2})+\mathbb{E}[\log q_{Z_{2}|Z_{1}}(Z_{2})]\coloneqq I_{\textsc{ER}}(Z_{1};Z_{2}). \tag{2}\]
In the MVSSL setting, \(q_{Z_{2}|Z_{1}}\) is a design choice and we are interested in optimizing the parameters of \(\pi_{\theta}\circ f_{\theta}\) such that the resulting density \(p_{Z_{1},Z_{2}}\) maximizes \(I_{\textsc{ER}}(Z_{1};Z_{2})\). The density \(p_{Z_{1},Z_{2}}\) implicitly results from sampling inputs \(X\), possibly transforming them via stochastic transformations \(t\), and then deterministically transforming them through the encoder \(\pi_{\theta}\circ f_{\theta}\) to form \(Z\). The term \(\mathbb{E}[D_{\textsc{KL}}(p_{Z_{2}|Z_{1}}\|q_{Z_{2}|Z_{1}})]\) determines the magnitude of the gap of the \(I_{\textsc{ER}}\) bound.
The term _reconstruction_ originates from information theory, where it often concerns reconstructing a signal from a compressed code and is equal to \(-H(Z_{2}|\hat{Z}_{2})\), where \(\hat{Z}_{2}\) is a RV such that \(Z_{2}-Z_{1}-\hat{Z}_{2}\) is a Markov chain. We also find it more appropriate for reasoning about MVSSL settings such as the right column of Figure 1, where \(Z_{1}\) and \(W_{2}\) belong to different spaces and hence the term _similarity_ seems less accurate.
Intuitively, the _entropy_ and _reconstruction_ terms in the ER bound (2) play different roles in MVSSL.
Figure 1: The MVSSL prototypes. An image \(X\) is transformed with augmentations \(t\) to generate two views \(V\) and projections \(Z\). Dashed and dotted lines indicate loss functions and optional relationships between variables respectively. **Top:** Identical branches: Parameters \(\theta\) are identical across branches and the loss is symmetric. **Bottom:** Asymmetric branches: Parameters \(\theta,\xi\) across branches are different and the loss is asymmetric. **Left:** The projections \(Z\) are not further processed. **Right:** The projections \(Z\) are processed into auxiliary discrete variables \(W\), potentially using another variable \(C\). Parameters \(\theta,\xi\) are optimized such that \(Z\) are predictive of the other branch’s \(W\).
The entropy term determines how much information from one projection _can be learnt_, while the reconstruction term determines how much of this available information _is learnt_. For instance, let the projections lie on the sphere: the more spread out (higher entropy) the projections of different data are, the more revealing (higher mutual information) it is if projections from different views of the same datum are close (lower reconstruction error). Conversely, if one branch projects all data to the same point (lowest entropy, also known as _collapse_), the projections from the other branch cannot reveal any information about them.
**MVSSL for small batch sizes** Small batch sizes degrade the performance of MVSSL methods, especially contrastive ones (Chen et al., 2020; Grill et al., 2020; Caron et al., 2021). Potentially, this is due to the fact that most methods maximize the entropy either explicitly or implicitly, as shown in this paper, and the entropy estimation is limited to \(\log k\) bits for a batch size of \(k\) (McAllester and Stratos, 2020). Some works (HaoChen et al., 2021; Chen et al., 2021; Yuan et al., 2022) addressed this issue and modified existing methods to perform well in the small batch size regime.
## 3 MVSSL and MI optimization
In this section, we reflect on the relationship between different MVSSL methods and the MI. First, we review the known connection between contrastive methods and MI maximization through the InfoNCE bound, as well as the lack thereof. We also show that none of the existing contrastive methods formally maximizes the ER bound, although all of them are a good proxy for it. Next, we show for the first time that the clustering-based methods DeepCluster (Caron et al., 2018) and SwAV (Caron et al., 2020) also optimize the MI through the ER bound. Finally, we interpret the techniques used in distillation-based methods such as EMA (Grill et al., 2020) and softmax centering (Caron et al., 2021) as mechanisms to prevent entropy collapse. The results of this section are summarized in Table 1.
### Contrastive methods
Contrastive learning (CL) methods are the family of MVSSL methods that have been most closely connected to MI maximization in the existing literature and, as such, a good starting point for our analysis. Here, we first review the connections established through the InfoNCE bound and otherwise, before exhibiting the relationship to the ER bound. Summarizing, CL algorithms generally cannot be formally shown to maximize the InfoNCE nor the ER bound due to the violation of the i.i.d. assumption. This is not the case for CMC and methods derived from it, nor for methods using a memory bank like Instance Discrimination (Wu et al., 2018, IR) or MoCo (He et al., 2020; Chen et al., 2020) under particular circumstances; these do maximize the InfoNCE. Nevertheless, as also concluded by Wang and Isola (2020), CL is a good proxy for entropy maximization, and therefore, for MI maximization.
Given the projection of a view of datum \(i\), e.g., \(Z_{2}^{(i)}\), contrastive learning algorithms aim to maximize its similarity with the projection of another view of the same datum, e.g., \(Z_{1}^{(i)}\) (_positive sample_), while making it as different as possible from the projections of a set of _negative samples_\(\mathcal{S}_{\text{neg}}(Z_{2}^{(i)})\). This is achieved by minimizing a cross entropy loss based on a similarity score. Given a batch of \(k\) samples a generic contrastive loss for the second branch is
\[\mathcal{L}_{\text{contr},2}\coloneqq-\frac{1}{k}\sum_{i=1}^{k}\log\frac{e^{f( Z_{2}^{(i)},Z_{1}^{(i)})}}{\sum_{Z^{\prime}\in\mathcal{S}_{\text{neg}}(Z_{2}^{(i)} )}e^{f(Z_{2}^{(i)},Z^{\prime})}} \tag{3}\]
and the full loss is \(\mathcal{L}_{\text{contr}}\coloneqq(\mathcal{L}_{\text{contr},1}+\mathcal{L}_ {\text{contr},2})/2\), where usually \(f=\text{sim}(\cdot)/\tau\), \(\text{sim}(\cdot)\) is the cosine similarity, and \(\tau\) is a temperature parameter. Then, different CL methods are distinguished by how the set of negative samples for a particular sample \(Z_{2}^{(i)}\) is constructed. Note that the negatives might include samples from the other branches.
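The sketch below instantiates the generic loss (3), with the negative set passed in explicitly so that the CMC- and SimCLR-style choices discussed next differ only in how that set is built; the construction shown is illustrative.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(z2, z1, negatives, tau=0.1):
    """Eq. (3) for branch 2: positives (z2_i, z1_i), negatives (k, n_neg, d)."""
    pos = F.cosine_similarity(z2, z1, dim=-1) / tau                  # (k,)
    neg = F.cosine_similarity(z2[:, None], negatives, dim=-1) / tau  # (k, n_neg)
    return -(pos - torch.logsumexp(neg, dim=1)).mean()

k, d = 8, 16
z1 = F.normalize(torch.randn(k, d), dim=1)
z2 = F.normalize(torch.randn(k, d), dim=1)
# CMC-style negatives: all projections from the opposite branch.
cmc_negs = z1[None].expand(k, k, d)
# SimCLR-style negatives: the opposite branch plus one's own branch-mates.
mates = torch.stack([torch.cat([z2[:i], z2[i + 1:]]) for i in range(k)])
simclr_negs = torch.cat([cmc_negs, mates], dim=1)   # (k, 2k - 1, d)
loss = contrastive_loss(z2, z1, cmc_negs)
```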
In CMC (Tian et al., 2020), the negative sample set is composed of all the other projections from the opposite branch, i.e., \(\mathcal{S}_{\text{neg}}(Z_{2}^{(i)})=Z_{1}^{(1:k)}\). Comparing (1) and (3) with these negative samples, we see that CMC maximizes the InfoNCE bound and \(\mathbb{E}[-\mathcal{L}_{\text{CMC}}]\leq I(Z_{1};Z_{2})-\log k\).
The maximization of the InfoNCE bound can be similarly shown for methods that can be derived from the basic CMC, like the full CMC, where more than two views are considered; (Bachman et al., 2019), which adapts DIM(Hjelm et al., 2019) to the basic CMC; and (Tian et al., 2020), which attempts to learn the augmentations that best suit the information maximization.
For SimCLR(Chen et al., 2020), on the other hand, the negative samples are all the projections other than \(Z_{2}^{(i)}\), i.e., \(\mathcal{S}_{\text{neg}}(Z_{2}^{(i)})=Z_{2}^{(\neq i)}\cup Z_{1}^{(1:k)}\). Given such a definition of the negative set, even if all negative samples were identically distributed, the negative samples are not independent as \(Z_{1}^{(j)}\) and \(Z_{2}^{(j)}\) are derived from the same datum \(j\), for all \(j\)s. As shown in (Tschannen et al., 2020), InfoNCE is not maximized when violating the independence assumption. Hence, SimCLR does not maximize the InfoNCE bound. This also holds true for methods that are derived from SimCLR such as (Ramapuram et al., 2021).
Finally, methods like IR or MoCo use representations from a memory bank as negative samples, i.e., \(\mathcal{S}_{\text{neg}}(Z_{2}^{(i)})=Z_{\text{bank}}^{(1:m)}\). In these cases the negative samples can be dependent and are not identically distributed with respect to \(Z_{2}^{(i)}\). However, Wu et al. (2020) showed that under certain mild conditions on the distribution of these samples the contrastive loss used in these methods is a lower bound on the
InfoNCE, and thus optimizing it also maximizes MI.
**Relationship with the ER bound** None of the contrastive methods above directly translates to an optimization of the ER bound, even if it may appear so. In the context of (3), if we consider a density s.t. \(q_{Z_{2}|Z_{1}=z_{1}}(z_{2})\propto\exp f(z_{2},z_{1})\), the expected value of the first term corresponds to the reconstruction term in (2), and when \(f(\cdot,\cdot)\) is the cosine similarity with temperature \(\tau\), the density \(q_{Z_{2}|Z_{1}=z_{1}}\) corresponds to a von Mises-Fisher density with mean direction \(z_{1}\) and concentration parameter \(1/\tau\). However, as shown above, in all methods analyzed, the negative samples are either not independent between themselves (as in SimCLR), or not identically distributed with respect to the positive sample (as in MoCo), or the set contains the positive pair itself (as in CMC). Therefore, the log-denominator in (3) does not correspond to an unbiased kernel density estimate (KDE, Joe (1989)) of \(p_{Z_{2}}\), and hence its expectation is not necessarily the entropy \(H(Z_{2})\) from (2).
Nonetheless, all these methods force the projections to be maximally separated from the negative samples in a convex set (usually the hypersphere). Moreover, the highest entropy distribution on a convex set is precisely the uniform distribution on that volume. Hence, the contrastive loss, even with non-i.i.d. negative samples, is a good proxy for entropy maximization, and therefore, for MI maximization. Wang and Isola (2020) make a similar observation and conclude that maximizing the uniformity of the samples in the projections' space is required for good performance.
**Caveats** As seen above, most current analyses for CL methods require the i.i.d. assumption, which is not usually met due to the use of batch normalization. The breaking of the independence assumption is important as it can break the InfoNCE results (Tschannen et al., 2020; Wu et al., 2020). Nonetheless, it does not discredit that the result of the KDE is a good proxy to maximize the entropy.
### Clustering-based methods
In this section, we show that both DeepCluster(Caron et al., 2018; Asano et al., 2019) and SwAV(Caron et al., 2020) maximize the ER lower bound on the MI between the projections of different views of the data \(I_{\texttt{ER}}(Z_{1};Z_{2})\).
The key observation underlying the results in this section is that DeepCluster and SwAV generate a discrete surrogate of the projections, e.g., for the second branch \(W_{2}=\phi(Z_{2})\), and that they maximize the ER bound on \(I(Z_{1};W_{2})\leq I(Z_{1};Z_{2})\), where the inequality holds by the data processing inequality. For the rest of the section, let \(\mathcal{Z}\subseteq\mathbb{R}^{d}\) and \(\mathcal{W}=\{1,\ldots,m\}\).
DeepCluster has an asymmetric setting with \(\xi=\theta\) (Figure 1**d**). First, the cluster assignments \(W_{2}^{(i)}=\phi(Z_{2}^{(i)})\) of all the \(n\) data points are obtained solving the problem
\[C^{\star}\in\operatorname*{arg\,inf}_{C\in\mathbb{R}^{d\times m}}\frac{1}{n} \sum_{i=1}^{n}\lVert Z_{2}^{(i)}-Cp_{2}^{(i)}\rVert^{2},\]
with \(p_{2}^{(i)}\in\{0,1\}^{m}\) and \(\lVert p_{2}^{(i)}\rVert_{0}=1\), where \(C^{\star}\) represent the \(m\) centroids of the clusters in \(\mathcal{Z}\) and \(p_{2}^{(i)}\) is the p.m.f. of \(W_{2}^{(i)}\) given \(Z_{2}^{(i)}\).2 Then, the parameters \(\theta\) are optimized by minimizing the cross entropy
Footnote 2: Asano et al. (2019) obtain the clusters solving an optimal transport problem similar to SwAV.
\[\mathcal{L}_{\texttt{DeepCluster}}\coloneqq-\frac{1}{k}\sum_{i=1}^{k}\Big{(} p_{2}^{(i)}\Big{)}^{\intercal}\log\Big{(}\mathsf{s}\circ g_{\theta}(Z_{1}^{(i)}) \Big{)},\]
where \(g_{\theta}:\mathcal{Z}\rightarrow\mathbb{R}^{m}\) is a small predictor network, and \(\mathsf{s}\) is the softmax operator. Note that \(Z\) also depends on \(\theta\) via \(Z=\pi_{\theta}\circ f_{\theta}(V)\), see Figure 1. With \(q_{W_{2}|Z_{1}=z_{1}}=\mathsf{s}\circ g_{\theta}(z_{1})\), _this optimization precisely amounts to maximizing the reconstruction term in the ER bound for \(I(Z_{1};W_{2})\)_. Furthermore, to prevent degenerate solutions, Caron et al. (2018) sample the images of each batch based on a uniform distribution over cluster assignments, i.e., for each batch \(p_{W_{2}}\approx\frac{1}{k}\sum_{i=1}^{k}p_{2}^{(i)}\) is almost uniform. Through this, _the entropy \(H(W_{2})\) is approximately maximized_. Combined with the maximization of the reconstruction term via \(\mathcal{L}_{\texttt{DeepCluster}}\), this implies _DeepCluster maximizes the ER MI bound_.
Now, let us turn to SwAV. SwAV has a symmetric setting (Figure 1**b**). We focus on branch \(b=2\), as the analysis is analogous for the other branch. Here, the cluster assignments \(W_{2}^{(i)}=\phi(Z_{2}^{(i)})\) are obtained solving the following optimization problem
\[P_{2}=\operatorname*{arg\,max}_{P\in\mathcal{P}}\bigg{\{}\text{Tr}\Big{(}Z_{2 }^{(1:k)}C^{\intercal}P^{\intercal}\Big{)}+\epsilon H(P)\bigg{\}},\]
where \(Z_{2}^{(1:k)}\in\mathbb{R}^{k\times d}\), \(C\in\mathbb{R}^{m\times d}\) are the \(m\) centroids (or prototypes) in \(\mathbb{R}^{d}\), \(\mathcal{P}=\{P\in\mathbb{R}_{+}^{k\times m}:P^{\intercal}\mathbf{1}_{k}=\mathbf{1}_{m}/m\text{ and }P\mathbf{1}_{m}=\mathbf{1}_{k}/k\}\) is the transportation polytope, and \(\mathbf{1}_{k}\) is the all-ones vector in \(\mathbb{R}^{k}\). Let \(C^{(i)}\) and \(P_{2}^{(i)}\) denote the \(i\)-th row of \(C\) and \(P_{2}\), respectively. In SwAV, both the projections and the prototypes lie on the unit hypersphere, i.e., \(Z^{(i)},C^{(i)}\in\mathbb{S}^{d-1}\), and thus maximizing the dot product is equivalent to minimizing the squared \(\ell_{2}\) distance (Grill et al., 2020). Moreover, to ease the optimization, an entropic regularization is included so the problem can be approximately solved with the Sinkhorn-Knopp algorithm (Sinkhorn, 1974; Cuturi, 2013), where \(H(P_{2})\coloneqq-\sum_{i=1}^{k}\Big{(}P_{2}^{(i)}\Big{)}^{\intercal}\log P_{2}^{(i)}\).
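A minimal sketch of the Sinkhorn-Knopp iteration for this entropically regularized assignment problem is given below; the regularization strength and number of iterations are assumptions, not SwAV's exact settings.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def sinkhorn(scores, eps=0.05, n_iters=3):
    """Approximately project exp(Z C^T / eps) onto the transportation
    polytope {P >= 0 : P^T 1_k = 1_m / m, P 1_m = 1_k / k} by
    alternately rescaling columns and rows (Cuturi, 2013)."""
    P = torch.exp(scores / eps)   # (k, m): k samples, m prototypes
    P /= P.sum()
    k, m = P.shape
    for _ in range(n_iters):
        P /= P.sum(dim=0, keepdim=True); P /= m   # columns sum to 1/m
        P /= P.sum(dim=1, keepdim=True); P /= k   # rows sum to 1/k
    return P

z = F.normalize(torch.randn(32, 64), dim=1)   # projections on the sphere
c = F.normalize(torch.randn(10, 64), dim=1)   # prototypes on the sphere
P2 = sinkhorn(z @ c.T)                        # soft cluster assignments
```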
The \(l\)-th element of \(P_{2}^{(i)}\) can be understood as the probability of assigning \(Z_{2}^{(i)}\) to the cluster \(W_{2}^{(i)}=l\). The optimization aims to have \(P_{2}\in\mathcal{P}\) and therefore \(P_{2}^{\intercal}\mathbf{1}_{k}\approx\mathbf{1}_{m}/m\), which
by this interpretation would mean that \(p_{W_{2}|Z_{2}}\approx\mathbf{1}_{m}/m\) is approximately uniform, thus maximizing the entropy \(H(W_{2}|Z_{2})\). As \(H(W_{2}|Z_{2})\leq H(W_{2})\), this construction _maximizes the desired entropy \(H(W_{2})\) in the ER bound_.
For SwAV, similarly to DeepCluster, _the reconstruction term is maximized_ by minimizing the loss function
\[\mathcal{L}_{\text{SwAV,2}}\coloneqq-\frac{1}{k}\sum_{i=1}^{k}\Big{(}p_{2}^{( i)}\Big{)}^{\intercal}\log\Big{(}\mathsf{s}\big{(}CZ_{1}^{(i)}\big{)}\Big{)},\]
where \(p_{2}^{(i)}=P_{2}^{(i)}/(\mathbf{1}_{m}^{\intercal}P_{2}^{(i)})\) and \(q_{W_{2}|Z_{1}=z_{1}}=\mathsf{s}(Cz_{1})\), hence maximizing the mutual information \(I(Z_{1};W_{2})\). An analogous analysis for the branch \(b=1\) reveals that minimizing \(\mathcal{L}_{\text{SwAV,1}}\) with the entropic regularisation assignment maximizes the mutual information \(I(W_{1};Z_{2})\). In SwAV, the prototypes are treated as parameters of the network (i.e., \(C\in\theta\)) and are updated using stochastic gradient descent to minimize \(\mathcal{L}_{\text{SwAV}}\). This implies SwAV _also maximizes ER_.
### Distillation methods
Distillation methods naturally optimize the reconstruction term of the ER bound since the projection of one branch is optimized to predict the projection of the other branch. However, it is more challenging to understand if and how they might maximize the entropy term of ER, hence, we cannot yet claim they are maximizing the MI. There are some tools, such as EMA or centering, that distillation methods employ that could have an effect on the entropy. In fact, such tools are key to prevent the phenomenon known as collapse (Grill et al., 2020; Caron et al., 2021). Our analysis of their role below does not yield definitive, formal statements. However, it should still shed some light on this question.
First, let us detail how each method maximizes the reconstruction term of the ER bound. We start by analyzing the reconstruction term for the BYOL loss, which is the \(\ell_{2}\) normalised mean squared error
\[\mathcal{L}_{\text{BYOL}}\coloneqq\frac{1}{k}\sum_{i=1}^{k}\big{\|}\overline {g_{\theta}(Z_{1}^{(i)})}-\overline{Z_{2}^{(i)}}\big{\|}^{2}, \tag{4}\]
where \(\overline{x}\coloneqq x/\|x\|\). Since \(\|\overline{x}-\overline{y}\|^{2}=2(1-\text{sim}(x,y))\), optimizing (4) is equivalent to maximizing the reconstruction term in the ER bound with a von Mises-Fisher reconstruction density with mean direction \(\overline{g_{\theta}(Z_{1}^{(i)})}\) and concentration parameter 1.
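A minimal sketch of this loss, together with a numerical check of the identity \(\|\overline{x}-\overline{y}\|^{2}=2(1-\text{sim}(x,y))\), is given below; the predictor output and teacher projection are placeholder tensors.

```python
import torch
import torch.nn.functional as F

def byol_loss(pred_student, z_teacher):
    """Eq. (4): squared L2 distance between l2-normalized vectors."""
    p = F.normalize(pred_student, dim=-1)         # normalized g_theta(Z_1)
    z = F.normalize(z_teacher.detach(), dim=-1)   # stop-grad on the teacher
    return (p - z).pow(2).sum(dim=-1).mean()

p, z = torch.randn(4, 8), torch.randn(4, 8)
lhs = byol_loss(p, z)
rhs = (2 - 2 * F.cosine_similarity(p, z, dim=-1)).mean()
assert torch.allclose(lhs, rhs, atol=1e-6)  # the identity used above
```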
For DINO, the loss is similar to the one used by the clustering-based methods, namely

\[\mathcal{L}_{\text{DINO}}\coloneqq-\frac{1}{k}\sum_{i=1}^{k}\mathsf{s}\big{(}(Z_{2}^{(i)}-C)/\tau_{2}\big{)}^{\intercal}\log\Big{(}\mathsf{s}(Z_{1}^{(i)}/\tau_{1})\Big{)}, \tag{5}\]
where \(C\) is a centering variable, and \(\tau_{1},\tau_{2}\) are temperature hyperparameters. Letting \(p_{W_{2}|Z_{2}=z_{2}}=\mathsf{s}\big{(}(z_{2}-C)/\tau_{2}\big{)}\) and \(q_{W_{2}|Z_{1}=z_{1}}=\mathsf{s}(z_{1}/\tau_{1})\) shows that optimizing (5) is equivalent to maximizing the reconstruction term in the ER bound of \(I(Z_{1};W_{2})\leq I(Z_{1};Z_{2})\).
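The following sketch shows the corresponding computation, with the teacher output centered and sharpened before the cross-entropy, and the EMA update of the center \(C\) discussed below; the temperatures and EMA rate are illustrative values.

```python
import torch
import torch.nn.functional as F

def dino_loss(z_student, z_teacher, center, tau_s=0.1, tau_t=0.04):
    """Eq. (5): cross-entropy from the centered, sharpened teacher
    distribution p(W2 | Z2) to the student distribution q(W2 | Z1)."""
    t = F.softmax((z_teacher.detach() - center) / tau_t, dim=-1)
    log_s = F.log_softmax(z_student / tau_s, dim=-1)
    return -(t * log_s).sum(dim=-1).mean()

def update_center(center, z_teacher, mu=0.9):
    """C <- mu * C + (1 - mu) * mean_i Z2^(i)."""
    return mu * center + (1 - mu) * z_teacher.mean(dim=0)

center = torch.zeros(64)
z1, z2 = torch.randn(32, 64), torch.randn(32, 64)
loss = dino_loss(z1, z2, center)
center = update_center(center, z2)
```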
Let us now analyze the potential effect of the stabilizing algorithms used by distillation methods on the entropy of the projections to understand if distillation methods also maximize the entropy term of the ER bound. We focus on the role of EMA and centering.
EMA introduces an asymmetry between the teacher and the student in distillation methods (Figure 1b and d). Specifically, the teacher's parameters \(\xi\) track the student's parameters \(\theta\) during the optimization with the use of EMA: \(\xi\leftarrow\lambda\xi+(1-\lambda)\theta\) for some \(\lambda\in(0,1)\) close to 1. The hypothesis is two-fold: on the one hand, while \(\xi\) does depend on \(\theta\), the dependence is weak enough that \(H(Z_{2})\) or \(H(W_{2})\) does not degrade to values yielding trivial bounds. This would happen in the extreme case of \(\xi=\theta\), for which minimizing the respective losses would have an optimal solution \(\theta^{\star}\) that is highly concentrated or degenerate around one point, under which \(H(Z_{2})\rightarrow-\infty\) or \(H(W_{2})=0\), which clearly would not maximize the MI. On the other hand, the dependence of \(\xi\) on \(\theta\), while weak, ensures that the projections \(Z_{2}\) capture information about the data. If this were not the case, e.g., by fixing \(\xi\) to random values, the then random projections \(Z_{2}\) would contain very little information about \(X\). In this case, despite maximizing \(I(Z_{1};Z_{2})\) via minimizing the respective losses and simultaneously ensuring constant entropy \(H(Z_{2})\) (due to the random projections), the information learned would still be little since, by the data processing inequality, \(I(Z_{1};Z_{2})\leq I(X;Z_{2})\). Through their choice of \(\lambda\), BYOL and DINO balance this trade-off between not maximizing MI due to minimal entropy and maximizing MI only up to a small achievable value under constant entropy, but the resulting effect on entropy and MI maximization is hard to estimate.
Beyond EMA, DINO also promotes a high conditional entropy \(H(W_{2}|Z_{2})\) through the centering before the softmax operation. Like in SwAV, this avoids collapse as it controls the entropy \(H(W_{2})\) via \(H(W_{2}|Z_{2})\leq H(W_{2})\). To be precise, the center \(C\) in (5) is updated with an EMA of the previous projections, that is, \(C\leftarrow\mu C+\frac{1-\mu}{k}\sum_{i=1}^{k}Z_{2}^{(i)}\) for some \(\mu\in(0,1)\). Then, the right balance between this EMA and the temperature parameters \(\tau_{1}\) and \(\tau_{2}\) adjusts how uniform the conditional density \(p_{W_{2}|Z_{2}}\) is. This promotes a high conditional entropy \(H(W_{2}|Z_{2})\). However, having a completely uniform conditional density means that \(p_{W_{2}|Z_{2}}=p_{W_{2}}\) and thus no information of \(Z_{2}\) is in \(W_{2}\). For this reason, Caron et al. (2021) need to also include a sharpening of the conditional density via the temperature \(\tau_{2}\). Therefore, the degree of maximization of \(H(W_{2})\) is hard to quantify as it depends on the chosen values of the parameters \(\mu,\tau_{1},\) and \(\tau_{2}\).
To summarize, the use of both EMA and centering is crucial for distillation methods to work, and they do affect the entropy term of the ER bound. However, it is not yet possible to quantify these effects exactly, hence, one cannot make any statement that distillation methods maximize MI, despite clearly maximizing the reconstruction term of the ER bound.
## 4 Optimizing the ER bound in practice
In this section, we describe different ways to maximize the ER bound regardless of the MVSSL prototype (see Figure 1). That is, we will describe how to estimate the entropy and the reconstruction term in (2) when the projections are not processed (Figure 1a and c). The case when discrete surrogates are generated (Figure 1b and d) is discussed in Appendix A.2. Then, the objective resulting from such an estimation is maximized. Later, in Section 5, we use these approaches on top of the architectures of current contrastive and distillation-based methods and observe that their performance is on par (or slightly better) than their original formulation, and that they become more resilient to the choice of the batch size and EMA coefficient without the need for neither adjusted hyper-parameters nor accumulated gradients.
### Maximizing MI between projections
We consider an estimation of the ER bound of the MI between the projections \(I_{\texttt{ER}}(Z_{1};Z_{2})\). Let \(f(z_{2},z_{1})\) be a function measuring the similarity between \(z_{1}\) and \(z_{2}\). Choosing the reconstruction density \(q_{Z_{2}|Z_{1}=z_{1}}(z_{2})\propto\exp f(z_{2},z_{1})\), an unbiased estimate of the reconstruction term is given by
\[\widehat{\text{Rec}}_{\text{cont}}\coloneqq\frac{1}{k}\sum\nolimits_{i=1}^{k }f(Z_{2}^{(i)},Z_{1}^{(i)}), \tag{6}\]
where the term associated with the normalizing constant of the density is discarded as it does not affect the optimization. To estimate the entropy term, one may consider different variants of KDEs. For example, both the KDE of Joe (1989)
\[\hat{H}(Z_{2})_{\text{KDE,Joe}}\coloneqq-\frac{1}{k}\sum\limits_{i=1}^{k}\log \hat{p}_{Z_{2}}(Z_{2}^{(i)}) \tag{7}\]
or the plug-in estimator (Krishnamurthy and Wang, 2015)
\[\hat{H}(Z_{2})_{\text{KDE,plug-in}}\coloneqq-\sum\limits_{i=1}^{k}\hat{p}_{Z_{ 2}}(Z_{2}^{(i)})\log\hat{p}_{Z_{2}}(Z_{2}^{(i)}) \tag{8}\]
can be used (both give similar results in practice, see Appendix D). Here, \(\hat{p}_{Z_{2}}(z)\) is Joe (1989)'s KDE of \(p_{Z_{2}}\):
\[\hat{p}_{Z_{2}}(z)\coloneqq\frac{1}{kh^{d}}\sum\limits_{j=1}^{k}q\bigg{(}\frac{z-Z_{2}^{(j)}}{h}\bigg{)}, \tag{9}\]
with kernel \(q(\cdot)\) and bandwidth \(h\in\mathbb{R}_{+}\). Both the reconstruction and the entropy estimators are (asymptotically) unbiased and converge in mean squared error (MSE) with an appropriate choice of the bandwidth (see Appendix A). The selection of an optimal kernel bandwidth can be seen as a limitation of ER. While minimizing the number of hyper-parameters would be desirable, the bandwidth plays a similar role to the temperature term typically tuned in other SSL methods, e.g., (Chen et al., 2020). So much so that we adopted as bandwidth the same temperature parameter specified by the SSL methods on top of which we incorporate ER.
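Putting the pieces together, below is a minimal sketch of the resulting ER objective with a von Mises-Fisher-style kernel whose bandwidth plays the role of the temperature; normalizing constants that do not affect the gradients are dropped, so the values are meaningful only up to additive constants.

```python
import math
import torch
import torch.nn.functional as F

def er_objective(z1, z2, tau=0.1):
    """hat{I}_ER = hat{H}(Z2) + hat{Rec}, up to additive constants.

    Reconstruction (6): mean similarity of positive pairs.
    Entropy (7, 9): Joe's KDE with kernel q(u) propto exp(sim / tau).
    """
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    recon = (z1 * z2).sum(dim=-1).mean() / tau     # eq. (6)
    sim = z2 @ z2.T / tau                          # kernel log-values
    k = z2.shape[0]
    log_p_hat = torch.logsumexp(sim, dim=1) - math.log(k)  # eq. (9)
    entropy = -log_p_hat.mean()                    # eq. (7)
    return entropy + recon                         # maximize this

z1, z2 = torch.randn(256, 128), torch.randn(256, 128)
er = er_objective(z1, z2)
```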
**Connection to CL** When the chosen kernel \(q\) is such that \(q(z_{2}-z_{1})=f(z_{2},z_{1})\), then maximizing the ER bound with estimators (6, 7) is _equivalent to contrastive learning_ with the negative samples being \(\mathcal{S}_{\text{neg}}(Z_{2}^{(i)})=Z_{2}^{(\neq i)}\), up to constants independent of the optimization parameters.
**Connection to Uniformity and Alignment** The _alignment_ and _uniformity_ objective of Wang and Isola (2020) is a relaxation of the ER objective with estimators (6, 7). Let \(f(z_{2},z_{1})=\|z_{2}-z_{1}\|_{2}^{2}\); then the estimator (6) recovers their alignment term. Consider also a kernel \(q(z_{2}-z_{1})\propto\exp\big{(}-t\|z_{2}-z_{1}\|_{2}^{2}\big{)}\); then Joe (1989)'s KDE (7) recovers their uniformity term after applying Jensen's inequality.3 Hence, our analysis can be considered a natural extension of their analysis to other MVSSL families.
Footnote 3: The application of Jensen’s inequality makes Wang and Isola (2020)’s objective a looser MI lower bound than the ER bound.
**Connection to Identifiability** Under certain assumptions, MVSSL partitions the latent representations into a content component, invariant to augmentations, and a style component, which can change with augmentations (Von Kugelgen et al., 2021). The ER objective recovers their main theorem (Theorem 4.4) with a reconstruction density \(q_{Z_{2}|Z_{1}=z_{1}}(z_{2})\propto\exp\big{(}-\|z_{2}-z_{1}\|_{2}^{2}\big{)}\). Moreover, CL methods implicitly invert the underlying generative model of the observed data, again under certain assumptions (Zimmermann et al., 2021). We show that the same is true for methods maximising the ER bound, revealing that the main reason for this inversion is not the contrastive nature of the methods, but that they maximize the mutual information (see Appendix B).
\begin{table}
\begin{tabular}{l c c c} \hline \hline Model & InfoNCE & ER & Violation \\ \hline CMC & \(\checkmark\)* & (\(\checkmark\)) & - \\ SimCLR & \(\times\) & (\(\checkmark\)) & negatives not i.i.d. \\ IR, MoCo & (\(\checkmark\))* & (\(\checkmark\)) & negatives not i.i.d. \\ \hline DeepCluster & \(\times\) & \(\checkmark\) & - \\ SwAV & \(\times\) & \(\checkmark\) & - \\ \hline BYOL & \(\times\) & (\(\checkmark\)) & not max. entropy \\ DINO & \(\times\) & (\(\checkmark\)) & not max. entropy \\ \hline \hline \end{tabular}
\end{table}
Table 1: The relation between existing MVSSL methods and the maximization of MI via the InfoNCE and ER lower bounds. \(\checkmark\): formally shown, (\(\checkmark\)): approximately or empirically, \(\times\): no formal or empirical evidence, *: previously known (Section 3.1).
### Dealing with an EMA
The maximization of the ER bound is compatible with an asymmetric structure (Figure 1c and d) where the teacher's parameters \(\xi\) are updated with an EMA of the student's parameters \(\theta\). The objective is equivalent to the maximization of the symmetric bound with an additional stop_gradient operator on the teacher's projections. The optimization of the reconstruction of the teacher's projections from the student's is unaffected. Then, since the entropy of the student's projections \(Z\) (or surrogates \(W\)) is maximized, it will also be maximized for the teacher, which is only updated through the EMA. This is confirmed empirically in Section 5.
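A minimal sketch of this asymmetric setup is given below: the teacher is a frozen copy whose parameters follow an EMA of the student's, and no gradient flows through its projections; module shapes are placeholders.

```python
import copy
import torch

@torch.no_grad()
def ema_update(teacher, student, lam=0.99):
    """xi <- lam * xi + (1 - lam) * theta."""
    for p_t, p_s in zip(teacher.parameters(), student.parameters()):
        p_t.mul_(lam).add_((1 - lam) * p_s)

student = torch.nn.Linear(32, 16)
teacher = copy.deepcopy(student)
for p in teacher.parameters():
    p.requires_grad_(False)   # stop_gradient on the teacher branch

x = torch.randn(8, 32)
z_student, z_teacher = student(x), teacher(x)
# ... compute the ER objective on (z_student, z_teacher), backprop,
# step the student's optimizer, and then:
ema_update(teacher, student)
```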
## 5 Experiments
In this section, we show that replacing the objective of common MVSSL methods with the ER bound results in competitive performance while being more robust to the changes in batch size and EMA coefficient without changing any other hyperparameters. Further experiments are included in Appendices E and G and the code is available at [https://github.com/apple/ml-entropy-reconstruction](https://github.com/apple/ml-entropy-reconstruction).
**Experimental Setup** For all experiments, we pre-train a resnet50 (He et al., 2016) on the ImageNet (Deng et al., 2009) training set. We train for 400 epochs and, following Chen et al. (2020), we use a batch size of 4096 with the LARS optimizer (You et al., 2017) with linear warmup, a single-cycle cosine annealed learning rate schedule, and a base learning rate of \(0.3\) (Goyal et al., 2017). We chose BYOL, DINO, and SimCLR as baseline methods, with CMC results presented in Appendix E. For each model except DINO, we substitute its objective function with the continuous estimate of the ER bound from Section 4,4 while keeping the original set of augmentations and their original projection heads. For DINO we estimate the entropy as the average of the discrete plug-in entropy among replicas. CMC shares augmentations and projection head with SimCLR.
Footnote 4: We use the plug-in estimator instead of Joe (1989)’s, but we observe both to perform almost identically (Appendix D).
**Training with ER yields competitive accuracy** We train a linear classifier on top of the ImageNet pre-trained features and report the test accuracy in Table 2. For all models, we kept their original hyperparameters. For SimCLR, adding ER increases test accuracy (\(+0.72\)) while for BYOL and DINO it decreases slightly (\(-1.5\) and \(-1.65\), respectively).
**ER further improves distillation methods' stability with small batch sizes and small EMA coefficients** The two rightmost columns of Table 2 show the performance degradation when training with batch size \(512\) and an EMA coefficient of \(0.8\) instead of \(0.99\) (we observe similar results with a batch size of 1024 or an EMA coefficient of \(0.6\)). The original versions of BYOL and DINO exhibit the largest degradation of all algorithms. This can also be observed in Figure 2. Note that Grill et al. (2020) provided recipes to train BYOL with smaller batch sizes by retuning hyperparameters or by gradient accumulation. They also observed that the batch size had a strong influence on the optimal EMA coefficient. Here, we limit our observation to what happens when nothing else is changed in the optimization. Interestingly, we observe that ER significantly improves the resilience towards the change in batch size for all methods tested, especially for BYOL where the degradation is reduced from \(-20.32\) to \(-0.21\). Regarding the EMA coefficient, we observe a degradation of \(-8.25\) for DINO and \(-2.62\) for BYOL which are reduced to \(-0.92\) and \(-0.41\) respectively with ER.
In fact, we find that training with ER outperforms recent literature on small-batch SSL training (HaoChen et al., 2021; Chen et al., 2021; Yuan et al., 2022). For example, for SimCLR with batch size 512, we report an accuracy of \(69.85\) (Table 2) while the most recent of these works reports an accuracy of \(68.8\)(Yuan et al., 2022).
**BYOL does not maximize entropy** Figure 2 shows the evolution of entropy and reconstruction during training (top and middle) and the ImageNet accuracy (bottom) (see Appendix F for clustering methods like DeepCluster and SwAV). We observe that methods trained with ER clearly maximize entropy, while others such as BYOL with batch size 4096 display a slight decrease in entropy while still achieving high accuracy. This might provide an empirical answer to the question left open in Section 3.3 and indicate that BYOL does not maximize entropy. The EMA was introduced to avoid representation collapse in the absence of negative samples. When properly tuned, its effect seems sufficient to maintain a high entropy and create discriminative representations. Nevertheless, one could argue that it does not take full advantage of the overall space (or we would observe higher entropy) and that the accuracy is very sensitive to its tuning (see Table 2 and Figure 2). In addition to the EMA, DINO introduces a softmax centering
\begin{table}
\begin{tabular}{l c c c c} Model & MI & Acc (\(\uparrow\)) & \(\Delta 512(\downarrow)\) & \(\Delta\)EMA\({}_{0.8}(\downarrow)\) \\ \hline DINO &? & 75.59 & 6.76 & 8.25 \\ DINO + ER & (\(\checkmark\)) & 73.39 & 2.35 & 0.92 \\ \hline BYOL &? & 73.42 & 23.65 & 2.63 \\ BYOL + ER & (\(\checkmark\)) & 71.94 & 2.35 & 0.41 \\ \hline SimCLR & \(\times\) & 70.23 & 2.17 & - \\ SimCLR + ER & \(\checkmark\) & 70.86 & 1.01 & - \\ \hline \end{tabular}
\end{table}
Table 2: Training with ER yields competitive performance while improving stability with small batch size and EMA coefficients. Model: set of augmentations, loss, and projection head. \({}^{*}\)Our implementation. ER: the original loss has been substituted by the ER bound (2). MI: known to maximize MI. (\(\checkmark\)): no formal proof (Section 4.2). \(\Delta\)**512**: accuracy drop with batch size 512. \(\Delta\)**EMA\({}_{0.8}\): accuracy drop with EMA coefficient of \(0.8\).
procedure to keep the output probabilities in a certain range. In Figure 2, we observe that DINO's entropy and accuracy become extremely low when softmax centering is deactivated. Notably, _adding ER makes it possible to train DINO without softmax centering_, which confirms that softmax centering plays a role in keeping the entropy high (Section 3.3).
**ER is not sensitive to the entropy estimator** All ER models except DINO used a KDE-based entropy estimator. To gain more insight into the effect of the estimator, we train a continuous KDE-based version of DINO + ER and compare it with the one reported in Table 2, which uses an exact discrete estimator. We find no significant differences between their performances (see Appendix E).
## 6 Discussion
We showed to what extent different MVSSL methods maximize MI through the ER bound on the MI. First, we revisited previous knowledge about the maximization of MI in contrastive methods and reinterpreted it in the context of ER. Second, we showed that two clustering-based methods, DeepCluster and SwAV, maximize the ER bound. Third, we interpreted two distillation-based methods, BYOL and DINO, as maintaining a stable level of entropy while maximizing the reconstruction term of the ER bound.
We explained how ER can be optimized in most MVSSL frameworks, and we showed empirically that SimCLR, BYOL, and DINO, when optimizing the ER bound, achieve performance competitive with that of their respective original versions. We also showed that it is not necessary for distillation methods like BYOL to maximize entropy to achieve competitive results. This is an interesting observation in the context of (Wang and Isola, 2020), who conclude that both alignment and uniformity are required for contrastive methods to work well; we showed that, at least for distillation methods, maximizing uniformity is not necessary. Uniformity (or high entropy), however, seems to be correlated with resilience, as all methods became more resilient to smaller batch sizes and/or EMA coefficients when maximizing ER, with a particularly pronounced effect for distillation methods. Understanding the exact mechanism behind these behaviors remains an exciting subject of future work.
Finally, our theoretical analysis in Section 4.1 and Appendix B indicates that methods that explicitly maximize the ER bound should yield desirable identifiability properties. We believe that exploring this result in practice is an exciting avenue for future research.
## Acknowledgments
The authors thank the reviewers for their valuable feedback, which resulted in new experiments and clarifications that strengthened the paper, as well as the colleagues at Apple for productive discussions that helped shape and fortify the paper, especially Effrosyni Simou, Michal Klein, Tatiana Likhomanenko, and R. Devon Hjelm.
Borja Rodriguez-Galvez was funded, in part, by the Swedish research council under contract 2019-03606.
Figure 2: ER maximizes entropy during training (top) while it is unclear for distillation methods. ER allows training DINO w/o softmax centering. Top: Entropy dynamics while training SimCLR, BYOL, DINO w/ and w/o ER, and DINO w/ and w/o softmax centering for 400 epochs. Middle: Reconstruction loss dynamics. Bottom: top-1 accuracy on the ImageNet test set (linear probe trained online). |
2310.13619 | Semi-supervised multimodal coreference resolution in image narrations | In this paper, we study multimodal coreference resolution, specifically where
a longer descriptive text, i.e., a narration is paired with an image. This
poses significant challenges due to fine-grained image-text alignment, inherent
ambiguity present in narrative language, and unavailability of large annotated
training sets. To tackle these challenges, we present a data efficient
semi-supervised approach that utilizes image-narration pairs to resolve
coreferences and narrative grounding in a multimodal context. Our approach
incorporates losses for both labeled and unlabeled data within a cross-modal
framework. Our evaluation shows that the proposed approach outperforms strong
baselines both quantitatively and qualitatively, for the tasks of coreference
resolution and narrative grounding. | Arushi Goel, Basura Fernando, Frank Keller, Hakan Bilen | 2023-10-20T16:10:14Z | http://arxiv.org/abs/2310.13619v1 | # Semi-supervised Multimodal Coreference Resolution in Image Narrations
###### Abstract
In this paper, we study multimodal coreference resolution, specifically where a longer descriptive text, _i.e._, a _narration_ is paired with an image. This poses significant challenges due to fine-grained image-text alignment, inherent ambiguity present in narrative language, and unavailability of large annotated training sets. To tackle these challenges, we present a data efficient semi-supervised approach that utilizes image-narration pairs to resolve coreferences and narrative grounding in a multimodal context. Our approach incorporates losses for both labeled and unlabeled data within a cross-modal framework. Our evaluation shows that the proposed approach outperforms strong baselines both quantitatively and qualitatively, for the tasks of coreference resolution and narrative grounding.
## 1 Introduction
In linguistic processing, coreference resolution is a standard task that aims to identify referring expressions such as noun phrases and pronouns that refer to the same entity. It is fundamental to many standard problems including question answering Kwiatkowski et al. (2019); Das et al. (2017), sentiment analysis Cambria et al. (2017); Medhat et al. (2014), summarization Gupta and Lehal (2010); Shi et al. (2021) and machine translation Lopez (2008); Bahdanau et al. (2014); Wu et al. (2016). In this work, we focus on a multimodal coreference resolution (MCR) scenario where the coreferences occur in a narration paired with an image and also link to an image region as shown in Figure 1. Here resolving coreferences is challenging, as mentions referring to different entities can be very similar when encoded by a language model, _e.g._, _one boy_, _the other boy_, _the boy_. Hence it demands a fine-grained understanding of each modality and as well as across them. In particular, it requires simultaneously grounding instances by identifying fine-grained visual details (_e.g._, disambiguating them by recognizing the action 'crying', spotting 'white color t-shirt and cream color short' or 'a white color sticker on the head'), and capturing long-range dependency across sentences (_e.g._, _two small boys_ and _their_).
MCR has recently gained increasing attention, with several notable studies Ramanathan et al. (2014); Huang et al. (2018); Cui et al. (2021); Parcalabescu et al. (2021); Das et al. (2017); Guo et al. (2022); Goel et al. (2022); Hong et al. (2023). However, many of them focus on images with simple short sentences, such as 'A woman is driving a motorcycle. Is she wearing a helmet?' Das et al. (2017); Parcalabescu et al. (2021), or are limited to identifying movie characters or people Ramanathan et al. (2014); Cui et al. (2021). More recently, Goel et al. (2022) introduced a challenging and unconstrained MCR problem (see Figure 1) including a dataset, Coreferenced Image Narratives (CIN), with both people and objects as referents with long textual descriptions (narrations). As manually annotating a large dataset with coreferencing and grounding labels is expensive, the authors provide annotations only for evaluation purposes. They also propose a weakly supervised method that learns to jointly ground mentions in images and use them as anchors along with prior linguistic rules Lee et al. (2011) to group coreferring mentions from only image and narration pairs without the annotations. The method has multiple shortcomings: (1) weakly supervised grounding fails to disambiguate multiple instances of the same object class, boy (_one boy_, _the other boy_), (2) language rules such as _exact match of phrases_ are either too strict or too generic, _e.g._, _pronoun match_, linking pronouns to one antecedent (_one boy_, _he_, _he_, _his_), and (3) they require an additional modality, mouse traces, to learn coreferences, which can be expensive to obtain.
Figure 1: Example image-narration pair from the Coreferenced Image Narratives dataset Goel et al. (2022). Phrases marked in the same color corefer to the same entity and are also grounded in the image. We do not show singletons for brevity.
Motivated by these limitations, we argue that it is difficult to successfully resolve coreferences from only image-narration pairs in cases where multiple instances of the same object category are present; this situation is common, and each instance typically coincides with a mention in the narration. Since full manual annotation of coreference chains and bounding boxes is expensive, we propose to resolve coreferences and ground mentions in a semi-supervised setting where only a few data samples are labeled. Our approach involves a customized multi-modal fusion model that combines image region features and mention features from narrations through cross-attention Vaswani et al. (2017); Li et al. (2021). We investigate different task-specific losses for training on labeled and unlabeled data, and show that naively combining training on the labeled and pseudo-labeled data suffers from severe overfitting Arazo et al. (2020). Hence, we propose a robust loss function and a thresholding-based training scheme to effectively learn from the unlabeled set. This novel approach results in consistent performance improvements with the inclusion of unlabeled data during training.
Our main contributions are (1) a vision-language framework for MCR trained on a small labeled and an unlabeled dataset, (2) novel task-specific losses (on both labeled and pseudo-labeled data) for learning joint multi-modal embeddings for coreference resolution while simultaneously improving narrative grounding, (3) extensive evaluation of our proposed method on the CIN dataset and ablation studies to validate our design choices, showing consistent performance gains compared to baselines on coreference resolution and narrative grounding.
## 2 Related work
**Multimodal coreference resolution.** MCR involves comprehending the contextual information in language and establishing connections with specific regions in an image. Recently, considerable efforts have been dedicated to developing datasets that can effectively address this intricate task. Parcalabescu et al. (2021) introduced the VALSE dataset, which encompasses various coreference scenarios. However, this dataset focuses on the downstream task of visual question answering without evaluating coreference resolution or grounding. Hence, we evaluate our method on CIN dataset Goel et al. (2022) that contains coreference chains and grounding annotations. Another approach to MCR datasets involves linking people's names mentioned in the text to corresponding images and resolving pronouns that connect to those specific names Ramanathan et al. (2014); Cui et al. (2021); Hong et al. (2023). However, our main focus is to resolve coreferences in a generic scenario (with visual complexity) unlike the others that are either limited to only people names/characters Ramanathan et al. (2014); Cui et al. (2021); Hong et al. (2023) or have simple sentences Das et al. (2017); Parcalabescu et al. (2021).
**Vision-language learning.** Existing work on vision and language understanding employs either pre-trained object detector features He et al. (2017); Ren et al. (2015) as an image encoder, ViT Dosovitskiy et al. (2020) or a CNN Simonyan and Zisserman (2014) combined with a transformer-based text encoder Devlin et al. (2018). To model cross-modal interaction between the image and text encoders, UNITER Chen et al. (2020), ALBEF Li et al. (2021) and VinVL Zhang et al. (2021) employ a multimodal encoder. They are pre-trained on large-scale image-caption pairs such as COCO Lin et al. (2014), Conceptual captions Sharma et al. (2018); Changpinyo et al. (2021), Visual Genome Krishna et al. (2017). The pre-training objectives are implemented with image-text contrastive loss, masked language modeling, and image-text matching loss. Our method is inspired by these architectures and is trained using a set of self-supervised and task-based objectives in a semi-supervised learning fashion.
**Semi-supervised learning.** There is a large body of work in semi-supervised learning Zhai et al. (2019); Van Engelen and Hoos (2020); Ouali et al. (2020). These methods typically exploit unlabeled data via either pseudo-labeling with small amounts of labeled data Lee et al. (2013); Arazo et al. (2020); Rizve et al. (2021); Sohn et al. (2020); Zhang et al. (2021) or by enforcing consistency regularization
(Berthelot et al., 2019; Abuduweili et al., 2021) on the unlabeled data to produce consistent predictions over various perturbations of the same input by applying several augmentation strategies (Zhang et al., 2017; Cubuk et al., 2018, 2020). Our method draws inspiration from pseudo-labeling literature and uses a robust loss function and thresholding to counter overfitting to pseudo-labels.
## 3 Method
### Task Overview
Our goal is (1) to group mentions (_i.e._, referential words or phrases) in the narration that corefer to the same entity and, (2) to ground each mention to a region in an image. Formally, let \(N=\{m_{1},m_{2},\ldots,m_{|N|}\}\) denote a narration with \(|N|\) mentions for an image \(I\) with \(|I|\) regions where \(I=\{r_{1},r_{2},\ldots,r_{|I|}\}\). We wish to learn an embedding function \(f\) that takes in an image \(I\) and its narration \(N\), parsed to contain a set of mentions, and outputs a score for a mention pair \((m,m^{\prime})\):
\[\frac{f(m)\cdot f(m^{\prime})}{|f(m)||f(m^{\prime})|} \tag{1}\]
The mentions \(m\) and \(m^{\prime}\) corefer if the score in Equation (1) is high; otherwise, they do not.
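As a minimal sketch, the score in Equation (1) is the cosine similarity between embedded mentions; the decision threshold below is an illustrative assumption, not a value specified by the paper.

```python
import torch
import torch.nn.functional as F

def coreference_scores(mention_embs):
    """mention_embs: (|N|, D) tensor of f(m) embeddings for one narration.
    Returns the (|N|, |N|) matrix of cosine similarities from Equation (1)."""
    z = F.normalize(mention_embs, dim=-1)  # divide each row by ||f(m)||
    return z @ z.t()

scores = coreference_scores(torch.randn(6, 256))
corefers = scores > 0.5  # hypothetical threshold; pairs above it corefer
```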
For grounding of the mention \(m\) on the image region \(r\), we also learn another function \(g\) that outputs a score for the mention \(m\) being located at region \(r\) in image \(I\). Next, we describe in detail our methodology to learn the two functions \(f\) and \(g\).
### Model Architecture
In Figure 2, we illustrate our model architecture. Each image is parsed into a set of regions through a pre-trained object detector (Ren et al., 2015), where each region \(r\) is represented by a \(d\)-dimensional joint embedding \(\mathbf{v}_{r}\in\mathbb{R}^{d}\) including its visual, semantic and spatial features. In particular, the visual encoder \(f_{v}\) is instantiated as a transformer block that takes in a joint feature embedding \(\mathbf{v}_{r}\) for the object region \(r\) and outputs a \(D\) dimensional embedding, _i.e._, \(f_{v}(\mathbf{v}_{r}):\mathbb{R}^{d}\rightarrow\mathbb{R}^{D}\).
Furthermore, we encode the words in each narration \(N\) using a tokenizer (Devlin et al., 2018) to get a set of tokens for the words \(w\in\mathbb{R}^{V}\) where \(V\) is the vocabulary size. The text encoder \(f_{t}\) which is also a transformer block that takes in the word token \(w\) and outputs a \(D\) dimensional embedding, _i.e._, \(f_{t}(w):\mathbb{R}^{V}\rightarrow\mathbb{R}^{D}\). The mention embeddings are computed by averaging its corresponding word representations as: \(f_{t}(m)=\frac{1}{|m|}\sum_{w\in m}f_{t}(w)\) where, \(|m|\) indicates the mention length in words, and the embeddings \(f_{t}(m)\) have the same dimensionality as the visual features.
Next, the multi-modal encoder \(f\) fuses the visual features from the visual encoder \(f_{v}(\mathbf{v}_{r})\) with the mention features from the text encoder \(f_{t}(m)\). Similar to the cross-modal architectures (Li et al., 2021; Zhang et al., 2021), the embeddings from the text encoder are first encoded using self-attention layers (Vaswani et al., 2017). Then, a multi-head cross attention module integrates the textual and visual features. In the cross-attention module, the self-attended mention embeddings \(f_{t}(m)\) are treated as the query, while the image representations \(f_{v}(\mathbf{v}_{r})\) are treated as keys and values. The attention weights between the mention \(m\) and the region \(r\) are given as:
\[g(m,r)=\frac{\exp(\frac{f_{t}(m)^{T}f_{v}(\mathbf{v}_{r})}{\sqrt{d}})}{\sum_{r^{ \prime}\in I}\exp(\frac{f_{t}(m)^{T}f_{v}(\mathbf{v}_{r^{\prime}})}{\sqrt{d}})} \tag{2}\]
where the softmax is computed over the image regions for each mention. This attention matrix (or the grounding function) \(g\) from the multi-head cross attention learns fine-grained mention to region alignment scores. Finally, the vision-aware mention embedding is represented as:
\[f(m)=g(m,r).f_{v}(\mathbf{v}_{r}) \tag{3}\]
Figure 2: Illustration of our model architecture and training methodology. The pre-extracted image regions are fed into the visual encoder, the narrations are fed into the text encoder and both modalities are fused using a multimodal encoder. The model is optimized using self-supervised objectives (in grey) and specialized task-based losses on both the labeled data (in yellow boxes) and the pseudo-labeled data (in green boxes).
where, \(f(m)\in\mathbb{R}^{D}\). This weighted embedding is then passed to a feed-forward module (Li et al., 2021) with an MLP and layer normalization. All the transformer encoders/blocks are based on the architecture proposed by (Li et al., 2021). It is important to note that compared to Goel et al. (2022), our model fuses vision and text features with a multimodal encoder, unlike theirs.
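As a sketch, here is a minimal single-head version of the fusion in Equations (2) and (3), reading Equation (3) as the usual attention-weighted sum over regions; the absence of learned query/key/value projections is a simplifying assumption.

```python
import torch

def fuse(mention_embs, region_feats):
    """mention_embs: (|N|, D) self-attended mention features f_t(m).
    region_feats:  (|I|, D) visual features f_v(v_r).
    Returns g: (|N|, |I|) grounding scores (Eq. 2) and f: (|N|, D)
    vision-aware mention embeddings (Eq. 3)."""
    d = mention_embs.shape[-1]
    logits = mention_embs @ region_feats.t() / d ** 0.5
    g = torch.softmax(logits, dim=-1)   # softmax over image regions per mention
    f = g @ region_feats                # attention-weighted region features
    return g, f

g, f = fuse(torch.randn(6, 256), torch.randn(36, 256))
```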
### Semi-supervised Learning
Concretely, we aim to learn the parameters of the modules \(f_{v}\), \(f_{t}\) and \(f\) given a training dataset \(\mathcal{D}\) with \(|\mathcal{D}|\) samples of image-narration pairs. Specifically, we use a small labeled set \(\mathcal{D}_{s}=\{x_{i},y_{i}\}_{i=1}^{|\mathcal{D}_{s}|}\) where \(x_{i}=\{I,N\}\) is the image-narration input pair and \(y_{i}=\forall_{m\in N}\{P(m),A(m),b_{m}\}\) is the label for the input pair. In particular, the label for each mention \(m\) in the narration is given as: \(P(m)\) and \(A(m)\), the set of positive and negative mentions respectively for the mention \(m\) and \(b_{m}\), the bounding-box coordinates of the region corresponding to the mention \(m\).
Due to the unavailability of a large labeled training set, we leverage the unlabeled data \(\mathcal{D}_{u}\) = \(\mathcal{D}\setminus\mathcal{D}_{s}\) where, \(\mathcal{D}_{u}=\{x_{i}\}_{i=1}^{|\mathcal{D}_{u}|}\) with only image-narration pairs as inputs. Our overall training objective is the joint loss function as follows:
\[\sum_{(x,y)\in\mathcal{D}_{s}}\frac{1}{|\mathcal{D}_{s}|}\mathcal{L}_{s}(x,y) +\sum_{x\in\mathcal{D}_{u}}\frac{1}{|\mathcal{D}_{u}|}\mathcal{L}_{u}(x) \tag{4}\]
where, \(\mathcal{L}_{s}\) is the supervised loss and \(\mathcal{L}_{u}\) is the unsupervised loss. First, we discuss how to formulate task-based supervised losses on the dataset \(\mathcal{D}_{s}\).
**(S1) Coreference loss (CR)** Specifically, we propose to learn the similarity between the mention embeddings using a supervised contrastive loss (Khosla et al., 2020) which is defined as:
\[\begin{split}\mathcal{L}_{cr}=\sum_{m\in N}\frac{-1}{|P(m)|} \sum_{p\in P(m)}\\ \text{log}\frac{exp(f(m).f(p)/\tau)}{\sum_{a\in A(m)}exp(f(m).f(a) /\tau)}\end{split} \tag{5}\]
where \(\tau\) is the temperature. This loss helps to cluster the embeddings of coreferring mentions together and to push the embeddings of non-coreferring mentions away from each other.
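A sketch of Equation (5), assuming the positive set \(P(m)\) and negative set \(A(m)\) are given as index lists per mention; the temperature value is illustrative.

```python
import torch

def coref_contrastive_loss(f, positives, negatives, tau=0.07):
    """f: (|N|, D) mention embeddings. positives[m], negatives[m]: index
    lists P(m) and A(m). tau: temperature (value here is an assumption)."""
    loss = 0.0
    for m in range(f.shape[0]):
        if not positives[m]:
            continue  # skip singletons with no positives
        sims = f[m] @ f.t() / tau                       # scores against all mentions
        denom = torch.logsumexp(sims[negatives[m]], dim=0)  # sum over A(m)
        for p in positives[m]:
            loss += -(sims[p] - denom) / len(positives[m])
    return loss

f = torch.randn(5, 64)
pos = [[1], [0], [3, 4], [2, 4], [2, 3]]
neg = [[2, 3, 4], [2, 3, 4], [0, 1], [0, 1], [0, 1]]
print(coref_contrastive_loss(f, pos, neg))
```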
**(S2) Grounding loss (GD)** To align the mention \(m\) and region \(r\), we use the grounding function \(g\) defined in Equation (2). In particular, we first define the ground-truth binary alignment on the labeled training set \(\mathcal{D}_{s}\). For the ground-truth bounding box \(b_{m}\) for a mention \(m\) we compute the intersection over union (IoU) between this bounding-box and the \(R\) pre-extracted image regions. This is crucial because we don't have the exact region-mention match for the detections from the object detector. Following this, we get the binary alignment function \(h(m,r)\), which is 1 for the mention \(m\) and the detected image region \(r\) if the region \(r\) has the maximum IoU overlap with the ground-truth bounding box \(b_{m}\), and 0 otherwise. Once we have the ground-truth alignment \(h(m,r)\), we compute the cross-entropy loss as:
\[\mathcal{L}_{gd}=-\sum_{m\in N}\sum_{r\in I}h(m,r)\text{log}(g(m,r)) \tag{6}\]
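A sketch of building the binary alignment \(h(m,r)\) by matching each ground-truth box to the best-IoU detector proposal and applying Equation (6); the corner-coordinate box format is an assumption.

```python
import torch
import torch.nn.functional as F

def iou(boxes_a, boxes_b):
    """Pairwise IoU between (A, 4) and (B, 4) boxes in (x1, y1, x2, y2) format."""
    tl = torch.max(boxes_a[:, None, :2], boxes_b[None, :, :2])  # intersection top-left
    br = torch.min(boxes_a[:, None, 2:], boxes_b[None, :, 2:])  # intersection bottom-right
    inter = (br - tl).clamp(min=0).prod(dim=-1)
    area_a = (boxes_a[:, 2:] - boxes_a[:, :2]).prod(dim=-1)
    area_b = (boxes_b[:, 2:] - boxes_b[:, :2]).prod(dim=-1)
    return inter / (area_a[:, None] + area_b[None, :] - inter)

def grounding_loss(g, gt_boxes, region_boxes):
    """g: (|N|, |I|) grounding scores from Eq. (2); gt_boxes: (|N|, 4) boxes b_m;
    region_boxes: (|I|, 4) detector proposals. Cross-entropy of Eq. (6)."""
    target = iou(gt_boxes, region_boxes).argmax(dim=-1)  # h(m, r): best-IoU region index
    return F.nll_loss(torch.log(g + 1e-8), target)       # g is already a softmax
```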
**(S3) Bounding box regression loss (BBR)** We further propose to add additional supervision to refine the object proposals from the detector for a mention. For each mention \(m\), the ground-truth bounding box localization is represented as \(b_{m}=(x,y,w,h)\). To learn refinements, we predict the box deltas from the model as \(\delta_{m}=(\delta_{x},\delta_{y},\delta_{w},\delta_{h})\) for each mention \(m\). We then take the highest scoring region for a given mention \(m\) as:
\[r_{m}=\operatorname*{arg\,max}_{r\in I}g(m,r). \tag{7}\]
Our goal is to learn a transformation that maps a proposed box \(r_{m}\) to a ground-truth box \(b_{m}\). We then apply the smooth-L1 loss following Ren et al. (2015), denoted as \(\mathcal{L}_{bbr}\). Further details about this loss are given in the appendix.
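Since the exact delta encoding is deferred to the appendix, the sketch below assumes the standard parameterization of Ren et al. (2015) for the regression targets; this is not necessarily the authors' exact implementation.

```python
import torch
import torch.nn.functional as F

def bbox_regression_loss(pred_deltas, proposal_boxes, gt_boxes):
    """pred_deltas: (|N|, 4) predicted (dx, dy, dw, dh) for the highest-scoring
    region r_m of each mention; proposal_boxes, gt_boxes: (|N|, 4) boxes in
    (x, y, w, h) format. Smooth-L1 between predicted and target deltas."""
    tx = (gt_boxes[:, 0] - proposal_boxes[:, 0]) / proposal_boxes[:, 2]
    ty = (gt_boxes[:, 1] - proposal_boxes[:, 1]) / proposal_boxes[:, 3]
    tw = torch.log(gt_boxes[:, 2] / proposal_boxes[:, 2])
    th = torch.log(gt_boxes[:, 3] / proposal_boxes[:, 3])
    target_deltas = torch.stack([tx, ty, tw, th], dim=-1)
    return F.smooth_l1_loss(pred_deltas, target_deltas)
```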
Next, we discuss how to train on the unlabeled subset of the dataset by generating pseudo-labels for the coreference and grounding tasks.
**(U1) Pseudo coreference loss (PCR)** Given the unlabeled dataset \(\mathcal{D}_{u}\), we compute the pseudo coreferring pairs for the mentions in \(N\). More specifically, we compute pseudo-positives \(\hat{P}(m)\) and pseudo-negatives \(\hat{A}(m)\) for a mention \(m\) by computing the cosine similarity between the embeddings as in Equation (1). For each mention \(m\), if the similarity with another mention \(m^{\prime}\) is greater than a threshold then we label it as a positive otherwise a negative. Finally, we compute the triplet loss as:
\[\begin{split}\mathcal{L}_{pcr}=\\ \sum_{m\in N}\text{max}(||f(m)-\frac{1}{|\hat{P}(m)|}\sum_{p\in \hat{P}(m)}f(p)||^{2}\\ -||f(m)-\frac{1}{|\hat{A}(m)|}\sum_{a\in\hat{A}(m)}f(a)||^{2}+ \alpha,0)\end{split} \tag{8}\]
where \(\alpha\) is the margin, \(f(m)\) is the embeddings for the query mention \(m\), \(\frac{1}{|\hat{P}(m)|}\sum_{p\in\hat{P}(m)}f(p)\) is the mean of embeddings of the pseudo-positive labels \(\hat{P}(m)\) and \(\frac{1}{|\hat{A}(m)|}\sum_{a\in\hat{A}(m)}f(a)\) is the mean of embeddings of the pseudo-negative labels \(\hat{A}(m)\).
The key intuition behind using the mean in a triplet loss formulation is to reduce overfitting to the noise in the pseudo labels. This works better in practice compared to the contrastive loss formulation in Equation (5) or mining a random positive/negative label for the standard triplet loss, especially when dealing with pseudo labels.
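A sketch of the mean-anchored triplet loss of Equation (8), with pseudo-positives and pseudo-negatives imputed by thresholding cosine similarities; the threshold and margin values are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def pseudo_coref_loss(f, threshold=0.7, alpha=0.4):
    """f: (|N|, D) mention embeddings on unlabeled data. Pseudo labels
    P_hat/A_hat come from thresholded cosine similarity; the anchor is
    compared against the MEAN of its pseudo-positive and pseudo-negative
    embeddings, which softens the effect of noisy pseudo labels (Eq. 8)."""
    z = F.normalize(f, dim=-1)
    sims = z @ z.t()
    loss, n_terms = 0.0, 0
    for m in range(f.shape[0]):
        pos = (sims[m] > threshold).nonzero().squeeze(-1)
        neg = (sims[m] <= threshold).nonzero().squeeze(-1)
        pos = pos[pos != m]  # exclude the anchor itself
        if len(pos) == 0 or len(neg) == 0:
            continue
        d_pos = (f[m] - f[pos].mean(dim=0)).pow(2).sum()
        d_neg = (f[m] - f[neg].mean(dim=0)).pow(2).sum()
        loss += torch.clamp(d_pos - d_neg + alpha, min=0)  # triplet margin alpha
        n_terms += 1
    return loss / max(n_terms, 1)
```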
**(U2) Pseudo grounding loss (PGD)** Furthermore, we compute the pseudo grounding loss on the unlabeled training dataset. Specifically, we impute the pseudo-labels from the grounding function, \(g(m,r)\). We only consider samples whose grounding score is greater than a confidence threshold \(t\), which is set to 0.9 in our experiments. The high threshold value ensures that we consider only confident samples in the unlabeled set and eliminates learning from noisy samples. We denote this label after binary thresholding as \(\hat{h}(m,r)\). The pseudo grounding alignment loss is:
\[\mathcal{L}_{pgd}=\sum_{m\in N}\sum_{r\in I}-\hat{h}(m,r)\text{log}(g(m,r)) \tag{9}\]
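A sketch of Equation (9), keeping only pseudo-labels whose grounding score exceeds the confidence threshold \(t=0.9\):

```python
import torch

def pseudo_grounding_loss(g, t=0.9):
    """g: (|N|, |I|) grounding scores on unlabeled data. Binary pseudo-labels
    h_hat(m, r) = 1 where g(m, r) > t; mentions with no confident region
    contribute nothing, which filters out noisy pseudo-labels."""
    h_hat = (g.detach() > t).float()  # stop gradients through the labels
    return -(h_hat * torch.log(g + 1e-8)).sum(dim=-1).mean()
```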
Apart from the above-mentioned task-based losses, we combine the standard image-text pre-training losses (Vaswani et al., 2017; Li et al., 2021). These losses help to learn better unimodal representations before fusion.
**(U3) Image-Text contrastive loss (ITC)** Following Goel et al. (2022), we incorporate the contrastive loss to align the image and narration pairs to learn better representations before fusion. This loss is defined as:
\[\mathcal{L}_{itc}=\sum_{m\in N}-\log\big{(}\frac{\exp(f_{v}(\mathbf{v}_{r})f_{t}(m))}{\sum_{r^{\prime}\in I}\exp(f_{v}(\mathbf{v}_{r^{\prime}})f_{t}(m))}\big{)} \tag{10}\]
where \(f_{v}(\mathbf{v}_{r})f_{t}(m)\) is the mention-region matching score from the visual and text representations before fusing in the multi-modal encoder and \(\mathbf{v}_{r}\) are the raw features for the highest scoring region for a mention \(m\).
**(U4) Masked language modeling loss (MLM)** To fine-tune the pretrained BERT model (Devlin et al., 2018) on the image-narration data, we also use the pre-trained task of masked language modeling. In particular, the input word tokens are randomly masked and are replaced by a special masking token. The model needs to predict the mask token based on the unmasked words. This task is trained with a cross-entropy loss, \(\mathcal{L}_{mlm}\).
Hence, our overall training objective in Equation (4) is a combination of specialized task losses on the labeled training set \(\mathcal{D}_{s}\) (\(\mathcal{L}_{cr}\), \(\mathcal{L}_{gd}\) and \(\mathcal{L}_{bbr}\)), specialized task losses on the unlabeled training set \(\mathcal{D}_{u}\) (\(\mathcal{L}_{pcr}\) and \(\mathcal{L}_{pgd}\)), and global pre-training objectives on the entire training dataset \(\mathcal{D}\) (\(\mathcal{L}_{itc}\) and \(\mathcal{L}_{mlm}\)).
### Inference
To obtain the coreference scores, we form chains by measuring the cosine similarity between the mentions as described in Equation (1), considering the pairs with similarity higher than a predefined threshold as positives. When evaluating narrative grounding, we extract the cross-attention scores from the last layer of the multimodal encoder. For each mention, we identify the region with the highest softmax score as the positively referred region.
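As a sketch of the coreference inference step: pairwise cosine similarities are thresholded and chains are formed by transitive closure. The union-find grouping is our assumption; the paper only specifies the pairwise thresholding.

```python
import torch
import torch.nn.functional as F

def build_chains(f, threshold=0.5):
    """f: (|N|, D) mention embeddings. Pairs with cosine similarity above the
    threshold are linked; chains are the resulting connected components."""
    z = F.normalize(f, dim=-1)
    sims = z @ z.t()
    parent = list(range(f.shape[0]))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    for i in range(f.shape[0]):
        for j in range(i + 1, f.shape[0]):
            if sims[i, j] > threshold:
                parent[find(i)] = find(j)  # merge the two chains
    chains = {}
    for i in range(f.shape[0]):
        chains.setdefault(find(i), []).append(i)
    return list(chains.values())
```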
## 4 Experiments
**Datasets.** We evaluate our proposed method on the CIN dataset (Goel et al., 2022) that consists of 1000 test and 880 validation image-narration pairs from the Flickr30k split of the Localized Narratives dataset (Pont-Tuset et al., 2020) annotated with coreference chains and bounding boxes. We use the test split of the CIN dataset to report the performance on CR and narrative grounding. The annotations from the validation split are used as the small labeled set during training. The unlabeled dataset is the Flickr30k training subset of the Localized Narratives dataset, which consists of 50k image-narration pairs but is not annotated with bounding boxes or coreference chains.
**Implementation details.** For the image regions, we extract bounding box regions, visual features and object class labels using the Faster-RCNN object detector (Ren et al., 2015) as in Goel et al. (2022). We use a 4-layer transformer architecture for the text encoder and the multi-modal encoder similar to the ALBEF (Li et al., 2021) framework. The weights of the transformer encoders are initialized with the first four layers of BERT (Devlin et al., 2018). The visual encoder is a stack of two transformer encoder layers. Each transformer encoder layer includes a multi-head self-attention layer and an FFN. There are two heads in the multi-head attention layer, and two FC layers followed by ReLU
activation layers in the FFN. Training details are in the appendix.
**Evaluation.** We report results for coreference resolution and narrative grounding. For the former, we use the standard CoNLL F1 score which is the average of three coreference-based metrics: MUC, B\({}^{3}\) and CEAF\({}_{\phi 4}\). For the latter, we follow Goel et al. (2022) and report the grounding accuracy for both noun phrases and pronouns. More precisely, if the overlap between the ground-truth box and the predicted box is greater than 0.5, then it is considered to be a correct prediction.
## 5 Results and Discussion
### Coreference Resolution
Table 1 reports the coreference resolution performance on the CIN dataset Goel et al. (2022) for our method and the baselines. Further details about the baselines are given in the appendix. The text-based baselines Neural Coref Lee et al. (2017) and longdoc Toshniwal et al. (2021) are evaluated in a zero-shot way on the task. Their low CoNLL F1 scores indicate that these models fail to generalize to new domains, which is in line with extensive findings in the coreference literature Toshniwal et al. (2021); Xia and Van Durme (2021); Gandhi et al. (2023).
We further compare to strong multi-modal baselines by directly evaluating the VLMs in a zero-shot way on the CIN dataset. Interestingly, all three methods: VisualBERT Su et al. (2019), UNITER Chen et al. (2020) and VinVL Zhang et al. (2021) perform better in MUC and B\({}^{3}\) compared to the text-based baseline, longdoc Toshniwal et al. (2021), but drop in performance on the average CoNLL F1 scores. These results show the inability of these models to effectively find singletons, hence leading to poor performance in the precision scores. Moreover, we can conclude that the vision and language pre-trained models fail to generalize for MCR.
We also compare to two weakly supervised methods that are trained on the CIN dataset, MAF Wang et al. (2020) and WS-MCR Goel et al. (2022). Goel et al. (2022) present results on the MAF model as a baseline and their proposed method, WS-MCR. MAF is a weakly supervised grounding method trained with ITC that is evaluated for CR and WS-MCR Goel et al. (2022) learns weakly-supervised grounding and CR combining the ITC loss and prior linguistic rules. Both of these methods improve significantly in MUC scores compared to other zero-shot unimodal and multi-modal baselines.
Finally, we compare with the text-only variant (without any images) of our method. This variant already improves over the baselines on the CoNLL F1 scores. The significant performance gains of our final method, which uses both text and image together with label supervision, show the importance of carefully tuning the model with a small amount of labeled data and large amounts of pseudo-labeled data.
### Narrative Grounding
In Table 2, we present a comprehensive comparison between the baselines and our proposed approach on the task of narrative grounding. This task is both challenging and crucial, as it evaluates the precise alignment between image regions and phrases in textual data. Notably, our proposed method goes beyond the traditional alignment of noun phrases and also addresses the grounding of pronouns, which is vital for multimodal coreference resolution. We report noun phrase grounding, pronoun grounding, and overall accuracy (Goel et al., 2022).
\begin{table}
\begin{tabular}{l c c|c c c|c c c|c c c|c}
\hline \multirow{2}{*}{Method} & \multicolumn{2}{c|}{Modality} & \multicolumn{3}{c|}{MUC} & \multicolumn{3}{c|}{B\({}^{3}\)} & \multicolumn{3}{c|}{CEAF\({}_{\phi 4}\)} & CoNLL \\
 & Text & Image & R & P & F1 & R & P & F1 & R & P & F1 & F1 \\
\hline Neural Coref Lee et al. (2017)\({}^{\ast}\) & ✓ & ✗ & 0.11 & 0.17 & 0.13 & - & - & - & - & - & - & - \\
longdoc Toshniwal et al. (2021)\({}^{\ast}\) & ✓ & ✗ & 7.79 & 8.43 & 7.24 & 62.27 & 76.10 & 67.69 & 48.77 & 84.95 & 61.02 & 45.31 \\
\hline VisualBERT Su et al. (2019)\({}^{\ast}\) & ✓ & ✓ & 18.17 & 6.08 & 8.06 & 69.01 & 36.08 & 41.03 & 21.25 & 57.10 & 28.67 & 25.92 \\
UNITER Chen et al. (2020)\({}^{\dagger}\) & ✓ & ✓ & 16.92 & 7.15 & 8.83 & 68.34 & 44.29 & 50.22 & 28.12 & 72.78 & 38.91 & 32.65 \\
VinVL Zhang et al. (2021)\({}^{\ast}\) & ✓ & ✓ & 16.76 & 8.60 & 9.75 & 68.49 & 62.32 & 61.30 & 42.88 & 80.81 & 53.69 & 41.58 \\
\hline MAF Wang et al. (2020) & ✓ & ✓ & 19.07 & 15.62 & 15.65 & - & - & - & - & - & - & - \\
WS-MCR Goel et al. (2022) & ✓ & ✓ & 24.87 & 18.34 & 19.19 & - & - & - & - & - & - & - \\
\hline \multirow{2}{*}{Ours} & ✓ & ✗ & 13.30 & 14.12 & 12.55 & 67.91 & 79.48 & 72.41 & 56.05 & 86.20 & 67.05 & 50.67 \\
 & ✓ & ✓ & **31.11** & **35.25** & **31.86** & **70.63** & **87.85** & **78.06** & **63.99** & **93.44** & **75.47** & **61.79** \\
\hline \end{tabular}
\end{table}
Table 1: Coreference resolution results on the CIN dataset Goel et al. (2022) from our proposed method and other state-of-the-art unimodal and multi-modal baselines. \(\dagger\) indicates the use of predicted mentions, while the other results rely on ground-truth mentions during inference. \(\ast\) means zero-shot performance.
Remarkably, our proposed method exhibits superior performance compared to weakly supervised baselines, showing a margin of improvement of approximately 2% and 2.5% in noun phrase and pronoun grounding accuracy, respectively. Furthermore, when compared to our unsupervised baseline, namely "Ours (ITC + MLM)", the inclusion of labeled and pseudo-labeled data yields a significant performance boost of approximately 6%. These results demonstrate the significance of training with both grounding alignment and coreference resolution loss, highlighting the mutual benefits derived from this approach.
### Ablation Study
**Varying labeled and unlabeled data.** We study the impact of labeled data on the learning process, allowing us to showcase the strengths of our approach. In Table 3, we measure the model's performance on CoNLL F1 scores at different proportions of labeled data (20% and 50%). Remarkably, despite the limited amount of labeled data samples, the model demonstrates consistently high performance without any significant drop. This highlights the exceptional ability of our model to effectively learn from a small labeled set, without relying on a large number of annotated training samples.
Furthermore, to validate the efficacy of our proposed method, we also investigate the influence of unlabeled data samples during training. Following the same data split as in the supervised experiments, we observe the changes in performance indicated by row 2 in Table 3. As the quantity of unlabeled samples increases, the model exhibits enhanced coreference resolution performance. This result reinforces the ability of our proposed method to leverage and effectively learn from pseudo-labeled data. Detailed results are in the appendix.
**Impact of different loss functions.** In Table 4, we assess the performance of coreference resolution by incorporating the various losses proposed in Section 3. Throughout the training process, the model consistently integrates the self-supervised objectives of ITC and MLM (see the first row of Table 4).
Integrating the supervised contrastive coreference resolution loss, CR, in addition to ITC and MLM, results in a significant performance drop. Due to the limited availability of labeled data, the model struggles to effectively generalize for coreference resolution, leading to overfitting and consequently lower F1 scores. However, by progressively incorporating the bounding box regression loss, BBR, and the grounding alignment loss GD, we get a much stronger training signal even with a small labeled set. This multi-task training objective contributes to an improvement of approximately 1.5% in the CoNLL F1 score.
Subsequently, we investigate the impact of incorporating loss on pseudo-labeled data. By introducing the pseudo coreference loss, denoted as PCR, we observe a remarkable improvement of approximately 2% in the CoNLL F1 scores. This result highlights the significance of leveraging pseudo clusters and underscores the effectiveness of our proposed robust triplet loss, which computes the triplet loss using the mean of positive and negative embeddings. Notably, this approach successfully incorporates pseudo-labeled data without leading to overfitting while achieving substantial performance gains. Consequently, our final proposed method, which integrates the pseudo grounding loss, PGD, exhibits the most superior overall performance, validating the potency of pseudo-labels for both coreference resolution and grounding.
**Choice of coreference resolution loss.** In Table 5, we examine the impact of different types of coreference resolution losses. We present a comparison of the following loss combinations: (1) Binary cross-entropy loss (BCE) applied to both \(\mathcal{D}_{s}\) and \(\mathcal{D}_{u}\), (2) Supervised contrastive loss (CR) applied to both \(\mathcal{D}_{s}\) and \(\mathcal{D}_{u}\), and (3) Supervised contrastive loss (CR) on \(\mathcal{D}_{s}\) and random triplet mining loss (RTC) on \(\mathcal{D}_{u}\).
\begin{table}
\begin{tabular}{c|c|c|c} Method & Noun Phrases & Pronouns & Overall \\ \hline MAF (Wang et al., 2020) & 21.60 & 18.31 & 20.91 \\ WS-MCR (Goel et al., 2022) & 30.27 & 25.96 & 29.36 \\ \hline Ours (ITC + MLM) & 27.44 & 22.77 & 26.45 \\ Ours (Full) & **32.58** & **28.45** & **31.71** \\ \hline \end{tabular}
\end{table}
Table 2: Comparison of narrative grounding performance on the CIN dataset (Goel et al., 2022).
\begin{table}
\begin{tabular}{c|c|c} Data type & \% Samples & CoNLL F1 \\ \hline \multirow{2}{*}{Labeled} & 20\% & 60.04 \\ & 50\% & 61.24 \\ \hline \multirow{2}{*}{Unlabeled} & 20\% & 56.82 \\ & 50\% & 59.11 \\ \hline \end{tabular}
\end{table}
Table 3: CR performance by changing the amount of labeled and unlabeled data during training.
We observed a significant performance drop when training with the BCE loss compared to the supervised contrastive loss. We hypothesize that the supervised contrastive loss yields better clustering of mentions than the binary cross-entropy loss, because it contrasts them directly in the embedding space. Consequently, the embeddings become more robust for CR, contributing to improved performance.
Interestingly, when applying the supervised contrastive loss to \(\mathcal{D}_{u}\) (row 2), we observed a drop in performance. Our hypothesis is that the contrastive loss tends to overfit in the presence of noisy pseudo labels, leading to a degradation in performance. In contrast, our pseudo triplet loss formulation PCR is softer in penalizing noisy pseudo labels. This allows the model to gradually adapt and become more resilient to such noise, resulting in more efficient clustering of mentions. We also compare to another ablation where, instead of taking the mean of the embeddings for pseudo-positive and pseudo-negative labels, we sample a random positive and negative label, abbreviated as RTC (results in row 3). Randomly sampling the labels generalizes better than the other ablations, but using the mean cluster embeddings outperforms random sampling.
### Qualitative Results
In Figure 3, we qualitatively visualize the performance of our method and compare it with the weakly supervised baseline from Goel et al. (2022). Our model correctly separates the mentions _the front man_ and _the man_ both during CR and grounding, whereas the WS-MCR (Goel et al., 2022) method incorrectly assigns the mention _the man_ to _the front man_ and grounds it incorrectly too (denoted by the blue dotted line). Hence, our method can effectively learn to disambiguate instances based on visual details, which also helps coreference resolution.
## 6 Conclusion
In conclusion, this paper addresses the challenging task of multimodal coreference resolution where an image is accompanied by a longer descriptive text. We propose a data efficient semi-supervised approach that incorporates task-based losses for both labeled and unlabeled data, operating within a cross-modal framework. Our method achieves remarkable results for CR and narrative grounding tasks on the CIN dataset, showcasing its effectiveness in handling the complexities of MCR. In the future, we plan to investigate how the power of pre-training combined with semi-supervised fine-tuning can be fully utilized for the task of MCR.
\begin{table}
\begin{tabular}{c c c c c c c|c c c|c c c|c c c|c}
\hline \multicolumn{7}{c|}{Loss} & \multicolumn{3}{c|}{MUC} & \multicolumn{3}{c|}{B\({}^{3}\)} & \multicolumn{3}{c|}{CEAF\({}_{\phi 4}\)} & CoNLL \\
PCR (U1) & PGD (U2) & ITC (U3) & MLM (U4) & CR (S1) & GD (S2) & BBR (S3) & R & P & F1 & R & P & F1 & R & P & F1 & F1 \\
\hline ✗ & ✗ & ✓ & ✓ & ✗ & ✗ & ✗ & 23.81 & 25.83 & 23.12 & 69.32 & 85.87 & 76.41 & 61.00 & 89.69 & 72.05 & 57.19 \\
\hline ✗ & ✗ & ✓ & ✓ & ✓ & ✗ & ✗ & 22.70 & 21.40 & 20.23 & 69.05 & 80.03 & 73.66 & 55.52 & 87.09 & 67.20 & 53.70 \\
✗ & ✗ & ✓ & ✓ & ✓ & ✗ & ✓ & 23.86 & 24.52 & 22.31 & 69.31 & 84.15 & 75.67 & 59.50 & 89.15 & 70.80 & 56.26 \\
✗ & ✗ & ✓ & ✓ & ✓ & ✓ & ✓ & 27.68 & 29.04 & 26.66 & 69.38 & 85.43 & 76.61 & 60.92 & 90.61 & 72.26 & 58.51 \\
\hline ✓ & ✗ & ✓ & ✓ & ✓ & ✓ & ✓ & 30.66 & 32.82 & 30.31 & 70.70 & 86.09 & 77.33 & 62.64 & 92.92 & 74.27 & 60.64 \\
✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & **31.11** & **35.25** & **31.86** & **70.63** & **87.85** & **78.06** & **63.99** & **93.44** & **75.47** & **61.79** \\
\hline \end{tabular}
\end{table}
Table 4: Ablation study on our proposed method with the combination of the proposed losses.
Figure 3: Visualization of grounding and coreference resolution. The colored boxes in the image correspond to the mentions with the same color in the sentence.
### Limitations
Here, we outline limitations that are important considerations for future work.
First, the current model's performance in coreference resolution and grounding is limited by the use of a pre-trained object detector. Detectors pretrained for the object detection task have a limited object category vocabulary and lack fine-grained properties, including adjectives, human actions and the open vocabulary found in narrations. This forces the model to rely on a predetermined set of regions and object classes, preventing it from directly learning region coordinates for a mention on an image. To improve performance, we envision the development of an end-to-end approach that eliminates this reliance on pre-defined regions.
Second, our model currently depends on ground-truth mentions to resolve coreferences and ground them. In the future, one promising direction would be to detect mentions simultaneously with coreference resolution and grounding. This would significantly improve the applicability of our proposed method and reduce dependence on off-the-shelf mention detectors or ground-truth annotations.
## Ethics Statement
All datasets used in this work have been previously released. The use of the CIN dataset [1] in our paper is consistent with its intended use. The details of the dataset are described in Goel et al. (2022). Multimodal datasets frequently include social biases, and we expect the models trained on them to reflect the biases in these datasets. It is important to note that multimodal models have both beneficial and harmful applications. Beneficial applications include advanced image and video retrieval, visual description systems to assist the visually impaired, and user interfaces that enhance interaction with smart home devices. However, harmful applications, such as non-consensual surveillance or fine-tuning models to retrieve inappropriate content, must be carefully addressed and mitigated.
## Acknowledgements
AG is supported by the Armeane Choksi Scholarship and HB is supported by the EPSRC programme grant Visual AI EP/T028572/1 and HB and FK are supported by Edinburgh Laboratory for Integrated Artificial Intelligence (ELIAI). This research/project is supported by the National Research Foundation, Singapore, under its NRF Fellowship (Award NRF-NRFF14-2022-0001). Thanks to Nikita Moghe and Matt Grenander for discussions and constructive feedback.
|
2308.11244 | Are current long-term video understanding datasets long-term? | Many real-world applications, from sport analysis to surveillance, benefit
from automatic long-term action recognition. In the current deep learning
paradigm for automatic action recognition, it is imperative that models are
trained and tested on datasets and tasks that evaluate if such models actually
learn and reason over long-term information. In this work, we propose a method
to evaluate how suitable a video dataset is to evaluate models for long-term
action recognition. To this end, we define a long-term action as excluding all
the videos that can be correctly recognized using solely short-term
information. We test this definition on existing long-term classification tasks
on three popular real-world datasets, namely Breakfast, CrossTask and LVU, to
determine if these datasets are truly evaluating long-term recognition. Our
study reveals that these datasets can be effectively solved using shortcuts
based on short-term information. Following this finding, we encourage long-term
action recognition researchers to make use of datasets that need long-term
information to be solved. | Ombretta Strafforello, Klamer Schutte, Jan van Gemert | 2023-08-22T07:39:31Z | http://arxiv.org/abs/2308.11244v1 | # Are current long-term video understanding datasets long-term?
###### Abstract
Many real-world applications, from sport analysis to surveillance, benefit from automatic long-term action recognition. In the current deep learning paradigm for automatic action recognition, it is imperative that models are trained and tested on datasets and tasks that evaluate if such models actually learn and reason over long-term information. In this work, we propose a method to evaluate how suitable a video dataset is to evaluate models for long-term action recognition. To this end, we define a long-term action as excluding all the videos that can be correctly recognized using solely short-term information. We test this definition on existing long-term classification tasks on three popular real-world datasets, namely Breakfast, CrossTask and LVU, to determine if these datasets are truly evaluating long-term recognition. Our study reveals that these datasets can be effectively solved using shortcuts based on short-term information. Following this finding, we encourage long-term action recognition researchers to make use of datasets that need long-term information to be solved.
## 1 Introduction
Many interesting actions happening in the real world are long-term. That is, they are composed of several short sub-actions, which we refer to as _short-term actions_. For an action to be _long-term_, we deem that recognizing a single short-term action is not enough, and reasoning about the order and the relationship of short-term actions is required. Two examples of long-term actions, shown in Figure 1, are _winning a soccer game_ and _shoplifting in the supermarket_. To understand which team is winning a soccer game, it is necessary to recognize and count the goals scored since the beginning of the game. For the other example, recognizing if a person is shoplifting, it is necessary to observe a person storing a product in their pocket _and_ leaving the supermarket without paying. In both examples, it is not possible to recognize the actions without reasoning on multiple ordered short-term actions.
Achieving automatic long-term action recognition is important because it can be used to solve real-world problems, from analyzing sports videos, to understanding movies and recognizing threats in surveillance footage. To make it possible, we need purpose-built computer vision models, that are trained and evaluated on datasets that need long-term reasoning to be solved. While working on long-term action recognition, we notice that every video in the Breakfast dataset [24], a go-to choice in long-term video understanding research [16, 17, 26, 47], contains short-term actions that map to a single long-term action. This implies that accurately recognizing a short-term action in a Breakfast video should be sufficient to infer the corresponding long-term action. We analyze the short-term actions of another popular instructional video dataset, CrossTask [49], and find the same occurrence in 97.72% of its primary tasks videos. We illustrate our statistics on the short-term action occurrences in Figure 2. Since deep learning models are known to use shortcuts to solve classification tasks [13], the models trained and tested on these datasets might learn to exploit short-term information, without encoding any long-term relations.
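The statistic visualized in Figure 2 can be reproduced with a short script; the annotation layout below is a hypothetical simplification of the Breakfast/CrossTask label files, not their actual format.

```python
from collections import defaultdict

def shortterm_class_counts(annotations):
    """annotations: list of (long_term_label, [short_term_actions]) pairs,
    one per video. Returns, for each short-term action, the set of
    long-term classes it occurs in (the statistic plotted in Figure 2)."""
    occurs_in = defaultdict(set)
    for long_term, short_terms in annotations:
        for st in short_terms:
            occurs_in[st].add(long_term)
    return occurs_in

counts = shortterm_class_counts([
    ("make pancakes", ["pour milk", "fry pancake"]),
    ("make cereal", ["pour milk", "pour cereal"]),
])
unique = [st for st, cls in counts.items() if len(cls) == 1]
print(f"{100 * len(unique) / len(counts):.1f}% of short-term actions "
      "appear in a single long-term class")
```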
Motivated by this finding, we propose a method to diagnose whether a long-term dataset is suitable to study long-term action recognition, or can be solved using solely short-term information. To this end, we define two requirements for an action to be long-term: (1) The action is _recognizable only from multiple short-term actions_ and not from a single short-term action. (2) The action maps to a _single label_. The first requirement makes long-term action recognition impossible without reasoning over an extended time span. Models that lack this capability, for example based on straightforward pooling operations over time [40], cannot recognize long-term actions. The second requirement leads to discarding multi-label action recognition datasets, like Charades [31], MultiTHUMOS [45] and EPIC-Kitchens [11], as long-term action datasets. In these datasets, the task is to recognize each short-term action contained in the videos. This task could be solved by classifying each short-term action one at a time, while here we are interested in the case where the classification can be made only after reasoning over multiple short-term actions together.
We design a user study to assess whether a video dataset contains long-term action videos that are not recognizable from a single short-term action. Our study is based on two surveys where users have to watch a video and predict the long-term action being performed in the video. In the _Full Videos Survey_, the users can watch the full video, while in the _Video Segments Survey_ a separate group of users can watch only a single short clip extracted from the full video. We measure the average action recognition accuracy of the users per video for each survey. The _Full Videos Survey_ gives an upper bound to the user long-term action recognition performance. Comparing the accuracy obtained from the _Video Segments Survey_ to the upper bound gives an estimate of how many videos in the dataset require long-term information to be correctly recognized. If the action recognition performance of the two groups of users is close, we can conclude that most of the videos in the dataset are not suitable to train and evaluate models for long-term action recognition, because they can be recognized solely by exploiting short-term information.
Figure 1: Example of truly long-term actions. _Top:_ Who is winning this soccer game?1, _Bottom:_ Is this person shoplifting in the supermarket?2. In both cases, it is not possible to answer correctly without considering multiple short-term actions together, their order and relations over time. To understand who is winning the soccer game, it is necessary to recognize and count the goals scored since the beginning of the game. To recognize shoplifting, it is not enough to see a person putting a product in their pocket: also the short-term action _leaving without paying_ needs to occur.
Footnote 1: Source: YouTube. Footnote 2: Source: YouTube, from the movie _Un povero ricco_ by Pasquale Festa Campanile (1983).
Figure 2: We analyze two popular long-term datasets with long-term and short-term action annotations, Breakfast (coarse annotations) [24] and CrossTask [49] (primary tasks). We count in how many long-term actions the short-term action appears. Recurrent short-term actions, like _pour milk_ and _pour egg_ appear in four different long-term action classes. More specific short-term actions, like _fry pancake_ and _add kimchi_, only occur in one long-term action class. We find that a large percentage of short-term actions (70.8% for Breakfast and 89.5% for CrossTask) appears only in one long-term action class. This implies that recognizing a single short-term action might be sufficient to correctly infer the long-term actions in these datasets.
We apply our proposed method to the aforementioned Breakfast and CrossTask datasets and to the Long-form Video Understanding benchmark (LVU) [41], recently proposed for long-term video recognition tasks in movies. We implement the user studies on Amazon Mechanical Turk [1] and collect responses from more than 150 users. Our results show that looking at a single short video segment is sufficient to recognize 90% and 97.2% of the analyzed videos from Breakfast and CrossTask. Similarly, we find that most of the content understanding tasks in LVU can be classified without long-term information, and that some video segments in this dataset are misclassified by users due to annotation noise. We conclude that the aforementioned datasets might not be suitable to develop new methods for long-term action recognition in videos, because they can be solved by ignoring long-term information. We recommend long-term video understanding researchers to be careful when using these datasets and encourage the community to collect more representative video datasets.
In summary, the contributions of our study can be outlined as follows: (1) We provide a definition of long-term action datasets that should prevent long-term action recognition models to use traditional short-term action recognition as a shortcut to solve the task. (2) We introduce a method to investigate whether a video dataset meets this definition of long-term action. (3) We find that short-term information is, in most cases, sufficient to solve long-term video understanding tasks in three commonly used datasets. Thus, we recommend against using these datasets in further research on long term action recognition models. The code and responses from our user study are publicly available1.
Footnote 1: [https://github.com/ombreta/longterm_datasets](https://github.com/ombreta/longterm_datasets)
## 2 Related work
### Action recognition with deep learning
The progress of deep learning (DL) has brought significant advancements in automatic action recognition. DL-based models learn to extract discriminative spatial and temporal features directly from the RGB frames of the training videos. Current action recognition models are typically built on 3D convolutional networks [22], like I3D [8], C3D [38] and Slow-Fast [12]. More recently, attention-based architectures have also shown competitive performance on action recognition tasks. Examples include ViViT [3], TimeSformer [5] and Video Swin Transformer [27]. When pre-trained on sufficiently large datasets, like Kinetics [8] or ActivityNet [7], these models can achieve state-of-the-art action recognition on _short_ video datasets, like UCF101 [32], HMDB51 [25] and Something-Something [15]. However, they are not suitable for learning long-term dynamics in long videos, either due to their limited temporal receptive field or their high computational requirements.
### Long-term action recognition
Long-term action recognition refers to the task of recognizing and understanding human actions composed of several short-term actions, possibly involving multiple objects and movements [47]. Examples include cooking a recipe [24], performing a medical surgery [30] or playing a sport game [45]. Usually, long-term actions require an extended period of time to be executed, e.g. above one minute [17]. Several works that tackled the problem of long-term action recognition use different names and definitions for the same concepts. In fact, long-term actions can also be referred to in the literature as _long-range activities_[19] or _complex activities_[16, 17]. Being composed of multiple steps, the activities in _instructional videos_ share the same properties of long-term actions [26, 28, 48] and can be comprised into this category. Finally, also _long-form_ video understanding involves reasoning over human-object interactions in long videos [41, 44] and can be considered as an instance of long-term action recognition.
Traditional DL-based action recognition models [8, 12, 38, 40] are deemed insufficient to capture discriminative spatio-temporal features that encode long-term information and the semantic relations between the sub-actions. A variety of models have been proposed to overcome this limitation. Hussein _et al._[17] proposed to capture long-term information with multi-scale temporal convolution. Yu _et al._[46] used Recurrent Neural Networks to model long video sequences capturing temporal information at different rhythms. Ballan _et al._[4] showed that explicitly focusing on the actor performing the long-term action improves the recognition performance. Different approaches showed that long-term action recognition can be tackled using graph-based representations, where the nodes correspond to short-term entities and the edges to their interaction over space and time [18, 21, 47]. Finally, Transformer architectures have been designed to model long-term information in a compute- [20, 42] and data-efficient [16] fashion.
Despite their success, DL-based action recognition models can find shortcuts in the data that let them solve action recognition without learning semantic features, for example classifying the action based on the background scene [10, 13, 43]. In this work, we try to address this problem by analyzing whether commonly used video datasets for long-term action recognition are representative for training DL models, or can be solved using short-term shortcuts.
### Long-term video datasets
Several datasets have been proposed in the literature to study long-term video understanding tasks. CATER [14] is an ideal example of a dataset that requires long-term information. It involves tracking geometrical shapes that move in a 3D space over time. Sometimes bigger shapes incorporate smaller shapes, rendering their localization impossible without continuous reasoning about past information. As a consequence, models that are not truly long-term fail on this dataset. Unfortunately, the CATER dataset is highly synthetic and cannot be used to train models for real-world applications.
Real-world datasets mostly include cooking [11, 24, 33, 48], home activities [31, 39], sports [45] and instructional videos [2, 36, 48, 49]. A comprehensive overview of long-term video understanding datasets is provided in Table 1. Many of these datasets, for example Charades [31], Epic Kitchens [11] and MultiTHUMOS [45], contain long videos annotated with fine-grained, short-term actions. They can be used for multi-label action recognition, where the task is to predict every short-term action occurring in the video, or for fine-grained action localization. Differently, here we are interested in the single-label classification case, where a global label describes the long-term activity happening in the video. The single label should be recognizable only by reasoning over multiple short-term actions.
Previous work showed that video datasets are sometimes biased towards appearance [6] and better recognizable by short-term over long-term information [34]. Similarly, in this work we explore whether the global labels of datasets proposed for long-term video understanding tasks can be predicted without long-term information. We choose for our study three popular datasets that include single, video-level labels and cover different long-term dataset categories: Breakfast, CrossTask and LVU. Breakfast [24] is a _complex action recognition_ dataset used in several works on long-term video understanding [16, 17, 26, 47]. CrossTask [49] is a dataset of _instructional videos_, which are composed of several short-term steps that contribute to the completion of a long-term task. Finally, the _Long-form Video Understanding_ (LVU) dataset [41] was proposed to learn complex long-term relationships, in contrast to short-term patterns, in video clips extracted from movies.
## 3 Assessing long-term action recognition datasets
### User study
According to our definition, an action is long-term if it cannot be classified from a single short video segment. We design a user study to test whether current long-term video understanding datasets respect this property. Our user study consists of two surveys. In the _Full Videos Survey_, the users are presented with the full-length videos from the datasets. In the _Video Segments Survey_, the users are presented with a short video segment extracted from a full-length video. In both surveys, the users are instructed to watch the video clip and express what action is being performed in the full video, in their opinion. The users are provided with a list of possible actions, which correspond to the classes from the analyzed long-term action datasets, and have to select exactly one action class from the list. We include the additional option "_I am not sure_", to let the users express uncertainty when they are in doubt about which action to select.
From the collected user votes in the _Full Videos Survey_ and the _Video Segments Survey_, we calculate and compare the action recognition accuracy. If the users from the two groups perform similarly, we can conclude that the videos do not contain long-term actions, as they can be recognized from single short-term actions comparably well to watching the full videos. We also calculate the user agreement per survey, measured with Krippendorff's \(\alpha\)[23], which gives an indication of how subjective the prediction task is. We expect that the more difficult a video is to classify, the more subjective the choice will be, thus resulting in low agreement.
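As a concrete illustration, the per-survey agreement computation can be sketched as follows; this is a minimal example assuming the third-party `krippendorff` Python package, and the vote matrix below is hypothetical, not the data collected in the study.

```python
# A minimal sketch of computing inter-rater agreement for one survey,
# assuming `pip install krippendorff`; the vote matrix is illustrative.
import numpy as np
import krippendorff

# Rows = users, columns = videos (or segments); entries are the index of the
# chosen action class, with np.nan encoding the "I am not sure" option,
# treated here as a missing rating.
votes = np.array([
    [0.0, 1.0, 1.0, np.nan, 2.0],
    [0.0, 1.0, 2.0, 2.0, 2.0],
    [0.0, np.nan, 1.0, 2.0, 2.0],
])

# Action classes are unordered labels, hence the nominal measurement level.
alpha = krippendorff.alpha(reliability_data=votes,
                           level_of_measurement="nominal")
print(f"Krippendorff's alpha: {alpha:.3f}")
```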
### Measuring recognition accuracy
From the _Full Videos Survey_, we collect user votes per class for each full-length video. In each full video, we express the votes in percentages (\(\%user\_votes_{v}(c)\)), which we obtain by dividing the votes per class by the amount of
\begin{table}
\begin{tabular}{l l l l l} \hline \hline
**Dataset** & **\#Videos** & **Length** & **\#L.T.** & **\#S.T.** \\ \hline
COFFEE [2] & 150 & 2 & 5 & 51 \\
Epic-Kitchens [11] & 432 & 7.5 & - & 149 \\
Breakfast [24] & 2k & 2.3 & 10 & 48 \\
Composite [29] & 212 & 1-23 & 41 & 218 \\
Charades [31] & 10k & 0.5 & - & 157 \\
50-Salads [33] & 54 & 6.4 & - & 17 \\
COIN [36] & 11.8k & 2.4 & 180 & 778 \\
IKEA FA [37] & 101 & 2-4 & - & 12 \\
DAHLIA [39] & 51 & 39 & 7 & - \\
LVU – Content understanding [41] & 226 & 1-3 & 4 & - \\
 & 1.3k & 1-3 & 5 & - \\
 & 723 & 1-3 & 6 & - \\
Multi-THUMOS [45] & 413 & 3 & - & 65 \\
YouCookII [48] & 2k & 5.3 & 89 & - \\
CrossTask [49] & 4.7k & 3-6 & 83 & 517 \\ \hline \hline
\end{tabular}
\end{table}
Table 1: Overview of current real-world datasets proposed for long-term video understanding tasks. We report the (approximate) number of videos, the average video length in minutes, and the number of global _long-term_ (L.T.) and _short-term_ (S.T.) action recognition classes, where applicable.
votes collected for the full video. As formalized in Equation 1, given \(\mathcal{C}\) classes from the evaluated dataset, excluding the _I am not sure_ option, we assign to the full video the prediction \(pred(v)\), i.e., the class voted by the majority of the users. The long-term action recognition accuracy is given by the number of full videos assigned with the correct class over the number of full videos considered in the study for the dataset.
\[pred(v)=\operatorname*{arg\,max}_{c\in\mathcal{C}}\quad\%user\_votes_{v}(c) \tag{1}\]
In the _Video Segments Survey_, we collect user votes for every segment \(s_{v}\) in a full video. Again, for each segment we calculate the percentage of votes per class \(\%user\_votes_{s_{v}}(c)\). Then, we extract the full video prediction from the votes of a single segment. To do this, we select the segment \(s^{*}_{v}\) with the highest percentage of votes for a single class, excluding the _I am not sure_ option. This approach is formalized in Equation 2. In the example in Figure 3, the full video is assigned the class _Making scrambled eggs_, voted by 86% of users in _Segment 5_, which is the maximum ratio of votes for one class across the video segments. According to our definition, if the full-length video is long-term, there should be no video segments that lead to the right predicted class. The accuracy is given by the number of full videos assigned with the correct label over the number of full videos considered in the study.
\[pred(v)=pred(s^{*}_{v})\text{,} \tag{2}\]
\[\text{where }s^{*}_{v}=\operatorname*{arg\,max}_{s_{v}\in v}\quad\{ \max_{c\in\mathcal{C}}\quad\%user\_votes_{s_{v}}(c)\}\text{,}\] \[pred(s^{*}_{v})=\operatorname*{arg\,max}_{c\in\mathcal{C}}\quad \%user\_votes_{s^{*}_{v}}(c)\text{.}\]
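For concreteness, the two prediction rules can be sketched as follows; the vote shares below are hypothetical, and the _I am not sure_ option is assumed to have been excluded beforehand.

```python
# A minimal sketch of the full-video predictions in Equations (1) and (2).
import numpy as np

def predict_full_video(vote_pct):
    """Eq. (1): majority class over the full-video vote shares (shape [C])."""
    return int(np.argmax(vote_pct))

def predict_from_segments(segment_vote_pct):
    """Eq. (2): pick the segment s* with the highest single-class vote share,
    then return that segment's majority class (input shape [S, C])."""
    shares = np.asarray(segment_vote_pct)
    s_star = int(np.argmax(shares.max(axis=1)))   # most "confident" segment
    return int(np.argmax(shares[s_star]))

# Example with 3 segments and 4 classes; segment 1 is the most confident one.
pcts = [[0.40, 0.30, 0.20, 0.10],
        [0.05, 0.86, 0.05, 0.04],
        [0.25, 0.25, 0.25, 0.25]]
print(predict_from_segments(pcts))  # -> class 1
```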
## 4 Results
We include in our study a representative dataset from complex action recognition, Breakfast [24], one instructional video dataset, CrossTask [49], and the _Long-form Video Understanding_ (LVU) dataset [41]. We implement the user study on Amazon Mechanical Turk [1] and collect responses from 167 users. We collect, on average, 12.09\(\pm\)1.62 votes for each video and video segment, which has been shown to be a sufficient amount [9]. Table 2 provides an overview of the results from the _Full Videos Survey_ and the _Video Segments Survey_, discussed in the following sections.
### Breakfast
Breakfast [24] is a collection of third-person videos of actors cooking a breakfast recipe, like scrambled eggs, coffee, or cereals and milk. Each video has a global label, which corresponds to the recipe being made, for a total of 10 classes. The classification task consists of correctly recognizing the recipe.
For our study, we select a representative subset of 30 videos, corresponding to 3 randomly selected videos per class. The full videos have average duration of 2.44 \(\pm\) 2.18 minutes. For the _Video Segments Survey_, we segment the video according to the short-term action timesteps (_coarse segmentation_) provided in the dataset. We remove segments that are shorter than 5 seconds, as we deem those segments highly uninformative, and we obtain 154 segments in total, of average duration 29 \(\pm\) 39 seconds, where \(\sim\)56% of the segments last less than 15 seconds. The large standard deviation is due to some repetitive short-term actions that can last above a minute, e.g. _stir dough_ or _fry egg_.
The results in Table 2 show that the recognition accuracy from the _Full Videos Survey_ (93.33%) and the _Video Segments Survey_ (90.0%) are close. This suggests that, although having access to the full long-term information in the video helps, looking at single short segments is sufficient to infer the right recipe class for the majority of the
\begin{table}
\begin{tabular}{l|c c} \hline \hline
\multirow{2}{*}{**Dataset**} & \multicolumn{2}{c}{**Classification accuracy (\%)**} \\
 & Full Videos & Video Segments \\ \hline
**Breakfast** & **93.33** & 90.0 \\
**CrossTask** & **100.0** & 97.2 \\
**LVU – Relationship** & **88.89** & **88.89** \\
**LVU – Scene** & **100.0** & **100.0** \\
**LVU – Speaking** & **80.0** & 60.0 \\ \hline \hline
\end{tabular}
\end{table}
Table 2: Average video recognition accuracy obtained from the _Full Videos Survey_ and _Video Segments Survey_ on the Breakfast [24], CrossTask [49] and LVU [41] datasets. The results suggest that long-term information is helpful but not necessary in the majority of the evaluated datasets.
\begin{table}
\begin{tabular}{l|c c c} \hline \hline
\multirow{2}{*}{**Dataset**} & \multicolumn{3}{c}{**User agreement**} \\
 & Full Videos & Video Segments & Selected Segments \\ \hline
**Breakfast** & **0.717** & 0.386 & 0.593 \\
**CrossTask** & 0.671 & 0.462 & **0.767** \\
**LVU – relationship** & 0.499 & 0.340 & **0.523** \\
**LVU – scene** & **0.755** & 0.481 & 0.686 \\
**LVU – speaking** & 0.159 & 0.191 & **0.265** \\ \hline \hline
\end{tabular}
\end{table}
Table 3: Overview of the user agreement in our user studies, measured in terms of Krippendorff’s \(\alpha\)[23]. We find that the users tend to agree in the _Full Videos Surveys_ and when selecting the segments with the highest amount of votes for a class. Recognizing the actions in the _Video Segments Survey_ is generally harder than when looking at the full video, resulting in more variability in the users’ predictions and, consequently, in lower agreement.
videos. From this result we conclude that the Breakfast dataset is not a proper long-term action dataset, according to our definition.
We analyze the amount of correct user votes, wrong votes and _I am not sure_ votes obtained in the user study, illustrated in Figure 4 (a). We obtained 86.78% of correct votes in the _Full Videos Survey_ and 54.47% in the _Video Segments Survey_. However, if we consider only the segments with the highest percentage of votes for one class, the amount of correct votes reaches 76.36%. A similar trend occurs in the user agreement in Table 3. By further inspecting the results from the _Video Segments Survey_, we notice that users are generally more uncertain when classifying the video segments early in the video, with a higher portion of _I am not sure_ votes compared to the later segments. In particular, 63.57% of the _I am not sure_ votes are obtained from the first two video segments in chronological order. We argue that breakfast dishes are usually better recognizable towards the
Figure 4: Overview of the user votes (correct, wrong and _I am not sure_) collected in our study. We compare the results from the _Full Videos_, all the _Video Segments_, and the Selected Segments with the highest percentage of votes for one class. The amount of correct votes in the Selected Segments is significantly higher than for all the _Video Segments_, and comparable to, or even higher than, the amount of correct votes obtained when watching the full videos. N.b., the user votes reported in this figure do not have to match the accuracies in Table 2. While the accuracy shows the percentage of videos correctly classified, the user votes are aggregated without considering the vote distributions within the specific videos.
Figure 3: In the _Video Segments Survey_, users have to understand what is happening in a long video by looking only at one short segment. We ask the users to vote for a video class and obtain predictions per segment. We assign to the full video the segment prediction with the highest percentage of votes for one class. In the example, taken from the Breakfast dataset [24], _Segment 5_ determines the video prediction _Scrambled eggs_.
end of the video, when the recipe is complete.
### CrossTask
CrossTask [49] is an instructional video dataset of \(\sim\)4.7k videos, covering themes like auto repair, cooking and DIY. The instructional videos show how to perform a _task_ (e.g., _Make a Latte_) through a list of _steps_ (e.g., _add coffee_, _press coffee_, _pour water_, _pour espresso_, _steam milk_, _pour milk_). It contains 18 primary tasks with step annotations and 65 related tasks with unlabeled steps. The dataset is meant to be used to learn steps in a weakly supervised learning setup. Here, we evaluate whether predicting the _task_ illustrated in an instructional video also fits our definition of long-term action recognition. We collect results from 36 video clips (2 random videos per primary task) of average duration 4.50 \(\pm\) 2.14 minutes. Similarly to Breakfast, we extract 260 segments from the videos according to the timesteps provided with the dataset. In CrossTask, the segments are significantly shorter than in Breakfast, with an average duration of 10 \(\pm\) 11 seconds and \(\sim\)81% of the segments being shorter than 15 seconds.
In Table 2, we compare the task recognition accuracy from the _Full Videos Survey_, 100%, and the _Video Segments Survey_, 97.2%. In both cases, users can recognize the task with high accuracy. Only one video (YouTube id _kReUYkInvinc_) is misclassified in the _Video Segments Survey_, despite 5/8 of its video segments being correctly classified. Considering the user agreement (Table 3) and correct votes by the users (Figure 4, b), we find that both quantities are marginally higher in the Selected Segments over the Full Videos. This result shows that users tend to make the same mistakes (as for the aforementioned video) while confirming that most of the tasks are generally recognizable both from short video segments and full videos. It is worth noting that the results reported in Table 2 and Figure 4 are not necessarily the same. The accuracy corresponds to the percentage of videos correctly classified, while the user votes are aggregated without considering the vote distributions within the specific videos. Because of the high task recognition accuracy obtained from the _Video Segments Survey_, we conclude that the videos in CrossTask do not contain long-term actions. We recommend using this dataset for the other video understanding tasks that it supports, like captioning and action localization.
### LVU
The _Long-form Video Understanding_ (LVU) dataset [41] has been recently proposed to study complex relationships in video clips extracted from movies. It provides three tasks, related to content understanding, user engagement prediction and movie metadata prediction, and contains over 11k videos. Similarly to previous work [35], we select the task of _Content Understanding_, which involves classifying the _relationship_ among the characters, where the _scene_ is taking place, and the characters' _speaking_ style, from video clips of \(\sim\)2.5 minutes. The respective annotations consist of a global label per video. We assess whether predicting _Relationship_, _Scene_ and _Speaking_ is a form of long-term action recognition, according to our definition. We select videos from the test set and manually extract segments for each of the three classification tasks. We obtain 9 videos (3 per class) for _Relationship_, 12 videos (2 per class) for _Scene_ and 10 videos (2 per class) for _Speaking_, and a total of 140 segments of \(\sim\)30 seconds.
Table 2 shows the classification accuracies obtained from the _Full Videos Survey_ and _Video Segments Survey_. Comparing the results, we find no difference for _Relationship_ and _Scene_. In particular, _Scene_ classification is performed with 100% accuracy, indicating that this prediction task is easy for humans. We identify a problem associated with LVU - _Relationship_. The labels husband-wife, friends, boyfriend-girlfriend are associated with specific characters in the movie, but other characters might appear within the same video clip. For example, in Figure 5 (a), the ground-truth label for the movie in the first row is _Husband-Wife_. However, a third male character appears in the scene in addition to the _husband and wife_. Therefore, the labels only correctly apply to a specific subset of the characters in the scene, or to a precise time window when only the target characters appear. As a result, the full videos are classified with a high percentage of wrong votes, while some of the video segments that do not include the characters corresponding to the label are completely misclassified. This justifies the large portion of wrong votes in Figure 4 (c) and relatively low agreement in Table 3.
We find a similar annotation problem in LVU - _Speaking_. Also in this case, the global label only applies to a subset of the characters in the scene. In the example in Figure 5 (c), the label _Threatens_ only applies to the man with the gun. This explains the difference in performance when comparing the accuracies from the _Full Videos Survey_ and _Video Segments Survey_ in Table 2, the large amount of wrong votes in Figure 4 (e) and low agreement in Table 3. Because of the problem with the annotations and the equal recognition performance of 88.89% obtained from the _Full Videos Survey_ and _Video Segments Survey_ (reported in Table 2), we conclude that LVU - _Relationship_ is not a long-term video understanding task. Similar conclusions apply for LVU - _Scene_, with perfect classification scores resulting from both surveys. Finally, the labels in LVU - _Speaking_ are not truly long-term, as they apply to a subset of characters speaking only during some relatively short time-windows.
## 5 Conclusion
We propose a method to assess whether an action is _long-term_. We apply our method to three current long-term video
understanding datasets: Breakfast, CrossTask and LVU. Our results show that long-term information might help but is _not necessary_ in the majority of videos from the analyzed datasets. In fact, the long-term actions in these videos can be correctly classified by humans looking solely at a single short video segment. This result suggests that deep learning models trained and tested on these datasets might pick up short-term shortcuts and still show correct recognition performance, without actually learning any long-term information. Following our findings, we urge researchers investigating automatic long-term action recognition to use datasets that require long-term information to be solved.
**Acknowledgements.** This work is part of the research program Efficient Deep Learning (EDL), which is (partly) financed by the Dutch Research Council (NWO).
Figure 5: Examples of correct (green) and wrong (red) classification results collected from the _Video Segments_ (V.S.) and _Full Videos_ (F.V.) surveys on the Long-form Video Understanding (LVU) - Relationship (a), Scene (b) and Speaking (c) dataset [41]. Users correctly classify a large portion of video segments. Other segments are misclassified due to annotation noise.
2301.02055 | An adaptive solution strategy for Richards' equation | Flow in variably saturated porous media is typically modelled by the Richards
equation, a nonlinear elliptic-parabolic equation which is notoriously
challenging to solve numerically. In this paper, we propose a robust and fast
iterative solver for Richards' equation. The solver relies on an adaptive
switching algorithm, based on rigorously derived a posteriori indicators,
between two linearization methods: L-scheme and Newton. Although a combined
L-scheme/Newton strategy was introduced previously in [List & Radu (2016)],
here, for the first time we propose a reliable and robust criterion for
switching between these schemes. The performance of the solver, which can be in
principle applied to any spatial discretization and linearization methods, is
illustrated through several numerical examples. | Jakob S. Stokke, Koondanibha Mitra, Erlend Storvik, Jakub W. Both, Florin A. Radu | 2023-01-05T13:10:53Z | http://arxiv.org/abs/2301.02055v1 | # An adaptive solution strategy for Richards' equation
###### Abstract
Flow in variably saturated porous media is typically modelled by the Richards equation, a nonlinear elliptic-parabolic equation which is notoriously challenging to solve numerically. In this paper, we propose a robust and fast iterative solver for Richards' equation. The solver relies on an adaptive switching algorithm, based on rigorously derived _a posteriori_ indicators, between two linearization methods: L-scheme and Newton. Although a combined L-scheme/Newton strategy was introduced previously in [1], here, for the first time we propose a reliable and robust criterion for switching between these schemes. The performance of the solver, which can in principle be applied to any spatial discretization and linearization methods, is illustrated through several numerical examples.
**Keywords:**_Iterative linearization, Adaptivity, L-scheme, Newton's method, Richards' equation, Nonlinear degenerate diffusion_
## 1 Introduction
In this paper, we consider the pressure head \(\psi\) based formulation of the Richards equation
\[\partial_{t}\theta(\psi)-\nabla\cdot[K(\theta(\psi))\nabla(\psi+z)]=f, \tag{1}\]
where \(\theta:\mathbb{R}\rightarrow[0,1]\) is the water content, \(K\) is the rank 2 permeability tensor of the porous medium, \(z\) is the height against the gravitational direction, and \(f\) is a source/sink term. Richards' equation is used to model the flow of water in saturated/unsaturated porous media. It is a highly nonlinear and degenerate elliptic-parabolic equation which makes solving it a very challenging task, see e.g. the review work of [2]. We refer to [3] for the existence and uniqueness of a weak solution of Richards' equation.
There are plenty of works regarding the discretization of Richards' equation. Due to the low regularity of solutions of (1), see [4], generally a backward Euler (implicit) scheme (3) is employed to discretize it in time, see e.g. [1, 5]. Regarding spatial discretization we mention continuous Galerkin finite elements [6, 7], mixed or expanded mixed finite
elements [8, 9, 10, 11, 12], finite volumes [13, 14] (see also the recent review [15]), or multipoint flux approximation (MPFA) [16]. Regardless of the choice of the spatial discretization method, one has to solve at each time step a nonlinear, finite-dimensional problem. In this paper, we will focus on how to efficiently solve these problems using iterative linearization techniques.
The main iterative linearization methods used for this type of nonlinear problem are the Newton method, the Picard or modified Picard method, the L-scheme, the Jäger–Kačur method, or combinations of them. Perhaps the most common choice is the Newton method [17, 18], which converges quadratically provided the initial guess is close enough to the final solution. For an \(r\)-Hölder continuous \(\theta^{\prime}\) function (\(r\in(0,1]\)) and the initial guess equal to the solution of the previous time step, it was shown in [10] that the Newton scheme is \((1+r)^{\text{th}}\) order convergent if
\[\tau\leq C\theta_{m}^{\frac{2+r}{r}}h^{d}, \tag{2}\]
where \(\tau>0\) is the time step size, \(h>0\) the mesh size, \(d\in\mathbb{N}\) the spatial dimension, \(C>0\) a constant which depends on the domain and the nonlinearities, and \(\theta_{m}:=\inf\theta^{\prime}\geq 0\). However, for simulations in 2 or 3 dimensions, condition (2) is quite restrictive, particularly if the mesh size \(h\) is small, or if the problem is degenerate (\(\theta_{m}=0\)). This fact is corroborated by numerical simulations in [1, 19] which show that the Newton method fails to converge in many such cases. One can improve the robustness of the Newton method by using a damped version of it; line search, variable switching [20], or trust-region techniques [21] are examples of such. Alternatively, one can increase the robustness of Newton's method by first performing a few fixed-point iterations. This was proposed in [17, 18] using the Picard method and in [1] using the L-scheme. Nevertheless, the switching between the schemes was not based on an _a posteriori_ indicator, but done in a heuristic manner.
The other linearization schemes are fixed-point type schemes, typically more robust but only linearly convergent. It has been shown in [22, 13] that the Picard method does not perform well for Richards' equation. A modified Picard method was proposed in [22]. The modified Picard method coincides with Newton's method in the case of a constant permeability, and therefore inherits its robustness problems. The L-scheme, first proposed in [23, 24, 1], is a stabilized Picard method and was designed to converge unconditionally, irrespective of the choice of the initial guess, even in degenerate settings and for larger time steps. The L-scheme (see Definition 2.3) uses a global constant as a stabilization coefficient, does not involve the computation of any derivatives, and thus is not only more stable but also consumes less computational time per iteration due to the easier assembly of the stiffness matrices, which are better conditioned. Numerical results in [1, 19] clearly demonstrate this. However, they also reveal that the L-scheme converges considerably slower in terms of the number of iterations compared to the Newton scheme, and at a linear rate. Furthermore, its overall performance strongly depends on the careful choice of a tuning parameter; despite theoretical stability, an improper choice may effectively result in stagnation. The sensitivity of the performance of the L-scheme with respect to the stabilization can be significantly relaxed when combining the L-scheme with Anderson acceleration [25]. Indeed, for the Richards equation extended to deformable porous media and solved by an L-scheme, it has been demonstrated that, first, the stabilization parameter can be chosen outside the theoretical range, and second, the convergence in the non-degenerate case can be retained in cases of previous divergence, or accelerated, as also discussed from a theoretical perspective [26]. Similar stabilizing properties of the Anderson acceleration have also been discussed for general fixed-point methods [27, 28]. Other fixed-point iteration schemes include the Jäger–Kačur scheme [29], which converges unconditionally albeit slowly and is more computationally expensive than the L-scheme per iteration, see Table 1. The modified L-scheme, proposed in [19], shows stability similar to the L-scheme while having much faster convergence rates (scaling with \(\tau\)); yet, the convergence is still linear.
In this paper, we investigate a hybrid strategy, dynamically switching between the L-scheme and Newton's method. This utilizes the advantages of both methods: the unconditional stability of the L-scheme, and the quadratic convergence of Newton's method when close to the exact solution. The crucial difference to previous works on hybrid approaches, e.g. [1, 17], is the adaptive nature of the switch between both linearization methods. A switch from the L-scheme to Newton's method is performed when the iterate is sufficiently close to the solution. This finally allows us to balance robustness and speed.
The main challenge in implementing this strategy originates from deriving a rigorous switching criterion between the schemes. Since the _a priori_ estimates, such as the ones provided in [10], involve unknown constants and assume the worst-case scenario, we pursue an _a posteriori_ estimate-based approach here instead. A rigorous and efficient _a posteriori_ estimator for the fully degenerate Richards equation involving linearization errors was derived in [30] in the continuous space-time setting. For the time-discrete problem (3), a robust, efficient, and reliable estimator was derived in [31] using an orthogonal decomposition result dividing the total error into a discretization and a linearization component. Furthermore, its effectiveness was demonstrated numerically. These papers serve as the main inspirations for deriving the _a posteriori_ based switching criteria in Section 3 and an adaptive L-scheme algorithm in Appendix A. Nevertheless, since we are only interested in computing the linearization error component, the computation of the equilibrated flux is avoided wherever possible.
The paper is organized as follows. In Section 2, we introduce the mathematical notation, state the assumptions, define the fully-discrete solution, and elaborate on different linearization methods. In Section 3, the adaptive switching algorithm is developed. Firstly, a concept of linearization error is introduced along with the derivation of a predictive indicator for linearization error of the next iteration. The adaptive algorithm compares the linearization error with the estimator to determine the exact switching points. In Section 4, three numerical test cases (partially saturated, degenerate, and realistic benchmarks) are presented which illustrate the robustness and computational efficiency of the adaptive scheme compared to the standard Newton's method or the L-scheme. Section 5 contains the conclusions of this work. The paper ends with two appendices, one concerning an adaptive L-scheme and the other on the details of the computation of the equilibrated flux.
## 2 Mathematical and numerical formulation
We consider Richards' equation in the space-time domain \(\mathcal{G}=\Omega\times[0,T]\), where \(\Omega\) is a bounded domain in \(\mathbb{R}^{d}\) with a Lipschitz continuous boundary \(\partial\Omega\), and \(T>0\). Let \((\cdot,\cdot)\) and \(\|\cdot\|\) be the inner product and norm of the square-integrable functions in \(\Omega\), i.e. \(L^{2}(\Omega)\), respectively. Moreover, using common notation from functional analysis, \(H^{1}(\Omega)\) represents the Sobolev space of functions with first-order weak derivatives in \(L^{2}(\Omega)\), and \(H_{0}^{1}(\Omega)\) its subspace containing functions with vanishing trace at the boundary.
**Assumption 1**.: _For the material properties \(\theta\) and \(K\), and source term \(f\) in (1), the following assumptions are made:_
1. _The saturation function_ \(\theta(\cdot)\) _is Lipschitz continuous and monotonically increasing with_ \(L_{\theta}\) _and_ \(\theta_{m}\geq 0\) _being the Lipschitz constant and the lower bound for the derivative, respectively._
2. _The permeability tensor_ \(K:[0,1]\to\mathbb{R}^{d\times d}\) _satisfies the uniform (pseudo) ellipticity condition, i.e., for constants_ \(\kappa_{M}>\kappa_{m}\geq 0\)_,_ \[\kappa_{m}|\boldsymbol{z}|^{2}\leq\boldsymbol{z}^{\mathrm{T}}\,K\,\boldsymbol{ z}\leq\kappa_{M}|\boldsymbol{z}|^{2},\quad\forall\,\boldsymbol{z}\in\mathbb{R}^{d}.\] _Moreover,_ \((K\circ\theta)\) _is Lipschitz continuous, with Lipschitz constant_ \(L_{\kappa}\)_._
3. _The source function satisfies_ \(f\in C(0,T;L^{2}(\Omega))\)_._
Note that these assumptions are consistent with the commonly used Brooks-Corey [32] and van Genuchten [33] parametrizations of the functions \(\theta\) and \(K\).
### Time-discretization: Backward Euler
To discretize the Richards equation in time, we consider the backward-Euler time discretization of (1). For this implicit scheme, no CFL conditions need to be satisfied for stability (thus avoiding restrictions on the time step size). Moreover, it does not require higher-order time regularity (unlike the Crank–Nicolson scheme) to converge to the time-continuous solution. We subdivide the time interval \([0,T]\) into \(N\) uniform subintervals with time step size \(\tau=T/N\) and discrete time steps \(t_{n}=\tau n\), where \(n\in\{1,...,N\}\). Then, we look for a sequence \(\{\psi^{n}\}_{n=1}^{N}\) of functions in \(\Omega\), satisfying the time-discrete system
\[\frac{\theta(\psi^{n})-\theta(\psi^{n-1})}{\tau}-\nabla\cdot[K(\theta(\psi^{n}))\nabla(\psi^{n}+z)]=f(t_{n}). \tag{3}\]
Denoting \(f(t_{n})\) by \(f^{n}\) subsequently, a more precise and general definition of the weak solutions of (3) is given below. For simplicity, we assume homogeneous Dirichlet boundary conditions, although our results are valid for general Dirichlet and Neumann boundary conditions.
**Definition 2.1** (Backward Euler time-discretization of (1)).: _Let \(\psi^{0}\in L^{2}(\Omega)\) be given. Then the sequence \(\{\psi^{n}\}_{n=1}^{N}\subset H^{1}_{0}(\Omega)\) is the backward Euler solution of (1) if for all \(n\in\{1,...,N\}\), and \(v\in H^{1}_{0}(\Omega)\),_
\[\frac{1}{\tau}(\theta(\psi^{n})-\theta(\psi^{n-1}),v)+(K(\theta(\psi^{n})) \nabla(\psi^{n}+z),\nabla v)=(f^{n},\,v). \tag{4}\]
### Space-discretization: Continuous Galerkin finite elements
We consider the finite element method to discretize (4) further in space. Let \(\mathcal{T}_{h}\) be a triangulation of \(\Omega\) into closed \(d\)-simplices, where \(h:=\max_{E\in\mathcal{T}_{h}}\left(\mathrm{diam}(E)\right)\) denotes the mesh size. Assuming \(\Omega\) is a polygon, the Galerkin finite element space is
\[V_{h}=\left\{v_{h}\in H^{1}_{0}(\Omega)\,|\;v_{h|E}\in\mathcal{P}_{p}(E),\;E\in\mathcal{T}_{h}\right\}, \tag{5}\]
where \(\mathcal{P}_{p}(E)\) denotes the space of \(p\)-order polynomials on \(E\), \(p\in\mathbb{N}\). Then, the fully discrete Galerkin formulation of Richards' equation reads
**Definition 2.2** (Fully discrete solution of (1)).: _Let \(\psi^{0}_{h}:=\psi^{0}\in L^{2}(\Omega)\). Then the sequence \(\{\psi^{n}_{h}\}_{n=1}^{N}\subset V_{h}\) is the fully discrete solution of (1) if for all \(n\in\{1,...,N\}\), and \(v_{h}\in V_{h}\),_
\[(\theta(\psi^{n}_{h})-\theta(\psi^{n-1}_{h}),v_{h})+\tau(K(\theta(\psi^{n}_{h} ))\nabla(\psi^{n}_{h}+z),\nabla v_{h})=\tau(f^{n},v_{h}). \tag{6}\]
### Iterative linearization schemes
To obtain the solution of the nonlinear problem (6) an iterative linearization scheme is generally employed. To investigate the trade-off between the stability and speed of such schemes, we focus on two linearization strategies that will be representatives of linearly and quadratically convergent methods with convergence meant in the L\({}^{2}\) sense.
#### 2.3.1 Linearly convergent schemes: The L-scheme
Whereas the quadratically convergent Newton method utilizes a proper first-order Taylor expansion of the nonlinear terms in (6), the linearly convergent methods that we consider here only exploit an expansion of the monotone components, i.e. the nonlinear saturation function. Moreover, the expansion does not need to be exact. Consider the following scheme: Given \(\psi_{h}^{n-1},\psi_{h}^{n,j-1}\in V_{h}\), find \(\psi_{h}^{n,j}\in V_{h}\) such that
\[(\mathcal{L}(\psi_{h}^{n,j-1})(\psi_{h}^{n,j}-\psi_{h}^{n,j-1}),v_ {h})+\tau(K(\theta(\psi_{h}^{n,j-1}))\nabla(\psi_{h}^{n,j}+z),\nabla v_{h})\] \[\qquad\qquad=\tau(f^{n},v_{h})-(\theta(\psi_{h}^{n,j-1})-\theta( \psi_{h}^{n-1}),v_{h}), \tag{7}\]
for all \(v_{h}\in V_{h}\), where \(\mathcal{L}:\mathbb{R}\rightarrow[0,\infty)\) is a predetermined positive weight function, and \(j\in\mathbb{N}\) is the iteration index. Observe that, provided \(\kappa_{m}>0\) in Assumption 1, the problem above is linear, monotone, and Lipschitz with respect to \(\psi_{h}^{n,j}\), and hence a unique weak solution of (7) exists. Moreover, if the iteration converges, i.e. if \(\psi_{h}^{n,j}\rightarrow\psi_{h}^{n}\) strongly in \(H_{0}^{1}(\Omega)\), then \(\psi_{h}^{n}\) indeed solves (6). There can be many different choices of the function \(\mathcal{L}\), which lead to different linearization schemes, see Table 1. For the rest of this paper, we mainly focus on the case when \(\mathcal{L}\) is constant, which leads to the widely studied L-scheme.
**Definition 2.3** (L-scheme).: _Let \(\psi_{h}^{n-1},\psi_{h}^{n,0}\in L^{2}(\Omega)\) and \(L>0\) be given. Then the L-scheme solves for the sequence \(\{\psi_{h}^{n,j}\}_{j\in\mathbb{N}}\subset V_{h}\) which satisfies for all iteration indices \(j\in\mathbb{N}\), and \(v_{h}\in V_{h}\)_
\[\begin{split}& L((\psi_{h}^{n,j}-\psi_{h}^{n,j-1}),v_{h})+\tau(K( \theta(\psi_{h}^{n,j-1}))\nabla(\psi_{h}^{n,j}+z),\nabla v_{h})\\ &=\tau(f^{n},v_{h})-(\theta(\psi_{h}^{n,j-1})-\theta(\psi_{h}^{n -1}),v_{h}).\end{split} \tag{8}\]
Different choices of \(\mathcal{L}\) and the resulting schemes are listed in Table 1.
**Remark 1** (Non-constant \(L\) for heterogeneous media).: _For the L-scheme, \(L\) might not necessarily be a constant, but can be a function of the spatial variable \(\mathbf{x}\). This would be typically the case for heterogeneous media. All the proofs can be adapted to include a spatially dependent \(L\), see [34] where this was done for a splitting scheme for Biot equations._
\begin{table}
\begin{tabular}{|l|c|} \hline
Scheme & \(\mathcal{L}(\psi)\) \\ \hline
Picard & 0 \\
Modified Picard [22] & \(\theta^{\prime}(\psi)\) \\
Jäger–Kačur [29] & \(\sup_{\xi\in\mathbb{R}}\frac{\theta(\xi)-\theta(\psi)}{\xi-\psi}\) \\
L-scheme [23, 24, 1] & \(L>0\) constant \\
Modified L-scheme [19] & \(\theta^{\prime}(\psi)+M\tau\), \(M>0\) constant \\ \hline
\end{tabular}
\end{table}
Table 1: Different linearly convergent schemes (7) defined along with their linearization weight function \(\mathcal{L}\).
It has been shown in [1, Theorem 1] that if \(L\geq\frac{1}{2}\sup_{\xi\in\mathbb{R}}\theta^{\prime}(\xi)\), then the L-scheme iterations converge irrespective of the initial guess, under minor restrictions on the time step size \(\tau\) and independently of the mesh size. However, numerical results in [1, 19] reveal that the convergence of the L-scheme can be relatively slow, depending on the choice of the stabilization parameter \(L\); see Appendix A for an adaptive L-scheme. One can enhance the convergence speed by computing \(L\) using the previous iterates and derivatives. In general, taking \(L\) as the Jacobian matrix would lead to the Newton method, which is why the L-scheme can also be interpreted as a modified Newton method. This is exploited in the modified Picard scheme, first proposed in [22], which uses \(\mathcal{L}(\psi^{n,j-1})=\theta^{\prime}(\psi^{n,j-1})\), complying with the first-order Taylor series expansion \(\theta(\psi^{n,j})\approx\theta(\psi^{n,j-1})+\theta^{\prime}(\psi^{n,j-1})(\psi^{n,j}-\psi^{n,j-1})\). As a result, if it converges, it requires fewer iterations compared to the L-scheme, although the convergence is still linear. Nevertheless, this choice of the \(\mathcal{L}\) function may lead to divergence of the scheme for larger time step sizes, as predicted in [10] and observed numerically in [1, 19]. In an attempt to resolve this issue, a modified L-scheme was proposed in [19] that inherits the characteristics of both the L-scheme (except that it uses derivatives and the linear systems are not necessarily well conditioned) and the Picard scheme. The modified L-scheme exhibits increased stability compared to the Picard scheme while retaining its speed. However, the modified L-scheme converges unconditionally only under the additional restrictions that \(\psi_{h}^{n,0}=\psi_{h}^{n-1}\) and the discrete time-derivative \((\psi_{h}^{n}-\psi_{h}^{n-1})/\tau\) is in \(L^{\infty}(\Omega)\). Since the objective of this paper is to start the linearization iterations with a stable scheme, and then switch to a quadratically converging scheme when its convergence can be guaranteed, the rest of the study focuses on the L-scheme, which is arguably the most stable among the schemes presented in Table 1 and the cheapest in terms of computing time per iteration (due to well-conditioned linear systems and the absence of derivatives). Nonetheless, we remark that our methodology generalizes to all other linearly converging iterative methods.
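To make Definition 2.3 concrete, the following is a minimal 1D finite-difference sketch of one backward-Euler time step (3) solved with the L-scheme (8); the \(\theta\) and \(K\) below are illustrative stand-ins (not a calibrated van Genuchten model), gravity and source terms are omitted, and the boundary values are held fixed. It is a sketch under these assumptions, not the implementation used in the experiments.

```python
# A minimal 1D L-scheme sketch for one backward-Euler step of (3).
import numpy as np

def theta(psi):            # monotone, Lipschitz saturation (illustrative)
    return 1.0 / (1.0 + np.exp(-psi))

def K(th):                 # scalar permeability (illustrative)
    return 0.1 + th**2

def l_scheme_step(psi_old, L, tau, h, tol=1e-8, max_iter=200):
    psi = psi_old.copy()                       # initial guess: previous time step
    n = len(psi)
    for _ in range(max_iter):
        k = K(theta(psi))
        k_face = 0.5 * (k[:-1] + k[1:])        # arithmetic average at cell faces
        A = np.zeros((n, n))
        b = np.zeros(n)
        for i in range(1, n - 1):              # assemble (8): (L I + tau A_K) psi_new = b
            A[i, i - 1] = -tau * k_face[i - 1] / h**2
            A[i, i + 1] = -tau * k_face[i] / h**2
            A[i, i] = L + tau * (k_face[i - 1] + k_face[i]) / h**2
            b[i] = L * psi[i] - (theta(psi[i]) - theta(psi_old[i]))
        A[0, 0] = A[-1, -1] = 1.0              # Dirichlet rows: keep boundary values
        b[0], b[-1] = psi[0], psi[-1]
        psi_new = np.linalg.solve(A, b)
        # simple stopping test (Euclidean stand-in for the norm in (10))
        if np.linalg.norm(psi_new - psi) < tol:
            return psi_new
        psi = psi_new
    return psi

# Example: 101 grid points on [0, 1] with fixed boundary heads -2 and 0;
# L = 0.3 satisfies L >= (1/2) sup theta' = 0.125 for this theta.
x = np.linspace(0.0, 1.0, 101)
psi_old = -2.0 + 2.0 * x
psi_new = l_scheme_step(psi_old, L=0.3, tau=0.01, h=x[1] - x[0])
```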
**Remark 2** (Generality of the results).: _Although the analysis of Section 3 primarily focuses on the switching between L-scheme and the Newton method, the same techniques can be directly extended to cover switching between the schemes in Table 1 and Newton. Moreover, the \(L\)-adaptive strategy in Appendix A can be extended to the modified L-scheme (see Table 1) to select the parameter \(M>0\) adaptively._
#### 2.3.2 Quadratically convergent scheme: The Newton method
The Newton method uses the first order Taylor series expansions of all the nonlinear functions in (1) to ensure quadratic rates of convergence.
**Definition 2.4** (The Newton method).: _Let \(\psi_{h}^{n-1},\psi_{h}^{n,0}\in L^{2}(\Omega)\) be given. Then the Newton method solves for the sequence \(\{\psi_{h}^{n,j}\}_{j\in\mathbb{N}}\subset V_{h}\) which satisfies for all iteration indices \(j\in\mathbb{N}\), and \(v_{h}\in V_{h}\)_
\[\begin{split}(\theta^{\prime}&(\psi_{h}^{n,j-1})( \psi_{h}^{n,j}-\psi_{h}^{n,j-1}),v_{h})+\tau(K(\theta(\psi_{h}^{n,j-1})) \nabla(\psi_{h}^{n,j-1}+z),\nabla v_{h})\\ &+\tau\left((K\circ\theta)^{\prime}(\psi_{h}^{n,j-1})\nabla(\psi_ {h}^{n,j-1}+z)(\psi_{h}^{n,j}-\psi_{h}^{n,j-1}),\nabla v_{h}\right)\\ &=\tau(f^{n},v_{h})-(\theta(\psi_{h}^{n,j-1})-\theta(\psi_{h}^{n -1}),v_{h}).\end{split} \tag{9}\]
However, this comes at the cost of decreased numerical stability, as discussed in Section 1. In the next section, we combine the L-scheme and the Newton method in a consistent manner in order to obtain a linearization strategy that is both stable and fast.
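For comparison, below is a minimal sketch of one Newton-type iteration for the same 1D discretization; it reuses the illustrative `theta` and `K` from the L-scheme sketch above and, as a simplifying assumption, approximates the Jacobian column-wise by finite differences instead of assembling \(\theta^{\prime}\) and \((K\circ\theta)^{\prime}\) analytically as in (9).

```python
# A minimal 1D Newton-type iteration sketch; reuses theta, K from above.
import numpy as np

def residual(psi, psi_old, tau, h):
    """Interior residual of (6) with f = 0 and gravity omitted."""
    n = len(psi)
    F = np.zeros(n)
    k = K(theta(psi))
    k_face = 0.5 * (k[:-1] + k[1:])
    for i in range(1, n - 1):
        flux = (k_face[i] * (psi[i + 1] - psi[i])
                - k_face[i - 1] * (psi[i] - psi[i - 1])) / h**2
        F[i] = theta(psi[i]) - theta(psi_old[i]) - tau * flux
    return F

def newton_step(psi, psi_old, tau, h, eps=1e-7):
    F = residual(psi, psi_old, tau, h)
    n = len(psi)
    J = np.zeros((n, n))
    for j in range(1, n - 1):                  # finite-difference Jacobian columns
        d = np.zeros(n); d[j] = eps
        J[:, j] = (residual(psi + d, psi_old, tau, h) - F) / eps
    J[0, 0] = J[-1, -1] = 1.0                  # keep Dirichlet boundary values fixed
    return psi - np.linalg.solve(J, F)
```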
## 3 A posteriori estimate based adaptive switching between L-scheme and Newton
In this section, we develop the switching algorithm between L-scheme and the Newton method using _a posteriori_ error analysis. For comparing the errors between different linearization schemes we introduce a uniform notion of linearization errors \(\eta_{\text{lin}}\) in Section 3.1 based on arguments in [31]. The idea behind the adaptive algorithm is to start with the L-scheme and derive an estimator \(\eta_{L\to N}\) in Section 3.2 that predicts from the \(j^{\text{th}}\) and \((j-1)^{\text{th}}\) iterate the linearization error for the next iteration if done using the Newton scheme. If the error is predicted to decrease, then the iteration switches to Newton. Then another estimator \(\eta_{N\to L}\) is derived in Section 3.3 which predicts the linearization error of the next step of the Newton iteration. The algorithm switches back to the L-scheme in case the error is predicted to increase. In fact, we go one step further in Appendix A and derive an estimator \(\eta_{L\to L}\) to predict if the L-scheme itself will converge and to tune the value of \(L\) accordingly. Finally, the full algorithm is laid out in Section 3.4 based on these estimators.
### Linearization errors and iteration-dependent energy norms
In [31] it is shown that the total numerical error corresponding to a finite element-based linearization scheme can be orthogonally decomposed into a discretization component and a linearization component if the errors are computed using an iteration-dependent energy norm (for the linearly convergent schemes in Table 1, this is simply the energy norm induced by the symmetric bilinear form associated with the unknown \(\psi_{h}^{n,j}\) in (7)). Here, we are only interested in the linearization component, which is defined as the difference between successive iterates in the aforementioned energy norm, i.e.,
\[\eta_{\text{lin}}^{j}:=\left|\kern-1.075pt\left|\kern-1.075pt\left|\psi_{h}^{n,j}-\psi_{h}^{n,j-1}\right|\kern-1.075pt\right|\kern-1.075pt\right|, \tag{10}\]

where \(\left|\kern-1.075pt\left|\kern-1.075pt\left|\cdot\right|\kern-1.075pt\right|\kern-1.075pt\right|\) denotes the iteration-dependent energy norm of the scheme used at iteration \(j\). With reference to Definition 2.3, the norm for the L-scheme is

\[\left|\kern-1.075pt\left|\kern-1.075pt\left|\xi\right|\kern-1.075pt\right|\kern-1.075pt\right|_{L,\psi_{h}^{n,j-1}}:=\left(\int_{\Omega}L\,\xi^{2}+\tau|K(\theta(\psi_{h}^{n,j-1}))^{\frac{1}{2}}\nabla\xi|^{2}\right)^{\frac{1}{2}} \tag{11}\]
for all \(\xi\in H^{1}_{0}(\Omega)\), and with reference to Definition 2.4 the norm for the Newton method is
\[\left|\kern-1.075pt\left|\kern-1.075pt\left|\xi\right|\kern-1.075pt\right|\kern-1.075pt\right|_{N,\psi^{n,j-1}_{h}}:=\left(\int_{\Omega}\theta^{\prime}(\psi^{n,j-1}_{h})\,\xi^{2}+\tau|K(\theta(\psi^{n,j-1}_{h}))^{\frac{1}{2}}\nabla\xi|^{2}\right)^{\frac{1}{2}}. \tag{12}\]
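For concreteness, the following is a minimal 1D sketch of evaluating the iteration-dependent norm (12) by midpoint quadrature on a uniform grid; the callables are user-supplied assumptions (e.g. the illustrative \(\theta\) and \(K\) from Section 2.3), not part of the method itself.

```python
# A minimal sketch: midpoint-quadrature evaluation of the norm (12) in 1D.
import numpy as np

def newton_norm(xi, psi_prev, tau, h, theta_prime, K, theta):
    grad = np.diff(xi) / h                          # elementwise gradient of xi
    xi_mid = 0.5 * (xi[:-1] + xi[1:])               # midpoint values of xi
    psi_mid = 0.5 * (psi_prev[:-1] + psi_prev[1:])  # midpoint values of the iterate
    integrand = theta_prime(psi_mid) * xi_mid**2 + tau * K(theta(psi_mid)) * grad**2
    return np.sqrt(h * integrand.sum())
```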
### L-scheme to Newton switching estimate
For some \(i\in\mathbb{N}\), let the sequence \(\{\psi^{n,j}_{h}\}_{j=1}^{i}\subset V_{h}\) be obtained using the L-scheme (8), and in the \((i+1)^{\text{th}}\)-iteration we want to test for switching to the Newton scheme. Let \(\tilde{\psi}^{n,i+1}_{h}\in V_{h}\) be the solution of the Newton scheme (9) having \(\psi^{n,i}_{h}\) as the previous iterate. In this section, we will assume the following:
**Assumption 2** (Convection term is not dominant).: _For a given \(i\in\mathbb{N}\), there exists a constant \(C^{i}_{N}\in[0,2)\) such that_
\[\tau|K(\theta(\psi^{n,i}_{h}))^{-\frac{1}{2}}(K\circ\theta)^{\prime}(\psi^{n, i}_{h})\nabla(\psi^{n,i}_{h}+z)|^{2}\leq(C^{i}_{N})^{2}\theta^{\prime}(\psi^{n,i }_{h}), \tag{13}\]
_a.e. in \(\Omega\)._
The assumption above is also required to show the coercivity of the linear problem (9) for \(j=i+1\), and hence, to show the existence of the solution \(\tilde{\psi}^{n,i+1}_{h}\). Observe that, since \(\psi^{n,i}_{h}\) is known, the constant \(C^{i}_{N}\) is fully computable. Additionally, it is smaller than 2 if the numerical flux is bounded and \(\tau\) is small. Notably, the estimate holds even in the degenerate case when \(\theta^{\prime}(\psi^{n,i}_{h})=0\), since the left-hand side contains \((\theta^{\prime}(\psi^{n,i}_{h}))^{2}\). To cover the degenerate case, we also introduce the concept of an equilibrated flux.
**Definition 3.1** (Equilibrated flux \(\boldsymbol{\sigma}^{i}_{L}\) for degenerate regions).: _For a pre-determined \(\epsilon>0\), let \(\mathcal{T}^{i,\epsilon}_{\deg}:=\{K\in\mathcal{T}_{h}:\inf\theta^{\prime}( \psi^{n,i}_{h})<\epsilon\text{ in }K\}\). Let \(\Pi_{h}:L^{2}(\Omega)\to\mathcal{P}_{p}(\mathcal{T}_{h})\) be the \(\mathcal{P}_{p}\) projection operator, i.e. \((\Pi_{h}u,v_{h})=(u,v_{h})\) for all \(u\in L^{2}(\Omega)\) and \(v_{h}\in\mathcal{P}_{p}(\mathcal{T}_{h})\). Moreover, let \(\mathbf{RT}_{p}(\mathcal{T}_{h})\) be the \(p^{\text{th}}\)-order Raviart-Thomas space on \(\mathcal{T}_{h}\), i.e., \(\boldsymbol{\sigma}\in\mathbf{RT}_{p}(\mathcal{T}_{h})\) implies \(\boldsymbol{\sigma}|_{K}\in(\mathcal{P}_{p}(K))^{d}+\boldsymbol{x}\mathcal{P} _{p}(K)\) for all \(K\in\mathcal{T}_{h}\). Then, we define \(\boldsymbol{\sigma}^{i}_{L}\in\mathbf{RT}_{p}(\mathcal{T}_{h})\,\cap\,\boldsymbol {H}(\operatorname{div},\Omega)\) as_
\[\nabla\cdot\boldsymbol{\sigma}^{i}_{L}=\begin{cases}\frac{1}{\tau}\Pi_{h}(L( \psi^{n,i}_{h}-\psi^{n,i-1}_{h})-(\theta(\psi^{n,i}_{h})-\theta(\psi^{n,i-1}_ {h})))&\text{ in }\mathcal{T}^{i,\epsilon}_{\deg},\\ 0&\text{ otherwise }.\end{cases} \tag{14}\]
We defer to Appendix B for discussions on how to compute \(\boldsymbol{\sigma}^{i}_{L}\) in practice. Then, we have the following result.
**Proposition 1** (Error control of L-scheme to Newton switching step).: _For a given \(\psi^{n,0}_{h},\,\psi^{n-1}_{h}\in V_{h}\), let \(\{\psi^{n,j}_{h}\}_{j=1}^{i}\subset V_{h}\) solve (8) for some \(i\in\mathbb{N}\). Let \(\tilde{\psi}^{n,i+1}_{h}\in V_{h}\) be the solution of (9) with the previous iterate \(\psi^{n,i}_{h}\). Recall Definition 3.1. Then, under the Assumptions 1-2, one has_
\[\left|\kern-1.075pt\left|\kern-1.075pt\left|\kern-1.075pt\left|\tilde{\psi}^{ n,i+1}_{h}-\psi^{n,i}_{h}\right|\kern-1.075pt\right|\kern-1.075pt\right| \kern-1.075pt\right|_{N,\psi^{n,i}_{h}}\leq\eta^{i}_{L\to N},\]
_where,_
\[\eta^{i}_{L\to N}:=\tfrac{2}{2-C^{i}_{N}}\left(\left[\eta^{i,\text{\rm poten}}_{L\to N}\right]^{2}+\tau\left[\eta^{i,\text{\rm flux}}_{L\to N}\right]^{2}\right)^{\frac{1}{2}}\]
_with_
\[\eta^{i,\mathrm{poten}}_{L\to N} :=\left\|\theta^{\prime}(\psi^{n,i}_{h})^{-\frac{1}{2}}\left(L\left(\psi^{n,i}_{h}-\psi^{n,i-1}_{h}\right)-\left(\theta(\psi^{n,i}_{h})-\theta(\psi^{n,i-1}_{h})\right)\right)\right\|_{\mathcal{T}_{h}\setminus\mathcal{T}^{i,\epsilon}_{\mathrm{deg}}},\] \[\eta^{i,\mathrm{flux}}_{L\to N} :=\left\|K(\theta(\psi^{n,i}_{h}))^{-\frac{1}{2}}\left[\left(K(\theta(\psi^{n,i}_{h}))-K(\theta(\psi^{n,i-1}_{h}))\right)\nabla\left(\psi^{n,i}_{h}+z\right)+\mathbf{\sigma}^{i}_{L}\right]\right\|.\]
Proof.: Observe from (9) that \(\delta\psi^{i+1}_{h}:=\tilde{\psi}^{n,i+1}_{h}-\psi^{n,i}_{h}\in V_{h}\) satisfies
\[(\theta^{\prime}(\psi^{n,i}_{h})\delta\psi^{i+1}_{h},v_{h})+\tau(K(\theta(\psi^{n,i}_{h}))\nabla\delta\psi^{i+1}_{h},\nabla v_{h})\] \[\quad+\tau\left((K\circ\theta)^{\prime}(\psi^{n,i}_{h})\nabla(\psi^{n,i}_{h}+z)\,\delta\psi^{i+1}_{h},\nabla v_{h}\right)\] \[\quad=\tau(f^{n},v_{h})-(\theta(\psi^{n,i}_{h})-\theta(\psi^{n-1}_{h}),v_{h})-\tau(K(\theta(\psi^{n,i}_{h}))\nabla(\psi^{n,i}_{h}+z),\nabla v_{h}), \tag{15}\]
for all \(v_{h}\in V_{h}\). Inserting the test function \(v_{h}=\delta\psi^{i+1}_{h}\) in (15), one has
\[\left\|\!\left\|\delta\psi^{i+1}_{h}\right\|\!\right\|_{N,\psi^{n,i}_{h}}^{2}\overset{(12)}{=}\int_{\Omega}\left(\theta^{\prime}(\psi^{n,i}_{h})|\delta\psi^{i+1}_{h}|^{2}+\tau|K(\theta(\psi^{n,i}_{h}))^{\frac{1}{2}}\nabla\delta\psi^{i+1}_{h}|^{2}\right)\] \[\overset{(15)}{=}\underbrace{-\tau\left((K\circ\theta)^{\prime}(\psi^{n,i}_{h})\nabla(\psi^{n,i}_{h}+z)\,\delta\psi^{i+1}_{h},\nabla\delta\psi^{i+1}_{h}\right)}_{=:T_{1}}\] \[\quad+\underbrace{\tau(f^{n},\delta\psi^{i+1}_{h})-(\theta(\psi^{n,i}_{h})-\theta(\psi^{n-1}_{h}),\delta\psi^{i+1}_{h})-\tau(K(\theta(\psi^{n,i}_{h}))\nabla(\psi^{n,i}_{h}+z),\nabla\delta\psi^{i+1}_{h})}_{=:T_{2}}. \tag{16a}\]
Calling \(\mathbf{\sigma}^{i}=(K\circ\theta)^{\prime}(\psi^{n,i}_{h})\nabla(\psi^{n,i}_{h}+z)\) for brevity, we estimate that
\[T_{1} :=-\tau(\mathbf{\sigma}^{i}\delta\psi^{i+1}_{h},\nabla\delta\psi^{i+1}_{h})\] \[\leq\left(\tau\int_{\Omega}|K(\theta(\psi^{n,i}_{h}))^{-\frac{1}{2}}\mathbf{\sigma}^{i}|^{2}(\delta\psi^{i+1}_{h})^{2}\right)^{\frac{1}{2}}\left(\tau\int_{\Omega}|K(\theta(\psi^{n,i}_{h}))^{\frac{1}{2}}\nabla\delta\psi^{i+1}_{h}|^{2}\right)^{\frac{1}{2}}\] \[\overset{(13)}{\leq}C^{i}_{N}\left(\int_{\Omega}\theta^{\prime}(\psi^{n,i}_{h})(\delta\psi^{i+1}_{h})^{2}\right)^{\frac{1}{2}}\left(\tau\int_{\Omega}|K(\theta(\psi^{n,i}_{h}))^{\frac{1}{2}}\nabla\delta\psi^{i+1}_{h}|^{2}\right)^{\frac{1}{2}}\] \[\leq\frac{C^{i}_{N}}{2}\int_{\Omega}\left(\theta^{\prime}(\psi^{n,i}_{h})|\delta\psi^{i+1}_{h}|^{2}+\tau|K(\theta(\psi^{n,i}_{h}))^{\frac{1}{2}}\nabla\delta\psi^{i+1}_{h}|^{2}\right)\] \[=\frac{C^{i}_{N}}{2}\left\|\!\left\|\delta\psi^{i+1}_{h}\right\|\!\right\|_{N,\psi^{n,i}_{h}}^{2}. \tag{16b}\]
For estimating the last term, we observe from the divergence theorem that
\[-(\mathbf{\sigma}^{i}_{L},\nabla\delta\psi^{i+1}_{h})=(\nabla\cdot\mathbf{\sigma}^{i}_{L},\delta\psi^{i+1}_{h})\] \[\overset{(14)}{=}\tfrac{1}{\tau}(\Pi_{h}(L(\psi^{n,i}_{h}-\psi^{n,i-1}_{h})-(\theta(\psi^{n,i}_{h})-\theta(\psi^{n,i-1}_{h}))),\delta\psi^{i+1}_{h})_{\mathcal{T}^{i,\epsilon}_{\mathrm{deg}}}\] \[\quad=\tfrac{1}{\tau}(L(\psi^{n,i}_{h}-\psi^{n,i-1}_{h})-(\theta(\psi^{n,i}_{h})-\theta(\psi^{n,i-1}_{h})),\delta\psi^{i+1}_{h})_{\mathcal{T}^{i,\epsilon}_{\mathrm{deg}}}.\]
The last equality follows from the definition of the projection operator \(\Pi_{h}\) and the fact that \(\delta\psi^{i+1}_{h}\in V_{h}\subset\mathcal{P}_{p}(\mathcal{T}_{h})\).
Using this result, along with (8) and \(\delta\psi_{h}^{i+1}\in V_{h}\), one has
\[T_{2} :=\tau(f^{n},\delta\psi_{h}^{i+1})-(\theta(\psi_{h}^{n,i})-\theta(\psi_{h}^{n-1}),\delta\psi_{h}^{i+1})-\tau(K(\theta(\psi_{h}^{n,i}))\nabla(\psi_{h}^{n,i}+z),\nabla\delta\psi_{h}^{i+1})\] \[\overset{(8)}{=}(L(\psi_{h}^{n,i}-\psi_{h}^{n,i-1})-(\theta(\psi_{h}^{n,i})-\theta(\psi_{h}^{n,i-1})),\delta\psi_{h}^{i+1})\] \[\qquad-\tau((K(\theta(\psi_{h}^{n,i}))-K(\theta(\psi_{h}^{n,i-1})))\nabla(\psi_{h}^{n,i}+z),\nabla\delta\psi_{h}^{i+1})\] \[=(L(\psi_{h}^{n,i}-\psi_{h}^{n,i-1})-(\theta(\psi_{h}^{n,i})-\theta(\psi_{h}^{n,i-1})),\delta\psi_{h}^{i+1})+\tau(\mathbf{\sigma}_{L}^{i},\nabla\delta\psi_{h}^{i+1})\] \[\qquad-\tau((K(\theta(\psi_{h}^{n,i}))-K(\theta(\psi_{h}^{n,i-1})))\nabla(\psi_{h}^{n,i}+z)+\mathbf{\sigma}_{L}^{i},\nabla\delta\psi_{h}^{i+1})\] \[=(L(\psi_{h}^{n,i}-\psi_{h}^{n,i-1})-(\theta(\psi_{h}^{n,i})-\theta(\psi_{h}^{n,i-1})),\delta\psi_{h}^{i+1})_{\mathcal{T}_{h}\backslash\mathcal{T}_{\mathrm{deg}}^{i,\epsilon}}\] \[\qquad-\tau((K(\theta(\psi_{h}^{n,i}))-K(\theta(\psi_{h}^{n,i-1})))\nabla(\psi_{h}^{n,i}+z)+\mathbf{\sigma}_{L}^{i},\nabla\delta\psi_{h}^{i+1})\] \[\leq(\theta^{\prime}(\psi_{h}^{n,i})^{-\frac{1}{2}}(L(\psi_{h}^{n,i}-\psi_{h}^{n,i-1})-(\theta(\psi_{h}^{n,i})-\theta(\psi_{h}^{n,i-1}))),\theta^{\prime}(\psi_{h}^{n,i})^{\frac{1}{2}}\delta\psi_{h}^{i+1})_{\mathcal{T}_{h}\backslash\mathcal{T}_{\mathrm{deg}}^{i,\epsilon}}\] \[\qquad+\tau[\eta_{L\to N}^{i,\text{flux}}]\,\|K(\theta(\psi_{h}^{n,i}))^{\frac{1}{2}}\nabla\delta\psi_{h}^{i+1}\|\] \[\leq[\eta_{L\to N}^{i,\text{poten}}]\cdot\|\theta^{\prime}(\psi_{h}^{n,i})^{\frac{1}{2}}\delta\psi_{h}^{i+1}\|+\sqrt{\tau}\,[\eta_{L\to N}^{i,\text{flux}}]\cdot\sqrt{\tau}\|K(\theta(\psi_{h}^{n,i}))^{\frac{1}{2}}\nabla\delta\psi_{h}^{i+1}\|. \tag{16c}\]
Combining (16), using the Cauchy-Schwarz inequality along with the definition of \(\eta_{L\to N}^{i}\), one has the estimate.
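Since \(C_{N}^{i}\) in Assumption 2 and the estimators above are fully computable from the iterates, their evaluation can be sketched as follows. This is a minimal 1D example with scalar permeability and user-supplied callables, assuming the simplification \(\mathbf{\sigma}_{L}^{i}=0\) discussed later in Section 3.4.1, a non-degenerate iterate (\(\theta^{\prime}>0\) at the sample points), and all arrays given at the same quadrature points.

```python
# A minimal sketch: pointwise evaluation of C_N^i from (13) and of the
# estimator eta_{L->N}^i of Proposition 1 with sigma_L^i = 0.
import numpy as np

def C_N(psi_i, grad_i, tau, theta, dtheta, K, dK_dtheta, eps=1e-14):
    conv = dK_dtheta(theta(psi_i)) * dtheta(psi_i) * grad_i  # (K o theta)' grad(psi+z)
    lhs = tau * conv**2 / K(theta(psi_i))                    # left-hand side of (13)
    # lhs scales with theta'(psi)^2, so the ratio vanishes in the degenerate
    # limit; eps only guards against 0/0 at machine precision
    return float(np.sqrt((lhs / np.maximum(dtheta(psi_i), eps)).max()))

def eta_L_to_N(psi_i, psi_im1, grad_i, L, tau, h, theta, dtheta, K, C_Ni):
    """Midpoint-quadrature version of eta_{L->N}^i with sigma_L^i = 0."""
    poten = dtheta(psi_i) ** -0.5 * (L * (psi_i - psi_im1)
                                     - (theta(psi_i) - theta(psi_im1)))
    flux = K(theta(psi_i)) ** -0.5 * (K(theta(psi_i)) - K(theta(psi_im1))) * grad_i
    poten_sq = h * (poten**2).sum()            # squared L2 norms by quadrature
    flux_sq = h * (flux**2).sum()
    return 2.0 / (2.0 - C_Ni) * np.sqrt(poten_sq + tau * flux_sq)
```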
### Newton to L-scheme switching estimate
Assuming that the L-scheme converges unconditionally, after switching to Newton we want to switch back to the L-scheme only if the linearization error of the Newton scheme increases with the iterations. Similarly to before, we can estimate whether this is going to happen in the \((i+1)^{\text{th}}\) step purely from the iterates up to the \(i^{\text{th}}\) step. For this purpose, we introduce another equilibrated flux.
**Definition 3.2** (Equilibrated flux \(\mathbf{\sigma}_{N}^{i}\) for degenerate regions (Newton scheme)).: _Recalling Definition 3.1, we define \(\mathbf{\sigma}_{N}^{i}\in\mathbf{RT}_{p}(\mathcal{T}_{h})\,\cap\,\mathbf{H}(\mathrm{div},\Omega)\) as_
\[\nabla\cdot\mathbf{\sigma}_{N}^{i}=\begin{cases}\frac{1}{\tau}\Pi_{h}(\theta^{ \prime}(\psi_{h}^{n,i})(\psi_{h}^{n,i}-\psi_{h}^{n,i-1})-(\theta(\psi_{h}^{n,i })-\theta(\psi_{h}^{n,i-1})))&\text{ in }\mathcal{T}_{\mathrm{deg}}^{i,\epsilon},\\ 0&\text{ otherwise }.\end{cases} \tag{17}\]
The corresponding result mirroring Proposition 1 is
**Proposition 2** (Error control of Newton to Newton step).: _For a given \(\psi_{h}^{n,0},\,\psi_{h}^{n-1}\in V_{h}\), let \(\{\psi_{h}^{n,j}\}_{j=1}^{i+1}\subset V_{h}\) solve (9) for some \(i\in\mathbb{N}\). Then, under Assumptions 1-2, one has_
\[\big{|}\big{|}\psi_{h}^{n,i+1}-\psi_{h}^{n,i}\big{|}\big{|}_{N,\psi_{h}^{n,i}} \leq\eta_{N\to L}^{i},\]
_where_
\[\eta_{N\to L}^{i}:=\tfrac{2}{(2-C_{N}^{i})}\left([\eta_{N\to L}^{i,\text{ \rm poten}}]^{2}+\tau[\eta_{N\to L}^{i,\text{\rm flux}}]^{2}\right)^{\frac{1} {2}}\]
_with_
\[\eta_{N\to L}^{i,\text{\rm poten}}:=\|\theta^{\prime}(\psi_{h}^{n,i})^{-\frac{1}{2}}(\theta^{\prime}(\psi_{h}^{n,i-1})(\psi_{h}^{n,i}-\psi_{h}^{n,i-1})-(\theta(\psi_{h}^{n,i})-\theta(\psi_{h}^{n,i-1})))\|_{\mathcal{T}_{h}\backslash\mathcal{T}_{\mathrm{deg}}^{i,\epsilon}},\] \[\eta_{N\to L}^{i,\text{\rm flux}}:=\left\|K(\theta(\psi_{h}^{n,i}))^{-\frac{1}{2}}\begin{bmatrix}(K(\theta(\psi_{h}^{n,i}))-K(\theta(\psi_{h}^{n,i-1})))\nabla(\psi_{h}^{n,i}+z)\\ -(K\circ\theta)^{\prime}(\psi_{h}^{n,i-1})(\psi_{h}^{n,i}-\psi_{h}^{n,i-1})\nabla(\psi_{h}^{n,i-1}+z)+\mathbf{\sigma}_{N}^{i}\end{bmatrix}\right\|.\]
The proof is identical to the proof of Proposition 1 and hence is left for the avid reader.
**Remark 3** (Effectivity of the estimators \(\eta^{i}_{L\to N}\) and \(\eta^{i}_{N\to L}\)).: _The estimators \(\eta^{i}_{L\to N}\) and \(\eta^{i}_{N\to L}\) predict the linearization error \(\eta^{i+1}_{\text{lin}}\) of the \((i+1)^{\text{th}}\) iteration if done using the Newton scheme (9). In the cases where the iteration is done indeed using the Newton scheme, the sharpness of the estimate can be measured using the **effectivity index**, i.e., if \((i+1)^{\text{th}}\) iteration is Newton then_
\[\text{(Eff. Ind.)}_{i}:=\begin{cases}\eta^{i}_{L\to N}/\eta^{i+1}_{\text{lin}}& \text{ if $i^{\text{th}}$ iteration is L-scheme},\\ \eta^{i}_{N\to L}/\eta^{i+1}_{\text{lin}}&\text{ if $i^{\text{th}}$ iteration is Newton}.\end{cases} \tag{18}\]
_Observe that it is always at least 1 due to Propositions 1 and 2, and an effectivity index close to 1 implies a sharp estimate. The estimators are expected to be quite accurate since mainly the Cauchy-Schwarz inequality is used to derive them, except for estimate (16b), where the term \(T_{1}\) is bounded above using the global approximation in Assumption 2. This expected sharpness is confirmed by the numerical experiments of Section 4, see in particular Figures 5 and 8._
### A-posteriori estimate based adaptive linearization algorithm
With the above estimates in mind, we propose a switching algorithm between the L-scheme and the Newton method. The linearization scheme used at iteration \(j=i+1\) should be Newton if the linearization error, predicted by the estimators \(\eta^{i}_{L\to N}\) and \(\eta^{i}_{N\to L}\), is smaller than the linearization error \(\eta^{i}_{\text{lin}}\) of the \(i^{\text{th}}\) step, see (10). However, to optimize the algorithm we take a few numerical considerations into account first.
#### 3.4.1 Computational considerations
To speed up the computation of this switching criterion, we make a few more reductions:
* **[Equilibrated flux]** If the saturated domain is much smaller than the unsaturated domain, then we take \(\boldsymbol{\sigma}^{i}_{L}=\boldsymbol{\sigma}^{i}_{N}=0\).
* **[Switching condition]** The condition \(\eta^{i}_{L\to N}\leq\eta^{i}_{\text{lin}}\) might be difficult to satisfy if the estimators are not sharp (see Remark 3), and even when it is satisfied it might require large values of \(i\). Hence, to expedite the switch from the L-scheme to Newton, we use the criterion \(\eta^{i}_{L\to N}<C_{\text{tol}}\,\eta^{i}_{\text{lin}}\) for a constant \(C_{\text{tol}}>1\).
#### 3.4.2 Adaptive linearization algorithm
Under these considerations, we propose the adaptive procedure summarised in Algorithm 1 below:
**Remark 4** (Combining L-scheme adaptivity).: _In Appendix A, we further propose an algorithm to adaptively select \(L\) in order to expedite the convergence of the L-scheme. This can be implemented directly in conjunction with Algorithm 1 to improve the convergence speed of the composite scheme. Nevertheless, we have refrained from combining these schemes for ease of presentation._
**Remark 5** (Computational cost of the estimators).: _In the non-degenerate case, the quantities \(C_{N}^{i}\), \(\eta_{L\to N}^{i}\) and \(\eta_{N\to L}^{i}\) can be directly computed from the iterates \(\psi_{h}^{n,i}\) and \(\psi_{h}^{n,i-1}\) by inserting \(\mathbf{\sigma}_{L}^{i}=\mathbf{\sigma}_{N}^{i}=0\), see Propositions 1 and 2. Hence, the cost of computing the estimators is small in comparison to the cost of the iterations. Since the L-scheme iterations are less expensive than the Newton iterations, the L/N scheme generally performs better than, or similarly to, the Newton scheme time-wise. This is evident from the numerical experiments, e.g. see Figure 3(b). In the degenerate case, global computations are required for computing \(\mathbf{\sigma}_{L}^{i}\) and \(\mathbf{\sigma}_{N}^{i}\) if they are used. We discuss the computation of these equilibrated fluxes in Appendix B; their computation can be made relatively inexpensive by precomputing the associated stiffness matrices. The computational cost for the estimators can be reduced even further by evaluating them only for selected iterations. Nevertheless, we do not pursue this option for the sake of simplicity._
```
Require: \(\mathbf{\psi}^{n,0}\in L^{2}(\Omega)\) as initial guess.
Require: Scheme = [L-scheme], \(C_{\mathrm{tol}}=1.5\)
for \(i=1,2,\ldots\) do
    if Scheme = [L-scheme] then
        Compute iterate using the L-scheme, i.e., (8)
        if \(C_{N}^{i}\geq 2\) then
            continue
        else if \(\eta_{L\to N}^{i}\leq C_{\mathrm{tol}}\,\eta_{\mathrm{lin}}^{i}\) then
            Set Scheme = [Newton]
    else
        Compute iterate using Newton, i.e., (9)
        if \(\eta_{N\to L}^{i}>\eta_{\mathrm{lin}}^{i}\) then
            Set Scheme = [L-scheme]
```
**Algorithm 1** L-scheme/Newton a-posteriori switching
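For concreteness, the switching logic of Algorithm 1 can be expressed as a short driver loop. The following is a minimal Python sketch, not the authors' implementation: `l_scheme_step`, `newton_step` and `estimators` are hypothetical placeholders standing in for the discretization-specific routines realising (8), (9) and the quantities of Propositions 1 and 2.

```python
def ln_switching_solve(psi0, l_scheme_step, newton_step, estimators,
                       C_tol=1.5, tol=1e-7, max_iter=200):
    """A-posteriori L-scheme/Newton switching (sketch of Algorithm 1).

    `l_scheme_step`/`newton_step` map an iterate to the next one, cf. (8)/(9).
    `estimators(psi, psi_prev)` is assumed to return the tuple
    (C_N, eta_lin, eta_L_to_N, eta_N_to_L, increment_norm) computed from two
    consecutive iterates, as in Propositions 1 and 2 (placeholders here).
    """
    scheme, psi = "L", psi0
    for _ in range(max_iter):
        psi_prev = psi
        psi = l_scheme_step(psi) if scheme == "L" else newton_step(psi)
        C_N, eta_lin, eta_LN, eta_NL, inc = estimators(psi, psi_prev)
        if inc < tol:                                   # stopping criterion of Section 4
            return psi
        if scheme == "L":
            if C_N < 2 and eta_LN <= C_tol * eta_lin:   # Newton predicted to contract
                scheme = "N"
        elif eta_NL > eta_lin:                          # switch back for robustness
            scheme = "L"
    raise RuntimeError("no convergence within max_iter iterations")
```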
## 4 Numerical results
In this section, we present several numerical examples that demonstrate the robustness and efficiency of the proposed algorithm for switching between Newton's method and the L-scheme. This is done through a careful comparison between the switching algorithm, hereafter called the L/N-scheme, the standard Newton method, and the L-scheme. It is important to note that the L-scheme includes a tuning parameter that significantly affects the performance of the method. As a remedy, we choose two different values, \(L_{1}\) and \(L_{2}\), in the performance comparison. Here, \(L_{1}\) is a quasi-optimal choice of the tuning parameter and will be defined for each specific subproblem, see Table 2, and \(L_{2}=\sup\left\{\theta^{\prime}\left(\psi\right)\right\}\). For the L/N-scheme, \(L_{1}\) is always chosen for the L-scheme iterations.
To measure the performance of each method, we examine both the number of iterations and the computational time required to satisfy the stopping criterion

\[\|\psi_{h}^{n,j}-\psi_{h}^{n,j-1}\|_{\mathcal{L},\psi_{h}^{n,j-1}}<10^{-7},\]

where \(\|\cdot\|_{\mathcal{L},\psi_{h}^{n,j-1}}\) is the iteration- and linearization-dependent energy norm for the pressure head, with \(\mathcal{L}\in\{L,N\}\). The computational time covers the entire simulation, and all experiments were performed on an Acer Swift 3 with an Intel Core i7-1165G7 processor.
In total, three different test cases for the numerical experiments are considered:
* Test case 1: The first test case is taken from [35], although it is modified in the sense that we disregard surfactant transport. Here, the flow is always partially saturated.
* Test case 2: The second test case can be found in [1], and it considers extraction/injection above the water table.
* Test case 3: The final test case is a known benchmark problem that is studied in [1, 36, 37, 38]. Here, a time-dependent Dirichlet boundary condition is used to describe the recharge of a groundwater reservoir from a drainage trench.
For all test cases, the van Genuchten-Mualem parametrization [33] is used to describe the relation between the saturation, the pressure head and the permeability,
\[\theta(\psi) =\begin{cases}\theta_{R}+(\theta_{S}-\theta_{R})\left[\frac{1}{ 1+(-\alpha\psi)^{n}}\right]^{\frac{n-1}{n}},&\psi\leq 0,\\ \theta_{S},&\psi>0,\end{cases} \tag{19}\] \[K(\Theta(\psi)) =\begin{cases}K_{s}\left(\Theta(\psi)\right)^{\frac{1}{2}}\left[ 1-\left(1-\Theta(\psi)^{\frac{n}{n-1}}\right)^{\frac{n-1}{n}}\right]^{2},& \psi\leq 0,\\ K_{s},&\psi>0.\end{cases}\]
Here,
\[\Theta(\psi)=\frac{\theta(\psi)-\theta_{R}}{\theta_{S}-\theta_{R}},\]
with \(\theta_{S}\) and \(\theta_{R}\) being the saturated and residual water contents, respectively, \(K_{s}\) the hydraulic conductivity of the fully saturated porous medium, and \(\alpha\) and \(n\) soil-related parameters.
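The parametrization (19) translates directly into code. Below is a minimal NumPy sketch (our own illustration, not part of the referenced implementation); clamping \(\psi\) at zero automatically reproduces the saturated branches \(\theta=\theta_{S}\) and \(K=K_{s}\).

```python
import numpy as np

def theta(psi, theta_R, theta_S, alpha, n):
    """van Genuchten water content theta(psi), cf. (19)."""
    m = (n - 1.0) / n
    psi_neg = np.minimum(np.asarray(psi, dtype=float), 0.0)  # saturated for psi > 0
    return theta_R + (theta_S - theta_R) * (1.0 + (-alpha * psi_neg) ** n) ** (-m)

def K(psi, theta_R, theta_S, alpha, n, K_s):
    """Mualem hydraulic conductivity K(Theta(psi)), cf. (19)."""
    m = (n - 1.0) / n
    Se = (theta(psi, theta_R, theta_S, alpha, n) - theta_R) / (theta_S - theta_R)
    return K_s * np.sqrt(Se) * (1.0 - (1.0 - Se ** (1.0 / m)) ** m) ** 2

# Example with the Test case 1 parameters of Table 2:
print(theta(-1.0, theta_R=0.026, theta_S=0.42, alpha=0.551, n=2.9))
print(K(-1.0, theta_R=0.026, theta_S=0.42, alpha=0.551, n=2.9, K_s=0.12))
```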
In all of the test cases, triangular linear conforming finite elements with mesh diameter \(h\) are applied together with the implicit Euler time discretization with time step size \(\tau\), as described in Sections 2.1 and 2.2. The mesh diameter \(h\) and the time step size \(\tau\) vary between the experiments and will be specified for each individual experiment. We note that the numerical experiments are expected to behave equivalently for other spatial discretization methods, such as Raviart-Thomas mixed finite elements or discontinuous Galerkin finite elements.
The finite element implementation is Python based and uses the simulation toolbox PorePy [39] for grid management. It is available for download at [https://github.com/MrShuffle/RichardsEquation/releases/tag/v1.0.1](https://github.com/MrShuffle/RichardsEquation/releases/tag/v1.0.1).
### Test case 1: Strictly unsaturated medium
In this test case, we consider a strictly unsaturated porous medium and use the van Genuchten-Mualem parametrization with the parameters from Table 2. The test case is heavily inspired by [35], and the domain is given by \(\Omega=\Omega_{1}\cup\Omega_{2}\), where \(\Omega_{1}=[0,1]\times[0,1/4]\) and \(\Omega_{2}=[0,1]\times(1/4,1]\). We consider the time interval \([0,T]\), where \(T=\tau\) varies with the choice of time step size \(\tau\), as we only take one time step. As initial condition, we choose the pressure head
\[\boldsymbol{\psi}^{0}(x,z)=\begin{cases}-z-1/4&(x,z)\in\Omega_{1}\\ -4&(x,z)\in\Omega_{2},\end{cases}\]
where \(x\) represents the positional variable in the horizontal direction and \(z\) in the vertical direction. A Dirichlet boundary condition is imposed at the top boundary that complies with the initial condition. For the rest of the boundary no-flow boundary conditions are used, and the following source term is applied
\[f(x,z)=\begin{cases}0&(x,z)\in\Omega_{1}\\ 0.06\cos\left(\frac{4}{3}\pi(z)\right)\sin\left(x\right)&(x,z)\in\Omega_{2}. \end{cases}\]
The solution after one time step with time step size \(\tau=1\), is given in Figure 2.
| Parameter | Test case 1 | Test case 2 | Test case 3 |
| --- | --- | --- | --- |
| **van Genuchten-Mualem** | | | |
| \(\theta_{R}\) | 0.026 | 0.026 | 0.131 |
| \(\theta_{S}\) | 0.42 | 0.42 | 0.396 |
| \(K_{S}\) | 0.12 | 0.12 | \(4.96\cdot 10^{-2}\) |
| \(\alpha\) | 0.551 | 0.95 | 0.423 |
| \(n\) | 2.9 | 2.9 | 2.06 |
| **L-scheme** | | | |
| \(L_{1}\) | 0.1 | 0.15 | \(3.501\cdot 10^{-3}\) |
| \(L_{2}=L_{\theta}\) | 0.136 | 0.2341 | \(4.501\cdot 10^{-3}\) |

Table 2: Parameter values for all test cases. Each column lists the parameters of the corresponding test case.
Figure 2: Test case 1: Strictly unsaturated medium. Pressure head \(\psi\) at final time \(T=1\).
#### 4.1.1 Comparison of convergence properties.
Here, we discuss the performance and convergence properties of the newly proposed L/N-scheme and compare it to the Newton method and the L-scheme. In Figure 3(a), the numbers of iterations for different choices of the mesh size, with time step size \(\tau=0.01\), are presented. As expected, the L-scheme is robust and converges in each scenario, for both \(L_{1}\) and \(L_{2}\). Newton's method, however, only converges for sufficiently coarse meshes; yet, when it converges, it does so in fewer iterations than the L-scheme. Finally, the hybrid L/N-scheme converges robustly, in as few or fewer iterations than the Newton method (when the latter converges), and in far fewer iterations than the L-scheme for the other mesh sizes.
Furthermore, a similar experiment is performed for a fixed mesh size \(h=\sqrt{2}/40\) and varying time step sizes, see Figure 4(a). For larger time step sizes the Newton method diverges, while the other methods converge robustly. Again, the L/N-scheme converges with the performance expected of Newton's method, in addition to being as robust as the L-scheme. We highlight the enormous difference in the number of iterations for the largest time step size \(\tau=1\) in Figure 4(a).
Then, the performance of the linearization schemes is compared in terms of computational time, cf. Figures 3(b) and 4(b). One observes virtually the same performance for the hybrid method as for Newton's method when the latter converges. The former is in fact sometimes slightly faster, since each L-scheme iteration is slightly less expensive than a Newton iteration, see Remark 6. In addition, the hybrid method continues to show the same performance in the cases where Newton's method does not converge. Finally, Figure 3(b) shows that, for all meshes, the computational time of the L-schemes is consistent with the iteration counts reported in Figure 3(a), with \(L_{1}\) being the fastest, although it still uses more than double the computational time of the L/N-scheme.
Overall, the newly proposed L/N-scheme shows the best performance. It is as fast as Newton's method when it converges, and is significantly more robust.
Figure 3: Test case 1: Strictly unsaturated medium. Performance metrics for all linearization schemes for fixed \(\tau=0.01\) and varying mesh size.
**Remark 6** (Computational time per iteration).: _It is known that condition numbers for matrices coming from systems linearized by Newton's method are higher than for those linearized by the L-scheme [1]. Therefore, each iteration of Newton's method, when implemented without preconditioning, takes more time than each L-scheme iteration._
**Remark 7** (Computational time for the coarsest mesh).: _The computational times for the coarsest meshes are omitted due to the use of multiprocessing in the implementation. For these meshes, the most time-consuming part is the spawning of the processes for the local assembly on each element. As a result, the computational times for the coarsest meshes are very similar for all the linearization methods._
#### 4.1.2 Switching characteristics
Finally, the dynamic switch between the L-scheme and Newton's method is inspected in further detail. In Figure 5, the evolution of the switching indicators is displayed for a fixed mesh and time step size. The example particularly demonstrates the ability of the hybrid method to switch back and forth between both linearizations before switching fully to Newton. In addition, the final number of L-scheme iterations is kept at its minimum. The plot also shows the effectivity indices introduced in (18) and discussed in Remark 3. The effectivity index is greater than \(1\) in all cases, which validates Propositions 1 and 2, and it stays between \(1.27\) and \(2.3\), implying that the estimators \(\eta^{i}_{L\to N}\) and \(\eta^{i}_{N\to L}\) are sharp.
### Test case 2: Variably saturated medium
The parameters are as in Table 2 (Test case 2). We consider a variably saturated medium, \(\Omega=\Omega_{gw}\cup\Omega_{vad}\), where the groundwater zone is \(\Omega_{gw}=[0,1]\times[0,1/4)\) and the vadose zone is \(\Omega_{vad}=[0,1]\times[1/4,1]\). Here, we consider the time interval \([0,T]\), where \(T=0.01\), and we only take one time step with \(\tau=0.01\). As initial condition, we choose
Figure 4: Test case 1: Strictly unsaturated medium: Performance comparison for all of the linearization schemes for different time step sizes and fixed mesh size \(h=\sqrt{2}/40\).
the pressure head
\[\mathbf{\psi}^{0}(x,z)=\begin{cases}-z+1/4&(x,z)\in\Omega_{gw}\\ -3&(x,z)\in\Omega_{vad},\end{cases}\]
where \(x\) represents the positional variable in the horizontal direction and \(z\) in the vertical direction. On the surface, a constant Dirichlet boundary condition equal to the initial condition is imposed at all times. For the rest of the boundary, no-flow boundary conditions are used. We apply the following source term
\[f(x,z)=\begin{cases}0&(x,z)\in\Omega_{gw}\\ 0.006\cos\left(\tfrac{4}{3}\pi(z-1)\right)\sin\left(2\pi x\right)&(x,z)\in \Omega_{vad}.\end{cases}\]
After one time step the pressure head profile is given in Figure 6.
Figure 5: Test case 1: Strictly unsaturated medium. Evolution of switching indicators for the L/N-scheme and effectivity indices (18) for the Newton iterations (see Remark 3). Here, the mesh size is \(h=\sqrt{2}/80\) and the time step size \(\tau=0.01\).
Figure 6: Test case 2: Variably saturated medium: Pressure head profile at \(T=0.01\).
#### 4.2.1 Comparison of convergence properties.
The iteration counts for the second test case, for different mesh sizes and a fixed time step, are illustrated in Figure 7(a) for all linearization schemes. Again, the L-scheme converges in every case. However, Newton's method does not converge for any mesh size. The hybrid method needs the fewest iterations, which shows that the dynamic switch is successful.
The CPU time performance of the linearization schemes is compared in Figure 7(b). Both versions of the L-scheme take computational times consistent with their numbers of iterations, with the simulations using the parameter \(L_{1}\) being less expensive. However, the L-scheme (using \(L_{1}\)) still requires approximately 373% of the computational time of the hybrid method, including the computation of the switching indicators. In addition, since the few L-scheme iterations performed by the hybrid method are cheaper than Newton iterations, they further decrease its computational time.
#### 4.2.2 Switching characteristics
We also take a more in-depth look at the dynamic switch between Newton's method and the L-scheme. In Figure 8, the evolution of the switching indicators is shown for a fixed time step and a fixed mesh size. After 8 L-scheme iterations, the switching criterion \(\eta^{i}_{L\to N}<C_{\text{tol}}\,\eta^{i}_{\text{lin}}\) is satisfied, and Newton's method subsequently converges. From Figure 7(a), the number of L-scheme iterations required before the switching indicator becomes small enough to switch to Newton's method varies with the mesh size. Note that for the coarsest mesh, no switch to Newton's method happens.
Figure 7: Test case 2: Variably saturated medium: Performance metrics for all linearization schemes for fixed \(\tau=0.01\) and varying mesh size.
### Test case 3: Benchmark problem
Here, we consider a known benchmark problem [38], also used e.g. in [1], which models the recharge of a groundwater reservoir from a drainage trench in two spatial dimensions. The domain \(\Omega\subset\mathbb{R}^{2}\) represents a vertical segment of the subsurface. One portion of the right side of the domain is fixed by a constant Dirichlet boundary condition. A time-dependent Dirichlet boundary condition on parts of the upper boundary is used to mimic the drainage trench. No-flow conditions are used on the remaining parts of the boundary. The parameters, given in Table 2 (Test case 3), correspond to silt loam. The geometry is given by
\[\begin{split}\Omega&=[0,2]\times[0,3],\\ \Gamma_{D_{1}}&=[0,1]\times\{3\},\\ \Gamma_{D_{2}}&=\{2\}\times[0,1],\\ \Gamma_{N}&=\partial\Omega\backslash\left(\Gamma_{D_{1}}\cup\Gamma_{D_{2}}\right),\end{split}\]
and the initial pressure head distribution and boundary conditions are
\[\psi(0,x,z)=1-z\] \[\psi(t,x,z)=\begin{cases}-2+35.2t,&\text{if }t\leq\frac{1}{16}, \quad\text{on }\Gamma_{D_{1}},\\ 0.2,&\text{if }t>\frac{1}{16},\quad\text{on }\Gamma_{D_{1}},\\ 1-z,&\text{on }\Gamma_{D_{2}},\end{cases}\] \[-K(\theta(\psi(t,x,z)))\nabla(\psi(t,x,z)+z)\cdot\mathbf{\nu}=0, \quad\text{on }\Gamma_{N},\]
where \(\mathbf{\nu}\) is the outward normal vector. The solution is computed over 9 time steps, with the time unit in days, with time step size \(\tau=1/48\), and with a regular mesh consisting of 2501 nodes.
Figure 8: Test case 2: Variably saturated medium: Evolution of switching indicators for L/N-scheme for fixed \(h=\sqrt{2}/50\) and \(\tau=0.01\). The dashed line is \(C_{\text{tol}}=1.5\), the switching criterion from L-scheme to Newton’s method. The effectivity indices (18) corresponding to the Newton iterations are also plotted and they remain below 2.8.
The pressure head profile at the final time for the L/N-scheme is shown in Figure 9.
#### 4.3.1 Comparison of convergence properties.
The performance of all schemes for Test case 3 is displayed in Table 3. All schemes converge for this example. The Newton method requires the fewest iterations; however, the hybrid method needs only one more. Both use significantly fewer iterations than the L-schemes. For all time steps except one, only one L-scheme iteration is needed per time step, which indicates a successful dynamic switch for almost all time steps.
The computational time for the L-schemes is much higher than for both Newton's method and the hybrid method, which is consistent with the expense per iteration discussed in Remark 6. More significantly, the L/N-scheme performs almost the same as Newton's method.
## 5 Conclusions
In this paper, we considered solving Richards' equation, which models the flow of water through saturated/unsaturated porous media (soil). After applying the backward Euler time discretization and a continuous Galerkin finite element space discretization to Richards' equation, we developed a hybrid iterative linearization strategy that combines the L-scheme with the Newton method to solve the resulting nonlinear finite-dimensional problems.
| | No. Itr | CPU time [s] |
| --- | --- | --- |
| \(L_{1}\) | 274 | 6136 |
| \(L_{2}\) | 330 | 7356 |
| Newton | 39 | 980 |
| L/N | (10/30) | 1021 |

Table 3: Test case 3: Benchmark problem: Performance metrics for 2501 nodes.
Figure 9: Test case 3: Benchmark problem: Pressure head profile at 4.5 hours.
The idea behind this is to use the robust, but only linearly convergent, L-scheme to stabilize the quadratically convergent Newton method. The switching between the two schemes is done in an adaptive manner using _a posteriori_ indicators which predict the linearization error of the next iteration using a concept of iteration-dependent energy norms. After each iteration, it is checked whether the Newton method is predicted to decrease the linearization error of the next iteration. If so, the Newton method is used; otherwise, the iteration is done using the L-scheme. The resulting hybrid scheme is robust, yet still quadratically convergent after switching to the Newton scheme.
The performance of the hybrid scheme is tested on illustrative, realistic numerical examples, which reveal that the scheme is as robust as the L-scheme and converges in cases where Newton fails. Moreover, in cases where Newton converges, the hybrid scheme takes roughly the same number of iterations and computational time, and it is considerably faster than even the optimized L-scheme. Lastly, we remark that the scheme is quite general, as it can, in principle, be extended to other spatial discretization and linearization methods.
## Appendix A An adaptive L-scheme
As discussed in Sections 1 and 2.3.1, the L-scheme converges unconditionally provided that \(L\geq\frac{1}{2}\sup_{\xi\in\mathbb{R}}\theta^{\prime}(\xi)\) and the time step size \(\tau\) is smaller than a constant independent of the mesh size. However, numerical results in [1] suggest that the optimal rate of convergence of the L-scheme is obtained for a considerably smaller \(L\), although convergence cannot always be guaranteed for such values. Hence, to speed up the computations, it is possible to start the iterations with a smaller value of \(L\) and then use the _a posteriori_ estimates to decide whether \(L\) is to be increased. Analogously to Propositions 1 and 2, we state a result that allows us to do this rigorously.
**Proposition 3** (Error control of L-scheme).: _For a given \(\psi_{h}^{n,0},\,\psi_{h}^{n-1}\in V_{h}\), let \(\{\psi_{h}^{n,j}\}_{j=1}^{i+1}\subset V_{h}\) solve (8) for some \(i\in\mathbb{N}\). Then under Assumption 1,_
\[\|\psi_{h}^{n,i+1}-\psi_{h}^{n,i}\|_{L,\psi_{h}^{n,i}}\leq\eta_{L\to L}^{i},\]
_where_
\[\eta_{L\to L}^{i}:=\big([\eta_{L\to L}^{i,\mathrm{poten}}]^{2}+\tau[\eta_{L\to L}^{i,\mathrm{flux}}]^{2}\big)^{\frac{1}{2}}\]
_with_
\[\begin{split}\eta_{L\to L}^{i,\mathrm{poten}}&:=\|L^{-\frac{1}{2}}(L(\psi_{h}^{n,i}-\psi_{h}^{n,i-1})-(\theta(\psi_{h}^{n,i})-\theta(\psi_{h}^{n,i-1})))\|,\\ \eta_{L\to L}^{i,\mathrm{flux}}&:=\left\|(K(\theta(\psi_{h}^{n,i}))-K(\theta(\psi_{h}^{n,i-1})))K(\theta(\psi_{h}^{n,i}))^{-\frac{1}{2}}\nabla(\psi_{h}^{n,i}+z)\right\|.\end{split}\]
The detailed proof is again omitted. Observe that for the estimate above, neither Assumption 2 nor any separate treatment of the degenerate domains is required.
### L-adaptive algorithm
Based on Proposition 3, we propose an algorithm that selects optimal \(L\)-values adaptively.
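As an illustration of how Proposition 3 can drive such a selection, a minimal Python sketch of one possible a-posteriori \(L\)-update follows. This is not the algorithm of the paper: the callables, the doubling factor and the cap at \(L_{\theta}\) (for which convergence is guaranteed) are assumptions of this sketch.

```python
def adaptive_l_scheme_step(psi, L, L_theta, l_scheme_step, eta_L_to_L, eta_lin,
                           factor=2.0):
    """One L-scheme iteration with a-posteriori L-adaptivity (illustrative sketch).

    If the predicted linearization error eta_L_to_L of the next step
    (Proposition 3) does not contract relative to eta_lin of the current step,
    L is increased, capped at L_theta. `l_scheme_step`, `eta_L_to_L` and
    `eta_lin` are hypothetical placeholders for the solver routines.
    """
    psi_next = l_scheme_step(psi, L)                  # iterate (8) with current L
    if eta_L_to_L(psi_next, psi, L) > eta_lin(psi_next, psi, L):
        L = min(factor * L, L_theta)                  # no contraction predicted: raise L
    return psi_next, L
```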
### Numerical result
In Figure 10, we show a result where the \(L\)-adaptive scheme is superior to a fixed-\(L\) approach. In this case, \(L_{\theta}/2\) is too small for convergence due to a large time step size. Compared with fixed \(L_{1}\) at the same mesh size and time step size, see Figure 4, the number of iterations is reduced by \(20\). For smaller time steps, the numerical results reveal that Algorithm 2 results in roughly the same number of iterations as a fixed and optimized \(L=L_{1}\) smaller than \(L_{\theta}\). But in all examples considered, it uses fewer iterations than simply choosing \(L=L_{2}=L_{\theta}\). The advantage of such an adaptive technique is that an optimization study for \(L\) does not need to be conducted prior to the simulation. However, since the \(L\)-adaptive strategy does not significantly improve the behavior of the L-scheme over the optimized \(L=L_{1}\), we refrained from including it in Algorithm 1 for the sake of simplicity.
## Appendix B Computation of equilibrated flux
Recalling Definitions 3.1 and 3.2, let us propose a simple algorithm to compute an equilibrated flux \(\mathbf{\sigma}_{h}\in\mathbf{RT}_{p}(\mathcal{T}_{h})\cap\mathbf{H}(\mathrm{div}, \Omega)\) satisfying \(\nabla\cdot\mathbf{\sigma}_{h}=\Pi_{h}f\) in \(\mathcal{T}_{\mathrm{deg}}^{i,\epsilon}\), and \(\nabla\cdot\mathbf{\sigma}_{h}=0\) otherwise, where \(f\in L^{2}(\Omega)\). Defining \(\mathbf{Q}_{h}:=\mathbf{RT}_{p}(\mathcal{T}_{h})\,\cap\,\mathbf{H}(\mathrm{div},\Omega)\) and
Figure 10: Test case 1: Strictly unsaturated medium: L-scheme with L-adaptivity and initial stabilization parameter \(L_{0}=L_{2}/8\), \(h=\sqrt{2}/40\) and \(\tau=1\).
\(\tilde{V}_{h}:=\{v_{h}\in\mathcal{P}_{p}(\mathcal{T}_{h})|\;\mathrm{Tr}_{\partial \Omega}(v_{h})=0\}\), we seek a pair \((\boldsymbol{\sigma}_{h},r_{h})\in\boldsymbol{Q}_{h}\times\tilde{V}_{h}\) that satisfies the mixed finite element problem,
\[(K(1)^{-1}\boldsymbol{\sigma}_{h},\boldsymbol{q}_{h}) =(r_{h},\nabla\cdot\boldsymbol{q}_{h}), \forall\,\boldsymbol{q}_{h}\in\boldsymbol{Q}_{h}, \tag{20a}\] \[(\nabla\cdot\boldsymbol{\sigma}_{h},v_{h}) =(f,v_{h}), \forall\,v_{h}\in\tilde{V}_{h}. \tag{20b}\]
The advantage of this flux is that it minimizes \(\|K(1)^{-\frac{1}{2}}\boldsymbol{\sigma}_{h}\|\) which appears in the estimates in Propositions 1 and 2. For practical purposes, a much coarser mesh can be used outside of \(\mathcal{T}_{\deg}^{i,\epsilon}\) to compute it, and the stiffness matrix can be precomputed to accelerate the computation.
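Algebraically, (20) is a standard sparse saddle-point system. Assuming that the \(K(1)^{-1}\)-weighted Raviart-Thomas mass matrix \(M\), the divergence matrix \(B\) and the load vector \(f\) have already been assembled (e.g. with PorePy or any FEM library), a minimal SciPy sketch of the solve reads as follows; the sign convention for \(r_{h}\) may differ from (20a), which does not affect \(\mathbf{\sigma}_{h}\).

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def equilibrated_flux(M, B, f):
    """Solve the mixed system (20): [M, B^T; B, 0] [sigma; r] = [0; f] (sketch).

    M: (n_flux, n_flux) RT mass matrix weighted by K(1)^{-1} (sparse),
    B: (n_pot, n_flux) divergence matrix (sparse), f: (n_pot,) load vector.
    For repeated switching checks, the LU factorization of A can be cached.
    """
    n_flux = M.shape[0]
    A = sp.bmat([[M, B.T], [B, None]], format="csc")
    rhs = np.concatenate([np.zeros(n_flux), np.asarray(f)])
    sol = spla.splu(A).solve(rhs)
    return sol[:n_flux], sol[n_flux:]   # dofs of sigma_h and of r_h
```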
## Acknowledgements
The work of JWB is funded in part through the Center of Sustainable Subsurface Resources (Norwegian Research Council project 331841) and the 'FracFlow' project funded by Equinor, Norway, through Akademiaavtalen. KM acknowledges the support of FWO (Fonds Wetenschappelijk Onderzoek) for funding him through the 'Junior Postdoctoral Fellowship', and Akademiaavtalen for funding his visit to the University of Bergen.
|
2308.01057 | MammoDG: Generalisable Deep Learning Breaks the Limits of Cross-Domain
Multi-Center Breast Cancer Screening | Breast cancer is a major cause of cancer death among women, emphasising the
importance of early detection for improved treatment outcomes and quality of
life. Mammography, the primary diagnostic imaging test, poses challenges due to
the high variability and patterns in mammograms. Double reading of mammograms
is recommended in many screening programs to improve diagnostic accuracy but
increases radiologists' workload. Researchers explore Machine Learning models
to support expert decision-making. Stand-alone models have shown comparable or
superior performance to radiologists, but some studies note decreased
sensitivity with multiple datasets, indicating the need for high generalisation
and robustness models. This work devises MammoDG, a novel deep-learning
framework for generalisable and reliable analysis of cross-domain multi-center
mammography data. MammoDG leverages multi-view mammograms and a novel
contrastive mechanism to enhance generalisation capabilities. Extensive
validation demonstrates MammoDG's superiority, highlighting the critical
importance of domain generalisation for trustworthy mammography analysis in
imaging protocol variations. | Yijun Yang, Shujun Wang, Lihao Liu, Sarah Hickman, Fiona J Gilbert, Carola-Bibiane Schönlieb, Angelica I. Aviles-Rivero | 2023-08-02T10:10:22Z | http://arxiv.org/abs/2308.01057v1 | MammoDG: Generalisable Deep Learning Breaks the Limits of Cross-Domain Multi-Center Breast Cancer Screening
###### Abstract
Breast cancer is a major cause of cancer death among women, emphasising the importance of early detection for improved treatment outcomes and quality of life. Mammography, the primary diagnostic imaging test, poses challenges due to the high variability and patterns in mammograms. Double reading of mammograms is recommended in many screening programs to improve diagnostic accuracy but increases radiologists' workload. Researchers explore Machine Learning models to support expert decision-making. Stand-alone models have shown comparable or superior performance to radiologists, but some studies note decreased sensitivity with multiple datasets, indicating the need for high generalisation and robustness models. This work devises MammoDG, a novel deep-learning framework for generalisable and reliable analysis of cross-domain multi-center mammography data. MammoDG leverages multi-view mammograms and a novel contrastive mechanism to enhance generalisation capabilities. Extensive validation demonstrates MammoDG's superiority, highlighting the critical importance of domain generalisation for trustworthy mammography analysis in imaging protocol variations.
## 1 Introduction
Breast cancer is the second leading cause of cancer death in women worldwide1. Early cancer detection is relevant for treatment and for improving quality of life and outcomes. Mammography is the primary imaging test for diagnosis, yet its interpretation is a major challenge (Marmot et al., 2013; Pharoah, Sewell, Fitzsimmons, Bennett, & Pashayan, 2013). The number of false-positive and false-negative findings is driven by the high variability of appearances and patterns in mammograms. Therefore, it is often necessary to advocate a double reading of mammograms, which increases radiologists' workload, cost, and time (Royal College of Radiologists, 2019).
Footnote 1: [https://www.cancer.org/cancer/types/breast-cancer/about/how-common-is-breast-cancer.html](https://www.cancer.org/cancer/types/breast-cancer/about/how-common-is-breast-cancer.html)
Some prior research has been devoted to developing Machine Learning (ML) models to support expert decision-making, achieving performance comparable or superior to radiologists with stand-alone tools (McKinney et al., 2020; Rodriguez-Ruiz et al., 2019). However, in other studies, the sensitivity is observed to decrease or remain unchanged when facing large cohorts from different sites with different dataset characteristics (Schaffter et al., 2020). The reason is that such large cohorts contain out-of-distribution (OOD) data collected with different vendor machines and protocols at different sites, leading to a distribution shift in the imaging data.
The body of literature on ML for mammography cancer diagnosis can be broadly divided into three main categories: 1) single view-based models (Wu et al., 2019; Yala, Lehman, Schuster, Portnoi, & Barzilay, 2019; W. Zhu, Lou, Yang, & Xie, 2017); 2) multiple view-based models (Geras et al., 2017;
Khan, Shahid, Raza, Dar, & Alquhayz, 2019; Wei et al., 2022; Zhao, Yu, & Wang, 2020); and 3) patch-based techniques (Agarwal, Diaz, Llado, Yap, & Marti, 2019; Mercan et al., 2017; Wu et al., 2019). Moreover, these models can use either a single ML model or ensembles. However, they do not include any mechanism designed to address the above problem of distribution shift in large cohorts of mammography data. Whilst the ML community has studied this topic under the name of domain generalisation for other real-world applications (Zhou, Liu, Qiao, Xiang, & Loy, 2022), works on domain generalisation for analysing mammograms are scarce. In recent work, Z. Li et al. (2021) used contrastive learning principles to further augment the generalisation capability of a deep learning model, considering four seen vendors and one unseen vendor. However, that approach is still limited in its ability to extract richer statistical information.
In this work, we address the challenging question of how to design deep learning models that are generalisable, robust, and reliable on multi-center OOD data. With this purpose in mind, we introduce a novel deep learning framework based on domain generalisation to mitigate the distribution shift problem in mammography screening tasks, namely MammoDG. Our new framework considers multi-view mammograms. The key to our framework is how we harmonise rich statistical information from multiple views and enforce fine-grained detection via a proposed contrastive mechanism. Our contributions are summarised as follows.
1. We propose a novel domain generalisation framework, MammoDG (Figure 1), for breast-level mammography diagnosis (classification). We highlight an interpretable multi-view strategy with a Cross-Channel Cross-View Enhancement module (Figure 2(a)). This module seeks to effectively harmonise the statistical information from the CC and MLO views at the intermediate feature level (Figure 2(b)).
2. We introduce a novel Multi-Instance Contrastive Learning mechanism (MICL) to enhance the generalisation and fine-grained detection capabilities of our model. Our mechanism enforces local and global knowledge to address out-of-distribution samples drawn from large-scale acquisitions at different vendors and hospitals (as shown in Figure 2(c)).
3. We extensively validate our new framework using benchmark and in-house datasets from different vendor machines and sites, three of which are seen and two of which are unseen. We demonstrate that our model leads to better performance than existing deep learning models by a large margin on both seen and unseen datasets.
4. We have shown that domain generalisation is critical to ensure trustworthy and reliable deep learning models for mammography analysis, where data are limited and substantial variations exist across imaging protocols and vendor machines.
## 2 Methods
In this section, we describe in detail our proposed MammoDG framework for addressing the out-of-distribution problem in breast cancer screening. Figure 1 depicts our domain generalisation framework for breast-level mammography classification. We consider a training set of multiple source domains \(\mathcal{S}=\{S_{1},...,S_{K}\}\), where each domain \(S_{k}\) contains \(N_{k}\) weakly labelled samples \((a_{i}^{k},b_{i}^{k},y_{i}^{k})_{i=1}^{N_{k}}\), representing the CC view, the MLO view, and the breast-level label, respectively. Our framework learns a domain-agnostic model, \(f_{\theta}:X\to Y\), using the \(K\) distributed source domains so that it can generalise to a completely unseen domain \(\mathcal{T}\) without performance degradation.
The CC and MLO views are first fed into two-stream view-specific learning networks to obtain their multi-level feature representations. A Cross-**C**hannel Cross-**V**iew **E**nhancement (**CVE**) module is then proposed to learn the statistical knowledge of the data. We also introduce a Transformer as a global encoder for better final feature fusion. View-specific and shared decoder subnetworks are then adopted to provide image-level and breast-level predictions. _To extract domain-invariant features from data from different vendors_, we propose **M**ulti-**I**nstance **C**ontrastive **L**earning (**MICL**), which uses the principles of Multiple Instance Learning and Contrastive Learning to boost performance by detecting abnormal critical instances (patches) across domains.
### Cross-Channel Cross-View Enhancement
Previous work in multi-view mammography classification either adopted a single-stream network to process different views separately (Z. Li et al., 2021; Y. Shen et al., 2021), or directly concatenated the outputs of a multi-stream network at the late fusion level (Geras et al., 2017; Khan et al., 2019; Wu et al., 2019). However, existing works do not consider the statistical information shared by the two views of the same breast at the intermediate feature level. To this end, we introduce a CVE module to enhance the feature representation of one view by exploiting complementary knowledge from the other view. The CVE includes two parts, _i.e._, cross-channel and cross-view feature enhancement, as illustrated in Figure 2(a). First, we leverage Instance Normalisation (IN) to perform style normalisation by normalising feature statistics from different distributions (domains). While IN improves the generalisation ability of networks, it inevitably results in weaker discrimination capability. To recover task-relevant discriminative features from the information removed by IN, we conduct cross-channel enhancement. Specifically, we distill the task-relevant feature from the residual \(\mathcal{R}\) between the original feature \(\mathcal{F}\) and the normalised feature \(\tilde{\mathcal{F}}\), which reads: \(\mathcal{R}=\mathcal{F}-\tilde{\mathcal{F}}\). We highlight the task-relevant part \(\mathcal{R}^{+}\) of \(\mathcal{R}\) through a learned channel-wise attention vector \(\mathbf{t}=[t_{1},t_{2},...,t_{C}]\in\mathbb{R}^{C}\):
\[\begin{split}\mathcal{R}^{+}(:,k)=t_{k}\mathcal{R}(:,k),\\ \mathbf{t}=\sigma(\theta_{2}\delta(\theta_{1}\text{GAP}(\mathcal{ R}))),\end{split} \tag{1}\]
where the attention module is implemented by a spatial global average pooling layer (GAP), followed by two \(1\times 1\) convolutional layers (that are parameterised by \(\theta_{1}\in\mathbb{R}^{c\times(c/r)}\) and \(\theta_{2}\in\mathbb{R}^{(c/r)\times c}\)), \(\sigma(\cdot)\) and \(\delta(\cdot)\) represent sigmoid activation function and ReLU function, respectively. To reduce the number of parameters, a dimension reduction ratio \(r\) is empirically set to 16. After that, we obtain the channel-enhanced feature by adding the distilled task-relevant feature \(\mathcal{R}^{+}\) to the normalised feature \(\tilde{\mathcal{F}}\) as:
\[\tilde{\mathcal{F}}^{+}=\tilde{\mathcal{F}}+\mathcal{R}^{+}. \tag{2}\]
Once we have obtained the channel-enhanced feature representations from the different views, one critical task is to effectively integrate them. Intuitively, as the CC and MLO views capture the same breast from above and from the side, abnormal tissues in the same breast can be observed in both views. To exploit the correlations between the two views, we propose using a geometric-attended vector. Specifically, we calculate the feature-level attention maps by a \(3\times 3\) convolutional layer (\(\theta_{3}\)) with a sigmoid function as
\[w_{cc}=\sigma(\theta_{3}(\tilde{\mathcal{F}}^{+}_{cc})),\ w_{mlo}=\sigma( \theta_{3}(\tilde{\mathcal{F}}^{+}_{mlo})), \tag{3}\]
Figure 1: **Overview of our MammoDG framework.** A batch of CC and MLO pair views, from different domains, is fed into two-stream view-specific learning networks. Our CVE modules learn the statistical knowledge of each pair at the first three levels, while the global encoder further integrates the two-stream feature maps \(\hat{\mathcal{F}}_{cc},\hat{\mathcal{F}}_{mlo}\) at the last level. The shared decoder, consisting of two fully connected layers and a sigmoid layer, generates breast-level predictions. To provide strong supervision by discovering patch information across domains, MICL performs view-specific learning and generates image-level predictions.
We then aggregate the complementary information into a learned column-wise geometric-attended vector \(\mathbf{v}=[v_{1},v_{2},...,v_{W}]\in\mathbb{R}^{W}\) to enhance the other view. We regard the maximum weight of each column of \(w\) as the summarised value in our geometric-attended vector \(\mathbf{v}\). For example, as Figure 2(b) shows, the abnormal tissue in the \(k\)-th column of the CC view should exist in the corresponding column of the MLO view, and thus the geometric information is summarised in \(v_{k}\) by assigning the larger attended value. After obtaining the geometric-attended vector, we multiply it by the attention map of the other view to differentiate the pixels in the same column. This process reads:
\[\hat{w}_{cc}=w_{cc}\cdot\mathbf{v}_{mlo},\hat{w}_{mlo}=w_{mlo}\cdot\mathbf{v}_ {cc}. \tag{4}\]
Finally, we achieve cross-view enhancement by adding the attended feature to the input feature as
\[\hat{\mathcal{F}}_{cc}=\tilde{\mathcal{F}}_{cc}^{+}+\hat{w}_{cc}\cdot\tilde{\mathcal{F}}_{cc}^{+},\ \hat{\mathcal{F}}_{mlo}=\tilde{\mathcal{F}}_{mlo}^{+}+\hat{w}_{mlo}\cdot\tilde{\mathcal{F}}_{mlo}^{+}. \tag{5}\]
The cross-channel cross-view enhanced feature representation \(\hat{\mathcal{F}}\) is propagated to the next layer of each stream network to capture and integrate multi-level information.
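To make the construction concrete, the following condensed PyTorch sketch shows one possible realisation of the CVE module following Eqs. (1)-(5). It is our reading of the description above, not the authors' exact code: in particular, the single-channel attention maps and the max-over-rows reduction are assumptions.

```python
import torch
import torch.nn as nn

class CVE(nn.Module):
    """Sketch of Cross-Channel Cross-View Enhancement, cf. Eqs. (1)-(5)."""
    def __init__(self, c, r=16):
        super().__init__()
        self.inorm = nn.InstanceNorm2d(c, affine=False)
        self.att = nn.Sequential(                       # channel attention t, Eq. (1)
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(c, c // r, 1), nn.ReLU(inplace=True),
            nn.Conv2d(c // r, c, 1), nn.Sigmoid())
        self.theta3 = nn.Conv2d(c, 1, 3, padding=1)     # shared 3x3 conv, Eq. (3)

    def channel_enhance(self, f):
        f_norm = self.inorm(f)                          # style-normalised feature
        res = f - f_norm                                # residual R = F - F_tilde
        return f_norm + self.att(res) * res             # Eqs. (1)-(2)

    def forward(self, f_cc, f_mlo):
        f_cc, f_mlo = self.channel_enhance(f_cc), self.channel_enhance(f_mlo)
        w_cc = torch.sigmoid(self.theta3(f_cc))         # (B,1,H,W) attention maps
        w_mlo = torch.sigmoid(self.theta3(f_mlo))
        v_cc = w_cc.amax(dim=2, keepdim=True)           # column-wise max: (B,1,1,W)
        v_mlo = w_mlo.amax(dim=2, keepdim=True)         # geometric-attended vectors
        out_cc = f_cc + (w_cc * v_mlo) * f_cc           # Eqs. (4)-(5)
        out_mlo = f_mlo + (w_mlo * v_cc) * f_mlo
        return out_cc, out_mlo
```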
### Multi-Instance Contrastive Learning
Regions of interest (ROIs) in mammography images, such as masses, asymmetries, and microcalcifications, are often small and sparsely distributed over the breast, and may present as subtle changes in the breast tissue pattern. The Multiple Instance Learning (MIL) technique is well suited to improving fine-grained detection when ROI annotations are not available (W. Zhu et al., 2017). However, due to the absence of global guidance, the instance classifier is much more likely to be confused by local knowledge in patches from images of different distributions. It is therefore hard to fully leverage MIL when samples come from different domains. On the other hand, Z. Li et al. (2021) recently proposed employing self-supervised Contrastive
Figure 2: **(a) CVE module**. First, task-relevant features are distilled from the input feature \(\mathcal{F}\) to achieve Cross-Channel Enhancement for each view. Secondly, in Cross-View Enhancement, the geometric-attended vector \(\mathbf{v}_{mlo}\) computed from the channel-enhanced feature \(\tilde{\mathcal{F}}_{mlo}^{+}\) is multiplied by the self-attention map of the CC view to integrate the complementary information from the MLO view into the feature of the CC view. **(b) The visualisation of our geometric-attended vector \(\mathbf{v}\)** helps understand the principle of Cross-View Enhancement. The attended value of abnormal tissues in the \(k\)-th column of the CC view is summarised in \(v_{k}\) to provide valuable geometric information for the corresponding column in the MLO view. **(c) Multi-Instance Contrastive Learning strategy.** \(\hat{\mathcal{F}}\) is a mini-batch of CVE-enhanced feature maps obtained from ResNet18, while \(\mathcal{F}\) is the corresponding mini-batch of original ResNet18 feature maps.
Learning to attain generalisation robustness in mammography detection tasks. However, they depend on CycleGAN (J.-Y. Zhu, Park, Isola, & Efros, 2017) to generate multi-style multi-view images, which may be of poor quality due to unexpected changes in small tissue structures within patches.
To address these limitations, we propose the Multi-Instance Contrastive Learning (MICL) scheme, integrating MIL and Contrastive Learning to more effectively enhance both the generalisation and the fine-grained detection capability of the model. As Figure 1 shows, we treat our MICL module as view-specific decoder subnetworks to preserve the specific knowledge of each view, while the shared information is learned in the shared decoder. The detailed procedure of MICL is illustrated in Figure 2(c). Specifically, we adopt a dual-stream MIL aggregator (B. Li, Li, & Eliceiri, 2021) to jointly learn a patch (instance) and an image (bag) classifier. Before feeding the cross-channel cross-view enhanced feature map \(\hat{\mathcal{F}}\) to MICL, we divide it into \(n\times n\) tiles along the spatial dimensions to generate a bag of \(n^{2}\) instances. Let \(B=\{p_{1},...,p_{n^{2}}\}\) denote a bag of instances of one view. The MIL aggregator first determines the critical instance \(p_{m}\) in a bag by applying the instance classifier \(f_{m}(\cdot)\) to each instance embedding \(p_{i}\) and max-pooling the scores. This process is given by:
\[\begin{split} x=p_{m}&=\underset{p_{i}\in B}{argmax} \,f_{m}(p_{i}),\\ S_{m}(B)&=\underset{p_{i}\in B}{max}\,f_{m}(p_{i}).\end{split} \tag{6}\]
Secondly, the MIL aggregator measures the distance between each instance and the critical instance \(p_{m}\), and then produces a bag embedding by summing the instance embeddings using the distances as weights. More specifically, each instance embedding \(p_{i}\) (including critical instance \(p_{m}\)) is transformed into two vectors, query \(q_{i}\) and information \(v_{i}\), by linear layers. The distance \(d_{i}\) denotes the similarity between queries of the instance embedding \(p_{i}\) and critical instance embedding \(p_{m}\), which is calculated by inner product and softmax. The bag score is further given by the bag classifier \(f_{b}(\cdot)\):
\[S_{b}(B)=f_{b}(\sum_{i}^{n^{2}}d_{i}v_{i}). \tag{7}\]
The final score \(S(B)\) is the average of the scores of the dual streams:
\[S(B)=\frac{1}{2}(S_{m}(B)+S_{b}(B)). \tag{8}\]
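A compact PyTorch sketch of this dual-stream aggregation, written per bag (unbatched) for clarity, is given below; the layer sizes are assumptions, and only the score computation of Eqs. (6)-(8) is reproduced.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualStreamMIL(nn.Module):
    """Sketch of the dual-stream MIL aggregator, cf. Eqs. (6)-(8)."""
    def __init__(self, d):
        super().__init__()
        self.f_m = nn.Linear(d, 1)   # instance classifier
        self.q = nn.Linear(d, d)     # query projection
        self.v = nn.Linear(d, d)     # information projection
        self.f_b = nn.Linear(d, 1)   # bag classifier

    def forward(self, bag):                             # bag: (n_instances, d)
        scores = self.f_m(bag).squeeze(-1)              # per-instance scores
        s_m, m = scores.max(dim=0)                      # critical instance, Eq. (6)
        q, v = self.q(bag), self.v(bag)
        d_i = F.softmax(q @ q[m], dim=0)                # distances to critical query
        s_b = self.f_b((d_i.unsqueeze(-1) * v).sum(0))  # bag score, Eq. (7)
        return 0.5 * (s_m + s_b.squeeze(-1)), m         # final score, Eq. (8)
```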
As the critical instance represents its bag and plays a significant role in both streams, it is necessary to guide the network to select the correct instance in a bag. To this end, we integrate weakly-supervised contrastive learning into multiple instance learning. First of all, we separate the critical instances of the bags in a mini-batch into the malignant set \(P=\{x_{i}^{+}\mid y_{i}=1\}\) and the benign set \(Q=\{x_{j}^{-}\mid y_{j}=0\}\) according to the breast-level labels. Then, for each malignant critical instance \(x_{i}^{+}\) taken as an anchor, we adopt its out-of-distribution view \(\bar{x}_{i}^{+}\) as the positive sample, while all benign critical instances are negative samples. Instead of standard data augmentation, which cannot perturb the distribution of images and may destroy details in breast tissues, we apply a feature-level augmentation protocol comprised of Mixstyle (Zhou, Yang, Qiao, & Xiang, 2021) and random noise on the whole feature maps to obtain out-of-distribution instance embeddings. Mixstyle, inspired by Adaptive Instance Normalization, is inserted between layers of the CNN architecture to perturb the distribution information of images from the source domains. More specifically, given an input batch of feature maps \(\mathbf{F}\) and the shuffled batch \(\mathbf{F}^{{}^{\prime}}\), Mixstyle computes their feature statistics, _i.e._, the mean \(\gamma(\mathbf{F}),\gamma(\mathbf{F}^{{}^{\prime}})\) and variance \(\beta(\mathbf{F}),\beta(\mathbf{F}^{{}^{\prime}})\). Then, we mix their feature statistics by linear interpolation:
\[\gamma_{mix}=m\gamma(\mathbf{F})+(1-m)\gamma(\mathbf{F}^{{}^{\prime}}),\ \beta_{mix}=m\beta(\mathbf{F})+(1-m)\beta(\mathbf{F}^{{}^{\prime}}), \tag{9}\]
where \(m\) is randomly sampled from the uniform distribution, \(m\sim U(0,1.0)\). Finally, the mixture of feature statistics is applied to the distribution-normalized \(\mathbf{F}\):
\[\mathbf{F}_{mix}=\beta_{mix}\cdot\frac{\mathbf{F}-\gamma(\mathbf{F})}{\beta( \mathbf{F})}+\gamma_{mix}. \tag{10}\]
Note that we randomly shuffle the order of \(\mathbf{F}\) along the batch dimension to obtain \(\mathbf{F}^{{}^{\prime}}\). Mixstyle only perturbs the distribution information of the images, ensuring that the correlations among patches from one image remain invariant. Based on \(\mathbf{F}_{mix}\), we additionally inject slight feature noise to alleviate over-fitting.
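A short PyTorch sketch of this feature-level perturbation is shown below (the additional slight feature noise is omitted). Following common practice, the standard deviation plays the role of \(\beta\) in Eqs. (9)-(10), and the epsilon value is an assumption.

```python
import torch

def mixstyle(feat, eps=1e-6):
    """Feature-statistics mixing, cf. Eqs. (9)-(10) (sketch, training time only).

    feat: (B, C, H, W). A randomly shuffled copy of the batch provides the
    second style F'; m ~ U(0, 1) mixes the per-channel statistics.
    """
    B = feat.size(0)
    mu = feat.mean(dim=(2, 3), keepdim=True)                 # gamma(F)
    sig = (feat.var(dim=(2, 3), keepdim=True) + eps).sqrt()  # beta(F)
    perm = torch.randperm(B, device=feat.device)             # shuffled batch F'
    m = torch.rand(B, 1, 1, 1, device=feat.device)           # m ~ U(0, 1)
    mu_mix = m * mu + (1 - m) * mu[perm]                     # Eq. (9)
    sig_mix = m * sig + (1 - m) * sig[perm]
    return sig_mix * (feat - mu) / sig + mu_mix              # Eq. (10)
```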
Similar to the InfoNCE contrastive loss (Oord, Li, & Vinyals, 2018), we apply our modified contrastive loss on the sampled features to give stronger and more stable supervision:
\[\mathcal{L}_{cl}=-\frac{1}{|P|}\sum_{x_{i}^{+}\in P}\log\frac{e^{h(x_{i}^{+})\cdot h(\bar{x}_{i}^{+})/\tau}}{e^{h(x_{i}^{+})\cdot h(\bar{x}_{i}^{+})/\tau}+\sum_{x_{j}^{-}\in Q}e^{h(x_{i}^{+})\cdot h(x_{j}^{-})/\tau}}, \tag{11}\]
where \(|P|\) is the cardinality of \(P\), \(\tau\) is a scalar temperature hyper-parameter, and \(h(\cdot)\) denotes the composition of global average pooling and a normalization operation that converts instance embeddings into normalized feature vectors. Finally, the view-specific objective function of our MICL can be formulated as:
\[\mathcal{L}_{cc}=\mathcal{L}_{bce}(S_{cc}(B_{i}^{k}),y_{i}^{k})+\lambda \mathcal{L}_{cl},\ \mathcal{L}_{mlo}=\mathcal{L}_{bce}(S_{mlo}(B_{i}^{k}),y_{i}^{k})+\lambda \mathcal{L}_{cl}, \tag{12}\]
where \(\mathcal{L}_{bce}(\cdot)\) is the binary cross-entropy loss for supervised learning, and \(\lambda\) is a balancing hyper-parameter.
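In code, Eq. (11) reduces to a standard InfoNCE-style cross-entropy over similarity logits. The following PyTorch sketch assumes the critical-instance embeddings have already been pooled; the temperature value is an assumption.

```python
import torch
import torch.nn.functional as F

def micl_contrastive_loss(pos, pos_aug, neg, tau=0.1):
    """Sketch of the modified contrastive loss, cf. Eq. (11).

    pos:     (P, d) malignant critical-instance embeddings (anchors),
    pos_aug: (P, d) their out-of-distribution (Mixstyle + noise) views,
    neg:     (Q, d) benign critical-instance embeddings.
    """
    pos = F.normalize(pos, dim=1)       # h(.): normalisation after pooling
    pos_aug = F.normalize(pos_aug, dim=1)
    neg = F.normalize(neg, dim=1)
    l_pos = (pos * pos_aug).sum(dim=1, keepdim=True) / tau   # (P, 1)
    l_neg = pos @ neg.T / tau                                # (P, Q)
    logits = torch.cat([l_pos, l_neg], dim=1)                # positive at index 0
    labels = torch.zeros(pos.size(0), dtype=torch.long, device=pos.device)
    return F.cross_entropy(logits, labels)  # = -log softmax of the positive pair
```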
Our MICL scheme has several inherent advantages compared with the original MIL and self-supervised Contrastive Learning: (1) **Hard negative mining**: The selection of negative samples is crucial for learning contrastive features effectively (Kalantidis, Sarijildiz, Pion, Weinzaepfel, & Larlus, 2020). Instead of including all instances of a bag in contrastive learning, we only consider the critical instance with the highest score. This naturally provides our MICL with the ability to mine hard negative samples, since the critical instance is the most likely to be a false positive in a negative bag. (2) **Mini-batch training**: To improve the generalisation robustness, we ensure that each mini-batch is composed evenly of all source domains during training. Our MICL can effectively suppress the confusion caused by patches from different domains, not only because the negative samples come from diverse distributions, but also because Mixstyle enforces the positive samples to contain the distribution information of the negative samples, making the model focus more on task-related information.
### Global Encoder
After MICL enforces view-specific learning, we aggregate the feature maps \(\hat{\mathcal{F}}\) from the CC and MLO branches using a Transformer as the final global encoder to incorporate the global context of the two views, owing to their complementary nature, as shown in Figure 1. Specifically, we introduce a Transformer (Vaswani et al., 2017) to apply a multi-head self-attention mechanism, operating on grid-structured feature maps to discover the spatial dependencies between patches. Let the grid-structured feature map of a single view be a 3D tensor with dimensions \(H\times W\times C\). For the CC and MLO views, the features are stacked together to form a sequence of dimension \((2\times H\times W)\times C\). We add a learnable positional embedding, a trainable parameter of dimension \((2\times H\times W)\times C\), to allow the network to infer spatial dependencies between different tokens during training. The input sequence and the positional embedding are combined by element-wise summation to form a tensor of dimension \((2\times H\times W)\times C\) as the input of the Transformer. The output is then reshaped into two feature maps of dimension \(H\times W\times C\) and fed back into each branch via an element-wise summation with the existing feature maps.
To save computational cost, we downsample higher-resolution feature maps to a fixed resolution of \(H=W=16\) using average pooling before passing them as inputs to the Transformer, and upsample the output to the original resolution using bilinear interpolation before the element-wise summation with the existing feature maps. After the Transformer, the feature map is converted into a 512-dimensional feature vector by global average pooling. The feature vectors from both views are further combined via element-wise summation. This final 512-dimensional feature vector \(\mathbf{g}\) constitutes a compact representation that encodes the global context of the two views. It is then fed to the shared decoder subnetwork, which consists of two fully connected layers (\(\theta_{4}\)), to obtain the breast-level prediction. The objective function of the shared decoder subnetwork is formulated as:
\[\mathcal{L}_{sh}=\mathcal{L}_{bce}(\sigma(\theta_{4}(\mathbf{g}_{i}^{k})),y_{i} ^{k}). \tag{13}\]
Finally, we formulate a unified and end-to-end trainable framework. The overall loss function can be formulated as follows:
\[\mathcal{L}_{total}=\mathcal{L}_{sh}+\mathcal{L}_{cc}+\mathcal{L}_{mlo}. \tag{14}\]
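A condensed PyTorch sketch of the global encoder described above follows; the number of layers and heads are assumptions, while the fixed \(16\times 16\) token grid, the learnable positional embedding, the residual fusion and the final pooled vector \(\mathbf{g}\) mirror the text.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GlobalEncoder(nn.Module):
    """Sketch of the Transformer-based global encoder (Section 2.3)."""
    def __init__(self, c=512, grid=16, heads=8, depth=2):
        super().__init__()
        self.grid = grid
        self.pos = nn.Parameter(torch.zeros(1, 2 * grid * grid, c))
        layer = nn.TransformerEncoderLayer(d_model=c, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)

    def forward(self, f_cc, f_mlo):                     # (B, C, H, W) each
        B, C, H, W = f_cc.shape
        t_cc = F.adaptive_avg_pool2d(f_cc, self.grid).flatten(2).transpose(1, 2)
        t_mlo = F.adaptive_avg_pool2d(f_mlo, self.grid).flatten(2).transpose(1, 2)
        tokens = torch.cat([t_cc, t_mlo], dim=1) + self.pos   # (B, 2*16*16, C)
        o_cc, o_mlo = self.encoder(tokens).chunk(2, dim=1)

        def back(o):                                    # tokens -> (B, C, H, W)
            o = o.transpose(1, 2).reshape(B, C, self.grid, self.grid)
            return F.interpolate(o, size=(H, W), mode="bilinear",
                                 align_corners=False)

        f_cc = f_cc + back(o_cc)                        # residual fusion
        f_mlo = f_mlo + back(o_mlo)
        g = f_cc.mean(dim=(2, 3)) + f_mlo.mean(dim=(2, 3))    # 512-d vector g
        return f_cc, f_mlo, g
```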
### Implementation Details
Our proposed framework was trained on an NVIDIA A100 GPU and implemented on the PyTorch platform. The backbone of our framework was first pre-trained on BI-RADS labels following (Y. Shen et al.,
2021), and then finetuned on our seen domains. Our framework was trained empirically for 50 epochs in an end-to-end manner with the Adam optimizer. The initial learning rate was set to \(2\times 10^{-5}\) and decayed by 10% every 5 epochs. During training, we first resized and randomly cropped the mammography images to \(512\times 512\), and then applied an image augmentation protocol including random horizontal flipping (p=0.5), random rotation (\(-15^{\circ}\) to \(15^{\circ}\)), random translation (up to 10% of the image size), scaling by a random factor between 0.8 and 1.6, random shearing (\(-25^{\circ}\) to \(25^{\circ}\)), and pixel-wise Gaussian noise (\(\mu=0,\sigma=0.005\)). A batch of 12 cases, evenly composed of the three seen domains (_i.e._, CBIS, CMMD, TOMMY1), was fed into the network at each iteration.
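For reference, the augmentation protocol above maps onto torchvision transforms roughly as follows; this is an illustrative sketch rather than the exact training pipeline, the intermediate resize size is an assumption, and the Gaussian-noise transform is custom since torchvision provides no pixel-wise noise op.

```python
import torch
import torchvision.transforms as T

class GaussianNoise:
    """Additive pixel-wise Gaussian noise (mu = 0, sigma = 0.005) on a tensor image."""
    def __init__(self, sigma=0.005):
        self.sigma = sigma
    def __call__(self, img):
        return img + self.sigma * torch.randn_like(img)

train_transform = T.Compose([
    T.Resize(560),                     # intermediate size is an assumption
    T.RandomCrop(512),                 # resize + random crop to 512 x 512
    T.RandomHorizontalFlip(p=0.5),
    T.RandomAffine(degrees=15, translate=(0.1, 0.1),
                   scale=(0.8, 1.6), shear=25),  # rotation/translation/scale/shear
    T.ToTensor(),
    GaussianNoise(sigma=0.005),
])
```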
## 3 Results
In this section, we present a comprehensive account of all the experiments we conducted to validate our proposed MammoDG framework.
### Data Description
In this study, we use four datasets to evaluate the generalisation performance of the models: three public datasets, CBIS, CMMD, and INBreast, and one private dataset, TOMMY. Due to the large size of the TOMMY dataset, we split it into two non-overlapping parts, TOMMY1 and TOMMY2, at the patient level. To assess the generalisation ability of the models, all the datasets utilised in this study are split into the Seen domain and the Unseen domain. Seen domain means that the dataset contributes both training and testing samples, _i.e._, its training samples are seen by the model, while Unseen domain means that the whole dataset is used only for testing. Experimentally, we regard CBIS, CMMD, and TOMMY1 as the Seen domain, and TOMMY2 and INBreast as the Unseen domain for the final performance evaluation. For the model selection strategy, we chose the checkpoint with the best performance on the Unseen domain as the final model. Data splits for the Seen domain were created at the breast level, meaning that exams from a given breast were all in the same split.
**CBIS-DDSM dataset.** CBIS-DDSM (Lee et al., 2017) is a public database of scanned film mammography studies containing cases categorised as normal, benign, and malignant, with verified pathology information. It is a collection of mammograms from Massachusetts General Hospital, Wake Forest University School of Medicine, Sacred Heart Hospital, and Washington University of St Louis School of Medicine. Mammography image data from CBIS-DDSM is an updated version of the DDSM providing easily accessible data. We followed the official splits but discarded the cases that did not have both CC and MLO views, resulting in 572 benign and 475 malignant cases for training and 153 benign and 102 malignant cases for testing. We did not use any data from DDSM for testing, given that it is a scanned film dataset.
**CMMD dataset.** CMMD (Cui et al., 2021) is a large public mammography database collected from patients in China, categorised as benign and malignant with verified pathology information. The mammography image data were acquired on a GE Senographe DS mammography system. We split the breast-level cases with complete views into 80%/20% training/model-selection splits, resulting in 423 benign and 1,021 malignant studies for training and 115 benign and 246 malignant studies for testing.
**INBreast dataset.** INBreast (Moreira et al., 2012) is a small public mammography database with relatively balanced benign and malignant cases. We split the data from the patient level into the breast level and excluded cases with incomplete views, resulting in 125 benign and 46 malignant out of 171 studies.
**TOMMY dataset.** TOMMY (Gilbert et al., 2015) is a rich and well-labelled dataset with over 7,000 patients (over 1,000 malignant) collected through six NHS Breast Screening Programme (NHSBSP) centres throughout the UK and read by expert radiologists. To keep the number of breast-level cases consistent with the other datasets, we sampled only a part of TOMMY for the experiments. TOMMY1, as Seen domain, has 1,560 benign and 364 malignant cases for training, and 406 benign and 76 malignant cases for testing. TOMMY2, with 2,108 benign and 394 malignant cases, was treated entirely as Unseen domain. The TOMMY1 and TOMMY2 datasets were obtained from Hologic vendor machines.
| Domain | Dataset | Metric | BIRADS | DMV-CNN | MVFF | GMIC | MSVCL (ResNet) | MSVCL (FCOS) | Baseline | Ours |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Seen | CBIS | AUC | 0.6660 | 0.6654 | 0.7344 | 0.7666 | 0.6874 | 0.7045 | 0.7544 | 0.7798 |
| | | TPR | 0.6078 | 0.6078 | 0.6536 | 0.6993 | 0.6209 | 0.6013 | 0.6732 | 0.6932 |
| | | TNR | 0.6176 | 0.6176 | 0.6569 | 0.7034 | 0.6275 | 0.6176 | 0.6765 | 0.6863 |
| | | ACC | 0.6118 | 0.6118 | 0.6549 | 0.7020 | 0.6235 | 0.6078 | 0.6745 | 0.6884 |
| | CMMD | AUC | 0.6661 | 0.6818 | 0.7686 | 0.8157 | 0.7878 | 0.8070 | 0.8018 | 0.8181 |
| | | TPR | 0.6087 | 0.6435 | 0.6957 | 0.7304 | 0.7130 | 0.7217 | 0.7291 | 0.7391 |
| | | TNR | 0.6098 | 0.6382 | 0.6870 | 0.7398 | 0.7195 | 0.7236 | 0.7217 | 0.7439 |
| | | ACC | 0.6094 | 0.6399 | 0.6898 | 0.7368 | 0.7175 | 0.7230 | 0.7241 | 0.7424 |
| | TOMMY1 | AUC | 0.6624 | 0.6977 | 0.7178 | 0.7146 | 0.7039 | 0.7535 | 0.6665 | 0.7235 |
| | | TPR | 0.5936 | 0.6108 | 0.6576 | 0.6404 | 0.6601 | 0.7069 | 0.6010 | 0.6724 |
| | | TNR | 0.6053 | 0.6184 | 0.6579 | 0.6579 | 0.6579 | 0.7105 | 0.6184 | 0.6711 |
| | | ACC | 0.5954 | 0.6120 | 0.6577 | 0.6432 | 0.6598 | 0.7075 | 0.6037 | 0.6722 |
| | Average | AUC | 0.6648 | 0.6816 | 0.7403 | 0.7656 | 0.7264 | 0.7550 | 0.7409 | 0.7738 |
| | | TPR | 0.6034 | 0.6207 | 0.6690 | 0.6900 | 0.6647 | 0.6766 | 0.6678 | 0.7016 |
| | | TNR | 0.6109 | 0.6247 | 0.6673 | 0.7004 | 0.6683 | 0.6839 | 0.6722 | 0.7004 |
| | | ACC | 0.6055 | 0.6212 | 0.6675 | 0.6940 | 0.6669 | 0.6794 | 0.6674 | 0.7010 |
| | Overall | AUC | 0.8005 | 0.8062 | 0.8264 | 0.8445 | 0.8258 | 0.8394 | 0.8225 | 0.8491 |
| | | TPR | 0.7300 | 0.7285 | 0.7374 | 0.7567 | 0.7270 | 0.7329 | 0.7270 | 0.7596 |
| | | TNR | 0.7311 | 0.7288 | 0.7382 | 0.7618 | 0.7288 | 0.7358 | 0.7288 | 0.7618 |
| | | ACC | 0.7304 | 0.7286 | 0.7377 | 0.7587 | 0.7277 | 0.7341 | 0.7277 | 0.7605 |
| Unseen | TOMMY2 | AUC | 0.6298 | 0.6466 | 0.6760 | 0.6798 | 0.6714 | 0.6919 | 0.6994 | 0.7288 |
| | | TPR | 0.5954 | 0.6029 | 0.6248 | 0.6314 | 0.6157 | 0.6271 | 0.6461 | 0.6769 |
| | | TNR | 0.5939 | 0.6041 | 0.6269 | 0.6345 | 0.6168 | 0.6294 | 0.6447 | 0.6777 |
| | | ACC | 0.5951 | 0.6031 | 0.6251 | 0.6319 | 0.6159 | 0.6275 | 0.6459 | 0.6771 |
| | INBreast | AUC | 0.4692 | 0.5195 | 0.6522 | 0.6791 | 0.7097 | 0.7649 | 0.6623 | 0.7889 |
| | | TPR | 0.4080 | 0.5200 | 0.5520 | 0.7120 | 0.6080 | 0.7040 | 0.5760 | 0.7520 |
| | | TNR | 0.4348 | 0.5217 | 0.5870 | 0.6304 | 0.6304 | 0.6957 | 0.5870 | 0.6957 |
| | | ACC | 0.4152 | 0.5205 | 0.5614 | 0.6901 | 0.6140 | 0.6998 | 0.5789 | 0.7368 |
| | Average | AUC | 0.5495 | 0.5831 | 0.6641 | 0.6795 | 0.6906 | 0.7284 | 0.6809 | 0.7589 |
| | | TPR | 0.5017 | 0.5615 | 0.5884 | 0.6717 | 0.6119 | 0.6656 | 0.6111 | 0.7145 |
| | | TNR | 0.5144 | 0.5629 | 0.6070 | 0.6325 | 0.6236 | 0.6626 | 0.6159 | 0.6867 |
| | | ACC | 0.5052 | 0.5618 | 0.5933 | 0.6610 | 0.6150 | 0.6637 | 0.6124 | 0.7070 |
| | Overall | AUC | 0.6343 | 0.6494 | 0.6784 | 0.6792 | 0.6750 | 0.6955 | 0.6979 | 0.7341 |
| | | TPR | 0.5979 | 0.6082 | 0.6229 | 0.6341 | 0.6238 | 0.6301 | 0.6413 | 0.6767 |
| | | TNR | 0.6000 | 0.6091 | 0.6250 | 0.6341 | 0.6250 | 0.6295 | 0.6432 | 0.6773 |
| All | Average | AUC | 0.6187 | 0.6422 | 0.7098 | 0.7312 | 0.7120 | 0.7444 | 0.7169 | 0.7678 |
| | | TPR | 0.5627 | 0.5970 | 0.6367 | 0.6827 | 0.6435 | 0.6722 | 0.6451 | 0.7067 |
| | | TNR | 0.5723 | 0.6000 | 0.6431 | 0.6732 | 0.6504 | 0.6754 | 0.6497 | 0.6949 |
| | Overall | AUC | 0.7386 | 0.7476 | 0.7 | | | | | |

Table 1: Quantitative comparison with state-of-the-art methods on the Seen and Unseen domains in terms of AUC, TPR, TNR and ACC.
Vendor-Specific Mammography Scanner Information. In our study, we utilised four distinct mammography datasets collected from different scanners to examine the impact of scanner variability on mammography analysis. The CBIS-DDSM dataset was digitised with four different scanners: a DBA scanner at MGH, a HOWTEK scanner at MGH, a LUMISYS scanner at Wake Forest University, and a HOWTEK scanner at ISMD. Additional information about this dataset can be found online 2. The CMMD dataset was acquired on a GE Senographe DS mammography system. The InBreast dataset was captured with MammoNovation Siemens FFDM equipment at the Breast Centre in CHSJ, Porto (Moreira et al., 2012). Lastly, the TOMMY dataset was collected with a commercially available (Hologic) digital mammography system (Gilbert et al., 2011). By analysing these diverse datasets, we aim to investigate the generalisation ability of the proposed MammoDG framework under scanner-induced variability and explore potential implications for clinical applications.
Footnote 2: [http://www.eng.usf.edu/cvprg/Mammography/Database.html](http://www.eng.usf.edu/cvprg/Mammography/Database.html)
### Performance Evaluation
To quantitatively evaluate the performance of our method, we adopt four popular classification metrics for all experiments, _i.e._, the area under the receiver operating characteristic curve (AUC), true positive rate (TPR), true negative rate (TNR), and accuracy (ACC). All models are trained on the training sets of the seen domains and evaluated on the test set of each domain. It is worth noting that, to obtain the average performance, we simply average the metric values over all target domains, _i.e._, different thresholds are adopted across domains. For the overall performance, we aggregate the test sets of all target domains and then evaluate the model on the mixed test set, _i.e._, the same threshold is adopted for classification across all domains.
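The two aggregation protocols can be summarised with a minimal sketch; the `labels`/`scores` inputs below are hypothetical dictionaries keyed by domain, and a fixed 0.5 threshold is used for illustration (the average protocol in our experiments adopts per-domain thresholds):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def domain_metrics(y_true, y_score, threshold=0.5):
    # Threshold the scores, then compute the four classification metrics.
    y_pred = (y_score >= threshold).astype(int)
    tpr = np.mean(y_pred[y_true == 1] == 1)   # true positive rate
    tnr = np.mean(y_pred[y_true == 0] == 0)   # true negative rate
    acc = np.mean(y_pred == y_true)
    return roc_auc_score(y_true, y_score), tpr, tnr, acc

def average_performance(labels, scores):
    # Average protocol: evaluate each target domain separately (each domain
    # may use its own threshold), then average the per-domain metrics.
    return np.mean([domain_metrics(labels[d], scores[d]) for d in labels], axis=0)

def overall_performance(labels, scores):
    # Overall protocol: pool the test sets of all target domains first,
    # so a single threshold is shared across domains.
    y_true = np.concatenate([labels[d] for d in labels])
    y_score = np.concatenate([scores[d] for d in labels])
    return domain_metrics(y_true, y_score)
```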
### Comparison with state-of-the-art methods
We compare our network against state-of-the-art mammography classification methods, including BI-RADS (Geras et al., 2017), DMV-CNN (Wu et al., 2019), MVFF (Khan et al., 2019), and GMIC (Y. Shen et al., 2021). To further demonstrate the generalisation ability of our model, we also reimplement MSVCL (Z. Li et al., 2021), a generalisable mammography detection framework, in two ways: MSVCL(ResNet) and
Figure 3: The visualisation of heatmaps after cross-channel cross-view enhancement. Three malignant cases, one from each dataset, are tested. The red and blue regions denote abnormal and normal tissues with high confidence, respectively. The yellow regions represent abnormal tissues with low confidence and should therefore be verified through additional scrutiny.
MSVCL(FCOS). Both utilize ResNet as the backbone, but the latter additionally incorporates a feature pyramid network to leverage multi-level features, as FCOS (Tian, Shen, Chen, & He, 2019) does. BI-RADS, DMV-CNN, and MVFF are designed in a multi-view fashion, while GMIC and MSVCL are single-view frameworks. For a fair comparison, we obtain breast-level predictions for the single-view frameworks by averaging their image-level predictions. As displayed in Table 1, our framework produces superior performance on both the Seen and Unseen domains. On the Seen domains, our MammoDG surpasses the second-best method GMIC by 0.0082 in AUC and 0.0070 in ACC for the average performance, and by 0.0045 in AUC and 0.0018 in ACC for the overall performance. On the Unseen domains, our method improves over the generalisable method MSVCL(FCOS) by a considerable margin: 0.0305 in AUC and 0.0433 in ACC for the average performance, and 0.0386 in AUC and 0.0468 in ACC for the overall performance. The consistent improvements on all datasets yield clear gains in all four metrics for both the average and overall performance.
### Ablation Studies
In this part, we conduct extensive ablation studies on the publicly available datasets, with CBIS and CMMD as the Seen domains and INBreast as the Unseen domain.
**The Effectiveness of Each Module.** As shown in Table 2, we validate the effectiveness of each module in our framework. The Baseline model consists of two ResNet-18 branches that encode the CC and MLO views separately; the two feature embeddings are then concatenated for the final breast-level prediction. The Cross-Channel Cross-View Enhancement module (+CVE) substantially advances the overall performance over the Baseline by 0.0212 in AUC and 0.0178 in ACC. The Mixstyle augmentation strategy (+MS) is further incorporated at each stage to mitigate the domain shift problem and achieves significant improvements, particularly on the unseen domain. While the Global
\begin{table}
\begin{tabular}{c|c|c|c c c c} \hline \hline \multicolumn{2}{c|}{**Method**} & **Metrics** & **Baseline** & **+CVE** & **+MS** & **+GE** & **+MICL** \\ \hline \multirow{8}{*}{**Seen**} & \multirow{4}{*}{**CBIS**} & AUC & 0.6928 & 0.7066 & 0.7590 & 0.6960 & **0.7602** \\ & & TPR & 0.6144 & 0.6275 & 0.6863 & 0.6340 & **0.6928** \\ & & TNR & 0.6275 & 0.6471 & **0.6961** & 0.6569 & 0.6863 \\ & & ACC & 0.6196 & 0.6353 & 0.6902 & 0.6431 & **0.6902** \\ \cline{2-7} & \multirow{4}{*}{**CMMD**} & AUC & 0.7881 & 0.7926 & 0.7840 & **0.8567** & 0.8309 \\ & & TPR & 0.7043 & 0.7217 & 0.7043 & **0.7652** & 0.7391 \\ & & TNR & 0.7033 & 0.7317 & 0.7114 & **0.7805** & 0.7440 \\ & & ACC & 0.7036 & 0.7285 & 0.7092 & **0.7756** & 0.7424 \\ \cline{2-7} & \multirow{4}{*}{**Average**} & AUC & 0.7405 & 0.7496 & 0.7715 & 0.7764 & **0.7956** \\ & & TPR & 0.6594 & 0.6746 & 0.6953 & 0.6996 & **0.7160** \\ & & TNR & 0.6654 & 0.6894 & 0.7038 & **0.7187** & 0.7152 \\ & & ACC & 0.6616 & 0.6819 & 0.6997 & 0.7094 & **0.7163** \\ \cline{2-7} & \multirow{4}{*}{**Overall**} & AUC & 0.7777 & 0.7869 & 0.7983 & 0.8037 & **0.8201** \\ & & TPR & 0.7052 & 0.7239 & 0.7089 & 0.7164 & **0.7463** \\ & & TNR & 0.7069 & 0.7213 & 0.7126 & 0.7184 & **0.7500** \\ & & ACC & 0.7062 & 0.7224 & 0.7110 & 0.7175 & **0.7484** \\ \hline \hline \multirow{4}{*}{**Unseen**} & \multirow{4}{*}{**INBreast**} & AUC & 0.6048 & 0.7780 & 0.8193 & 0.8064 & **0.8289** \\ & & TPR & 0.5600 & 0.6960 & 0.7280 & 0.7200 & **0.7920** \\ & & TNR & 0.5870 & 0.7391 & 0.7391 & 0.7391 & **0.8043** \\ & & ACC & 0.5673 & 0.7076 & 0.7310 & 0.7251 & **0.7953** \\ \hline \hline \multirow{4}{*}{**Average**} & AUC & 0.6952 & 0.7591 & 0.7874 & 0.7864 & **0.8067** \\ & & TPR & 0.6262 & 0.6817 & 0.7062 & 0.7064 & **0.7413** \\ \cline{1-1} & & TNR & 0.6393 & 0.7060 & 0.7155 & 0.7255 & **0.7449** \\ \cline{1-1} & & ACC & 0.6302 & 0.6905 & 0.7101 & 0.7146 & **0.7426** \\ \hline \multirow{4}{*}{**Overall**} & AUC & 0.7781 & 0.7993 & 0.8207 & 0.8213 & **0.8364** \\ \cline{1-1} & & TPR & 0.6997 & 0.7201 & 0.7354 & 0.7455 & **0.7659** \\ \cline{1-1} & & TNR & 0.7081 & 0.7234 & 0.7386 & 0.7487 & **0.7691** \\ \cline{1-1} & & ACC & 0.7039 & 0.7217 & 0.7370 & 0.7471 & **0.7675** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Quantitative ablation studies under our domain generalisation setting. The public datasets CBIS and CMMD are treated as the Seen domain, while INBreast is treated as the Unseen domain. The module “**CVE**” denotes Cross-Channel Cross-View Enhancement, “**MS**” denotes Mixstyle, “**GE**” denotes Global Encoder, and “**MICL**” denotes Multi-Instance Contrastive Learning. The best values are in bold.
Encoder (+GE) explores the shared representation of the two views, the Multi-Instance Contrastive Learning strategy (+MICL) conducts view-specific learning and endows the full MammoDG model with an overall performance of 0.8364, 0.7659, 0.7691, and 0.7675 in AUC, TPR, TNR, and ACC, respectively.
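For reference, below is a minimal PyTorch sketch of the MixStyle operation in its standard formulation, which mixes per-channel feature statistics across instances; we assume the "+MS" module follows this recipe and is applied only during training:

```python
import torch

def mixstyle(x, alpha=0.1, eps=1e-6):
    # x: feature maps of shape (B, C, H, W). Mix each instance's style
    # (per-channel mean/std) with that of a randomly chosen partner, which
    # simulates domain shift at the feature level during training.
    B = x.size(0)
    mu = x.mean(dim=[2, 3], keepdim=True)            # (B, C, 1, 1)
    sig = (x.var(dim=[2, 3], keepdim=True) + eps).sqrt()
    x_norm = (x - mu) / sig                          # style-normalized content
    lam = torch.distributions.Beta(alpha, alpha).sample((B, 1, 1, 1)).to(x.device)
    perm = torch.randperm(B)                         # partner instances
    mu_mix = lam * mu + (1 - lam) * mu[perm]
    sig_mix = lam * sig + (1 - lam) * sig[perm]
    return x_norm * sig_mix + mu_mix
```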
Details of CVE Module. We quantitatively explore the efficacy of each component of our CVE module in Table 3(a) and conduct the experiments based on the full MammoDG. The model without the entire CVE module achieves an overall performance of 0.8014 in AUC and 0.7141 in ACC. Cross-Channel Enhancement (CE) brings gains of 0.0300 in AUC and 0.0280 in ACC in overall performance, while Cross-View Enhancement (VE) further improves them by 0.0050 and 0.0254, respectively. To qualitatively verify the effectiveness of our CVE module, we visualize the heatmaps of three samples after applying cross-channel cross-view enhancement. Figure 3 clearly demonstrates that our method successfully detects malignant tissues after these enhancements.
Discussion on view-specific learning. In Table 3(c), we conduct experiments on different strategies for view-specific learning. We first replace our MICL with a vanilla classifier head that supervises image-level classification, which degrades the overall performance by 0.0128 in AUC and 0.0227 in ACC. Additionally, we replace our MICL with a MIL aggregator (B. Li et al., 2021), leading to a significant drop of 0.0117 and 0.0166 in overall AUC and ACC, respectively.
Discussion on the balancing hyper-parameters. We discuss the best choice of the balancing ratio between breast-level and image-level predictions in Table 3(b). Equal weights (1:1:1) for the CC, MLO, and breast-level predictions achieve the best performance. We also discuss the balancing hyper-parameter \(\lambda\) that weights the supervision loss and the contrastive loss in MICL in Table 3(d); \(\lambda\) should be set to 0.5 to obtain the best overall AUC and ACC.
Details of the MICL strategy. In Table 3(e), we explore the effect of the number of tiles (instances) per bag on our MICL strategy. The experimental results show that dividing each image into \(4\times 4\) tiles achieves the best overall performance.
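A minimal sketch of forming such bags, assuming image sizes divisible by the grid; `image_to_bag` is a hypothetical helper that turns each image into a \(4\times 4\) bag of tile instances:

```python
import torch

def image_to_bag(x, grid=4):
    # x: (B, C, H, W) with H and W divisible by `grid`. Returns a bag of
    # shape (B, grid*grid, C, H//grid, W//grid), one instance per tile.
    B, C, H, W = x.shape
    th, tw = H // grid, W // grid
    tiles = x.unfold(2, th, th).unfold(3, tw, tw)  # (B, C, grid, grid, th, tw)
    tiles = tiles.permute(0, 2, 3, 1, 4, 5)        # (B, grid, grid, C, th, tw)
    return tiles.reshape(B, grid * grid, C, th, tw)
```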
## 4 Discussion
MammoDG outperforms traditional supervised methods on mammography diagnosis. In our comparison with traditional supervised methods (the "Seen" category in Table 1) for mammography diagnosis, MammoDG demonstrated superior performance across all metrics. This is largely attributed to the effective use of the multi-view mammography framework and the innovative contrastive mechanism that enhances generalisation capabilities. Traditional models often struggle with the high variability and complex patterns found in mammograms, whereas MammoDG was designed to robustly manage this inherent complexity. In terms of AUC, TPR, TNR, and ACC, our method consistently outperformed traditional supervised methods, highlighting the benefit of leveraging advanced domain generalisation mechanisms for this task.
MammoDG consistently surpasses the generalisable mammography diagnosis methods on unseen domains. Another distinguishing feature of MammoDG is its ability to maintain superior performance even when tested on unseen domains, a limitation observed in previous studies with other generalisable mammography diagnosis methods. According to the "Unseen" parts of Tables 1 & 2, MammoDG's robustness to out-of-distribution data, collected with various vendor machines and protocols, allows it to handle the data distribution shift in large cohorts effectively. This shows the feasibility of deploying MammoDG in real-world scenarios across various centres and hospitals.
MammoDG saves the cost of annotation in target domains. MammoDG's ability to achieve high performance with limited annotations is crucial to the medical image analysis community. Given the difficulty and expense of acquiring reliable annotations, a model that can still excel under such limitations is invaluable. Compared to traditional supervised models that require extensive and costly annotations for training, MammoDG substantially cuts the cost of annotation in target domains, which makes it an efficient and cost-effective solution for large-scale mammography analysis across multiple centres.
**MammoDG provides reliable evidence for clinical decisions.** The results from this study have significant practical implications for the healthcare industry, specifically for radiologists and healthcare providers engaged in breast cancer detection. The machine learning model developed in this research demonstrated robust performance across various datasets, with promising implications for real-world application. In the domain of breast cancer diagnosis, MammoDG is an especially powerful tool as
\begin{table}
\end{table}
Table 3: Quantitative ablation studies on the details of each part of our MammoDG. All models are trained on the training sets of CBIS and CMMD and tested on INBreast and the test sets of CBIS and CMMD. (The four metrics from top to bottom are AUC, TPR, TNR, and ACC.)
it considers both CC and MLO views, providing a comprehensive analysis that leverages cross-view complementary information. As depicted in Figure 3, MammoDG consistently generates reliable attention regions, providing evidence that matches well with radiologists' diagnoses. The intersection over union between our model's attention regions and the areas highlighted by radiologists consistently exceeded a threshold, indicating MammoDG's capability to provide trustworthy and actionable insights for clinical decisions.
In the future, we aim to conduct reader studies to measure the extent to which accuracy improves when radiologists use our system and to evaluate their level of trust in it. Given the potential benefits of AI assistance, particularly for less-experienced readers, further investigation will be valuable in comparing the benefits of this system for both sub-specialists and community radiologists who might be called on to do this work only occasionally.
MammoDG's Limitations. Despite its strengths, this study also has several limitations. First, although the model was evaluated on several diverse datasets, these are primarily from China and the UK. Additional validation on datasets from other regions and ethnicities would be valuable for assessing the global applicability of our model. Second, the results of this study are contingent on the accuracy of the ground-truth labels, which are based on human interpretation and thus subject to inter-observer variability. Lastly, while the model demonstrated strong performance in distinguishing between benign and malignant cases, there remains a need to further investigate its efficacy in detecting early-stage cancers, as this is crucial for improving patient outcomes. Future work should aim to address these limitations, refine the model's capabilities, and assess its performance in a real-world clinical setting.
## 5 Conclusion
Our work presents a pioneering deep-learning framework for generalisable, robust, and reliable analysis of cross-domain multi-centre mammography data. Our framework MammoDG outperforms traditional models when trained on limited data. We provide a generalisable network that performs comparably to radiologists on breast cancer analysis without requiring site-specific training when transferred to new clinical sites. Extensive experiments further validate the critical importance of domain generalisation for trustworthy mammography analysis in the presence of imaging protocol variations.
## 6 Checklist Information
### Data availability
This study involved four datasets: three are publicly available, and the remaining one is private. The CBIS dataset is the Breast Cancer Image Dataset from Kaggle ([https://www.kaggle.com/datasets/awsaf49/cbis-ddsm-breast-cancer-image-dataset](https://www.kaggle.com/datasets/awsaf49/cbis-ddsm-breast-cancer-image-dataset)); CMMD is The Chinese Mammography Database from [https://wiki.cancerimagingarchive.net/pages/viewpage.action?pageId=70230508](https://wiki.cancerimagingarchive.net/pages/viewpage.action?pageId=70230508); and the InBreast dataset is from Kaggle ([https://www.kaggle.com/datasets/martholi/inbreast](https://www.kaggle.com/datasets/martholi/inbreast)). The TOMMY dataset (Gilbert et al., 2015) is not currently permitted for public release by the respective Institutional Review Boards.
### Code availability
The code for this project, including all libraries used and their versions, is available online at [https://github.com/needupdate](https://github.com/needupdate).
## Acknowledgements
LL gratefully acknowledges the financial support from a GSK scholarship and a Girton College Graduate Research Fellowship at the University of Cambridge. FJG acknowledges support by the NIHR Cambridge Biomedical Research Centre and an early detection programme grant from Cancer Research UK. AIAR acknowledges support from CMIH and CCIMI, University of Cambridge, and an EPSRC Digital Core Capability Award. CBS acknowledges the Philip Leverhulme Prize, the EPSRC fellowship EP/V029428/1, EPSRC
grants EP/T003553/1, EP/N014588/1, Wellcome Trust 215733/Z/19/Z and 221633/Z/20/Z, Horizon 2020 No. 777826 NoMADS and the CCIMI.
|
2309.02677 | Simplicial Approximation of Deforming 3D Spaces for Visualizing Fusion
Plasma Simulation Data | We introduce a fast and invertible approximation for data simulated as 2D
planar meshes with connectivities along the poloidal dimension in deforming 3D
toroidal (donut-like) spaces generated by fusion simulations. In fusion
simulations, scientific variables (e.g., density and temperature) are
interpolated following a complex magnetic-field-line-following scheme in the
toroidal space represented by a cylindrical coordinate system. This deformation
in 3D space poses challenges for visualization tasks such as volume rendering
and isosurfacing. To address these challenges, we propose a novel paradigm for
visualizing and analyzing such data based on a newly developed algorithm for
constructing a 3D simplicial mesh within the deforming 3D space. Our algorithm
introduces no new nodes and operates with reduced time complexity, generating a
mesh that connects the 2D meshes using tetrahedra while adhering to the
constraints on node connectivities imposed by the magnetic field-line scheme.
In the algorithm, we first divide the space into smaller partitions to reduce
complexity based on the input geometries and constraints on connectivities.
Then we independently search for a feasible tetrahedralization of each
partition taking nonconvexity into consideration. We demonstrate use cases of
our method for visualizing XGC simulation data on ITER and W7-X. | Congrong Ren, Hanqi Guo | 2023-09-06T03:08:04Z | http://arxiv.org/abs/2309.02677v2 | # Meshing Deforming Spacetime for Visualization and Analysis
###### Abstract
We introduce a novel paradigm that simplifies the visualization and analysis of data that have a spatially/temporally varying frame of reference. The primary application driver is tokamak fusion plasma, where science variables (e.g., density and temperature) are interpolated in a complex magnetic field-line-following coordinate system. We also see a similar challenge in rotational fluid mechanics, cosmology, and Lagrangian ocean analysis; such physics implies a deforming spacetime and induces high complexity in volume rendering, isosurfacing, and feature tracking, among various visualization tasks. Without loss of generality, this paper proposes an algorithm to build a simplicial complex (a tetrahedral mesh) for the deforming 3D spacetime derived from two 2D triangular meshes representing consecutive timesteps. Without introducing new nodes, the resulting mesh fills the gap between 2D meshes with tetrahedral cells while satisfying given constraints on how nodes connect between the two input meshes. In the algorithm we first divide the spacetime into smaller partitions to reduce complexity based on the input geometries and constraints. We then independently search for a feasible tessellation of each partition taking nonconvexity into consideration. We demonstrate multiple use cases for a simplified visualization analysis scheme with a synthetic case and fusion plasma applications.
Spacetime meshing, triangulation, time-varying mesh, isosurfacing, volume rendering.
## I Introduction
Science applications often involve spacetime that is constantly changing. For example, the cosmos evolved from a dense matter and expanded to today's universe with sparsely distributed galaxies [29, 39]. In the general theory of relativity, distortions of spacetime are created by mass and energy [7]. In fluid dynamics, fluid parcels may be rotated, translated, compressed, and expanded, driven by flow transport [9, 18]. Likewise, in electrodynamics, electromagnetic fields deform infinitesimal volumes of particle parcels [1].
In our observation, to date, _deforming spacetime_ is either overlooked or induces high complexity in scientific visualization. In the former case, physical insights between discrete timesteps are ignored because time-varying data are usually visualized only at discrete timesteps. In the latter case, deforming spacetime challenges visualization and analysis tasks. For example, nonlinear physics-driven interpolation incurs prohibitively high computational costs in visualization tasks such as volume rendering: interpolating science variables (e.g., density and temperature) in tokamak fusion plasma1 requires tracing streamlines in a magnetic field [37]. Furthermore, such a nonlinear interpolation basis makes root-finding tasks even more difficult, such as extracting isosurfaces where the interpolated value equals a given value and finding critical points where interpolated vector field values vanish. Later sections will refer to the interpolation and root-finding processes as _forward_ and _inverse approximations_, respectively, as we further motivate visualization with deforming spacetime.
Footnote 1: Technically, this paper is not concerned with physical time in tokamaks but presents 3D deformation by treating toroidal direction (\(\phi\)) as time without losing generality, as explained in case studies.
In this work we explore a novel paradigm to represent deforming spacetime by (simplicial) meshes to simplify the visualization and analysis of data with spatially/temporally varying frames of reference. Our spacetime mesh offers a continuous (and also invertible for root-finding problems) data representation of time-varying data, making forward/inverse approximation possible with less computational cost. For example, volume rendering in fusion plasma applications relies on a complex and nonlinear magnetic-driven interpolation; with the deformed space represented as an unstructured grid, the data can be directly visualized with off-the-shelf visualization tools. Likewise, isosurfacing in the deformed space is also made possible with the spacetime meshing scheme. Although our primary application driver is tokamak fusion plasma, our methodology is also applicable in rotational fluid mechanics [9, 18], cosmology [29, 39], and Lagrangian ocean analysis [11, 34]; such physics implies a deforming spacetime and induces high complexity in volume rendering and feature extraction and tracking, among various visualization tasks.
This paper focuses on meshing 3D spacetime (2D space and 1D time) and assumes that the inputs are a series of 2D simplicial meshes, each representing the spatial discretization in a timestep. The assumption reflects real application scenarios: most simulations output data as a spatial grid. The deformation of spacetime is explicitly defined as temporal connectivity of mesh nodes in adjacent steps (i.e., the "next node" in tokamak applications, as explained in Section III-B). In other words, the connectivities are a discrete representation of how spatial locations are transformed in adjacent steps. The temporal connectivities may be available from data or derived by tracing particle trajectories based on the underlying physics.
Spacetime meshing from spatial meshes and temporal connectivities imposes multiple geometrical constraints, leading to a convoluted problem that requires a computationally feasible method. The objectives and constraints for the resulting spacetime mesh include the following:
1. The spacetime mesh includes all simplices (e.g., triangles, edges, and nodes) from the input spatial meshes without adding new nodes (i.e., _Steiner points_).
2. The spacetime mesh contains all edges that connect nodes in adjacent steps, as specified by temporal connectivities.
3. The spacetime mesh is simplicial (containing only tetrahedral cells) for visualization and analysis tasks.
Considering the constraints induced by the spatial meshes and the temporal connectivities, meshing the deforming spacetime can be extremely complex; a new methodology is required to search for a triangulation of the spacetime subject to the above-described constraints with a reasonable time cost. First, most existing triangulation algorithms are for convex polyhedra; however, the temporal dependencies between nodes often make the spacetime nonconvex (both locally and globally). Second, even without considering constraints, the time complexity of triangulating a 3D nonconvex polyhedron is \(O(|\mathcal{V}|^{3})\)[27] (\(|\mathcal{V}|\) stands for the number of mesh nodes in a spacetime), which is considerably costly for meshes in real applications.
In this paper we propose a divide-and-conquer algorithm that divides the spacetime into small and independent partitions (the _divide_ stage) and then triangulates each partition (the _conquer_ stage). The divide stage aims to reduce computational costs by decomposing the problem into independent and manageable subproblems of triangulating partitions. For shared boundaries of neighboring partitions, we define a rule to triangulate the boundaries so that each partition can be processed independently. The conquer stage focuses on each individual partition, which is triangulated by iterating over nodes and eliminating all tetrahedra that connect to a node. A decision-tree-based search is applied to decide which node to eliminate and how to triangulate its surrounding with all geometry (e.g., nonconvexity) and connectivity constraints considered. Once we triangulate all partitions, the resulting spacetime mesh can enable and simplify many visualization tasks, and we demonstrate use cases of our spacetime meshes with tokamak fusion plasma data and a synthetic dataset. In summary, our framework makes the following contributions:
* A novel paradigm to simplify visualization of scientific data that have spatially/temporally varying frames of reference
* A method to partition 3D deforming spacetime into smaller independent components for triangulation
* A decision-tree-based algorithm for searching for feasible triangulation for a 3D deforming spacetime with geometry and connectivity constraints.
The remainder of this paper is organized as follows. Section II reviews related work, and Section III presents fundamentals and the driver application of our methodology. We describe the divide-and-conquer algorithm in Section IV. Section V demonstrates the uses cases and effectiveness of our method both quantitatively and qualitatively. Section VI discusses limitations and future work. We conclude with a brief summary in Section VII.
## II Related Work
We review literature on spacetime meshing and triangulation.
### _Spacetime Meshing for Scientific Visualization_
The visualization research community has been investigating spacetime meshing since the early 2000s, primarily motivated by extracting and tracking features that evolve over the spacetime continuum. Spacetime meshing provides a mathematically sound basis for feature tracking; however, to our knowledge, no methodology has considered meshing deforming spacetime for feature tracking. As exemplified below, the fundamental rationale of meshing spacetime for feature tracking is establishing a continuous data representation to generalize spatial feature descriptors to spacetime directly.
**Critical point tracking in vector fields.** In the early work by Tricoche et al. [32] for tracking critical points in time-varying 2D vector fields, the authors constructed a 3D mesh by first placing and copying the spatial triangulation for different timesteps and then building a prismatic cell for each corresponding triangle pair in adjacent timesteps, as illustrated in Figure 1(a). Garth et al. [12] further enabled 3D critical point tracking based on 4D prisms. Assuming a piecewise linear interpolation applies to the spatial triangulation and a linear interpolation over time, one can derive the exact location and time of critical points in the prism cells' boundaries. One can further define and reconstruct critical point trajectories and identify events such as birth/death and split/merge.
**Singularity tracking in complex-valued scalar fields.** A similar success of generalizing feature descriptors to 4D is the tracking of vortices--singularity curves in 3D complex fields (represented as magnitudes and phases)--from superfluidity, superconductivity, and Bose-Einstein condensates. In 3D, a vortex is a locus of points where the contour integral of the phase field over an infinitesimal loop is nonzero [23]. The 4D definition of vortices [15, 16, 22] generalizes the contour integral to spacetime so that one can reconstruct the trajectory surfaces of a vortex with the 4D mesh complex.
**Isosurface tracking in scalar fields.** We view higher-dimensional isosurfacing [3]--a generalization of marching cubes [21] in 4D and beyond--as a tracking technique based on spacetime meshes. Because there are a finite number of possible ways for a hypercube to intersect an isosurface, one can establish a lookup table to extract time-varying isosurfaces in 4D. Inspired by marching tetrahedra [8] that reduces ambiguity cases in marching cubes, Guo et al. [14] developed a higher-dimensional simplicial meshing to help eliminate ambiguous cases in isosurface tracking.
We refer readers to the literature [24] for a comprehensive review of feature tracking in scientific visualization. For example, _isolated time approaches_ first extract features in every timestep and then determine their correspondence using distance metrics as matching criteria. Features can be extracted by numerical [20] or topological [4] methods. Some link two features if their distance is lower than some threshold [19]. Some apply topological descriptors based on Morse theory and search for the pairing of nodes (i.e., critical points) in two
graphs (e.g., merge tree [40], Reeb graph/contour tree [33]) that minimizes the distance between the two graphs. Some approaches _implicitly incorporate the temporal dimension_, such as the feature flow field (FFF) method [30] and its variants [25, 36]. Theisel and Seidel proved that critical lines in an _n_-D scalar field are equivalent to streamlines of a derived \((n+1)\)-D vector field. This equivalence transforms a critical point tracking problem into a streamline integration problem. Stable FFF was then introduced to reduce the accumulated errors incurred by numerical integration [36]. Combinatorial FFF further filters out less important critical lines by their integrated persistence to make FFF more robust for noisy data [25].
Besides feature tracking, some other works are related to spacetime. For example, in flow visualization, spacetime is used to define the domain of pathlines in time-dependent vector fields. Wilde et al. [38] presented a technique to modify _flow maps_ in spacetime according to space deformation; a flow map maps a particle seeded at some position and time to its destination after a given time interval and thus explicitly encodes pathlines. Günther [13] considered flow visualization in applications related to finite-sized particles, where the trajectories of particles are described in a spacetime domain.
### _(Spacetime) Triangulation_
We review several basic concepts in triangulation and the most relevant literature on it. While few directly tackle triangulation in spacetime, triangulation (primarily in 2D and 3D spaces for real-world applications) is widely studied in geometric modeling, computer graphics, and computational sciences; readers are referred to the work of De Loera et al. [6] for a comprehensive understanding.
**Simplices and simplicial prisms**. An _n-simplex_ is a convex hull with \(n+1\) nodes that are affinely independent in \(\mathbb{R}^{n}\). For example, a 0-simplex is a point, a 1-simplex is a line segment, a 2-simplex is a triangle, and a 3-simplex is a tetrahedron. An \((n+1)\)-_D simplicial prism_ (also referred to as \((n+1)\)-prism or prism) is derived by extruding an _n_-simplex to one dimension higher. Note that prisms are usually nonsimplicial; for example, a 3D triangular prism consists of two congruent triangles and three quadrilaterals.
**Polytopes and their triangulation**. An _n-polytope_ is a geometry object in \(\mathbb{R}^{n}\) whose faces are all _flats_ that can be described by a system of linear equations [28]. A triangulation of an _n_-polytope is a subdivision of the polytope into a finite collection of _n_-simplices such that the union of all simplices is equal to the polytope (_union property_) and intersection of any pair of simplices is a common \((n-1)\)-facet or empty (_intersection property_) [6]. We focus on 3-polytopes, namely, _polyhedra_, that may or may not be convex in this study. Triangulation of a polyhedron is also referred to as _tetrahedralization_ in the following paragraphs.
**Triangulation problems related to this research**. We relate our research to two notoriously challenging triangulation problems: (1) triangulation without additional (Steiner) points [27] and (2) triangulation of nonconvex polytopes [5, 27]. First, we intend to avoid Steiner points in spacetime because data are usually not immediately available beyond spatial mesh vertices. Triangulation without Steiner points adds complexity and is not always achievable; for example, there exist nontriangulable polytopes (e.g., the _Schönhardt polyhedron_ shown in Figure 2 (a)) [6], which can be created by rotating one of two parallel equilateral faces of a 3D triangular prism and inserting opposite diagonals in the previously rectangular faces. Second, we must handle nonconvex polytopes introduced by a deforming spacetime, whereas most triangulation methods focus on convex polytopes (i.e., convex hulls of sets of finite 3D points) only [2, 6]. Nonconvexity causes severe problems because not all newly created connections remain in the polytope. Also, the time complexity of triangulating a nonconvex polytope is high (\(O(|\mathcal{V}|^{3})\)[5, 27]); we must incorporate even more complex constraints in this research, as described in the next section.
**Spacetime triangulation for computational sciences**. Scientists have recently explored using 4D spacetime meshes to simulate partial differential equations directly without using traditional 3D meshes and timestepping methods [10, 17]. Although related, the goal of this paper is fundamentally different from 4D spacetime meshing in computational sciences; we attempt to represent science variables available on spatial grids, which are still the prevailing way to represent field data in today's computational sciences. While native spacetime
Fig. 1: Two examples of spacetime meshes (a and c) and their simplicial subdivisions (b and d). The prismatic mesh in (a) is obtained by extruding a 2D spatial mesh. One possible triangulation is given by staircase triangulation [6] in (b). Time-varying behavior introduces deforming spacetime in (c), which has more complex correspondence between the lower and upper meshes: nodes \(n_{3}\) and \(n_{4}\) have the same next node; node \(n_{x}\) has no previous nodes; the triangulations of the lower and upper meshes differ; the lower mesh needs a translation, rotation, or hybrid transformation to align with the upper mesh. All these differences make it difficult to build a simplicial mesh (d) for deforming spacetime.
meshes may interpolate and represent time-varying variables as conventional meshes, spacetime simulations are subject to high complexity, increased memory footprint, and numerical instabilities. For example, in spacetime finite element methods, one can create and update an \((n+1)\)-D spacetime mesh by adding \((n+1)\)-simplices on an \(n\)-D spatial mesh along the time dimension [31]. This method forms a simplicial complex with a terrain-like surface as spacetime. Researchers have expressed interest in different phenomena, such as wave propagation [31] and rotation [35], that form spacetimes with different shapes (e.g., cone, prism) or time-variant topology of the spatial mesh.
**Spacetime triangulation in visualization** has focused on nondeforming spacetime so far, primarily for feature tracking problems. Given a time-invariant spatial discretization, for example, a triangular mesh, one can establish a prismatic mesh connecting all corresponding nodes in spacetime. One can further tesselate the prismatic mesh with the staircase triangulation scheme [6, 14], which we review in the next section.
## III Formulation and Preliminaries
This paper considers the deforming spacetime induced by a vector field (e.g., magnetic field in fusion plasma) \(\mathbf{v}:\mathbb{R}^{n+1}\rightarrow\mathbb{R}^{n}\) (or \(\mathbb{R}^{n}\rightarrow\mathbb{R}^{n}\) if the deformation is time-invariant), where \(n\) is the dimensionality of the spatial domain. Assuming Lipschitz continuity of \(\mathbf{v}\), the _deformation_\(\Phi:\mathbb{R}^{n+2}\rightarrow\mathbb{R}^{n}\) (also known as _flow map_) is a continuous function representing the solution of the following initial value problem:
\[\frac{\partial\Phi(\mathbf{x}_{0},t_{0},t)}{\partial t}=\mathbf{v}(\Phi( \mathbf{x}_{0},t_{0},t))\text{ and }\Phi(\mathbf{x}_{0},t_{0},t_{0})=\mathbf{x}_{0}, \tag{1}\]
where \(\mathbf{x}_{0}\) is a spatial location at time \(t_{0}\) and \(\Phi(\mathbf{x}_{0},t_{0},t)\) denotes the spatial location of \(\mathbf{x}_{0}\) after deformation at \(t\). With the deformation, one can define the field-following interpolation as
\[f(\mathbf{x},t)=\beta f(\Phi(\mathbf{x},t,t_{0}))+(1-\beta)f(\Phi(\mathbf{x},t,t_{1})), \tag{2}\]
where \(f:\mathbb{R}^{n+1}\rightarrow\mathbb{R}\) is a scalar function (e.g., density and temperature in fusion plasma) and the interpolation weight \(\beta\) is \((t_{1}-t)/(t_{1}-t_{0})\), \(t_{0}\leq t\leq t_{1}\).
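As a concrete (hypothetical) sketch of Eq. (2), the following uses SciPy to trace the flow map backward to \(t_{0}\) and forward to \(t_{1}\) and then blends the two sampled values; `v`, `f0`, and `f1` are assumed callables for the vector field and for sampling the scalar field at the two timesteps:

```python
from scipy.integrate import solve_ivp

def field_following_interp(x, t, t0, t1, v, f0, f1):
    # x is a 1D array of spatial coordinates; v(t, x) returns dx/dt.
    # solve_ivp integrates backward automatically for a decreasing time span.
    phi_back = solve_ivp(v, (t, t0), x).y[:, -1]   # Phi(x, t, t0)
    phi_fwd = solve_ivp(v, (t, t1), x).y[:, -1]    # Phi(x, t, t1)
    # Blend the two samples linearly in time, per Eq. (2).
    beta = (t1 - t) / (t1 - t0)
    return beta * f0(phi_back) + (1.0 - beta) * f1(phi_fwd)
```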
In a discrete sense, we rephrase the objective of spacetime meshing in Section I as follows. Assuming each timestep \(t_{i}\) in the sequence \(t_{0}\leq t_{1}\leq\ldots t_{i}\leq\ldots\) is associated with an \(n\)-dimensional simplicial mesh \(M_{i}=\langle V_{i},E_{i}\rangle\), where \(V_{i}=\{v_{0}^{(i)},v_{1}^{(i)},...\}\) is the set of nodes in \(M_{i}\) and \(E_{i}=\{e_{0}^{(i)},e_{1}^{(i)},...\}\) is the set of edges in \(M_{i}\), \(v_{j}^{(i)}\) denotes the \(j\)th node and \(e_{j}^{(i)}\) denotes the \(j\)th edge in \(M_{i}\). A deformed spacetime mesh is an \((n+1)\)-dimensional simplicial mesh \(\mathcal{M}=\langle\mathcal{V},\mathcal{E}\rangle\) that contains all nodes \(\mathcal{V}=\cup_{i}V_{i}\) and edges \(\mathcal{E}=\left(\cup_{i}E_{i}\right)\cup\left(\cup_{i}\cup_{j}\cup_{k}v_{j}^{(i)}v_{k}^{(i+1)}\right)\). The temporal connectivity \(v_{j}^{(i)}v_{k}^{(i+1)}\) between \(M_{i}\) and \(M_{i+1}\) is defined by the deformation \(\Phi\) such that \(v_{k}^{(i+1)}\approx\Phi(v_{j}^{(i)},t_{i},t_{i+1})\). In practice, we only need to consider spatial meshes in two consecutive timesteps (namely, the _lower_ and _upper meshes_); spacetime meshing can be trivially applied to every two adjacent timesteps for multiple timesteps.
In the rest of this section we first review the trivial case, triangulation of nondeforming spacetime (\(\mathbf{v}=\mathbf{0}\)), and then consider challenges in deforming spacetime.
### _Meshing Nondeforming Spacetime with Staircase Triangulation_
With nondeforming spacetime, assuming the underlying spatial mesh \(M_{i}\) is identical for all timesteps, the problem is reduced to the following: each node in the lower mesh connects to the same node in the upper mesh. As a result, each lower-mesh triangle extrudes into a prism, which can be further subdivided into three tetrahedra with _staircase triangulation_[6], as illustrated in Figure 2 (b). As such, staircase triangulation can help mesh \((n+1)\)-dimensional nondeforming spacetime [14] and serve as a basis of our methodology.
**Staircase triangulation** provides an arbitrary rule to triangulate a prism based on node indices. Consider a prism \(a_{0}a_{1}a_{2}-b_{0}b_{1}b_{2}\) extruded by a triangle \(a_{0}a_{1}a_{2}\). With staircase triangulation, each tetrahedron corresponds to a _monotone path_ where both alphabets and subscripts are ascending. For example, all monotone paths for prism \(a_{0}a_{1}a_{2}-b_{0}b_{1}b_{2}\) are \(a_{0}a_{1}a_{2}b_{2}\), \(a_{0}a_{1}b_{1}b_{2}\), and \(a_{0}b_{0}b_{1}b_{2}\), as shown in Figure 2 (b) left. Each path represents a tetrahedron separated from original prism by staircase triangulation (Figure 2 (b) middle), and all tetrahedra corresponding to monotone paths constitute a tetrahedralization of the prism (Figure 2 (b) right).
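The rule translates directly into code; below is a minimal sketch for a single prism, assuming the lower-triangle node indices are already sorted in ascending order and `b[i]` denotes the node extruded from `a[i]`:

```python
def staircase_prism(a, b):
    # Triangulate the prism a0a1a2-b0b1b2 by staircase triangulation.
    # a: lower-triangle indices with a[0] < a[1] < a[2]; b: corresponding
    # upper-mesh indices. Each tetrahedron corresponds to one monotone path.
    a0, a1, a2 = a
    b0, b1, b2 = b
    return [(a0, a1, a2, b2),   # monotone path a0 a1 a2 b2
            (a0, a1, b1, b2),   # monotone path a0 a1 b1 b2
            (a0, b0, b1, b2)]   # monotone path a0 b0 b1 b2
```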
**Meshing the nondeforming spacetime.** Staircase triangulation directly generalizes to multiple prisms and makes it possible to triangulate prismatic meshes (Figure 1 (a)) extruded
Fig. 2: Subdivision of a 3D prism. (a) A Schönhardt polyhedron, which is the simplest nonconvex nontriangulable polyhedron. (b) Monotone paths, corresponding tetrahedra, and simplicial subdivisions of a 3D prism by staircase triangulation, with the assumption that the 3 node indices satisfy \(a_{0}<a_{1}<a_{2}\). Nodes in the lower mesh are only connected to nodes in the upper mesh whose previous nodes have greater indices. (c) A simplicial subdivision of the 3D prism without considering the staircase rule. (d) If all 3 quads are separated by connecting the top-left and bottom-right corners, then there is no way to triangulate the prism without additional nodes. (e) Two adjacent prisms with conflicting triangulations of the common quadrilateral face \(a_{1}-a_{2}-b_{2}-b_{1}\).
from a 2D spatial mesh into 3D. Note that although staircase triangulation is not the only way to triangulate each prism (see Figure 2 (c) as an example), indices that give a unique ordering of nodes define an elegant and consistent schema to represent a nondeforming spacetime as a simplicial complex. For example, for the two adjacent prisms in Figure 2 (e), both prisms triangulate the shared quadrilateral face in a consistent manner by staircase, creating a shared edge \(a_{1}b_{2}\) that connects the lower-index node (\(a_{1}\)) to a higher-index node (\(b_{2}\)); the resulting triangulation includes no conflicting edge, such as the opposite diagonal \(a_{2}b_{1}\).
### _2.5D Representation of a Deforming Spacetime: Lower/Upper Meshes and Temporal Connectivity_
This paper considers the temporal connectivity defined by the _next node_; that is, each node in the lower mesh connects to another node (or null) in the upper mesh. While alternative definitions may be valid (e.g., one-to-many) for our algorithms, we exclusively use the next-node definition because of the simplicity of deriving the deformation induced by the vector field \(\mathbf{v}\). For example, in tokamaks, where the magnetic field defines the deformation, by placing a particle at a mesh node \(v\) in the lower mesh, scientists can build a connection between \(v\) and the node \(v^{\prime}\) (called the _next node_ of \(v\)) closest to the particle's position in the upper mesh (Figure 3 (b)).
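A minimal sketch of deriving the next-node map, assuming a hypothetical `trace` routine that advects a point through the deformation (e.g., along a magnetic field line to the next timestep) and a k-d tree for the nearest-node query:

```python
import numpy as np
from scipy.spatial import cKDTree

def next_nodes(lower_nodes, upper_nodes, trace):
    # Advect each lower-mesh node through the deformation and snap the
    # endpoint to the nearest upper-mesh node.
    tree = cKDTree(upper_nodes)                    # upper_nodes: (m, 2) array
    endpoints = np.array([trace(p) for p in lower_nodes])
    _, idx = tree.query(endpoints)                 # nearest upper-mesh node ids
    return idx                                     # idx[j] = next node of node j
```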
**2.5D independent partition**. Denote the lower and upper spatial meshes by \(S\) and \(S^{\prime}\), respectively. We define a _2.5D independent partition_\(PQ\), where \(P\) and \(Q\) are simply connected components in the lower and upper meshes, respectively, such that (1) any node \(v\in P\) maps to another node \(v^{\prime}\in Q\) and (2) any node \(v\in\partial P\setminus\partial S\) maps to another node \(v^{\prime}\in\partial Q\setminus\partial S^{\prime}\); the symbol \(\partial\) denotes the boundary of a partition. Note that \(P\) or \(Q\) may be degenerate, such as a single node or one or multiple edges. For example, the entire lower and upper meshes form a 2.5D independent partition. For another example, assuming \(P\) contains one single triangle, the next-node mapping implicitly determines diverse situations of how \(P\) maps to upper-mesh elements, which could be a single node, one or multiple edges, or one or multiple triangles, as the finite set of scenarios illustrated in Figure 4. The seven scenarios indicate different local deformation behaviors in spacetime. If we consider the deformation at the scale of an edge, then two adjacent nodes may still be mapped to two adjacent nodes (a small change of the relative distance between the two nodes, e.g., \(n_{4}\) and \(n_{5}\) in Figure 3 (b)), merge into one node (shrinking behavior, e.g., \(n_{5}\) and \(n_{6}\) in Figure 3 (b)), or break into a path (expanding behavior, e.g., \(n_{7}\) and \(n_{8}\) in Figure 3 (b)) in the upper mesh.
**2.5D domain decomposition**. A decomposition of \(PQ=\cup_{i}P_{i}Q_{i}\) is defined by nonoverlapping 2.5D independent partitions \(P_{i}Q_{i}\). Formally, for any two different partitions \(P_{i}Q_{i}\) and \(P_{j}Q_{j}\), the intersection is either null or lower-dimensional simplices such as nodes and edges. The union and intersection of two 2.5D independent partitions \(P_{i}Q_{i}\) and \(P_{j}Q_{j}\) are defined by \((P_{i}\cup P_{j})(Q_{i}\cup Q_{j})\) and \((P_{i}\cap P_{j})(Q_{i}\cap Q_{j})\), respectively. We will further discuss how the 2.5D domain is decomposed into smaller partitions for divide-and-conquer processing in the next section.
**Triangulation of a 2.5D independent partition**. A 2.5D independent partition \(PQ\) consists of only triangles, edges, and nodes from the 2D complexes \(P\) and \(Q\) and does not bound a 3D volume; one must first transform \(PQ\) into a proper polyhedron before triangulating it into tetrahedra. We refer to deriving a polyhedron from a 2.5D independent partition as _lateral triangulation_. To form a polyhedron, because \(P\) and \(Q\) already consist of triangles or degenerate triangles, one has to define a surface/triangular mesh that connects the 2D boundaries \(\partial P\) and \(\partial Q\); the lateral surface must contain the edges between each node \(v\in\partial P\) and its next node \(v^{\prime}\in\partial Q\). However, multiple choices exist for a lateral triangulation. For example, there are two ways to triangulate a quadrilateral face of a prism. The non-uniqueness leads to challenges: (1) the resulting polyhedron may be nontriangulable, as discussed below, and (2) two 2.5D independent partitions must have the same triangulation on their common lateral face; otherwise one polyhedron will overlap or form a void with its neighboring polyhedron, as addressed in the next section.
**Triangulability of a 2.5D independent partition**. Some partitions cannot find a triangulation, which we call _ill-posed_
Fig. 4: Seven _triangle-to-what_ scenarios. Connected components in the lower and upper meshes are linked by the next-node correspondence. Adjacent nodes might remain adjacent (in the _triangle-to-triangle_ scenario), merge (in the _triangle-to-edge_, _triangle-to-node_, and _triangle-to-path_ scenarios), or become non-adjacent (in the _triangle-to-2edge-path_, _triangle-to-1edge-path_, and _triangle-to-path_ scenarios) in the next timestep.
Fig. 3: Illustration of problem formulation. (a) Magnetic-field-following interpolation. It traces the magnetic line (shown as a blue line) passing a given location \(p\) at time \(t\) in both forward and backward directions and locates the intersections with the two neighboring timesteps at \(t_{i}\) and \(t_{i+1}\). The function value at \((p,t)\) is interpolated from the function values at the two intersection points. Gray lines show the magnetic lines passing the nodes at \(t_{i}\). (b) Next nodes (connected by black lines) of nodes at \(t_{i}\) derived by magnetic lines, and a triangulation based on next nodes (gray lines). Only connections whose two endpoints both lie in the 1D region are shown.
partitions_. For example, the prism in Table I is twisted so much that any lateral triangulation will lead to self-intersections. Although the prism is a triangle-to-triangle case, the ill-posed prism does not legitimately enclose a volume. Ill-posed partitions are separate from _quasi-ill-posed partitions_, which can triangulate with a different lateral triangulation. For example, a prism has eight possible ways to tessellate three quadrilateral faces (Table I); two of them lead to a Schonhardt prism and cannot triangulate, but choosing a different scheme will avoid this situation. We further discuss the adjustment of lateral triangulation later.
Should an ill-posed partition appear while no alternative way exists to decompose the domain, the spacetime cannot be triangulated. Unfortunately, ill-posed partitions suggest that temporal discretization is insufficient. In this case one can refine the temporal resolution with a smaller temporal gap \(\Delta t\) so that no ill-posed partitions appear.2 In other words, one can upsample the temporal resolution with field-following interpolation (Eq. (2)) before attempting spacetime meshing.
Footnote 2: In general, the distortion decreases with a finer temporal resolution \(\Delta t\); the distortion characterized by displacement \(||\Phi(\mathbf{x},t_{0},t)-\mathbf{x}||\) is bounded by \(L\cdot\Delta t\), \(L\) being the Lipschitz constant of \(\mathbf{v}\).
## IV Methodology
Figure 5 illustrates our workflow to triangulate a 3D deforming spacetime with a _divide-and-conquer_ strategy, with the rationale that the domain may be decomposed into 2.5D independent partitions (or simply partitions when there is no ambiguity), each of which could lead to a polyhedron for further triangulation. Multiple challenges exist, including (1) defining a proper domain decomposition that could lead to triangulatable partitions, (2) maintaining compatibility, that is, ensuring no overlaps or voids exist between neighboring partitions, (3) finding a proper lateral triangulation scheme that leads to a possible 3D triangulation, and (4) triangulating a nonconvex polyhedron. Each challenge is nontrivial and requires careful design. To these ends, we design our methodology as three major phases with multiple refinements and trial-and-error steps:
* **Spacetime decomposition** (divide stage): Decomposing the domain into as many partitions as possible based on _closed cutting paths_ and inner-node refinements (Section IV-A).
* **Lateral triangulation**: For each partition, forming a polyhedron by meshing the surface that connects the partition's lower and upper boundaries. By default, staircase triangulation rules are applied when possible but subject to change if later volume triangulation is impossible (Section IV-B).
* **Volume triangulation** (conquer stage): For each polyhedron, searching for a possible triangulation scheme with a two-tiered decision-tree algorithm (Section IV-C).
### _Spacetime Decomposition_
With the given lower and upper meshes and temporal connectivity, the first step is partitioning the spacetime into nonoverlapping 2.5D independent partitions. We aim to decompose the domain into much smaller partitions so that we do not need to triangulate the entire spacetime with our decision-tree algorithm.
#### IV-A1 Domain Decomposition by Cutting Graphs
We introduce the definition of _cutting edges_, _cutting graphs_, and _cutting paths_ and prove that the domain can be decomposed into 2.5D independent partitions (as defined in Section III-B) as connected components isolated by cutting graphs.
**Cutting edges**. Formally, with a partition \(PQ\), an edge \(uv\in P\) and an edge \(u^{\prime}v^{\prime}\in Q\) form a pair of cutting edges if \(u^{\prime}\) and \(v^{\prime}\) are the next nodes of \(u\) and \(v\), respectively. As shown in the example in Figure 6 (a), a pair of cutting edges yields a 4-node cycle formed by four edges: a lower-mesh edge \(n_{1}n_{9}\), an upper-mesh edge \(n^{\prime}_{1}n^{\prime}_{9}\), and the temporal edges \(n_{1}n^{\prime}_{1}\) and \(n_{9}n^{\prime}_{9}\).
**Assumption 1**.: _The lower/upper mesh and time are sufficiently refined such that for any point \(x\) on a cutting edge \(v_{0}v_{1}\) in the lower mesh, the nearest node of \(\Phi(x)\) on the upper mesh is either \(v^{\prime}_{0}\) or \(v^{\prime}_{1}\)._
Here, we omit time in \(\Phi\) for clarity. Conceptually, we assume that a cutting edge reflects distinct transport behaviors of the underlying flow \(\mathbf{v}\) that drives the deformation; for example, in a laminar flow that is parallel to the cutting edge, no particle shall cross the edge.
**Cutting graphs and cutting paths**. A cutting graph is the graph formed by all nodes and edges that participate in cutting edges. A cutting path is an arbitrary path in the cutting graph. For example, path \(u_{0}u_{1}u_{2}u_{3}\) is a cutting path if \(u_{0}u_{1}\), \(u_{1}u_{2}\), and \(u_{2}u_{3}\) are cutting edges; the counterpart consisting of next nodes \(u^{\prime}_{0}u^{\prime}_{1}u^{\prime}_{2}u^{\prime}_{3}\) is also referred to as a cutting path in the upper mesh.
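A minimal sketch of collecting cutting edges from the next-node map `nxt`, with hypothetical edge lists for the lower and upper meshes; the resulting list defines the cutting graph:

```python
def cutting_edges(lower_edges, upper_edges, nxt):
    # A lower-mesh edge (u, v) is a cutting edge if the next nodes of u and v
    # are distinct and also form an edge of the upper mesh.
    upper = {frozenset(e) for e in upper_edges}
    return [(u, v) for (u, v) in lower_edges
            if nxt[u] != nxt[v] and frozenset((nxt[u], nxt[v])) in upper]
```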
**Lemma 1**.: _Partition \(PQ\) is independent if the boundary \(\partial P\) is a closed cutting path._
Proof.: In the trivial case where all nodes of \(P\) are on the boundary \(\partial P\), \(PQ\) is independent because every node of \(P\) maps to \(Q\). Now, assume \(v_{k}\) is an inner node of \(P\), i.e., \(v_{k}\notin\partial P\), whose next node \(v^{\prime}_{k}\notin Q\). Because the next node is the node closest to \(\Phi(v_{k})\), one can find a sufficiently small disk \(B(v_{k},\epsilon)\), \(\epsilon>0\), such that the flow map \(\Phi\) of any point on the disk has \(v^{\prime}_{k}\) as its closest node. Then the domain \(P\) maps to at least two connected components under \(\Phi\): one is the image of the disk, and another is the region that intersects \(Q\). Having two connected components contradicts Rudin's Theorem 4.22 [26], by which the image of the connected set \(P\) under the continuous mapping \(\Phi\) must be connected. This completes the proof.
The assumption and lemma make it possible to decompose the domain via connected component labeling. Specifically, cutting graphs break the lower and upper meshes into nonoverlapping 2.5D independent partitions for further processing. We use a union-find implementation to join edge-sharing triangles in each component and then match the components between the lower and upper meshes. Special treatment may be needed for degenerate cases on domain boundaries; for example, if the cutting path in the upper mesh is on the boundary, a connected component in the lower mesh may not find a matching component in the upper mesh. In this case the
resulting partition is degenerate, and we assign a path as the partition's upper component, as illustrated in Figure 6.
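A minimal sketch of the union-find labeling described above, with hypothetical triangle and cutting-edge inputs; matching the resulting lower- and upper-mesh components via the next-node map follows the same pattern:

```python
class UnionFind:
    def __init__(self, n):
        self.parent = list(range(n))

    def find(self, i):
        while self.parent[i] != i:
            self.parent[i] = self.parent[self.parent[i]]  # path halving
            i = self.parent[i]
        return i

    def union(self, i, j):
        self.parent[self.find(i)] = self.find(j)

def label_partitions(triangles, cut):
    # Join edge-sharing triangles into connected components without ever
    # crossing a cutting edge; `cut` holds cutting edges as frozensets.
    uf = UnionFind(len(triangles))
    owner = {}  # maps each non-cutting edge to the first triangle seen
    for t, tri in enumerate(triangles):
        for e in ((tri[0], tri[1]), (tri[1], tri[2]), (tri[2], tri[0])):
            key = frozenset(e)
            if key in cut:
                continue  # cutting edges separate partitions
            if key in owner:
                uf.union(t, owner[key])
            else:
                owner[key] = t
    return [uf.find(t) for t in range(len(triangles))]
```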
Decomposing the domain by cutting graphs provides several benefits. First, a closed cutting path helps isolate trivial scenarios such as deformed prisms (i.e., cells with the triangle-to-triangle scenario in Section III-B) from more complex scenarios, because all quadrilaterals in deformed prisms are 4-node cycles formed by two cutting edges. Second, the triangulation of a 4-node cycle of a cutting edge can be directly given by staircase triangulation, as introduced in Section III-A, which makes it easy to coordinate cross sections. Third, each partition resulting from our decomposition can be processed independently, according to Lemma 1.
#### IV-A2 (Optionally) Refining a Partition by Inner Nodes
Some 2.5D independent partitions isolated by cutting graphs may still include a large number of triangles or even inner nodes in the lower or upper mesh. To derive smaller partitions and further reduce complexity in the later stages, we decompose every 2.5D independent partition with inner node(s). These partitions are split by _penta-faces_ that include an edge in the lower mesh, two edges between two nodes and their next nodes, and a two-edge path in the upper mesh passing through the inner node (e.g., \(n_{1}n_{4}n_{4}^{\prime}n_{6}^{\prime}n_{1}^{\prime}\) in Figure 7 (b)). Figure 7 (c) shows the split result of the partition in Figure 7 (b). There might be more than one penta-face crossing an inner node; we use the one that best balances the numbers of cells in the lower and upper meshes of the two split partitions.
Note that one can further refine a 2.5D independent partition with multiple adjacent inner nodes by multinode cycles, not only by penta-faces with 5-node cycles. Splitting by multinode cycles yields smaller partitions but makes triangulation and compatibility on cross sections more complex.
### _Lateral Triangulation_
Before triangulating a 2.5D independent partition, one must first triangulate its lateral faces to form a polyhedron, namely, a face-triangulated partition. Note that components from the lower/upper mesh are already simplicial; thus, we
Fig. 5: Pipeline of our tetrahedralization algorithm. The algorithm goes through two stages, _divide_ and _conquer_. In the divide stage, the input is a 3D spacetime formed by the lower mesh, the upper mesh, and the connectivities between them, and the output is a set of 2.5D independent partitions, whose lateral faces we then triangulate. The conquer stage takes every face-triangulated polyhedron as input. Note that the original 3D spacetime is itself a 2.5D independent partition, so one can skip the divide stage if complexity is not a concern. Indivisible prisms are filtered out before dividing, while more complex polyhedra are tentatively divided first and their lateral faces retriangulated if the division fails. The output of the conquer stage, and of the whole algorithm, is a set of tetrahedra that form a simplicial mesh of the input 3D spacetime.
Fig. 8: All possible lateral triangulations for a penta-face. Any of them can be used as long as the subdivisions of the two partitions containing the same penta-face agree on it. Cutting by faces with more nodes per cycle is feasible but incurs more complexity in subdividing and coordinating cross sections. A convex polygon with \(n_{p}\) nodes admits \(\frac{1}{n_{p}-1}\binom{2n_{p}-4}{n_{p}-2}\) triangulations [6].
Fig. 6: Mini-example showing the process of segmenting the spacetime by cutting paths in the divide stage. (a) A 4-node cycle formed by two cutting edges, highlighted in red in the spacetime; the cycle may or may not be planar. (b) All the cutting paths in the spacetime. (c) Segmenting the lower and upper meshes into spatial partitions by cutting paths. (d) Matching lower and upper spatial partitions by node-to-next-node correspondences; only correspondences between the blue spatial partitions are shown. Some partitions are matched with degenerate spatial partitions, that is, edges, such as the gray partition in the lower mesh and the pink partition in the upper mesh. (e) After matching the spatial partitions, we add the node-to-next-node correspondences back and obtain the 2.5D independent partitions.
Fig. 7: (a) and (b) show two 2.5D independent partitions derived from XGC data. The indices of mesh nodes in all figures agree with those in the XGC data. (a) has no inner nodes in the upper mesh, while (b) has one inner node, 50153, in the upper mesh. (c) shows the two partitions of (b) derived by the optional refinement in the _divide_ step.
need to consider only lateral faces. The lateral faces created in the divide stage include 4-node and 5-node cycles. There are two subdivisions for a 4-node cycle and three subdivisions for a 5-node cycle (shown in Figure 8). The subdivision of a lateral face can be arbitrary, provided the subdivisions of the two adjacent partitions agree on it. As stated in Section III-A, we follow the staircase triangulation rule (choosing the splitting edge with ascending node indices) to coordinate the separation of cross sections caused by cutting edges in the divide stage. However, triangulation in deforming spacetime raises additional issues and may require more than one iteration.
**Lateral triangulation of deformed prisms**. As discussed in Section III, not all deformed prisms can be triangulated; we refer to such prisms as _ill-posed prisms_ in this paper. Consider first a regular prism, whose lateral quadrilaterals can be triangulated in eight potential ways, as illustrated in Section III-A; six of these schemes successfully split the prism into three tetrahedra, the exceptions being the two schemes that add only opposite diagonals. Note that staircase triangulation is a particular case of the six successful schemes. For a deformed prism, however, not all six separations lead to a valid triangulation. Some deformed prisms (called _quasi-ill-posed prisms_) fail under the staircase triangulation but are successfully triangulated by another scheme (see Figure 9 for an example). Unfortunately, ill-posed prisms cannot be triangulated by any scheme, as shown by the example in Table I.
We follow three steps to triangulate the lateral faces of a deformed prism. First, we check whether staircase triangulation leads to a valid triangulation. If not, the deformed prism is at least quasi-ill-posed, and we check all separation schemes in Table I to see whether any of them triangulates the prism. To decide whether a triangulation is valid, two conditions are checked, each testing whether two points lie on different sides of the plane defined by a triangle, as shown in the last three columns of Table I. Third, if no separation triangulates the deformed prism, it is ill-posed. This also means that the temporal resolution of the data should be increased to reduce the deformation between two successive timesteps, as discussed in Section III-B.
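The side-of-plane condition can be implemented with a signed-volume (orientation) predicate, as in the hedged sketch below; the tolerance handling is an illustrative assumption, not the exact predicate used in our code.

```python
import numpy as np

def signed_volume(a, b, c, d):
    # Signed volume of tetrahedron (a, b, c, d); its sign indicates on
    # which side of the plane through triangle (a, b, c) point d lies.
    return np.dot(np.cross(b - a, c - a), d - a) / 6.0

def opposite_sides(tri, p, q, eps=1e-12):
    # True if p and q lie strictly on different sides of the plane
    # defined by triangle tri = (a, b, c); inputs are 3D numpy arrays.
    a, b, c = tri
    sp = signed_volume(a, b, c, p)
    sq = signed_volume(a, b, c, q)
    return sp * sq < -eps  # near-coplanar configurations are rejected
```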
**Lateral triangulation of complex partitions beyond prisms**. We choose an arbitrary sequence of separations for those lateral faces of the 2.5D independent partition that are not constrained by the triangulation of quasi-ill-posed prisms, and we attempt to triangulate the partition. If we fail to triangulate the 2.5D independent partition with the given triangulated faces, we change the separation of one lateral face and try to triangulate the newly created polyhedron, as shown in Figure 5. Unlike deformed prisms, it is hard to determine whether such a polyhedron is triangulable without attempting the triangulation first.
Fig. 9: Example of a quasi-ill-posed prism. (a) Staircase triangulation makes the deformed prism nontriangulable without Steiner point \(s\), because the newly added line segment \(a_{0}b_{1}\) intersects the newly created triangle \(a_{1}a_{2}b_{2}\) at point \(s\). Visible faces of the polyhedron include \(a_{1}b_{1}s\), \(b_{1}b_{2}s\), \(a_{1}a_{2}b_{2}s\), and \(a_{0}a_{2}b_{2}\). (b) The deformed prism is successfully triangulated after replacing \(a_{0}b_{1}\) by \(a_{1}b_{0}\) to separate the 4-node cycle \(a_{0}a_{1}b_{1}b_{0}\).
### _Volume Triangulation_
Once each 2.5D independent partition is face-triangulated into a polyhedron, we further triangulate each polyhedron into tetrahedra. We propose a _node elimination_ algorithm that removes nodes one by one, carving off each removed node together with its incident edges as tetrahedra, until all nodes are removed. In searching for the sequence of nodes to remove, multiple decisions must be made, and we need to backtrack to previous steps if we encounter a nontriangulable remaining polyhedron. Thus, we formulate the node elimination algorithm as a decision tree.
**Two-tiered decision tree for volume triangulation**. The decision tree makes two types of decisions alternately. In levels with odd numbers (odd levels), the tree decides which node to eliminate (the so-called _pivot node_); in levels with even numbers (even levels), the tree chooses one way to triangulate the polyhedron formed by the pivot node and the edges incident on it (the so-called _isolated polyhedron_). Every tree node stores the separated tetrahedra, the remaining polyhedron, the pivot node, and the unvisited children of the tree node.
**Odd levels: decision on choosing a pivot node**. The pivot node can be any node of the polyhedron, and any heuristic can be used to choose it. Here, we prioritize mesh nodes with minimum degree in the current remaining polyhedron as pivot nodes and create a tree node for every candidate pivot as a child of the current tree node. One advantage of this heuristic is that nodes with minimum degree are incident to fewer edges and thus induce simpler isolated polyhedra. See Algorithm 2 for pseudo-code of this step.
**Even levels: decision on triangulating the polyhedron around the pivot node**. We continue with the pivot node chosen in the previous level and eliminate it. We first isolate the pivot, its neighbors, and the edges between the pivot and its neighbors. These isolated elements form an isolated polyhedron that is a subset of the input polyhedron. The isolated polyhedron usually contains 4-5 nodes; all possible isolated polyhedra with no more than 5 nodes are shown in the third column of Table II, together with their tetrahedralizations. It is straightforward to exhaust all possibilities for an isolated polyhedron, remove the derived tetrahedra from the current polyhedron, and update the remaining polyhedron. See Algorithm 3 for pseudo-code of this step. Note that not all tetrahedralizations in Table II are feasible for a given polyhedron; two conditions need checking: (1) whether every newly created link lies inside the polyhedron (a constraint introduced by nonconvexity) and (2) whether any separated tetrahedron has four coplanar mesh nodes. We create a tree node for every feasible tetrahedralization as a child of the tree node representing the pivot node in the level above, and we update the separated tetrahedra and the remaining polyhedron for these children. In the next iteration, we repeat the operations of the two levels while the remaining polyhedron is nonempty. Figure 10 shows the node elimination algorithm in the decision-tree structure, taking an arbitrary polyhedron with triangulated lateral faces as an example. See Algorithm 1 for pseudo-code of the whole node elimination algorithm.
**Complexity of the algorithm.** The best and most common case is to terminate along a single branch (\(O(n)\)); the worst and practically negligible case is to fully expand the tree (\(O(n2^{n})\)). The tree has \(2(n_{v}-3)+1\) levels, where \(n_{v}\) is the number of nodes in the face-triangulated polyhedron to be triangulated, because nearly every node (except the last three nodes, which are removed directly with the last tetrahedron) is a pivot exactly once on any branch connecting the root and a leaf, and every pivot generates two levels. To find a tetrahedralization, there is no need to traverse all possible tree nodes: the algorithm terminates when it finds a tree node whose remaining polyhedron is empty. When it meets an indivisible isolated polyhedron around a pivot node (e.g., Figure 2 (b)), it backtracks and continues with the next unvisited child of the current tree node's parent.
**Completeness of the algorithm.** The algorithm is guaranteed to find a volume triangulation that creates connections only between nodes and their 2-hop neighbors if such a triangulation exists, because the decision tree covers all possible connections within 2-hop neighborhoods. If there are no unvisited nodes left in the tree, the algorithm terminates with no solution, meaning that the input polyhedron is indivisible when only links between 2-hop neighbors may be created. Restricting links to 2-hop neighbors is reasonable because links between more distant neighbors degrade the accuracy of interpolation within the tetrahedra.
**Optimality of the algorithm.** If the cost of a solution is defined as the number of visited tree nodes before a feasible volume triangulation is found, then our heuristic for choosing the pivot may not lead the algorithm to an optimal solution. Choosing mesh nodes with minimum degree as pivots may not be the most cost-saving heuristic: removing a minimum-degree mesh node might introduce nonconvexity into the remaining polyhedron, whereas removing higher-degree mesh nodes could maintain convexity. Nonconvexity imposes more constraints on the feasible connections that can be created and raises the risk of making the remaining polyhedron indivisible, which requires backtracking in the decision tree and thus increases cost.
Fig. 11: Illustration of a tokamak with a cylindrical coordinate system \((R,Z,\phi)^{\intercal}\), where \(R\), \(Z\), and \(\phi\) are the radial, axial, and toroidal coordinates, respectively. Two 2D poloidal planes with \(\Delta\phi=\pi\) are shown, but in practice we use XGC data discretized into 16 poloidal planes. The gray lines show trajectories of arbitrarily chosen particles. Each poloidal plane carries an identical unstructured triangular grid; part of the triangular mesh in the circled region is shown on the right. The whole mesh has 56,980 nodes and 112,655 cells.
Fig. 10: Node elimination algorithm shown in a decision-tree paradigm. In odd levels, it makes a decision on “which mesh node to eliminate” among mesh nodes with minimum degree; in even levels, it makes a decision on “how to eliminate” based on the isolated polyhedron formed by the pivot node and its neighbors.
```
Data: a face-triangulated polyhedron
Result: tetrahedralization result for the polyhedron
root ← Node(separated_tets = ∅, remaining_volume = volume, pivot = None, unvisited_children = ∅);
tree ← Tree(root = root);
current_node ← root;
divisible ← True;
while current_node.remaining_volume ≠ ∅ do
  if divisible == False then
    current_node ← the next unvisited child of current_node's parent;
    if no unvisited nodes in the tree then
      Throw("This polyhedron is indivisible.");
    end if
  end if
  if current_node.pivot == None then
    current_node, tree, divisible ← PivotChoose(current_node, tree);
  else
    current_node, tree, divisible ← PivotRemove(current_node, tree);
  end if
end while
return current_node.separated_tets
```
**Algorithm 1** Node Elimination Algorithm
```
Data: tree_node, tree
find mesh nodes with minimum degree;
generate children representing pivot nodes for tree_node;
divisible ← False;
while divisible == False do
  if tree_node.unvisited_children ≠ ∅ then
    current_node ← tree_node.unvisited_children[0];
    remove current_node from tree_node.unvisited_children;
    current_node, tree, divisible ← PivotRemove(current_node, tree);
  else
    return current_node, tree, False;
  end if
end while
return current_node, tree, True;
```
**Algorithm 2** PivotChoose
```
Data: tree_node, tree
check the cycles formed by the pivot and its neighbors;
determine its type;
divisible ← whether at least one divisible way exists;
if divisible then
  generate children representing dividing ways for tree_node;
  current_node ← the first unvisited child of tree_node;
  remove current_node from tree_node.unvisited_children;
  update current_node.separated_tets and current_node.remaining_volume;
  return current_node, tree, True;
else
  return current_node, tree, False;
end if
```
**Algorithm 3** PivotRemove
Scientists favor 3D visualizations such as volume rendering, isosurfacing, and feature curve rendering of _blobs_--filament structures of high turbulence in tokamaks that may cause disruptions and damage billion-dollar devices. The specific scalar function for visualizing blobs is _normalized electrostatic potential perturbation_ (\(\delta n_{e}/\delta n_{e0}\)), denoted as \(f\) in the rest of this paper. To date, visualization of blobs has mainly focused on 2D cross sections of blobs because of the prohibitive cost of field-following interpolation, as explained below.
Scalar functions (e.g., temperature, density, and magnetic potentials) in XGC are interpolated via a _particle shape function_, that is, field-following interpolation in Eq. (2). Figure 11 illustrates a cylindrical coordinate system (radial, axial, and toroidal coordinates \(R\), \(Z\), and \(\phi\)) representing the computational domain in XGC. XGC uniformly subdivides the toroidal direction into a finite number of poloidal planes (i.e., \(RZ\)-planes). Each poloidal plane uses the same triangular mesh. For example, a typical XGC simulation discretizes the domain with 16 poloidal planes, each with a triangular mesh of \(O(10^{5})\) nodes and \(O(10^{5})\) triangles.
In this case study we interpret XGC's spatial domain as a deformed spacetime induced by magnetic fields. Figure 3 (a) illustrates the magnetic-following interpolation scheme. The field variable \(f\) is given at every node of triangular meshes in all poloidal planes. To interpolate \(f\) at an arbitrary location \((R,Z,\phi)^{\intercal}\), one must first calculate a streamline of the magnetic field \(\mathbb{B}\) (i.e., a magnetic line), in both directions, seeded from the given location. The magnetic line normally intersects two poloidal planes at \(\phi_{i}\) and \(\phi_{i+1}\) with \(\phi_{i}\leq\phi\leq\phi_{i+1}\), where \(i\) and \(i+1\) are the indices of poloidal planes in XGC's toroidal discretization. Assuming the function value \(f\) is \(f_{i}\) and \(f_{i+1}\) at the streamline's intersection on \(i\)th and \((i+1)\)th poloidal plane,
respectively, one can approximate \(f\) as the linear combination of \(f_{i}\) and \(f_{i+1}\).
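In the simplest case this linear combination reduces to a weighted average along the toroidal angle; the following form is our reconstruction of that step, not necessarily XGC's exact weighting:

\[f(R,Z,\phi)\approx\frac{\phi_{i+1}-\phi}{\phi_{i+1}-\phi_{i}}\,f_{i}+\frac{\phi-\phi_{i}}{\phi_{i+1}-\phi_{i}}\,f_{i+1}.\]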
The magnetic-following interpolation poses challenges in both forward and inverse evaluations. Here, we refer to _forward evaluation_ as applying the interpolation scheme and _inverse evaluation_ as finding locations that satisfy given criteria, for example, finding the zero level set where \(f=0\). Both are expensive. Forward evaluations, frequently used by volume rendering and particle tracing, require magnetic-line tracing. Inverse evaluations such as isosurfacing and critical point extraction are complicated and may involve numerical optimization for root-finding. Note that we regard the toroidal coordinate as a temporal coordinate in this work, which differs from the real temporal dimension of the XGC data.
### _Evaluation Schemes_
We demonstrate and evaluate the use of our methodology with XGC fusion plasma simulations, whose 2D triangular mesh has 56,980 nodes and 112,655 cells. Our algorithm yields 339,626 tetrahedra for its 3D spacetime with \(56,980\times 2=113,960\) nodes. We define and use two types of approximations:
* Forward approximation: interpolation of function values at given 3D locations. Examples include volume rendering and particle tracing, which require sampling at arbitrary locations for ray/path integration.
* Inverse approximation: root-finding of 3D locations with given function values. Examples include isosurfacing in scalar-valued functions and critical-point-finding in vector-valued functions.
We compare three types of interpolation schemes:
* Magnetic-following (MF) interpolation, that is, the scheme defined in Eq. (2) that requires magnetic-line tracing
* Our method: Piecewise linear (PL) interpolation induced by our simplicial meshing scheme that models deforming space
* A naive baseline: Straight linear (SL) interpolation induced by simplicial meshing on uniform spacetime, where every node's next node is itself.
The three interpolation schemes are measured by mean squared error (MSE), peak signal-to-noise ratio (PSNR), and running time. Compared with magnetic-following interpolation, the PL interpolation significantly reduces the computational cost of all visualization tasks, because no magnetic-line tracing is involved for the interpolation. We recognize that the PL approximation introduces error, and thus we treat the magnetic-following interpolation as the ground truth and quantitatively evaluate the error resulting from our PL interpolation. We also work together with domain scientists to evaluate our results qualitatively.
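For reference, the two error metrics can be computed as in this short sketch; taking the peak as the dynamic range of the reference field is an assumption, since other PSNR conventions use the maximum absolute value.

```python
import numpy as np

def mse(reference, approx):
    return np.mean((reference - approx) ** 2)

def psnr(reference, approx):
    err = mse(reference, approx)
    peak = reference.max() - reference.min()  # assumed peak convention
    # infinite PSNR when the approximation is exact, as on sampled planes
    return np.inf if err == 0 else 10.0 * np.log10(peak ** 2 / err)
```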
### _Toroidal Upsampling (Forward Approximation)_
The objective of the toroidal upsampling case study is to upsample XGC's 3D scalar field data, which are sampled with relatively low resolution (e.g., \(n_{\phi}=16\)) uniformly along the toroidal (\(\phi\)) direction at \(\phi=2\pi i/16\) (\(i=0,1,...,15\)). At each sampled \(\phi\), science variables (e.g., density and temperature) are given at the nodes of the 2D triangular mesh. We interpolate the simulation outputs with the MF, PL, and SL schemes to obtain node values at two arbitrary angles, \((5/32)2\pi\) and \((27/32)2\pi\), each lying midway between two consecutive sampled toroidal angles, thereby retrieving a much higher toroidal resolution of the data. The three interpolations are implemented on a machine with an Apple M1 chip and 64 GB of system memory, without parallelization. Results and quantitative evaluations are shown in Figure 12 and Table III, respectively. PL interpolation shows a similar pattern of values, low MSE and high PSNR, and significantly less running time (about \(10^{4}\times\) faster) compared with MF interpolation, as shown in Figure 13.
We also uniformly took 1,024 poloidal planes over
Fig. 12: Upsampling results of the MF, PL, and SL interpolation methods. The overall patterns shown by the different methods are similar. However, PL interpolation yields values closer to MF, while SL has a smaller range of values than MF and PL. In several local regions (such as those marked by red boxes), SL even shows shapes different from those of the other two.
\([0,2\pi)\) to show the tendency of PSNR with varying \(\phi\) (Figure 14), taking the MF interpolation results as ground truth. Cells in SL interpolation do not conform to the variation of the time-dependent data and thus interpolate the value at a position from four irrelevant mesh nodes; PL is therefore expected to outperform SL. As shown in Figure 14, PL interpolation results are close to the ground truth when \(\phi\) approaches the sampled toroidal angles and are least accurate when \(\phi\) is midway between two sampled angles. Also, the local minima of PL are greater than those of SL.
### _Volume Rendering (Forward Approximation)_
We implement and compare three ways to volume render the scalar function \(f\) in XGC data with the toroidal coordinate straightened, which maps \((R,Z,\phi)^{\intercal}\) to \((x,z,y)^{\intercal}\), respectively, and renders in a Cartesian coordinate system (Figure 15). We can see a rotating pattern of values in both rendered images. However, SL fails to show a continuous pattern. The reason is that PL and SL use different nodes to interpolate the same location: PL is based on a simplicial mesh of the deforming spacetime that relates each mesh node to those close to its location at the previous or next time slot, whereas SL always uses nodes at the same location, failing to incorporate the deforming property of the underlying physics.
### _Isosurfacing (Inverse Approximation)_
As shown in Figure 16, the isosurfacing results of \(f\) show the same rotating pattern as the volume rendering results in Figure 15. Again, PL gives continuous isosurfaces while SL yields "dashed" fragments. Moreover, when we increase the isovalue to 0.2, the isosurfaces concentrate in only a small area around the saddle point of the magnetic field in the XGC data, which further supports the correctness of the isosurfacing results in Figure 16. Showing more interesting features such as blobs requires well-chosen isovalues, which is beyond the scope of this work.
### _Extremum Lines (Inverse Approximation)_
To study 3D blob filaments, we co-designed with scientists the definition of _blob core lines_ as the extremum lines--loci of local minimum/maximum where radial and axial gradients of \(f\) vanish and the radial-axial Hessian \(\mathbf{H}_{f}\) is positive-/negative-definite:
\[\frac{\partial f}{\partial R}=\frac{\partial f}{\partial Z}=0\text{ and } \lambda_{1}\lambda_{2}>0, \tag{3}\]
where \(R\) and \(Z\) are the radial and axial coordinates, respectively, and \(\lambda_{1}\) and \(\lambda_{2}\) are the eigenvalues of the radial-axial Hessian \(\mathbf{H}_{f}\), which considers only \(R\) and \(Z\) axes:
\[\mathbf{H}_{f}=\left(\begin{array}{cc}\frac{\partial^{2}f}{\partial R^{2}}&\frac{\partial^{2}f}{\partial R\partial Z}\\ \frac{\partial^{2}f}{\partial Z\partial R}&\frac{\partial^{2}f}{\partial Z^{2}}\end{array}\right). \tag{4}\]
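A pointwise version of this criterion can be sketched as follows; the gradient tolerance is an illustrative assumption, and in practice the condition is evaluated via critical point tracking rather than pointwise tests.

```python
import numpy as np

def is_blob_core_candidate(grad_R, grad_Z, H, grad_tol=1e-8):
    # grad_R, grad_Z: radial and axial gradients of f at a point.
    # H: 2x2 radial-axial Hessian of f at that point (symmetric).
    if abs(grad_R) > grad_tol or abs(grad_Z) > grad_tol:
        return False  # Eq. (3): both gradients must vanish
    lam1, lam2 = np.linalg.eigvalsh(H)
    return lam1 * lam2 > 0  # definite Hessian: extremum, not a saddle
```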
Fig. 16: Isosurfaces with isovalues \(\pm 0.1\) and \(\pm 0.2\). Red lines show isosurfaces with positive isovalues while blue lines show isosurfaces with negative isovalues. Well-chosen values can show more interesting blobs, which is beyond the scope of this work.
Fig. 14: \(\phi\) vs. PSNR for PL and SL interpolations on XGC data, taking MF interpolation results as ground truth. Both curves have 48 infinite values because we upsampled the toroidal coordinate from 16 into 48 poloidal planes to avoid ill-posed prisms, and all three methods perform the same interpolation on the sampled poloidal planes. The local minima are always attained midway between two poloidal planes for both PL and SL; the PSNR of PL is usually greater than 40 dB, while that of SL sometimes drops below 30 dB.
Fig. 13: \(\phi\) vs. time for MF, PL, and SL interpolations on XGC data. SL is slightly faster than PL, while MF usually needs about \(10^{4}\) times the running time of PL and SL. On the sampled poloidal planes, MF takes less than one second because only 2D cell location and 2D barycentric interpolation are required there.
Fig. 15: Volume rendering results of \(f\); the transfer function is shown at the bottom. PL interpolation shows continuous trajectories of values, while SL interpolation breaks the features into dashed lines because SL builds meaningless connectivities between two consecutive timesteps.
We reformulate the extraction of extremum lines as a critical point tracking problem by treating \(\phi\) as time and use the feature tracking kit FTK [14] to extract and visualize the curves. FTK assumes that the gradient vector field is piecewise linear so that critical point trajectories are reconstructed directly in a spacetime mesh, which is by default nondeforming; we made minor changes to FTK software to support deformed spacetime meshes. Figure 17 demonstrates extremum line extraction results with our deformed mesh. Qualitatively speaking, the resulting curves reflect the same trends that scientists observe in isosurface and volume rendering; we plan to investigate further the accuracy evaluation of extremum lines in future work.
### _Additional Case Study with Synthetic Data_
The additional case study demonstrates the generality of our methodology beyond fusion. The data are synthesized by rotating two Gaussian blobs with phase difference \(\pi\) around the center of a circular field. We set the radius of the circular field to be 1 and the full width at half maximum of two blobs to be 0.3, and we uniformly choose 16 timesteps as sampled times in one cycle (\(2\pi\)). The synthetic function is
\[\begin{split} f(x,y,t)&=Ce^{-B\left((x-A\cos\omega t)^{2}+(y-A\sin\omega t)^{2}\right)}\\ &+Ce^{-B\left((x+A\cos\omega t)^{2}+(y+A\sin\omega t)^{2}\right)},\end{split} \tag{5}\]
where \(C=9.806\), \(B=30.807\), and \(\omega=0.393\). Values at several timesteps are shown in Figure 18. Discretization of this 2D field gives 1,000 nodes and 1,947 triangle cells (Figure 19 (a)). The next node of each node is determined by its position at the next timestep, as described in Section III-B, which forms a spacetime with 2,000 nodes (Figure 19 (c)). Our algorithm yields 5,961 tetrahedral cells for this spacetime. We conduct the same evaluation as in Sections V-B and V-C on this synthetic dataset.
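A sketch of the synthetic field follows; the orbit radius \(A\) is not reported above, so the value used here is a placeholder assumption.

```python
import numpy as np

C, B, OMEGA = 9.806, 30.807, 0.393
A = 0.5  # orbit radius of the blob centers; placeholder assumption

def f(x, y, t):
    # Two Gaussian blobs with phase difference pi rotating about the
    # center of the circular field (Eq. (5)); t is the timestep index.
    cx, cy = A * np.cos(OMEGA * t), A * np.sin(OMEGA * t)
    return (C * np.exp(-B * ((x - cx) ** 2 + (y - cy) ** 2))
            + C * np.exp(-B * ((x + cx) ** 2 + (y + cy) ** 2)))
```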
**Temporal upsampling**. We again choose two arbitrary timesteps lying midway between two successive sampled timesteps, corresponding to \(\phi=(21/32)2\pi\) and \(\phi=(31/32)2\pi\) in toroidal coordinates. The comparison of PL interpolation and the ground truth is shown in Figure 20, which shows a high PSNR. Figure 21 shows the variation of PSNR over the toroidal coordinate for both PL and SL. Since there are 16 sampled timesteps, there are 16 arcs in Figure 21; each arc attains its maxima at two sampled toroidal angles and its minimum midway between them. Also, PL has a higher minimum PSNR than SL, indicating that SL produces incorrect interpolation results.
**Volume rendering and isosurfacing**. Results for both PL (our method) and SL (naive interpolation) are shown in Figure 22. PL reflects the correct rotation of two blobs, but SL gives a broken translation along the time axis.
comparable quality (measured by MSE and PSNR) compared with those given by physical derivation while taking significantly less time. It also shows why the naive SL interpolation is not applicable: our method, PL interpolation, builds reasonable connectivities between two successive timesteps, whereas SL interpolation relates a 3D point with irrelevant mesh nodes. Furthermore, it enables the inverse approximation, a solution for root-finding problem, which cannot be achieved by tracing magnetic lines.
**Limitations**. Although this work does not require the spatial meshes to be identical over time or any properties of the next-node connectivity, several limitations remain. First, the decision-tree search can be improved, for example by extending the search for new connections to \(n\)-hop neighbors, finding a better heuristic for choosing a pivot node, and parallelizing the branch search to reduce the worst-case time complexity. Second, the indivisibility problem is not completely solved: we do not triangulate ill-posed prisms but instead increase the temporal resolution to avoid them, and this work provides no method to check whether polyhedra other than prisms are indivisible without first attempting to divide them. Third, we have not determined the optimal balance between the number of nodes in the cycles formed by cutting faces and the complexity of triangulating and coordinating those faces. Fourth, the exact complexity of the whole algorithm is challenging to analyze because of the uncertainty in how many iterations are needed to deal with indivisible polyhedra.
**Future work**. Besides addressing the aforementioned limitations, this work has several possible extensions. One is to extend the problem from 3D deforming spacetime to higher-dimensional deforming spacetime; for example, a 3D tetrahedral spatial mesh extrudes a 4D spacetime, and triangulating a 4-polytope into 4-simplices under similar constraints raises further problems. Another promising direction is extending the triangular spatial mesh to meshes with arbitrary polygonal cells, which requires triangulating the polygonal cells while guaranteeing that the whole spacetime remains triangulable. A third direction is using other interpolation methods, such as neural networks, to further increase interpolation quality, since piecewise-linear interpolation may not be the best model of the value distribution in cells. Although temporal resolution can be increased to improve interpolation accuracy, upsampling via magnetic-following interpolation remains time-consuming.
## VII Conclusion
We propose an algorithm based on the divide-and-conquer paradigm to triangulate 3D nonconvex deforming spacetime under geometric and connectivity constraints. Specifically, in the divide stage we split the spacetime into smaller 3D partitions to reduce time complexity. Cutting faces are chosen to guarantee that each partition can be subdivided independently; consistency between partitions on shared cross sections is also maintained. In the conquer stage, we eliminate the nodes of each 3D partition one by one through a decision-tree-based search, which decides which node to eliminate and how to eliminate it. Our algorithm reduces the complexity of many visualization and analysis tasks, such as upsampling, volume rendering, and isosurfacing, on time-dependent datasets, especially those with rotating features. We evaluate the algorithm both quantitatively and qualitatively on XGC data and a synthetic dataset, verifying its high accuracy and low time complexity compared with the magnetic-line-following method and naive spacetime interpolation.
## Acknowledgment
The authors thank Drs. Choong-Seock Chang, Michael Churchill, Robert Hager, Seung-Hoe Ku, Zeyu Guo, and Rephael Wenger for insightful discussions. This research is supported by DOE DE-SC0022753, NSF OAC-2311878, NSF OAC-2313123, NSF IIS-1955764. This research is also supported by the Exascale Computing Project (ECP), project number 17-SC-20-SC, a collaborative effort of the U.S. Department of Energy Office of Science and the National Nuclear Security Administration. It is also supported by the U.S. Department of Energy, Office of Advanced Scientific Computing Research, Scientific Discovery through Advanced Computing (SciDAC) program of the U.S. Department of Energy under Contract No. DE-AC02-06CH11357.
|
2304.03595 | A large area, high counting rate micromegas-based neutron detector for
BNCT | Beam monitoring and evaluation are very important to boron neutron capture
therapy (BNCT), and a variety of detectors have been developed for these
applications. However, most of the detectors used in BNCT only have a small
detection area, leading to the inconvenience of the full-scale 2-D measurement
of the beam. Based on micromegas technology, we designed a neutron detector
with large detection area and high counting rate. This detector has a detection
area of 288 mm multiples 288 mm and can measure thermal, epithermal, and fast
neutrons with different detector settings. The BNCT experiments demonstrated
that this detector has a very good 2-D imaging performance for the thermal,
epithermal, fast neutron and gamma components, a highest counting rate of 94
kHz/channel, and a good linearity response to the beam power. Additionally, the
flux fraction of each component can be calculated based on the measurement
results. The Am-Be neutron source experiment indicates that this detector has a
spatial resolution of approximately 1.4 mm, meeting the requirements of
applications in BNCT. It is evident that this micromegas-based neutron detector
with a large area and high counting rate capability has great development
prospects in BNCT beam monitoring and evaluation applications. | Zhujun Fang, Zhiyong Zhang, Bin Shi, Wei Jiang, Xianke Liu, Siqi He, Jun Chen, Ping Cao, Jianbei Liu, Yi Zhou, Ming Shao, Botian Qu, Shufeng Zhang, Qian Wang | 2023-04-07T11:34:43Z | http://arxiv.org/abs/2304.03595v2 | # A large area, high counting rate micromegas-based neutron detector for BNCT
###### Abstract
Beam monitoring and evaluation are very important to boron neutron capture therapy (BNCT), and a variety of detectors have been developed for these applications. However, most of the detectors used in BNCT only have a small detection area, leading to the inconvenience of the full-scale 2-D measurement of the beam. Based on micromegas technology, we designed a neutron detector with large detection area and high counting rate. This detector has a detection area of 288 mm x 288 mm and can measure thermal, epithermal, and fast neutrons with different detector settings. The BNCT experiments demonstrated that this detector has a very good 2-D imaging performance for the thermal, epithermal, fast neutron and gamma components, a highest counting rate of 94 kHz/channel, and a good linearity response to the beam power. Additionally, the flux fraction of each component can be calculated based on the measurement results. The Am-Be neutron source experiment indicates that this detector has a spatial resolution of approximately 1.4 mm, meeting the requirements of applications in BNCT. It is evident that this micromegas-based neutron detector with a large area and high counting rate capability has great development prospects in BNCT beam monitoring and evaluation applications.
Micromegas-based neutron detector, BNCT, large area, high counting rate, 2-D imaging
## 1 Introduction
Boron neutron capture therapy (BNCT) is a new treatment for some cancers [1]. The designed peak energy of the neutron beam used in BNCT usually lies in the thermal or epithermal neutron region. However, the real neutron energy ranges from the thermal region to the fast region owing to the limitations of the neutron moderator. Measuring and monitoring the flux, range, and energy of neutron beams is very important in BNCT treatment, as it improves the curative effect and reduces the irradiation dose to patients [2]. Several kinds of neutron detectors have been developed for BNCT neutron beam monitoring. Scintillator detectors, such as Li-glass detectors, and semiconductor detectors, such as CdZnTe detectors, can be used to monitor the neutron beam dose and flux [3-5], as can diamond detectors [6]. Additionally, activation detectors with various metal-activated tablets can be used to measure the neutron flux indirectly [7]. For most of these detectors, the effective detection size is several mm to tens of mm, so a linear scan is required to make a full measurement of the neutron beam flux. Meanwhile, the spatial resolution is also on the order of millimeters, limited by the detector cell size. In this case, the uncertainty of the neutron beam will affect the measurement
2310.10310 | Investigating Bias in Multilingual Language Models: Cross-Lingual
Transfer of Debiasing Techniques | This paper investigates the transferability of debiasing techniques across
different languages within multilingual models. We examine the applicability of
these techniques in English, French, German, and Dutch. Using multilingual BERT
(mBERT), we demonstrate that cross-lingual transfer of debiasing techniques is
not only feasible but also yields promising results. Surprisingly, our findings
reveal no performance disadvantages when applying these techniques to
non-English languages. Using translations of the CrowS-Pairs dataset, our
analysis identifies SentenceDebias as the best technique across different
languages, reducing bias in mBERT by an average of 13%. We also find that
debiasing techniques with additional pretraining exhibit enhanced cross-lingual
effectiveness for the languages included in the analyses, particularly in
lower-resource languages. These novel insights contribute to a deeper
understanding of bias mitigation in multilingual language models and provide
practical guidance for debiasing techniques in different language contexts. | Manon Reusens, Philipp Borchert, Margot Mieskes, Jochen De Weerdt, Bart Baesens | 2023-10-16T11:43:30Z | http://arxiv.org/abs/2310.10310v1 | # Investigating Bias in Multilingual Language Models: Cross-Lingual Transfer of Debiasing Techniques
###### Abstract
This paper investigates the transferability of debiasing techniques across different languages within multilingual models. We examine the applicability of these techniques in English, French, German, and Dutch. Using multilingual BERT (mBERT), we demonstrate that cross-lingual transfer of debiasing techniques is not only feasible but also yields promising results. Surprisingly, our findings reveal no performance disadvantages when applying these techniques to non-English languages. Using translations of the CrowS-Pairs dataset, our analysis identifies SentenceDebias as the best technique across different languages, reducing bias in mBERT by an average of 13%. We also find that debiasing techniques with additional pretraining exhibit enhanced cross-lingual effectiveness for the languages included in the analyses, particularly in lower-resource languages. These novel insights contribute to a deeper understanding of bias mitigation in multilingual language models and provide practical guidance for debiasing techniques in different language contexts.
DISCLAIMER: This paper contains explicit statements that are potentially offensive.
## 1 Introduction
There has been a growing interest in addressing bias detection and mitigation in Natural Language Processing (NLP) due to their societal implications. Initially, research focused on debiasing word embeddings (Bolukbasi et al., 2016; Zhao et al., 2018), but recent studies found that pretrained language models also capture social biases present in training data (Meade et al., 2022). Hence, attention has shifted towards debiasing techniques that target sentence representations. These techniques include additional pretraining steps (Zhao et al., 2019; Webster et al., 2020; Zmigrod et al., 2019) and projection-based methods that assume a bias direction (Liang et al., 2020; Ravfogel et al., 2020; Liang et al., 2020).
While debiasing techniques have been developed and evaluated for monolingual, and mostly English models, the effectiveness and transferability of these techniques to diverse languages within multilingual models remain largely unexplored (Stanczak and Augenstein, 2021; Sun et al., 2019). Our research aims to bridge this gap by examining the potential of debiasing techniques applied to one language to effectively mitigate bias in other languages within multilingual large language models. We examine English (EN), French (FR), German (DE), and Dutch (NL). Figure 1 illustrates an example sentence pair included in the English CrowS-Pairs dataset 1, where the unmodified and modified parts are highlighted in blue and yellow respectively. It shows the predicted probabilities of the modified part occurring given the unmodified part across different debiasing languages.
Footnote 1: This example assumes gender to be binary. We acknowledge that this fails to capture the full range of gender identities.
This study examines the cross-lingual transferability of debiasing techniques using mBERT. mBERT, trained on Wikipedia data from diverse languages, possesses the capability to process and generate text in various linguistic contexts. Despite balancing efforts, it still performs worse on low-resource languages (Wu and Dredze, 2020; Devlin, 2018). We investigate whether this performance disparity extends to gender, religious, and racial biases. Related studies demonstrate the effectiveness of cross-lingual debiasing for individual techniques and selected bias scopes (Liang et al., 2020; Lauscher et al., 2021). We show how to reduce bias in mBERT across different languages by conducting a benchmark of state-of-the-art (SOTA) debiasing techniques and providing guidance on its implementation. To facilitate further research and reproducibility, we make the code and additional data available to the research community2.
Footnote 2: [https://github.com/manon-reusens/multilingual_bias](https://github.com/manon-reusens/multilingual_bias)
Our contributions can be summarized as follows: 1) We provide a benchmark of different SOTA debiasing techniques across multiple languages in a multilingual large language model. 2) We find that SentenceDebias is the most effective for cross-lingual debiasing, reducing the bias in mBERT by 13%. 3) We provide implementation guidelines for debiasing multilingual models and highlight the differences in the cross-lingual transferability of different debiasing techniques. We find that most projection-based techniques applied to one language yield similar predictions across evaluation languages. We also recommend performing the techniques with an additional pretraining step on the lowest resource language within the multilingual model for optimal results.
Figure 1: The example of the English CrowS-Pairs dataset illustrates sentence probabilities after debiasing mBERT with SentenceDebias in English, French, German, and Dutch.
## 2 Methodology
This section introduces the data, the debiasing techniques, and the experimental setup.
### CrowS-Pairs
CrowS-Pairs is a benchmark dataset comprising 1508 examples that address stereotypes associated with historically disadvantaged groups in the US, encompassing various types of bias, such as age and religion (Nangia et al., 2020). Following Meade et al. (2022), where different debiasing techniques were benchmarked and their effectiveness demonstrated on BERT for gender, race, and religion, we focus on these three types of bias. Névéol et al. (2022) translated the dataset into French. To the best of our knowledge, there are currently no peer-reviewed variants of CrowS-Pairs available in other languages. Therefore, we used three samples of the full dataset and translated them into the respective language to evaluate our experiments.
To create an evaluation set for our experiments, we started from the English CrowS-Pairs dataset (Nangia et al., 2020). We randomly sampled \(N\) instances, where \(N\in\{20,30,40,50\}\), and measured the performance differences on mBERT and BERT. Through three random seeds, we found that a sample size of 40 resulted in an average performance correlation of more than 75% with the full dataset for both models. Thus, we conclude that using 40 instances with three random seeds provides a representative dataset for our evaluation. Further details are shown in Appendix A. Subsequently, we included the translated samples from each language into our dataset, either the corresponding sentences from the French CrowS-Pairs or a translation.
### Debiasing techniques
Next, the different debiasing techniques are explained. For more information on the attribute lists used, we refer to Appendix B.
**Counterfactual Data Augmentation (CDA)** is a debiasing technique that trains the model on an augmented training set (Zhao et al., 2019; Webster et al., 2020; Zmigrod et al., 2019). First, the corpus is augmented by duplicating sentences that include words from a predefined attribute list. Next, counterfactual sentences are generated by swapping these attribute words with other variants in the list, for example, swapping _he_ by _she_. We augment 10% of the Wikipedia corpus of the respective language and use an additional pretraining step to debias the model for three random seeds and average the results.
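A minimal sketch of the augmentation step is given below; the attribute pairs are a toy subset of the lists in Appendix B, and the whitespace tokenization is a simplification.

```python
# Toy subset of a gender attribute list; the full lists are in Appendix B.
ATTRIBUTE_PAIRS = {"he": "she", "she": "he", "his": "her", "her": "his"}

def augment(sentences):
    # Duplicate sentences containing attribute words and swap each
    # attribute word with its counterpart to form counterfactuals.
    augmented = []
    for sent in sentences:
        tokens = sent.split()  # simplified tokenization
        if any(tok.lower() in ATTRIBUTE_PAIRS for tok in tokens):
            swapped = [ATTRIBUTE_PAIRS.get(tok.lower(), tok) for tok in tokens]
            augmented.append(" ".join(swapped))
    return sentences + augmented
```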
**Dropout Regularization (DO)** is introduced by Webster et al. (2020) as a debiasing technique by implementing an additional pretraining step. We execute this pretraining step while training on 10% of the Wikipedia corpus of the respective language using three random seeds and averaging the results.
**SentenceDebias (SenDeb)** introduced by Liang et al. (2020) is a projection-based debiasing technique extending debiasing word embeddings (Bolukbasi et al., 2016) to sentence representations. Attribute words from a predefined list are contextualized by retrieving their occurrences from a corpus and augmented with CDA. Next, the bias subspace is computed using the representations of these sentences through principal component analysis (PCA). The first \(K\) dimensions of PCA are assumed to define the bias subspace as they capture the principle directions of variation of the representations. We debias the last hidden state of the mBERT model and implement SenDeb using 2.5% of the Wikipedia text in the respective language.
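The projection step can be sketched as below; this simplification takes the sentence representations directly as input, whereas the full method builds them from CDA-augmented attribute sentences.

```python
import numpy as np

def bias_subspace(embeddings, k=1):
    # PCA via SVD on mean-centered representations; the first k right
    # singular vectors approximate the bias subspace.
    centered = embeddings - embeddings.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[:k]  # shape (k, hidden_size), rows are unit-norm

def debias(h, subspace):
    # Remove the component of representation h lying in the subspace.
    for v in subspace:
        h = h - np.dot(h, v) * v
    return h
```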
**Iterative Nullspace Projection (INLP)** is a projection-based debiasing technique in which multiple linear classifiers are trained to predict biases, such as gender, that are to be removed from the sentence representations (Ravfogel et al., 2020). After training a single classifier, the representations are debiased by projecting them onto the learned linear classifier's weight matrix to gather the rowspace projection. We implement this technique using the 2.5% of the Wikipedia text in each language.
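One iteration of the projection can be sketched as follows; the full method retrains the classifier on the projected representations and composes the resulting projections.

```python
import numpy as np

def nullspace_projection(W, tol=1e-10):
    # W: (n_classes, hidden) weight matrix of a trained linear bias
    # classifier. Returns P such that P @ h removes the directions W uses.
    _, s, vt = np.linalg.svd(W, full_matrices=False)
    basis = vt[s > tol]  # orthonormal basis of W's row space
    return np.eye(W.shape[1]) - basis.T @ basis
```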
**DensRay (DR)** is a projection-based debiasing technique first implemented by Dufter and Schütze (2019) and adapted for contextualized word embeddings by Liang et al. (2020). This technique is similar to SenDeb, but the bias direction is computed differently: the method seeks an orthogonal matrix such that the first \(K\) dimensions of the rotated space correlate well with the linguistic features, with the second dimension orthogonal to the first. The bias direction corresponds to the eigenvector with the largest eigenvalue of the resulting matrix. DR is only formulated for binary bias types, and using it for multiclass bias types would require modifying the technique; therefore, we only apply it to the gender bias type. We implement DR by debiasing the last hidden state of mBERT and using 2.5% of the Wikipedia text in the respective language.
### Experimental setup
We debias mBERT using language \(X\) and evaluate it on language \(Y\), with \(X,Y\in\{EN,FR,DE,NL\}\). In essence, we debias the model using one language and evaluate it on another, covering all language combinations in our experiments. We implement mBERT in its base configuration (uncased, 12 layers, hidden size 768) and use the bias score as implemented in Meade et al. (2022). This metric evaluates the percentage of sentences for which the model prefers the more biased sentence over the less biased one, with an optimal value of 50%. All experiments are performed on P100 GPUs.
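A sketch of the metric is given below; `prefers_stereotype` is an assumed list of per-pair booleans derived from the model's pseudo-likelihoods.

```python
def bias_score(prefers_stereotype):
    # Percentage of pairs where the model assigns higher probability to
    # the more biased sentence; 50% is the ideal (unbiased) score.
    return 100.0 * sum(prefers_stereotype) / len(prefers_stereotype)

def deviation_from_ideal(score):
    # Absolute deviation from the ideal 50%, as reported in Table 1.
    return abs(score - 50.0)
```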
## 3 Results
Table 1 shows the performance of the different debiasing techniques when debiasing in English, in terms of the absolute deviation from the ideal unbiased model. The score is averaged over all bias types and over the models trained for the respective evaluation language. Base denotes the score achieved by mBERT on the respective evaluation-language dataset before debiasing. More results are shown in Appendices C and D.
As shown in Table 1, English is relatively unbiased compared to the other languages and shows a small bias increase after debiasing. This observation aligns with the findings of Ahn and Oh (2021), who propose mBERT as a debiasing technique. In cases where the initial bias score is already close to the optimal level, further debiasing can lead to _overcompensation_, consequently amplifying the total bias. We assume that an unbiased model should equally prioritize both biased and unbiased sentences. However, when debiasing techniques tend to overcorrect, they skew the balance towards favoring the prediction of unbiased sentences over biased ones. Addressing this challenge necessitates the adoption of specialized techniques to effectively mitigate any residual bias.
This phenomenon of overcompensation occurs in several underperforming techniques, as illustrated in Table 1. Notably, we find instances of overcompensation for gender when debiasing using INLP for French and using CDA for German, as well as for race when debiasing using DO for German. Another contributing factor to the poor performance of certain techniques within specific debiasing
and evaluation language combinations lies in the inherent ineffectiveness of the debiasing method itself, exemplified by the cases of gender debiasing using CDA for French and religion debiasing using CDA for German. In Tables 5, 6, and 7, we find overcompensation for gender when debiasing with INLP in German and French, evaluating in German, debiasing with Sendeb and DR in French, and evaluating in French, as well as when debiasing in Dutch with INLP and evaluating in French. Moreover, overcompensation for race is also observed when debiasing with CDA in French and evaluating in German.
**Is cross-lingual transferability of debiasing techniques possible?** Table 1 shows that cross-lingual transfer is possible using English as debiasing language. Figure 2 confirms this, depicting the bias scores averaged over all debiasing techniques. As discussed, for English, these techniques increase the bias contained in the model due to its already close to optimal performance. For the other evaluation languages, we find better performance after debiasing. Therefore, we conclude that for these four languages, it is possible to debias the mBERT model to some extent using a different debiasing language, except when the bias contained in the model is already relatively low.
To shed light on the insights that can be gathered from Figure 2, Table 2 offers an overview of the best- and worst-performing techniques per evaluation language. As shown, Dutch is the best debiasing language for English, because it overcompensates the gender bias category the least and therefore yields the best performance. In general, we find that using the same language for debiasing and evaluation often overcompensates the bias, thereby reversing the bias direction; hence, the best-performing debiasing language is often not the evaluation language itself. German is the exception: as it has the highest gender bias score before debiasing, strong debiasing is beneficial and does not result in overcompensation. Besides being the best-performing debiasing language for German, German also shows the best performance for French, as it achieves the best results on all evaluation sets and overcompensates the gender bias present in the model less than other languages such as Dutch.
French is the worst-performing debiasing language for all evaluation languages except for Dutch, where it is the best-performing one. We find that when evaluating in French, the gender bias is overcompensated. For English, both racial and gender bias are overcompensated. The German evaluation shows lower overall performance due to already two ineffective methods (INLP and CDA), which were also due to overcompensating racial bias. Finally, for Dutch, we find that debiasing with French overcompensates gender bias less than Dutch and, therefore, is the best-performing method. As Dutch has the second highest gender bias score before debiasing, it also benefits from strong debiasing and therefore both French and Dutch perform well.
We believe that these results are influenced by the fact that both German and French have a grammatical gender distinction, which may impact debiasing gender to a greater extent. This
\begin{table}
\begin{tabular}{l l|l l l l l} \hline & Base & INLP & Sendeb & DR & CDA & DO \\ \hline EN & 6.11 & 8.70 \(\uparrow\) & 7.78 \(\uparrow\) & 6.94 \(\uparrow\) & 13.43 \(\uparrow\) & 8.70 \(\uparrow\) \\ FR & 11.11 & 11.20 \(\uparrow\) & 10 \(\downarrow\) & 10.28 \(\downarrow\) & 12.6 \(\uparrow\) & 9.44 \(\downarrow\) \\ DE & 9.33 & 7.52 \(\downarrow\) & 6.57 \(\downarrow\) & 6.84 \(\downarrow\) & 10.75 \(\uparrow\) & 9.75 \(\uparrow\) \\ NL & 17.66 & 13.96 \(\downarrow\) & 15.14 \(\downarrow\) & 16.54 \(\downarrow\) & 16.84 \(\downarrow\) & 17.40 \(\downarrow\) \\ \hline \end{tabular}
\end{table}
Table 1: Overall performance score per evaluation language and debiasing technique averaged over the three random seeds after debiasing in English.
grammatical gender distinction is not embedded in English and Dutch. Moreover, as the religion category regularly shows limited bias decrease, we find that the performance in the gender and race category often determines whether a technique works well or not.
**How are the different techniques affected by cross-lingual debiasing?** Table 3 shows the overall percentage decrease in bias score per technique, where positive values indicate bias reduction. From this, we conclude that SenDeb is the best-performing technique, reducing bias in mBERT by 13% on average. DO is the second best-performing method, reducing bias by 10% on average. However, Figure 3 shows that DO performs well for all debiasing languages except English, while SenDeb performs consistently well for all languages. The other techniques perform worse overall. Hence, we suggest using SenDeb as the cross-lingual debiasing technique for these languages.
When zooming in on the **projection-based techniques**, i.e. INLP, SenDeb, and DR, a high performance variation is shown in Table 3 and Figure 3. While SenDeb offers consistent performance for all different debiasing languages, we see more variation and a lower bias decrease for INLP. This is due to the high variation in performance, resulting in a higher overall average. As INLP uses multiple linear classifiers to define the projection matrix, high variability is introduced. Since DR was only implemented for gender, no performance gains can be obtained from the other bias types, therefore resulting in a lower overall performance increase.
Techniques using an **additional pretraining step** obtain the best results when debiasing in Dutch, as illustrated in Figure 4. Notably, Dutch is the lowest-resource language of these four during pretraining (Wu and Dredze, 2020). The additional pretraining step lets the model learn unbiased associations between words while becoming familiar with the lower-resource language, resulting in lower overall bias.
\begin{table}
\begin{tabular}{c c c} \hline Evaluation language & Best debiasing language & Worst debiasing language \\ \hline English & Dutch & French \\ French & German & French \\ German & German & French \\ Dutch & French & English \\ \hline \end{tabular}
\end{table}
Table 2: Overview of the best- and worst-performing debiasing languages per evaluation language.
\begin{table}
\begin{tabular}{c c c c c} \hline INLP & SenDeb & DR & CDA & Dropout \\ \hline
1.11 & 13.21 & 7.1 & -0.16 & 10.21 \\ \hline \end{tabular}
\end{table}
Table 3: Average percentage difference in bias scores of each technique compared to the base model.
Figure 2: Average bias scores per evaluation and debiasing language.
Therefore, we conclude that, for our set of languages, these techniques are most effective when applied to low-resource languages.
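As a rough illustration of the counterfactual data augmentation step (the attribute pairs below are invented for the example and are far shorter than the lists used in this work), each sentence containing a gendered word is duplicated with the word swapped before the additional pretraining pass:

```python
SWAPS = {"he": "she", "she": "he", "man": "woman", "woman": "man",
         "his": "her", "her": "his"}

def counterfactual(sentence):
    """Swap every gendered token; return None if the sentence is unchanged."""
    tokens = sentence.lower().split()
    swapped = [SWAPS.get(t, t) for t in tokens]
    return " ".join(swapped) if swapped != tokens else None

def augment(corpus):
    """Original corpus plus the counterfactual copy of every affected sentence."""
    return list(corpus) + [cf for s in corpus if (cf := counterfactual(s))]

print(augment(["he is a doctor", "the sky is blue"]))
# ['he is a doctor', 'the sky is blue', 'she is a doctor']
```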
## 4 Related work
Significant research focuses on the cross-lingual performance of mBERT (Wu and Dredze, 2020; Pires et al., 2019; Libovicky et al., 2019). Limited research focuses on the cross-lingual transferability of debiasing techniques in mBERT (Stanczak and Augenstein, 2021; Sun et al., 2019). Liang et al. (2020) use DensRay in English to debias Chinese in mBERT for gender. Similarly, Zhao et al. (2020) analyze the cross-lingual transfer of gender bias mitigation using one method. Lauscher et al. (2021) also find that their proposed technique, ADELE, can transfer debiasing across six languages. Other studies analyze biases contained in multilingual language models. Kaneko et al. (2022) evaluate bias across multiple languages in masked language models using a new metric. Ahn and Oh (2021) study ethnic bias and its variability over languages, proposing mBERT as a debiasing technique. Finally, some studies also explore the cross-lingual transferability of downstream tasks (Levy et al., 2023).
## 5 Conclusion
Most studies focus on debiasing techniques for large language models, but rarely explore their cross-lingual transferability. Therefore, we offer a benchmark for SOTA debiasing techniques on mBERT across multiple languages (EN, FR, DE, NL) and show that debiasing is transferable
Figure 4: Average bias score per debiasing language for both CDA and DO.
Figure 3: Average bias score per technique per debiasing language.
across languages, yielding promising results. We provide guidance for cross-lingual debiasing, highlighting SenDeb as the best-performing method, reducing bias in mBERT by 13%. Additionally, we find that, for the studied languages, debiasing with the lowest-resource language is effective for techniques involving an additional training step (CDA and DO). This research is a first step toward understanding the cross-lingual transferability of debiasing techniques. Further studies should include languages from different cultures and other multilingual large language models to assess generalizability.
#### Limitations
A first limitation is that the analysis focused on four closely related languages from a similar culture. A broader range of languages should be explored to ensure the generalizability of the findings. Our research was conducted employing a single multilingual model, mBERT. Extending this to other multilingual language models would provide valuable insights into the wider applicability of the results. Moreover, the evaluation of outcomes relied primarily on the CrowS-Pairs metric, although efforts were made to enhance the understanding by examining the absolute difference compared to the optimal model. Next, the consideration of gender was limited to binary classification, overlooking non-binary gender identities. This should be further addressed in future research. Furthermore, a comprehensive multilingual dataset to assess stereotypes across different languages is not available; thus, the English CrowS-Pairs dataset was translated and corresponding sentences of the French dataset were used. Nevertheless, typical stereotypes for other languages were not adequately represented. In addition, the dataset used in the study exhibits certain flaws highlighted by Blodgett et al. (2021), such as the influence of selected names on predictions, which was observed to have a significant impact; this needs to be investigated further. Additionally, attribute lists for languages other than English were not available to the same extent. We tried to compile lists for French, Dutch, and German, excluding words with multiple meanings to minimize noise. However, our lists were not exhaustive, and the omission of relevant attributes is therefore possible. It is also worth noting that, in certain cases, the generic masculine form was considered the preferred answer, despite it being included in the attribute lists. Finally, the applicability to downstream tasks should be investigated (e.g. Levy et al. (2023)). Hence, future research should encompass a wider language scope, include multiple models, address existing dataset flaws, and develop more comprehensive attribute lists for various languages.
#### Ethics Statement
We would like to address three key ethical considerations in this study that highlight ongoing challenges and complexities associated with mitigating bias in large language models. First, it is important to acknowledge that the gender bias examined in this paper is approached from a binary perspective. However, this does not capture the full range of gender identities present in reality. While we recognize this limitation, it was necessary to simplify the analysis for experimental purposes. In future research, we hope to address this limitation. Second, despite efforts to debias the multilingual large language model, it is important to note that not all forms of bias are completely mitigated. The applied debiasing techniques do lower the bias present in the model; however, bias remains both within and outside the targeted bias types. Finally, we recognize that our evaluation datasets do not encompass all the different biases that might be present in the model. Therefore, even if a model were to obtain a perfect score, it is still possible that other forms of bias persist.
## Acknowledgements
This research was funded by the Statistics Flanders research cooperation agreement on Data Science for Official Statistics. The resources and services used in this work were provided by the VSC (Flemish Supercomputer Center).
|
2301.06233 | Dimension approximation in smooth dynamical systems | For a non-conformal repeller $\Lambda$ of a $C^{1+\alpha}$ map $f$ preserving
an ergodic measure $\mu$ of positive entropy, this paper shows that the
Lyapunov dimension of $\mu$ can be approximated gradually by the
Carath\'{e}odory singular dimension of a sequence of horseshoes. For a
$C^{1+\alpha}$ diffeomorphism $f$ preserving a hyperbolic ergodic measure $\mu$
of positive entropy, if $(f, \mu)$ has only two Lyapunov exponents
$\lambda_u(\mu)>0>\lambda_s(\mu)$, then the Hausdorff or lower box or upper box
dimension of $\mu$ can be approximated by the corresponding dimension of the
horseshoes $\{\Lambda_n\}$. The same statement holds true if $f$ is a $C^1$
diffeomorphism with a dominated Oseledet's splitting with respect to $\mu$. | Yongluo Cao, Juan Wang, Yun Zhao | 2023-01-16T02:09:42Z | http://arxiv.org/abs/2301.06233v1 | # Dimension approximation in smooth dynamical systems
###### Abstract.
For a non-conformal repeller \(\Lambda\) of a \(C^{1+\alpha}\) map \(f\) preserving an ergodic measure \(\mu\) of positive entropy, this paper shows that the Lyapunov dimension of \(\mu\) can be approximated gradually by the Caratheodory singular dimension of a sequence of horseshoes. For a \(C^{1+\alpha}\) diffeomorphism \(f\) preserving a hyperbolic ergodic measure \(\mu\) of positive entropy, if \((f,\mu)\) has only two Lyapunov exponents \(\lambda_{u}(\mu)>0>\lambda_{s}(\mu)\), then the Hausdorff or lower box or upper box dimension of \(\mu\) can be approximated by the corresponding dimension of the horseshoes \(\{\Lambda_{n}\}\). The same statement holds true if \(f\) is a \(C^{1}\) diffeomorphism with a dominated Oseledet's splitting with respect to \(\mu\).
Key words and phrases: Dimension, hyperbolic measure, horseshoe, repeller. 2010 Mathematics Subject Classification: 37C45, 37D25, 37D20
## 1. Introduction
In smooth dynamical systems, a fundamental approximation result asserts that a \(C^{1+\alpha}\) diffeomorphism \(f\) which preserves a hyperbolic ergodic measure \(\mu\) of positive entropy can be approximated gradually by compact invariant locally maximal hyperbolic sets (horseshoes) \(\{\Lambda_{n}\}\), in the sense that dynamical quantities on the horseshoes, such as the topological entropy and pressure, the Lyapunov exponents and the averages of continuous functions, approach those of the measure \(\mu\).
Results of this type are widely attributed to the landmark work of Katok [9], or of Katok and Mendoza (see [10]). Misiurewicz and Szlenk [29] earlier proved a related result for continuous and for piecewise monotone maps of the interval. Przytycki and Urbanski [34] obtained corresponding properties for holomorphic maps in the case of a measure with only positive Lyapunov exponent. A related setting of dyadic diophantine approximations is established by Persson and Schmeling in [32]. For a general \(C^{1+\alpha}\) diffeomorphism \(f\) preserving a hyperbolic ergodic measure \(\mu\) with positive entropy, assume that \(\mu\) has \(\ell\) different Lyapunov exponents \(\{\lambda_{j}\}_{j=1}^{\ell}\); on each approaching horseshoe \(\Lambda_{n}\), Avila, Crovisier and Wilkinson [2] obtained a continuous splitting
\[T_{\Lambda_{n}}M=E_{1}\oplus E_{2}\oplus\cdots\oplus E_{\ell}\]
and showed that the exponential growth rate of \(D_{x}f^{n}|_{E_{i}}\) is roughly \(\lambda_{i}\) for each \(i=1,2,\cdots,\ell\). A corresponding statement for \(C^{1+\alpha}\) non-conformal transformations (i.e., non-invertible maps) was shown in [13]. See Chung [14], Gelfert [18, 19] and Yang [44] for other results related to Katok's approximation construction for \(C^{1+\alpha}\) maps.
A natural question is how large that part of the dynamics described by these horseshoes is. So, it is interesting to estimate the Hausdorff dimension of the stable and/or unstable Cantor sets of a horseshoe. If \(\mu\) is a SRB measure (i.e. a measure
with a particular absolute continuity property on unstable manifolds; see [6] for precise definitions), it was shown in [36] that \(\mu\) can be approximated by ergodic measures supported on horseshoes with arbitrarily large unstable dimensions, which generalized Mendoza's result in [26] for diffeomorphisms on higher-dimensional manifolds. The approach in [36] was based on Markov towers that can be described by horseshoes with infinitely many branches and variable return times. However, there is an essential mistake in the proof of the key Proposition 5.1 in [36]. The authors in [43] proved the same result by a different method: they used the u-Gibbs property of the conditional measure of the equilibrium measure and the properties of uniformly hyperbolic dynamical systems. Furthermore, in [43] the authors proved that the Hausdorff dimension of \(\mu\) can be approximated gradually by the Hausdorff dimension of the horseshoes \(\{\Lambda_{n}\}\) provided that the stable direction is one-dimensional. See also [24, 25, 27, 28, 37, 38] that represent works close to this topic.
In this work, our main task is to compare the dimension of the horseshoes \(\{\Lambda_{n}\}\) with that of a given hyperbolic ergodic measure \(\mu\) of a \(C^{r}\) (\(r\geq 1\)) diffeomorphism in a more general setting, where \(\mu\) may fail to be an SRB measure. For a non-conformal repeller \(\Lambda\) of a \(C^{1+\alpha}\) map, utilizing the approximation result in [13], we show that the Lyapunov dimension (see (3.1) for the definition) of an \(f\)-invariant ergodic measure \(\mu\) supported on \(\Lambda\) can be approximated gradually by the Caratheodory singular dimension (see (3.6) for the definition) of the horseshoes \(\{\Lambda_{n}\}\). For a \(C^{1+\alpha}\) diffeomorphism \(f\) preserving a hyperbolic ergodic measure \(\mu\) of positive entropy, if \((f,\mu)\) has only two Lyapunov exponents \(\lambda_{u}(\mu)>0>\lambda_{s}(\mu)\), then the Hausdorff or lower box or upper box dimension of \(\mu\) can be approximated by the corresponding dimension of the horseshoes \(\{\Lambda_{n}\}\). The same statement holds true if \(f\) is a \(C^{1}\) diffeomorphism with a dominated Oseledec's splitting w.r.t. \(\mu\).
We arrange the paper as follows. In Section 2, we give some basic notions and properties about topological and measure theoretic pressures and dimensions of sets and measures. Statements of our main results will be given in Section 3. In Section 4, we will give the detailed proofs of the main results.
## 2. Definitions and preliminaries
In this section, we recall the definitions of topological pressure and various dimensions of subsets and/or of invariant measures.
### Topological and measure theoretic pressures
Let \(f:X\to X\) be a continuous transformation on a compact metric space \(X\) equipped with metric \(d\). A subset \(F\subset X\) is called an \((n,\epsilon)-\)separated set with respect to \(f\), if for any two different points \(x,y\in F\), we have \(d_{n}(x,y):=\max_{0\leq k\leq n-1}d(f^{k}(x),f^{k}(y))>\epsilon.\) A sequence of continuous functions \(\Phi=\{\phi_{n}\}_{n\geq 1}\) is called _sub-additive_, if
\[\phi_{m+n}\leq\phi_{n}+\phi_{m}\circ f^{n},\ \forall n,m\in\mathbb{N}.\]
Furthermore, a sequence of continuous functions \(\Psi=\{\psi_{n}\}_{n\geq 1}\) is called _super-additive_ if \(-\Psi=\{-\psi_{n}\}_{n\geq 1}\) is sub-additive.
#### 2.1.1. Topological pressure defined via separated sets
Given a sub-additive potential \(\Phi=\{\phi_{n}\}_{n\geq 1}\) on \(X\), put
\[P_{n}(f,\Phi,\epsilon)=\sup\Big{\{}\sum_{x\in F}e^{\phi_{n}(x)}|F\ \text{is an}\ (n,\epsilon)-\text{separated subset of}\ X\Big{\}}.\]
**Definition 2.1**.: _We call the following quantity_
\[P_{\rm top}(f,\Phi)=\lim_{\epsilon\to 0}\limsup_{n\to\infty}\frac{1}{n}\log P_{n}(f, \Phi,\epsilon) \tag{2.1}\]
_the sub-additive topological pressure of \((f,\Phi)\)._
**Remark 2.1**.: _If \(\Phi=\{\varphi_{n}\}_{n\geq 1}\) is additive in the sense that \(\varphi_{n}(x)=\varphi(x)+\varphi(fx)+\cdots+\varphi(f^{n-1}x)\triangleq S_{n }\varphi(x)\) for some continuous function \(\varphi:X\to\mathbb{R}\), we simply denote the topological pressure \(P_{\rm top}(f,\Phi)\) as \(P_{\rm top}(f,\varphi)\)._
Let \(\mathcal{M}_{f}(X)\) denote the space of all \(f-\)invariant measures on \(X\). For \(\mu\in\mathcal{M}_{f}(X)\), let \(h_{\mu}(f)\) denote the metric entropy of \(f\) with respect to \(\mu\) (see Walters' book [39] for details of metric entropy), and let
\[\mathcal{L}_{*}(\Phi,\mu)=\lim_{n\to\infty}\frac{1}{n}\int\phi_{n}d\mu.\]
The existence of the above limit follows from a sub-additive argument. In [11], the authors proved the following variational principle.
**Theorem 2.1**.: _Let \(f:X\to X\) be a continuous transformation on a compact metric space \(X\), and \(\Phi=\{\phi_{n}\}_{n\geq 1}\) a sub-additive potential on \(X\), then we have_
\[P_{\rm top}(f,\Phi)=\sup\Big{\{}h_{\mu}(f)+\mathcal{L}_{*}(\Phi,\mu):\mu\in \mathcal{M}_{f}(X),\ \mathcal{L}_{*}(\Phi,\mu)\neq-\infty\Big{\}}.\]
Though it is unknown whether the variational principle holds for super-additive topological pressure, Cao, Pesin and Zhao gave an alternative definition via the variational principle in [13]. Given a sequence of super-additive continuous potentials \(\Psi=\{\psi_{n}\}_{n\geq 1}\) on a compact dynamical system \((X,f)\), the super-additive topological pressure of \(\Psi\) is defined as
\[P_{\rm var}(f,\Psi):=\sup\Big{\{}h_{\mu}(f)+\mathcal{L}_{*}(\Psi,\mu):\mu\in \mathcal{M}_{f}(X)\Big{\}}.\]
where
\[\mathcal{L}_{*}(\Psi,\mu)=\lim_{n\to\infty}\frac{1}{n}\int\psi_{n}d\mu=\sup_{ n\geq 1}\frac{1}{n}\int\psi_{n}d\mu.\]
The second equality is due to the standard sub-additive argument.
#### 2.1.2. Measure theoretic pressure
We first follow the approach in [33] to give the definitions of topological pressures on arbitrary subsets. Given a sub-additive potential \(\Phi=\{\phi_{n}\}_{n\geq 1}\) on \(X\), a subset \(Z\subset X\) and \(\alpha\in\mathbb{R}\), let
\[M(Z,\Phi,\alpha,N,\epsilon)=\inf\Big{\{} \sum_{i}\exp\big{(}-\alpha n_{i}+\sup_{y\in B_{n_{i}}(x_{i}, \epsilon)}\phi_{n_{i}}(y)\big{)}:\] \[\bigcup_{i}B_{n_{i}}(x_{i},\epsilon)\supset Z,\,x_{i}\in X\ \text{and}\ n_{i}\geq N\ \text{for all}\ i\Big{\}}.\]
Since \(M(Z,\Phi,\alpha,N,\epsilon)\) is monotonically increasing with \(N\), let
\[m(Z,\Phi,\alpha,\epsilon):=\lim_{N\to\infty}M(Z,\Phi,\alpha,N,\epsilon). \tag{2.2}\]
We denote the jump-up point of \(m(Z,\Phi,\alpha,\epsilon)\) by
\[P_{Z}(f,\Phi,\epsilon)=\inf\{\alpha:m(Z,\Phi,\alpha,\epsilon)=0\}=\sup\{ \alpha:m(Z,\Phi,\alpha,\epsilon)=+\infty\}.\]
**Definition 2.2**.: _We call the quantity_
\[P_{Z}(f,\Phi)=\liminf_{\epsilon\to 0}P_{Z}(f,\Phi,\epsilon)\]
_the topological pressure of \((f,\Phi)\) on the set \(Z\) (see [17] for the weighted version of this quantity)._
Similarly, for \(\alpha\in\mathbb{R}\) and \(Z\subset X\), define
\[R(Z,\Phi,\alpha,N,\epsilon)=\inf\Big{\{}\sum_{i}\exp\bigl{(}-\alpha N+\sup_{y \in B_{N}(x_{i},\epsilon)}\phi_{N}(y)\bigr{)}:\bigcup_{i}B_{N}(x_{i},\epsilon) \supset Z,\,x_{i}\in X\Big{\}}.\]
We set
\[\underline{r}(Z,\Phi,\alpha,\epsilon)=\liminf_{N\to\infty}R(Z,\Phi,\alpha,N, \epsilon),\]
\[\overline{r}(Z,\Phi,\alpha,\epsilon)=\limsup_{N\to\infty}R(Z,\Phi,\alpha,N, \epsilon)\]
and define the jump-up points of \(\underline{r}(Z,\Phi,\alpha,\epsilon)\) and \(\overline{r}(Z,\Phi,\alpha,\epsilon)\) as
\[\underline{CP}_{Z}(f,\Phi,\epsilon)=\inf\{\alpha:\underline{r}(Z, \Phi,\alpha,\epsilon)=0\}=\sup\{\alpha:\underline{r}(Z,\Phi,\alpha,\epsilon)= +\infty\},\] \[\overline{CP}_{Z}(f,\Phi,\epsilon)=\inf\{\alpha:\overline{r}(Z, \Phi,\alpha,\epsilon)=0\}=\sup\{\alpha:\overline{r}(Z,\Phi,\alpha,\epsilon)= +\infty\}\]
respectively.
**Definition 2.3**.: _We call the quantities_
\[\underline{CP}_{Z}(f,\Phi)=\liminf_{\epsilon\to 0}\underline{CP}_{Z}(f,\Phi, \epsilon)\text{ and }\overline{CP}_{Z}(f,\Phi)=\liminf_{\epsilon\to 0} \overline{CP}_{Z}(f,\Phi,\epsilon)\]
_the lower and upper topological pressures of \((f,\Phi)\) on the set \(Z\) respectively._
Given an \(f\)-invariant measure \(\mu\), let
\[P_{\mu}(f,\Phi,\epsilon)=\inf\{P_{Z}(f,\Phi,\epsilon)\colon\mu(Z)=1\}\]
and then we call the following quantity
\[P_{\mu}(f,\Phi):=\liminf_{\epsilon\to 0}P_{\mu}(f,\Phi,\epsilon)\]
the _measure theoretic pressure_ of \((f,\Phi)\) with respect to \(\mu\). Let further
\[\underline{CP}_{\mu}(f,\Phi,\epsilon)=\lim_{\delta\to 0}\inf\{ \underline{CP}_{Z}(f,\Phi,\epsilon)\colon\mu(Z)\geq 1-\delta\},\] \[\overline{CP}_{\mu}(f,\Phi,\epsilon)=\lim_{\delta\to 0}\inf\{ \overline{CP}_{Z}(f,\Phi,\epsilon)\colon\mu(Z)\geq 1-\delta\}.\]
We call the following quantities
\[\underline{CP}_{\mu}(f,\Phi)=\liminf_{\epsilon\to 0}\underline{CP}_{\mu}(f,\Phi, \epsilon),\quad\overline{CP}_{\mu}(f,\Phi)=\liminf_{\epsilon\to 0}\overline{CP}_{ \mu}(f,\Phi,\epsilon)\]
the _lower and upper measure theoretic pressures_ of \((f,\Phi)\) with respect to \(\mu\) respectively. It is proved in [12, Theorem A] that
\[P_{\mu}(f,\Phi)=\underline{CP}_{\mu}(f,\Phi)=\overline{CP}_{\mu}(f,\Phi)=h_{ \mu}(f)+\mathcal{L}_{*}(\Phi,\mu) \tag{2.3}\]
for any \(f\)-invariant ergodic measure \(\mu\) with \(\mathcal{L}_{*}(\Phi,\mu)\neq-\infty\).
**Remark 2.2**.: _In fact, one can show that_
\[\mathcal{P}_{\mu}(f,\Phi)=\inf\{\mathcal{P}_{Z}(f,\Phi):\mu(Z)=1\}\]
_where \(\mathcal{P}\) denotes either \(P\) or \(\underline{CP}\) or \(\overline{CP}\); see [46] for a proof._
### Dimensions of sets and measures
Now we recall the definitions of Hausdorff and box dimensions of subsets and measures. Given a subset \(Z\subset X\) and any \(s\geq 0\), let
\[\mathcal{H}^{s}_{\delta}(Z)=\inf\Big{\{}\sum_{i=1}^{\infty}(\mathrm{diam}U_{i})^ {s}:\left\{U_{i}\right\}_{i\geq 1}\text{ is a cover of }Z\text{ with }\mathrm{diam}U_{i}\leq\delta,\forall i\geq 1 \Big{\}}\]
and
\[\mathcal{H}^{s}(Z)=\lim_{\delta\to 0}\mathcal{H}^{s}_{\delta}(Z).\]
The above limit exists, though it may be infinite. We call \(\mathcal{H}^{s}(Z)\) the \(s\)-dimensional Hausdorff measure of \(Z\).
**Definition 2.4**.: _The following jump-up value of \(\mathcal{H}^{s}(Z)\)_
\[\dim_{H}Z=\inf\{s:\mathcal{H}^{s}(Z)=0\}=\sup\{s:\mathcal{H}^{s}(Z)=\infty\}\]
_is called the Hausdorff dimension of \(Z\). The lower and upper box dimension of \(Z\) are defined respectively by_
\[\underline{\dim}_{B}Z=\liminf_{\delta\to 0}\frac{\log N(Z,\delta)}{-\log \delta}\text{ and }\overline{\dim}_{B}Z=\limsup_{\delta\to 0}\frac{\log N(Z, \delta)}{-\log\delta},\]
_where \(N(Z,\delta)\) denotes the least number of balls of radius \(\delta\) that are needed to cover the set \(Z\). If \(\underline{\dim}_{B}Z=\overline{\dim}_{B}Z\), we will denote the common value by \(\dim_{B}Z\) and call it the box dimension of \(Z\)._
The following two results are well-known in the field of fractal geometry, e.g., see Falconer's book [15] for proofs.
**Lemma 2.1**.: _Let \(X\) and \(Y\) be metric spaces, and let \(\Phi:X\to Y\) be a surjective \((C,r)\)-Hölder continuous map for some \(C>0\) and \(r\in(0,1)\). Then_
\[\dim_{H}Y\leq r^{-1}\dim_{H}X,\quad\underline{\dim}_{B}Y\leq r^{-1}\underline{ \dim}_{B}X\quad\text{and}\quad\overline{\dim}_{B}Y\leq r^{-1}\overline{\dim} _{B}X.\]
**Corollary 2.1**.: _Let \(X\) and \(Y\) be metric spaces, and let \(\Phi:X\to Y\) be a surjective Lipschitz continuous map. Then_
\[\dim_{H}Y\leq\dim_{H}X,\quad\underline{\dim}_{B}Y\leq\underline{\dim}_{B}X \quad\text{and}\quad\overline{\dim}_{B}Y\leq\overline{\dim}_{B}X.\]
Given a Borel probability measure \(\mu\) on \(X\), the following quantity
\[\dim_{H}\mu =\inf\{\dim_{H}Z:Z\subset X\text{ and }\mu(Z)=1\}\] \[=\lim_{\delta\to 0}\inf\{\dim_{H}Z:Z\subset X\text{ and }\mu(Z)\geq 1-\delta\}\]
is called the _Hausdorff dimension of the measure \(\mu\)_. Similarly, we call the following two quantities
\[\underline{\dim}_{B}\mu=\lim_{\delta\to 0}\inf\{\underline{\dim}_{B}Z:Z\subset X \text{ and }\mu(Z)\geq 1-\delta\}\]
and
\[\overline{\dim}_{B}\mu=\lim_{\delta\to 0}\inf\{\overline{\dim}_{B}Z:Z\subset X \text{ and }\mu(Z)\geq 1-\delta\}\]
the _lower box dimension_ and _upper box dimension_ of \(\mu\), respectively.
If \(\mu\) is a finite measure on \(X\) and there exists \(d\geq 0\) such that
\[\lim_{r\to 0}\frac{\log\mu(B(x,r))}{\log r}=d\]
for \(\mu\)-almost every \(x\in X\), then
\[\dim_{H}\mu=\underline{\dim}_{B}\mu=\overline{\dim}_{B}\mu=d.\]
This criterion was established by Young in [45].
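The criterion can be tested empirically; the sketch below (an illustration under our own assumptions, not taken from [45]) samples the standard \((1/2,1/2)\) Cantor measure and estimates \(\log\mu(B(x,r))/\log r\) from the empirical mass of small balls, which should approach \(d=\log 2/\log 3\).

```python
import numpy as np

rng = np.random.default_rng(1)
# Samples of the Cantor measure: random ternary expansions with digits in {0, 2}.
digits = 2 * rng.integers(0, 2, size=(200_000, 25))
samples = (digits * 3.0 ** -np.arange(1, 26)).sum(axis=1)

x = samples[0]                                # a mu-typical point
for k in range(2, 9):
    r = 3.0 ** (-k)
    mass = np.mean(np.abs(samples - x) < r)   # empirical mu(B(x, r))
    print(f"r=3^-{k}: log mu(B)/log r = {np.log(mass) / np.log(r):.4f}")
```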
## 3. Statements of main results
In this section, we will give the statements of the main results in this paper, and the proof will be postponed to the next section.
### Dimension approximation for uniformly expanding systems
Let \(f:M\to M\) be a smooth map of an \(m_{0}\)-dimensional compact smooth Riemannian manifold \(M\), and \(\Lambda\) a compact \(f\)-invariant subset of \(M\). Let \(\mathcal{M}_{f}(\Lambda)\) and \(\mathcal{E}_{f}(\Lambda)\) denote respectively the set of all \(f\)-invariant measures and ergodic measures on \(\Lambda\).
#### 3.1.1. Definitions of repeller and Lyapunov dimension
We call \(\Lambda\) a _repeller_ for \(f\) or \(f\) is _expanding_ on \(\Lambda\) if
1. there exists an open neighborhood \(U\) of \(\Lambda\) such that \(\Lambda=\{x\in U:f^{n}(x)\in U\text{ for all }n\geq 0\}\);
2. there is \(\kappa>1\) such that \[\|D_{x}f(v)\|\geq\kappa\|v\|,\text{ for all }x\in\Lambda,\text{ and }v\in T_{x}M,\] where \(\|\cdot\|\) is the norm induced by the Riemannian metric on \(M\), and \(D_{x}f:T_{x}M\to T_{f(x)}M\) is the differential operator.
Given an \(f\)-invariant ergodic measure \(\mu\) supported on the repeller \(\Lambda\), let \(\lambda_{1}(\mu)\geq\lambda_{2}(\mu)\geq\cdots\geq\lambda_{m_{0}}(\mu)\) and \(h_{\mu}(f)\) denote the Lyapunov exponents and the measure theoretic entropy of \((f,\mu)\) respectively; we refer the reader to [6] and [39] for a detailed description of Lyapunov exponents and the measure theoretic entropy. We further define the _Lyapunov dimension_ of \(\mu\) as follows:
\[\dim_{\text{L}}\mu:=\left\{\begin{array}{ll}\ell+\frac{h_{\mu}(f)-\lambda_{ m_{0}}(\mu)-\cdots-\lambda_{m_{0}-\ell+1}(\mu)}{\lambda_{m_{0}-\ell}(\mu)},&h_{ \mu}(f)\geq\lambda_{m_{0}}(\mu)\\ \frac{h_{\mu}(f)}{\lambda_{m_{0}}(\mu)},&0\leq h_{\mu}(f)<\lambda_{m_{0}}(\mu) \end{array}\right. \tag{3.1}\]
where \(\ell=\max\{i:\lambda_{m_{0}}(\mu)+\cdots+\lambda_{m_{0}-i+1}(\mu)\leq h_{\mu}( f)\}\).
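For concreteness, here is a direct transcription of (3.1) in Python (a sketch we add for illustration; the function name and the guard for the degenerate case \(\ell=m_{0}\) are our own choices):

```python
import math

def lyapunov_dimension(h, exponents):
    """dim_L(mu) from (3.1): h = metric entropy, exponents = positive
    Lyapunov exponents of the expanding map, in any order."""
    lam = sorted(exponents, reverse=True)   # lambda_1 >= ... >= lambda_{m0} > 0
    m0 = len(lam)
    if h < lam[-1]:                          # case 0 <= h < lambda_{m0}
        return h / lam[-1]
    s, l = 0.0, 0                            # l = max{i: sum of i smallest <= h}
    while l < m0 and s + lam[m0 - l - 1] <= h:
        s += lam[m0 - l - 1]
        l += 1
    if l == m0:                              # h equals the sum of all exponents
        return float(m0)
    return l + (h - s) / lam[m0 - l - 1]

# Example: h = 0.9 with exponents log 3 and log 2: the smallest exponent alone
# is below h while both together exceed it, so l = 1 and
# dim_L = 1 + (0.9 - log 2)/log 3 ~= 1.188.
print(lyapunov_dimension(0.9, [math.log(3), math.log(2)]))
```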
The original definition of the Lyapunov dimension in [1, 21, 22] was given only for hyperbolic systems, as follows: assume that \(\nu\) is an ergodic measure of a smooth diffeomorphism \(f\) with Lyapunov exponents \(\lambda_{1}\geq\cdots\geq\lambda_{u}>0\geq\lambda_{u+1}\geq\cdots\geq\lambda_{m_{0}}\), then the Lyapunov dimension is
\[\operatorname{Lyadim}\,\nu=\ell+\frac{\lambda_{1}+\cdots+\lambda_{u}+\cdots+\lambda_{\ell}}{|\lambda_{\ell+1}|}\]
where \(\ell=\max\{i:\lambda_{1}+\cdots+\lambda_{i}\geq 0\}\). Assume further that \(\nu\) is a SRB measure, then \(h_{\nu}(f)=\lambda_{1}+\cdots+\lambda_{u}\). In consequence,
\[\operatorname{Lyadim}\,\nu=\ell+\frac{h_{\nu}(f)+\lambda_{u+1}+\cdots+\lambda_{\ell}}{|\lambda_{\ell+1}|}\]
and \(\ell=\max\{i:-\lambda_{u+1}-\cdots-\lambda_{i}\leq h_{\nu}(f)\}\). Hence, the definition in (3.1) is a reasonable substitute. For a \(C^{1}\) expanding map \(f\), Feng and Simon [16] defined the Lyapunov dimension of an ergodic measure as the unique root \(t\) of the equation \(P_{\mu}(f,\Phi_{f}(t))=0\) (see (3.4)). In this paper, we will prove that the unique solution of the equation \(P_{\mu}(f,\Phi_{f}(t))=0\) is indeed the Lyapunov dimension defined above (see Theorem A). Furthermore, this paper shows that the Lyapunov dimension of
an ergodic measure defined in (3.1) equals its Caratheodory singular dimension (see Proposition 3.1), so the Caratheodory singular dimension (see Section 3.1.3 for the detailed definition) can be regarded as a geometric interpretation of the Lyapunov dimension.
#### 3.1.2. Singular valued potentials
Let \(\Lambda\) be a repeller of a smooth map \(f:M\to M\). Given \(x\in\Lambda\) and \(n\geq 1\), consider the differential operator \(D_{x}f^{n}:T_{x}M\to T_{f^{n}(x)}M\) and denote the singular values of \(D_{x}f^{n}\) (square roots of the eigenvalues of \((D_{x}f^{n})^{*}D_{x}f^{n}\)) in decreasing order by
\[\alpha_{1}(x,f^{n})\geq\alpha_{2}(x,f^{n})\geq\cdots\geq\alpha_{m_{0}}(x,f^{n}). \tag{3.2}\]
For \(t\in[0,m_{0}]\), set
\[\varphi^{t}(x,f^{n}):=\sum_{i=m_{0}-[t]+1}^{m_{0}}\log\alpha_{i}(x,f^{n})+(t-[ t])\log\alpha_{m_{0}-[t]}(x,f^{n}). \tag{3.3}\]
Since \(f\) is smooth, the functions \(x\mapsto\alpha_{i}(x,f^{n})\), \(x\mapsto\varphi^{t}(x,f^{n})\) are continuous for any \(n\geq 1\). It is easy to see that for all \(n,\ell\in\mathbb{N}\)
\[\varphi^{t}(x,f^{n+\ell})\geq\varphi^{t}(x,f^{n})+\varphi^{t}(f^{n}(x),f^{\ell }).\]
It follows that the sequence of functions
\[\Phi_{f}(t):=\{-\varphi^{t}(\cdot,f^{n})\}_{n\geq 1} \tag{3.4}\]
is sub-additive, which is called the _sub-additive singular valued potentials_.
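Numerically, the singular values in (3.2) are obtained from the Jacobian cocycle along an orbit, and \(\varphi^{t}\) interpolates between sums of the logarithms of the smallest singular values. The following sketch (our illustration, with a linear toy map rather than anything from the paper) makes this explicit:

```python
import numpy as np

def singular_values(jacobian, x, n, step):
    """Singular values alpha_1 >= ... >= alpha_{m0} of D_x f^n, from the
    chain rule D_x f^{k+1} = Df(f^k x) D_x f^k."""
    A = np.eye(len(x))
    for _ in range(n):
        A = jacobian(x) @ A
        x = step(x)
    return np.linalg.svd(A, compute_uv=False)   # returned in decreasing order

def phi_t(svals, t):
    """phi^t(x, f^n) as in (3.3); [t] is the integer part of t."""
    m0, k = len(svals), int(t)
    logs = np.log(svals)
    val = logs[m0 - k:].sum()                    # the [t] smallest singular values
    if k < m0:
        val += (t - k) * logs[m0 - k - 1]
    return val

# Toy expanding torus endomorphism with constant Jacobian diag(3, 2):
J = lambda x: np.diag([3.0, 2.0])
f = lambda x: (np.array([3.0, 2.0]) * x) % 1.0
sv = singular_values(J, np.array([0.1, 0.2]), 5, f)
print(sv, phi_t(sv, 1.5))   # sv = (3^5, 2^5); phi = 5*log 2 + 0.5*5*log 3
```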
#### 3.1.3. Caratheodory singular dimension
We recall the definition of Caratheodory singular dimension of a repeller which is introduced in [13].
Let \(\Phi_{f}(t)=\{-\varphi^{t}(\cdot,f^{n})\}_{n\geq 1}\). Given a subset \(Z\subseteq\Lambda\), for each small number \(r>0\), let
\[m(Z,t,r):=\lim_{N\to\infty}\inf\Big{\{}\sum_{i}\exp\bigl{(}\sup_{y\in B_{n_{i} }(x_{i},r)}-\varphi^{t}(y,f^{n_{i}})\bigr{)}\Big{\}},\]
where the infimum is taken over all collections \(\{B_{n_{i}}(x_{i},r)\}\) of Bowen's balls with \(x_{i}\in\Lambda\), \(n_{i}\geq N\) that cover \(Z\). It is easy to see that there is a jump-up value
\[\dim_{C,r}Z:=\inf\{t:m(Z,t,r)=0\}=\sup\{t:m(Z,t,r)=+\infty\}. \tag{3.5}\]
The following quantity
\[\dim_{C}Z:=\liminf_{r\to 0}\dim_{C,r}Z \tag{3.6}\]
is called the _Caratheodory singular dimension of \(Z\)_. Particularly, the Caratheodory singular dimension of the repeller \(\Lambda\) is independent of the parameter \(r\) for small values of \(r>0\) (see [13, Theorem 4.1]).
For each \(f\)-invariant measure \(\mu\) supported on \(\Lambda\), let
\[\dim_{C,r}\mu:=\inf\{\dim_{C,r}Z:\mu(Z)=1\},\]
and the following quantity
\[\dim_{C}\mu:=\liminf_{r\to 0}\dim_{C,r}\mu\]
is called the _Caratheodory singular dimension_ of the measure \(\mu\).
#### 3.1.4. Approximation of Caratheodory singular dimension of repellers
Given a repeller \(\Lambda\) of a \(C^{1+\alpha}\) map \(f\), the following result shows that the zero of the measure theoretic pressure function is exactly the Lyapunov dimension of an ergodic measure \(\mu\in\mathcal{E}_{f}(\Lambda)\), and that the Lyapunov dimension of an ergodic measure of positive entropy can be approximated by the Caratheodory singular dimension of a sequence of invariant sets. Recall that \(\Phi_{f}(t):=\{-\varphi^{t}(\cdot,f^{n})\}_{n\geq 1}\) are the sub-additive singular valued potentials with respect to \(f\) (see the definition in (3.4)).
**Theorem A**.: _Let \(f:M\to M\) be a \(C^{1+\alpha}\) map of an \(m_{0}\)-dimensional compact smooth Riemannian manifold \(M\), and \(\Lambda\) a repeller of \(f\). Then the following statements hold:_
1. _for every_ \(f\)_-invariant ergodic measure_ \(\mu\) _supported on_ \(\Lambda\)_, we have that_ \[\dim_{\mathrm{L}}\mu=s_{\mu}\] _where_ \(s_{\mu}\) _is the unique root of the equation_ \(P_{\mu}(f,\Phi_{f}(t))=0\)_;_
2. _let_ \(\mu\) _be an_ \(f\)_-invariant ergodic measure on_ \(\Lambda\) _with_ \(h_{\mu}(f)>0\)_; for any_ \(\varepsilon>0\) _there exists an_ \(f\)_-invariant compact subset_ \(\Lambda_{\varepsilon}\subset\Lambda\) _such that_ \(\dim_{C}\Lambda_{\varepsilon}\to\dim_{\mathrm{L}}\mu\) _as_ \(\varepsilon\) _tends to zero._
Some comments on the previous theorem are in order. First, we would like to point out that it is enough to require \(f\) to be a \(C^{1}\) map in the first statement; however, the higher smoothness \(C^{1+\alpha}\) is crucial in the last statement, as it allows us to utilize some powerful results of Pesin theory. Second, if \(f\) is a local diffeomorphism preserving an ergodic expanding measure \(\mu\) of positive entropy, i.e., \((f,\mu)\) has only positive Lyapunov exponents, then one can also obtain an approximation result as in [13], so that the second statement of the previous theorem holds in this setting as well. In [37], for a \(C^{2}\) interval map \(f\) with finitely many non-degenerate critical points, the author proved that the Hausdorff dimension of an expanding measure \(\mu\) can be approximated gradually by the Hausdorff dimension of a sequence of repellers.
For each \(f\)-invariant ergodic measure \(\mu\) supported on \(\Lambda\), the following result shows that the Caratheodory singular dimension of \(\mu\) is exactly its Lyapunov dimension.
**Proposition 3.1**.: _Let \(f:M\to M\) be a \(C^{1}\) map of an \(m_{0}\)-dimensional compact smooth Riemannian manifold \(M\), and \(\Lambda\) a repeller for \(f\). Then the following statements hold:_
1. _for each subset_ \(Z\subset\Lambda\)_, we have that_ \[\dim_{C}Z=t_{Z}\] _where_ \(t_{Z}\) _is the unique root of the equation_ \(P_{Z}(f,\Phi_{f}(t))=0\)_;_
2. _for each_ \(f\)_-invariant ergodic measure_ \(\mu\) _supported on_ \(\Lambda\)_, we have that_ \[\dim_{C}\mu=\dim_{\mathrm{L}}\mu.\]
### Dimension approximation in non-uniformly hyperbolic systems
In this section, we first recall an approximation result in non-uniformly hyperbolic systems that are proved by Avila _et al_[2], then we give the statement of our dimension approximation result in non-uniformly hyperbolic systems.
#### 3.2.1. Lyapunov exponents and holonomy maps
Let \(f:M\to M\) be a diffeomorphism of an \(m_{0}\)-dimensional compact smooth Riemannian manifold \(M\). By Oseledec's multiplicative ergodic theorem (see [30]), there exists a set \(\mathcal{O}\subset M\) of total measure such that, for each \(x\in\mathcal{O}\) and each invariant measure \(\mu\), there exist positive integers \(d_{1}(x),d_{2}(x),\cdots,d_{p(x)}(x)\), numbers \(\lambda_{1}(x)>\lambda_{2}(x)>\cdots>\lambda_{p(x)}(x)\) and a splitting
\[T_{x}M=E_{1}(x)\oplus E_{2}(x)\oplus\cdots\oplus E_{p(x)}(x)\]
such that
1. \(D_{x}fE_{i}(x)=E_{i}(f(x))\) for each \(i\) and \(\sum_{i=1}^{p(x)}d_{i}(x)=m_{0}\);
2. for each \(0\neq v\in E_{i}(x)\) we have that \[\lambda_{i}(x)=\lim_{n\to\infty}\frac{1}{n}\log\|D_{x}f^{n}(v)\|.\]
Here we call the numbers \(\{\lambda_{i}(x)\}_{i=1}^{p(x)}\) the Lyapunov exponents of \((f,\mu)\). In the case that \(\mu\) is an \(f\)-invariant ergodic measure, the numbers \(p(x)\), \(\{d_{i}(x)\}\) and \(\{\lambda_{i}(x)\}\) are constant almost everywhere, and we denote them simply by \(p\), \(\{d_{i}\}_{i=1}^{p}\) and \(\{\lambda_{i}\}_{i=1}^{p}\).
A compact invariant subset \(\Lambda\subset M\) is called a _hyperbolic set_, if there exists a continuous splitting of the tangent bundle \(T_{\Lambda}M=E^{s}\oplus E^{u}\), and constants \(C>0,\ 0<\lambda<1\) such that for every \(x\in\Lambda\)
1. \(D_{x}f(E^{s}(x))=E^{s}(f(x)),\ D_{x}f(E^{u}(x))=E^{u}(f(x))\);
2. for all \(n\geq 0,\ \|D_{x}f^{n}(v)\|\leq C\lambda^{n}\|v\|\) if \(v\in E^{s}(x)\), and \(\|D_{x}f^{-n}(v)\|\leq C\lambda^{n}\|v\|\) if \(v\in E^{u}(x)\).
Given a point \(x\in\Lambda\), for each small \(\beta>0\), the _local stable and unstable manifolds_ are defined as follows:
\[W^{s}_{\beta}(f,x)=\Big{\{}y\in M:d(f^{n}(x),f^{n}(y))\leq\beta,\ \forall n\geq 0 \Big{\}},\]
\[W^{u}_{\beta}(f,x)=\Big{\{}y\in M:d(f^{-n}(x),f^{-n}(y))\leq\beta,\ \forall n\geq 0 \Big{\}}.\]
The global stable and unstable sets of \(x\in\Lambda\) are given as follows:
\[W^{s}(f,x)=\bigcup_{n\geq 0}f^{-n}(W^{s}_{\beta}(f,f^{n}(x))),\,W^{u}(f,x)= \bigcup_{n\geq 0}f^{n}(W^{u}_{\beta}(f,f^{-n}(x))).\]
A hyperbolic set is called _locally maximal_, if there exists a neighbourhood \(U\) of \(\Lambda\) such that \(\Lambda=\bigcap_{n\in\mathbb{Z}}f^{n}(U)\). Recall that a _horseshoe_ for a diffeomorphism \(f\) is a transitive, locally maximal hyperbolic set that is totally disconnected and not finite.
Let \(W^{u}\) and \(W^{s}\) be the unstable and stable foliations of the hyperbolic dynamical system \((f,\Lambda)\). For \(x,y\in\Lambda\) with \(x\) close to \(y\), let \(W^{s}_{\beta}(f,x)\) and \(W^{s}_{\beta}(f,y)\) be the local stable manifolds of \(x\) and \(y\). Define the map \(h:W^{s}_{\beta}(f,x)\to W^{s}_{\beta}(f,y)\) sending \(z\) to \(h(z)\) by sliding along the leaves of \(W^{u}\). The map \(h\) is called the holonomy map of \(W^{u}\). The map \(h\) is Lipschitz continuous if
\[d_{y}(h(z_{1}),h(z_{2}))\leq Ld_{x}(z_{1},z_{2}),\]
where \(z_{1},z_{2}\in W^{s}_{\beta}(f,x)\) and \(d_{x},d_{y}\) are the natural path metrics on \(W^{s}_{\beta}(f,x)\) and \(W^{s}_{\beta}(f,y)\) with respect to a fixed Riemannian structure on \(M\). The constant \(L\) is the Lipschitz constant, and it is independent of the choice of \(W^{s}\). The map \(h\) is \(\alpha\)-Hölder continuous if
\[d_{y}(h(z_{1}),h(z_{2}))\leq Hd_{x}(z_{1},z_{2})^{\alpha},\]
where \(H\) is the Hölder constant. Similarly, we can define the holonomy map of \(W^{s}\).
#### 3.2.2. Approximation of Lyapunov exponents and entropy
For a \(C^{1+\alpha}\) diffeomorphism \(f:M\to M\), Katok [9] showed that an \(f\)-invariant ergodic hyperbolic measure (a measure with no zero Lyapunov exponents) with positive metric entropy can be approximated by horseshoes. However, Katok's result does not explicitly mention a control of the Oseledets splitting over the horseshoes. Recently, Avila _et al._ [2] showed that there is a dominated splitting over the horseshoes, with approximately the same Lyapunov exponents on each sub-bundle of the splitting.
Recall that a \(Df\)-invariant splitting on a compact \(f\)-invariant subset \(\Lambda\)
\[T_{\Lambda}M=E_{1}\oplus E_{2}\oplus\cdots\oplus E_{\ell},\ (\ell\geq 2)\]
is a _dominated splitting_, if there exists \(N\geq 1\) such that for every \(x\in\Lambda\), any unit vectors \(v,w\in T_{x}M\):
\[v\in E_{i}(x),w\in E_{j}(x)\ \text{with}\ i<j\Longrightarrow\|D_{x}f^{N}(v) \|\geq 2\|D_{x}f^{N}(w)\|.\]
We write \(E_{1}\succeq E_{2}\succeq\cdots\succeq E_{\ell}\). Furthermore, if there are numbers \(\lambda_{1}>\lambda_{2}>\cdots>\lambda_{\ell}\), constants \(C>0\) and \(0<\varepsilon<\min\limits_{1\leq i<\ell}\dfrac{\lambda_{i}-\lambda_{i+1}}{100}\) such that for every \(x\in\Lambda\), \(n\in\mathbb{N}\), \(1\leq j\leq\ell\) and each unit vector \(u\in E_{j}(x)\), it holds that
\[C^{-1}e^{n(\lambda_{j}-\varepsilon)}\leq\|D_{x}f^{n}(u)\|\leq Ce^{n(\lambda_{ j}+\varepsilon)},\]
then we say that
\[T_{\Lambda}M=E_{1}\oplus E_{2}\oplus\cdots\oplus E_{\ell},\ (\ell\geq 2)\]
is a \(\{\lambda_{j}\}_{1\leq j\leq\ell}-\)_dominated splitting_.
For the reader's convenience, we recall Avila, Crovisier and Wilkinson's approximation results in the following:
**Theorem 3.1**.: _Let \(f:M\to M\) be a \(C^{1+\alpha}\) diffeomorphism, and \(\mu\) an \(f\)-invariant ergodic hyperbolic measure with \(h_{\mu}(f)>0\). For each \(\varepsilon>0\) and each weak-\(*\) neighborhood \(\mathcal{V}\) of \(\mu\) in the space of \(f\)-invariant probability measures on \(M\), there exist a compact set \(\Lambda_{\varepsilon}^{*}\subset M\) and a positive integer \(N\) such that the following properties hold:_
1. \(\Lambda_{\varepsilon}^{*}\) _is a locally maximal hyperbolic set and topologically mixing with respect to_ \(f^{N}\)_;_
2. \(h_{\mu}(f)-\varepsilon<h_{\text{top}}(f,\Lambda_{\varepsilon})<h_{\mu}(f)+\varepsilon\) _where_ \(\Lambda_{\varepsilon}=\Lambda_{\varepsilon}^{*}\cup f(\Lambda_{\varepsilon}^{*})\cup\cdots\cup f^{N-1}(\Lambda_{\varepsilon}^{*})\)_;_
3. \(\Lambda_{\varepsilon}\) _is_ \(\varepsilon\)_-close to the support of_ \(\mu\) _in the Hausdorff distance;_
4. _each invariant probability measure supported on the horseshoe_ \(\Lambda_{\varepsilon}\) _lies in_ \(\mathcal{V}\)_;_
5. _if_ \(\lambda_{1}>\lambda_{2}>\cdots>\lambda_{\ell}\) _are the distinct Lyapunov exponents of_ \((f,\mu)\)_, with multiplicities_ \(d_{1},d_{2},\cdots,d_{\ell}\)_, then there exists a_ \(\{\lambda_{j}\}_{1\leq j\leq\ell}-\)_dominated splitting_ \(T_{\Lambda_{\varepsilon}}M=E_{1}\oplus E_{2}\oplus\cdots\oplus E_{\ell}\) _with_ \(\dim E_{i}=d_{i}\) _for each_ \(i\)_, and for each_ \(x\in\Lambda_{\varepsilon}\)_,_ \(k\geq 1\) _and each vector_ \(v\in E_{i}(x)\)__ \[e^{(\lambda_{i}-\varepsilon)kN}\leq\|D_{x}f^{kN}(v)\|\leq e^{(\lambda_{i}+ \varepsilon)kN},\ \forall i=1,2,\cdots,\ell.\]
**Remark 3.1**.: _In the second statement, the original result does not give the inequality on the right-hand side. However, a slight modification of the argument gives the upper bound for the topological entropy of \(f\) on the horseshoe._
#### 3.2.3. Statements of results
Let \(f:M\to M\) be a \(C^{1+\alpha}\) diffeomorphism of a compact Riemannian manifold \(M\), and let \(\mu\) be a hyperbolic ergodic \(f\)-invariant probability measure with positive entropy. Suppose that \((f,\mu)\) has only two Lyapunov exponents \(\lambda_{u}(\mu)>0>\lambda_{s}(\mu)\). Ledrappier, Young [23] and Barreira, Pesin, Schmeling [7] proved that
\[\mathrm{Dim}\mu=\frac{h_{\mu}(f)}{\lambda_{u}(\mu)}-\frac{h_{\mu}(f)}{\lambda_ {s}(\mu)} \tag{3.7}\]
where \(\mathrm{Dim}\) denotes either \(\dim_{H}\) or \(\underline{\dim}_{B}\) or \(\overline{\dim}_{B}\). Our strategy for proving the dimension approximation in this setting is as follows. It follows from Theorem 3.1 that \(\mu\) can be approximated by a sequence of horseshoes \(\{\Lambda_{\varepsilon}\}_{\varepsilon>0}\). Using well-established properties of dimension theory in uniformly hyperbolic systems, one can show that
\[\mathrm{Dim}(\Lambda_{\varepsilon}\cap W^{i}_{\beta}(f,x))\approx\frac{h_{\mu }(f)}{|\lambda_{i}(\mu)|}\]
for \(i=u,s\) and every \(x\in\Lambda_{\varepsilon}\). Burns and Wilkinson [8] proved that the holonomy maps of the stable and unstable foliations for \((f,\Lambda_{\varepsilon})\) are Lipschitz continuous. Consequently, one can show that
\[\dim_{H}(\Lambda_{\varepsilon}\cap W^{u}_{\beta}(f,x))+\dim_{H}( \Lambda_{\varepsilon}\cap W^{s}_{\beta}(f,x))\] \[\leq \mathrm{Dim}\Lambda_{\varepsilon}\] \[\leq \overline{\dim}_{B}(\Lambda_{\varepsilon}\cap W^{u}_{\beta}(f,x) )+\overline{\dim}_{B}(\Lambda_{\varepsilon}\cap W^{s}_{\beta}(f,x))\]
for every \(x\in\Lambda_{\varepsilon}\). Hence, \(\mathrm{Dim}\mu\) is approximately equal to \(\mathrm{Dim}\Lambda_{\varepsilon}\). The detailed proofs will be given in the next section.
**Theorem B**.: _Let \(f:M\to M\) be a \(C^{1+\alpha}\) diffeomorphism, and \(\mu\) be an \(f\)-invariant ergodic hyperbolic measure with \(h_{\mu}(f)>0\). Assume that \((f,\mu)\) has only two Lyapunov exponents \(\lambda_{u}(\mu)>0>\lambda_{s}(\mu)\). For each \(\varepsilon>0\), there exists a horseshoe \(\Lambda_{\varepsilon}\) such that_
\[|\mathrm{Dim}\Lambda_{\varepsilon}-\mathrm{Dim}\mu|<\varepsilon\]
_where \(\mathrm{Dim}\) denotes either \(\dim_{H}\) or \(\underline{\dim}_{B}\) or \(\overline{\dim}_{B}\)._
In [42], the authors relaxed the smoothness assumption of Theorem 3.1 to \(C^{1}\) under the additional condition that the Oseledec splitting \(E^{u}\oplus E^{s}\) of \((f,\mu)\) is dominated. In this setting, one does not have Lipschitz continuity of the holonomy maps in general. However, using Palis and Viana's method [31] one can show the following: for every \(\gamma\in(0,1)\), there is some \(D_{\gamma}>0\) such that the holonomy maps of the stable and unstable foliations for the hyperbolic dynamical system \((f,\Lambda_{\varepsilon})\) (see Lemma 4.2) are \((D_{\gamma},\gamma)\)-Hölder continuous. Since \(\gamma\) is arbitrary, using the ideas in [41] one can prove the following theorem:
**Theorem C**.: _Let \(f:M\to M\) be a \(C^{1}\) diffeomorphism, and let \(\mu\) be an \(f\)-invariant ergodic hyperbolic measure with \(h_{\mu}(f)>0\). Assume that \((f,\mu)\) has only two Lyapunov exponents \(\lambda_{u}(\mu)>0>\lambda_{s}(\mu)\) and the corresponding Oseledec's splitting \(E^{u}\oplus E^{s}\) is dominated. For each \(\varepsilon>0\), there exists a horseshoe \(\Lambda_{\varepsilon}\) such that_
\[|\mathrm{Dim}\Lambda_{\varepsilon}-\mathrm{Dim}\mu|<\varepsilon\]
_where \(\mathrm{Dim}\) denotes either \(\dim_{H}\) or \(\underline{\dim}_{B}\) or \(\overline{\dim}_{B}\)._
## 4. Proofs
In this section, we provide the proof of the main results presented in the previous section.
### Proof of Theorem A
Given an \(f\)-invariant ergodic measure \(\mu\), let \(P(t):=P_{\mu}(f|_{\Lambda},\Phi_{f}(t))\). It is easy to see that the function \(t\mapsto P(t)\) is continuous and strictly decreasing on the interval \([0,m_{0}]\). It follows from (2.3) that \(P(0)=h_{\mu}(f)\geq 0\), and \(P(m_{0})\leq 0\) by the Margulis-Ruelle inequality. Consequently, there exists a unique root \(s_{\mu}\) of the equation \(P_{\mu}(f|_{\Lambda},\Phi_{f}(t))=0\).
If \(h_{\mu}(f)=0\), it is easy to see that \(s_{\mu}=0\). Hence, \(\dim_{L}\mu=s_{\mu}\).
If \(0<h_{\mu}(f)<\lambda_{m_{0}}(\mu)\), then \(P(0)>0\) and \(P(1)<0\). This implies that \(s_{\mu}\in(0,1)\) and \(0=P(s_{\mu})=h_{\mu}(f)-s_{\mu}\lambda_{m_{0}}(\mu)\). In consequence, we have that
\[s_{\mu}=\dim_{L}\mu=\frac{h_{\mu}(f)}{\lambda_{m_{0}}(\mu)}.\]
If \(h_{\mu}(f)\geq\lambda_{m_{0}}(\mu)\), note that
\[0 =h_{\mu}(f)+\mathcal{L}_{*}(\Phi_{f}(s_{\mu}),\mu)\] \[=h_{\mu}(f)-\sum_{i=m_{0}-[s_{\mu}]+1}^{m_{0}}\lambda_{i}(\mu)-(s _{\mu}-[s_{\mu}])\lambda_{m_{0}-[s_{\mu}]}(\mu).\]
Hence,
\[s_{\mu}=[s_{\mu}]+\frac{h_{\mu}(f)-\sum_{i=m_{0}-[s_{\mu}]+1}^{m_{0}}\lambda_{ i}(\mu)}{\lambda_{m_{0}-[s_{\mu}]}(\mu)}.\]
On the other hand, since \(t\mapsto P(t)\) is strictly decreasing in \(t\), we have that
\[[s_{\mu}]=\max\{i:\lambda_{m_{0}}(\mu)+\cdots+\lambda_{m_{0}-i+1}(\mu)\leq h_ {\mu}(f)\}.\]
This yields that
\[s_{\mu}=\dim_{\mathrm{L}}\mu.\]
To prove the second statement, note that by Theorem 5.1 in [13], for each \(f\)-invariant ergodic measure \(\mu\) with positive entropy and each \(\varepsilon>0\) there exists an \(f\)-invariant compact subset \(\Lambda_{\varepsilon}\subset\Lambda\) such that the following statements hold:
* \(h_{\mathrm{top}}(f|_{\Lambda_{\varepsilon}})\geq h_{\mu}(f)-\varepsilon\);
* there is a continuous invariant splitting \(T_{x}M=E_{1}(x)\oplus E_{2}(x)\oplus\cdots\oplus E_{\ell}(x)\) over \(\Lambda_{\varepsilon}\) and a constant \(C>0\) so that \[C^{-1}\exp(n(\lambda_{j}(\mu)-\varepsilon))\leq\|D_{x}f^{n}(u)\|\leq C\exp(n( \lambda_{j}(\mu)+\varepsilon))\] for any unit vector \(u\in E_{j}(x)\), where \(\lambda_{1}(\mu)<\cdots<\lambda_{\ell}(\mu)\) are distinct Lyapunov exponents of \(f\) with respect to the measure \(\mu\).
By modifying the arguments in [13, Theorem 5.1], one may improve the estimate in (i) as follows:
* \(h_{\mu}(f)+\varepsilon\geq h_{\mathrm{top}}(f|_{\Lambda_{\varepsilon}})\geq h _{\mu}(f)-\varepsilon\).
Since \(\Lambda_{\varepsilon}\) is a repeller of \(f\), one can choose an \(f\)-invariant ergodic measure \(\mu_{\varepsilon}\) on \(\Lambda_{\varepsilon}\) so that \(h_{\mu_{\varepsilon}}(f)=h_{\mathrm{top}}(f|_{\Lambda_{\varepsilon}})\), which yields that
\[P_{\mathrm{top}}(f|_{\Lambda_{\varepsilon}},\Phi_{f}(t)) \geq h_{\mu_{\varepsilon}}(f)+\mathcal{L}_{*}(\Phi_{f}(t),\mu_{ \varepsilon})\] \[\geq h_{\mu}(f)+\mathcal{L}_{*}(\Phi_{f}(t),\mu)-(t+1)\varepsilon \qquad\text{ ( by (ii) )}\] \[\geq P_{\mu}(f|_{\Lambda},\Phi_{f}(t))-(m_{0}+1)\varepsilon.\]
On the other hand, since \(f\) is expanding, by the variational principle there exists an \(f\)-invariant ergodic measure \(\widetilde{\mu}_{\varepsilon}\) on \(\Lambda_{\varepsilon}\) so that
\[P_{\mathrm{top}}(f|_{\Lambda_{\varepsilon}},\Phi_{f}(t)) =h_{\widetilde{\mu}_{\varepsilon}}(f)+\mathcal{L}_{*}(\Phi_{f}(t ),\widetilde{\mu}_{\varepsilon})\] \[\leq h_{\mu}(f)+\mathcal{L}_{*}(\Phi_{f}(t),\mu)+(t+1)\varepsilon\] \[\leq P_{\mu}(f|_{\Lambda},\Phi_{f}(t))+(m_{0}+1)\varepsilon.\]
Hence,
\[\Big{|}P_{\mathrm{top}}(f|_{\Lambda_{\varepsilon}},\Phi_{f}(t))-P_{\mu}(f|_{ \Lambda},\Phi_{f}(t))\Big{|}\leq(m_{0}+1)\varepsilon.\]
By Theorem 4.1 in [13], the Caratheodory singular dimension \(\dim_{C}\Lambda_{\varepsilon}\) of \(\Lambda_{\varepsilon}\) is given by the unique root of the following equation
\[P_{\mathrm{top}}(f|_{\Lambda_{\varepsilon}},\Phi_{f}(t))=0.\]
This, together with the first statement, yields that
\[K|\dim_{C}\Lambda_{\varepsilon}-\dim_{\mathrm{L}}\mu| \leq\Big{|}P_{\mu}(f|_{\Lambda},\Phi_{f}(\dim_{C}\Lambda_{ \varepsilon}))-P_{\mu}(f|_{\Lambda},\Phi_{f}(\dim_{\mathrm{L}}\mu))\Big{|}\] \[=\Big{|}P_{\mu}(f|_{\Lambda},\Phi_{f}(\dim_{C}\Lambda_{ \varepsilon}))-P_{\mathrm{top}}(f|_{\Lambda_{\varepsilon}},\Phi_{f}(\dim_{C} \Lambda_{\varepsilon}))\Big{|}\] \[\leq(m_{0}+1)\varepsilon\]
where \(K=\log\min_{x\in\Lambda}m(D_{x}f)>0\) and \(m(\cdot)\) denotes the minimum norm of an operator. Consequently, we have that \(\dim_{C}\Lambda_{\varepsilon}\to\dim_{\mathrm{L}}\mu\) as \(\varepsilon\) tends to zero.
### Proof of Proposition 3.1
Given a subset \(Z\subset\Lambda\), since \(P_{Z}(f|_{\Lambda},\Phi_{f}(t))\) is continuous and strictly decreasing in \(t\), let \(t_{Z}\) denote the unique root of the equation \(P_{Z}(f|_{\Lambda},\Phi_{f}(t))=0\). For every \(t<t_{Z}\), we have that \(P_{Z}(f|_{\Lambda},\Phi_{f}(t))>0\). Fix such a number \(t\), and take \(\beta>0\) so that \(P_{Z}(f|_{\Lambda},\Phi_{f}(t))-\beta>0\). Since
\[P_{Z}(f|_{\Lambda},\Phi_{f}(t))=\liminf_{r\to 0}P_{Z}(f|_{\Lambda},\Phi_{f}(t),r)\]
there exists \(r_{0}>0\) such that for each \(0<r<r_{0}\) one has
\[P_{Z}(f|_{\Lambda},\Phi_{f}(t),r)>P_{Z}(f|_{\Lambda},\Phi_{f}(t))-\beta.\]
Fix such a small \(r>0\). By the definition of topological pressure on arbitrary subsets, one has
\[m(Z,\Phi_{f}(t),P_{Z}(f|_{\Lambda},\Phi_{f}(t))-\beta,r)=+\infty.\]
Hence, for each \(\xi>0\), there exists \(L\in\mathbb{N}\) so that for any \(N>L\) we have that
\[\exp(-N(P_{Z}(f|_{\Lambda},\Phi_{f}(t))-\beta))\inf\Big{\{}\sum_{ i}\exp(\sup_{y\in B_{n_{i}}(x_{i},r)}-\varphi^{t}(y,f^{n_{i}}))\Big{\}}\] \[\geq\inf\Big{\{}\sum_{i}\exp(-(P_{Z}(f|_{\Lambda},\Phi_{f}(t))- \beta)n_{i}+\sup_{y\in B_{n_{i}}(x_{i},r)}-\varphi^{t}(y,f^{n_{i}}))\Big{\}}>\xi\]
where the infimum is taken over all collections \(\{B_{n_{i}}(x_{i},r)\}\) of Bowen's balls with \(n_{i}\geq N\) that cover \(Z\). This yields that
\[\inf\Big{\{}\sum_{i}\exp(\sup_{y\in B_{n_{i}}(x_{i},r)}-\varphi^{t}(y,f^{n_{i} }))\Big{\}}>\xi\exp(N(P_{Z}(f|_{\Lambda},\Phi_{f}(t))-\beta)).\]
Letting \(N\to\infty\), we have that
\[m(Z,t,r)=+\infty.\]
Hence,
\[\dim_{C,r}Z\geq t\]
for all \(0<r<r_{0}\). Consequently, since \(t<t_{Z}\) is arbitrary, we have that
\[\dim_{C}Z\geq t_{Z}. \tag{4.1}\]
On the other hand, for each \(t>t_{Z}\) one has that \(P_{Z}(f|_{\Lambda},\Phi_{f}(t))<0\). Fix such a number \(t\), and take \(\widetilde{\beta}>0\) so that \(P_{Z}(f|_{\Lambda},\Phi_{f}(t))+\widetilde{\beta}<0\). By the definition of topological pressure on arbitrary subsets, for any \(R>0\), there exists \(0<r<R\) such that
\[P_{Z}(f|_{\Lambda},\Phi_{f}(t),r)<P_{Z}(f|_{\Lambda},\Phi_{f}(t))+\widetilde{ \beta}.\]
For such a small \(r>0\) one has
\[m(Z,\Phi_{f}(t),P_{Z}(f|_{\Lambda},\Phi_{f}(t))+\widetilde{\beta},r)=0.\]
Hence, for each small \(\widetilde{\xi}>0\) there exists \(\widetilde{L}\in\mathbb{N}\) so that for any \(N>\widetilde{L}\) we have that
\[\exp(-N(P_{Z}(f|_{\Lambda},\Phi_{f}(t))+\widetilde{\beta}))\inf \Big{\{}\sum_{i}\exp(\sup_{y\in B_{n_{i}}(x_{i},r)}-\varphi^{t}(y,f^{n_{i}})) \Big{\}}\] \[\leq\inf\Big{\{}\sum_{i}\exp(-(P_{Z}(f|_{\Lambda},\Phi_{f}(t))+ \widetilde{\beta})n_{i}+\sup_{y\in B_{n_{i}}(x_{i},r)}-\varphi^{t}(y,f^{n_{i}} ))\Big{\}}\leq\widetilde{\xi}\]
where the infimum is taken over all collections \(\{B_{n_{i}}(x_{i},r)\}\) of Bowen's balls with \(n_{i}\geq N\) that cover \(Z\). This yields that
\[\inf\Big{\{}\sum_{i}\exp(\sup_{y\in B_{n_{i}}(x_{i},r)}-\varphi^{t}(y,f^{n_{i} }))\Big{\}}\leq\widetilde{\xi}\exp(N(P_{Z}(f|_{\Lambda},\Phi_{f}(t))+ \widetilde{\beta}))\]
Letting \(N\to\infty\), one has
\[m(Z,t,r)=0.\]
Consequently, for such \(r>0\) one has
\[\dim_{C,r}Z\leq t.\]
Hence, we have that
\[\dim_{C}Z=\liminf_{r\to 0}\dim_{C,r}Z\leq t_{Z}. \tag{4.2}\]
It follows from (4.1) and (4.2) that
\[\dim_{C}Z=t_{Z}.\]
To show the second statement, for a given \(f\)-invariant ergodic measure \(\mu\) supported on \(\Lambda\), and a subset \(Z\subset\Lambda\) with \(\mu(Z)=1\), we have that
\[P_{Z}(f|_{\Lambda},\Phi_{f}(t))\geq P_{\mu}(f|_{\Lambda},\Phi_{f}(t)).\]
By (1) of Theorem A and the first statement, one has
\[\dim_{C}Z\geq\dim_{L}\mu.\]
By the definition of Caratheodory singular dimension of arbitrary subsets, one has
\[\dim_{C,r}Z\geq\dim_{L}\mu\]
for all sufficiently small \(r>0\). Consequently, we have that
\[\dim_{C}\mu=\liminf_{r\to 0}\dim_{C,r}\mu=\liminf_{r\to 0}\inf\{\dim_{C,r}Z:\mu(Z)=1\} \geq\dim_{L}\mu.\]
To prove that \(\dim_{C}\mu=\dim_{L}\mu\), we assume that \(\dim_{C}\mu>\widetilde{t}>\dim_{L}\mu\). By the first statement in Theorem A, we have that
\[P_{\mu}(f|_{\Lambda},\Phi(\widetilde{t}))<0.\]
By the definition of measure theoretic pressure, for each \(n\in\mathbb{N}\), there exists \(0<r_{n}<\frac{1}{n}\) so that
\[\inf\{P_{Z}(f|_{\Lambda},\Phi(\widetilde{t}),r_{n}):\mu(Z)=1\}<0.\]
Hence, there exists a subset \(Z_{n}\subset\Lambda\) with \(\mu(Z_{n})=1\) so that
\[P_{Z_{n}}(f|_{\Lambda},\Phi(\widetilde{t}),r_{n})<0.\]
Put \(\widetilde{Z}:=\bigcap_{n\geq 1}Z_{n}\), then \(\mu(\widetilde{Z})=1\) and
\[P_{\widetilde{Z}}(f|_{\Lambda},\Phi(\widetilde{t})) =\liminf_{r\to 0}P_{\widetilde{Z}}(f|_{\Lambda},\Phi( \widetilde{t}),r)\] \[\leq\liminf_{n\to\infty}P_{Z_{n}}(f|_{\Lambda},\Phi(\widetilde{t }),r_{n})\leq 0\]
It follows from the first statement and the definition of Caratheodory singular dimension of \(\mu\) that
\[\dim_{C}\mu=\liminf_{r\to 0}\dim_{C,r}\mu\leq\liminf_{r\to 0}\dim_{C,r} \widetilde{Z}=\dim_{C}\widetilde{Z}\leq\widetilde{t},\]
which yields a contradiction. Hence, we have that \(\dim_{C}\mu=\dim_{L}\mu\).
### Proof of Theorem B
Ledrappier, Young [23] and Barreira, Pesin, Schmeling [7] proved that
\[\text{Dim}\mu=\frac{h_{\mu}(f)}{\lambda_{u}(\mu)}-\frac{h_{\mu}(f)}{\lambda_{ s}(\mu)}\]
where Dim denotes either \(\dim_{H}\) or \(\underline{\dim}_{B}\) or \(\overline{\dim}_{B}\). Fix a small number \(\varepsilon>0\). By Theorem 3.1, there exists a horseshoe \(\Lambda_{\varepsilon}\) such that
* \(|h_{\text{top}}(f,\Lambda_{\varepsilon})-h_{\mu}(f)|<\varepsilon\);
* there exists a dominated splitting \(T_{\Lambda_{\varepsilon}}M=E^{u}\oplus E^{s}\) with \(\dim E^{i}=d_{i}\,(i=u,s)\), and for each \(x\in\Lambda_{\varepsilon}\), every \(n\geq 1\) and each vector \(v\in E^{i}(x)\)\((i=s,u)\)\[e^{(\lambda_{i}(\mu)-\varepsilon)n}<\|D_{x}f^{n}(v)\|<e^{(\lambda_{i}(\mu)+ \varepsilon)n}.\]
Fix any \(k\in\mathbb{N}\) and denote \(F=f^{2^{k}}\). Since \(\Lambda_{\varepsilon}\) is a locally maximal hyperbolic set for \(f\), \(\Lambda_{\varepsilon}\) is also a locally maximal hyperbolic set for \(F\). Notice that
\[W^{u}_{\beta}(F,x)\cap\Lambda_{\varepsilon}=W^{u}_{\beta}(f,x)\cap\Lambda_{ \varepsilon}\quad\text{and}\quad W^{s}_{\beta}(F,x)\cap\Lambda_{\varepsilon} =W^{s}_{\beta}(f,x)\cap\Lambda_{\varepsilon}.\]
Let \(\|\cdot\|\) and \(m(\cdot)\) denote the maximal and minimal norm of an operator. For every \(x\in\Lambda_{\varepsilon}\), Barreira [4] proved that
\[\underline{t}^{k}_{u}\leq\dim_{H}(\Lambda_{\varepsilon}\cap W^{u}_{\beta}(f,x ))\leq\overline{\dim}_{B}(\Lambda_{\varepsilon}\cap W^{u}_{\beta}(f,x))\leq \overline{t}^{k}_{u}\]
where \(\underline{t}^{k}_{u}\), \(\overline{t}^{k}_{u}\) are the unique solutions of
\[P_{\text{top}}(F,-t\log\|D_{x}F|_{E^{u}(x)}\|)=0\quad\text{and}\quad P_{\text {top}}(F,-t\log m(D_{x}F|_{E^{u}(x)}))=0\]
respectively. Using the same arguments as in the proof of Theorem 6.2 and Theorem 6.3 in [3], one can prove that the sequences \(\{\underline{t}^{k}_{u}\}\) and \(\{\overline{t}^{k}_{u}\}\) are monotone. Furthermore, set
\[\underline{t}_{u}:=\lim_{k\to\infty}\underline{t}^{k}_{u}\quad\text{and}\quad \overline{t}_{u}:=\lim_{k\to\infty}\overline{t}^{k}_{u},\]
one can show that \(\underline{t}_{u}\), \(\overline{t}_{u}\) are the unique solutions of the following equations
\[P_{\rm var}(f,-t\{\log\|D_{x}f^{n}|_{E^{u}}\|\})=0,\quad P_{\rm top}(f,-t\{\log m (D_{x}f^{n}|_{E^{u}})\})=0\]
respectively.
Consequently, we have that
\[\underline{t}_{u}\leq\dim_{H}(\Lambda_{\varepsilon}\cap W^{u}_{\beta}(f,x)) \leq\underline{\dim}_{B}(\Lambda_{\varepsilon}\cap W^{u}_{\beta}(f,x))\leq \overline{\dim}_{B}(\Lambda_{\varepsilon}\cap W^{u}_{\beta}(f,x))\leq \overline{t}_{u}\]
and
\[\underline{t}_{u} =\sup\Big{\{}\frac{h_{\nu}(f)}{\lim_{n\to\infty}\frac{1}{n}\int \log\|D_{x}f^{n}|_{E^{u}}\|d\nu}:\nu\in\mathcal{M}_{f}(\Lambda_{\varepsilon}) \Big{\}},\] \[\overline{t}_{u} =\sup\Big{\{}\frac{h_{\nu}(f)}{\lim_{n\to\infty}\frac{1}{n}\int \log m(D_{x}f^{n}|_{E^{u}})d\nu}:\nu\in\mathcal{M}_{f}(\Lambda_{\varepsilon}) \Big{\}}.\]
Combining with (i) and (ii), one has
\[\frac{h_{\mu}(f)-\varepsilon}{\lambda_{u}(\mu)+\varepsilon}\leq\operatorname{ Dim}(\Lambda_{\varepsilon}\cap W^{u}_{\beta}(f,x))\leq\frac{h_{\mu}(f)+ \varepsilon}{\lambda_{u}(\mu)-\varepsilon} \tag{4.3}\]
for every \(x\in\Lambda_{\varepsilon}\), where \(\operatorname{Dim}\) denotes either \(\dim_{H}\) or \(\underline{\dim}_{B}\) or \(\overline{\dim}_{B}\). One can show in a similar fashion that
\[-\frac{h_{\mu}(f)-\varepsilon}{\lambda_{s}(\mu)-\varepsilon}\leq\operatorname {Dim}(\Lambda_{\varepsilon}\cap W^{s}_{\beta}(f,x))\leq-\frac{h_{\mu}(f)+ \varepsilon}{\lambda_{s}(\mu)+\varepsilon} \tag{4.4}\]
for every \(x\in\Lambda_{\varepsilon}\).
**Lemma 4.1**.: _The holonomy maps of the stable and unstable foliations for \((f,\Lambda_{\varepsilon})\) are Lipschitz continuous._
Proof.: Fix a positive integer \(N\), put \(F:=f^{N}\) and \(\Lambda=\Lambda_{\varepsilon}\). Since \(\Lambda\) is a locally maximal hyperbolic set for \(f\), so is \(\Lambda\) for \(F\). Notice that
\[W^{u}_{\beta}(F,x)\cap\Lambda=W^{u}_{\beta}(f,x)\cap\Lambda\quad\text{and} \quad W^{s}_{\beta}(F,x)\cap\Lambda=W^{s}_{\beta}(f,x)\cap\Lambda. \tag{4.5}\]
Let
\[a_{F}=\|DF^{-1}|_{E^{u}}\|,\ b_{F}=\|DF|_{E^{s}}\|,c_{F}=\|DF|_{E^{u}}\|,\ d_{F}=\| DF^{-1}|_{E^{s}}\|.\]
It follows from (ii) that
\[1<\frac{\|D_{x}F|_{E^{i}(x)}\|}{m(D_{x}F|_{E^{i}(x)})}<e^{2\varepsilon N},\ \text{for every}\ x\in\Lambda\ \text{and}\ i\in\{s,u\}.\]
Hence,
\[a_{F}b_{F}c_{F}=\frac{\|DF|_{E^{s}}\|\cdot\|DF|_{E^{u}}\|}{m(DF|_{E^{u}})}<e^{ (\lambda_{s}(\mu)+3\varepsilon)N}<1\]
provided that \(\varepsilon>0\) is sufficiently small such that \(\lambda_{s}(\mu)+3\varepsilon<0\). By Theorem 0.2 in [8], we have that the holonomy map of the stable foliation for \((F,\Lambda)\) is \(C^{1}\). Similarly, note that
\[a_{F}b_{F}d_{F}=\frac{\|DF|_{E^{s}}\|}{m(DF|_{E^{s}})\,m(DF|_{E^{u}})}<e^{(-\lambda_{u}(\mu)+3\varepsilon)N}<1\]
provided that \(\varepsilon>0\) is sufficiently small such that \(\lambda_{u}(\mu)-3\varepsilon>0\). It follows from [8, Theorem 0.2] that the holonomy map of the unstable foliation for \((F,\Lambda)\) is \(C^{1}\).
Combining this with (4.5), one has that the holonomy maps of the stable and unstable foliations for \((f,\Lambda)\) are Lipschitz continuous.
By Lemma 4.1 and the fact that \(f\) is topologically mixing on \(\Lambda_{\varepsilon}\), one has that \(\dim_{H}(\Lambda_{\varepsilon}\cap W^{u}_{\beta}(f,x))\), \(\underline{\dim}_{B}(\Lambda_{\varepsilon}\cap W^{u}_{\beta}(f,x))\) and \(\overline{\dim}_{B}(\Lambda_{\varepsilon}\cap W^{u}_{\beta}(f,x))\) are independent of \(\beta\) and \(x\) (see the proof of Theorem 4.3.2 in [5] for more details). Let
\[A_{\varepsilon,x}=(\Lambda_{\varepsilon}\cap W^{u}_{\beta}(f,x))\times( \Lambda_{\varepsilon}\cap W^{s}_{\beta}(f,x)).\]
By the properties of dimension (e.g. see [15, 33]), one has
\[\begin{split}&\dim_{H}(\Lambda_{\varepsilon}\cap W^{u}_{\beta}(f,x))+ \dim_{H}(\Lambda_{\varepsilon}\cap W^{s}_{\beta}(f,x))\\ \leq&\dim_{H}A_{\varepsilon,x}\\ \leq&\underline{\dim}_{B}A_{\varepsilon,x}\\ \leq&\overline{\dim}_{B}A_{\varepsilon,x}\\ \leq&\overline{\dim}_{B}(\Lambda_{\varepsilon}\cap W ^{u}_{\beta}(f,x))+\overline{\dim}_{B}(\Lambda_{\varepsilon}\cap W^{s}_{\beta }(f,x)).\end{split} \tag{4.6}\]
Let \(\Phi:A_{\varepsilon,x}\to\Lambda_{\varepsilon}\) be given by
\[\Phi(y,z)=W^{s}_{\beta}(f,y)\cap W^{u}_{\beta}(f,z).\]
It is easy to see that \(\Phi\) is a homeomorphism onto a neighborhood \(V_{x}\) of \(x\) in \(\Lambda_{\varepsilon}\). It follows from Lemma 4.1 that \(\Phi\) and \(\Phi^{-1}\) are Lipschitz continuous (see Theorem 4.3.2 in [5] for detailed proofs). It follows from Corollary 2.1 that
\[\operatorname{Dim}V_{x}=\operatorname{Dim}A_{\varepsilon,x},\]
where \(\operatorname{Dim}\) denotes either \(\dim_{H}\) or \(\underline{\dim}_{B}\) or \(\overline{\dim}_{B}\). Since \(\{V_{x}:x\in\Lambda_{\varepsilon}\}\) is an open cover of \(\Lambda_{\varepsilon}\), one can choose a finite open cover \(\{V_{x_{1}},V_{x_{2}},\cdots,V_{x_{k}}\}\) of \(\Lambda_{\varepsilon}\). It follows from (4.6) that
\[\begin{split}&\dim_{H}(\Lambda_{\varepsilon}\cap W^{u}_{\beta}(f,x ))+\dim_{H}(\Lambda_{\varepsilon}\cap W^{s}_{\beta}(f,x))\\ \leq&\dim_{H}\Lambda_{\varepsilon}=\max_{1\leq i \leq k}\dim_{H}V_{x_{i}}\\ \leq&\overline{\dim}_{B}\Lambda_{\varepsilon}= \max_{1\leq i\leq k}\overline{\dim}_{B}V_{x_{i}}\\ \leq&\overline{\dim}_{B}(\Lambda_{\varepsilon}\cap W ^{u}_{\beta}(f,x))+\overline{\dim}_{B}(\Lambda_{\varepsilon}\cap W^{s}_{\beta }(f,x)),\end{split}\]
for every \(x\in\Lambda_{\varepsilon}\). Combining (4.3) and (4.4) we obtain
\[\lim_{\varepsilon\to 0}\operatorname{Dim}\Lambda_{\varepsilon}=\frac{h_{\mu}(f)} {\lambda_{u}(\mu)}-\frac{h_{\mu}(f)}{\lambda_{s}(\mu)}=\operatorname{Dim}\mu,\]
where \(\operatorname{Dim}\) denotes either \(\dim_{H}\) or \(\underline{\dim}_{B}\) or \(\overline{\dim}_{B}\). This completes the proof of Theorem B.
### Proof of Theorem C
For every pair \((f,\mu)\) satisfying the assumptions, Wang and Cao [40, Corollary 1] proved that
\[\dim_{H}\mu=\frac{h_{\mu}(f)}{\lambda_{u}(\mu)}-\frac{h_{\mu}(f)}{\lambda_{s} (\mu)}.\]
Fix a small number \(\varepsilon>0\). Wang, Cao and Zou [42, Theorem 1.1] proved that there exists a horseshoe \(\Lambda_{\varepsilon}\) such that
1. \(|h_{\operatorname{top}}(f,\Lambda_{\varepsilon})-h_{\mu}(f)|<\varepsilon\);
2. there exists a dominated splitting \(T_{\Lambda_{\varepsilon}}M=E^{u}\oplus E^{s}\) with \(\dim E^{i}=d_{i}\,(i=u,s)\), and for each \(x\in\Lambda_{\varepsilon}\), every \(n\geq 1\) and each vector \(v\in E^{i}(x)\)\((i=s,u)\), \[e^{(\lambda_{i}(\mu)-\varepsilon)n}<\|D_{x}f^{n}(v)\|<e^{(\lambda_{i}(\mu)+ \varepsilon)n}.\]
Fix \(k\in\mathbb{N}\) and denote \(F=f^{2^{k}}\). Since \(\Lambda_{\varepsilon}\) is a locally maximal hyperbolic set for \(f\), so is \(\Lambda_{\varepsilon}\) for \(F\). Notice that
\[W^{u}_{\beta}(F,x)\cap\Lambda_{\varepsilon}=W^{u}_{\beta}(f,x)\cap\Lambda_{ \varepsilon}\quad\text{and}\quad W^{s}_{\beta}(F,x)\cap\Lambda_{\varepsilon}= W^{s}_{\beta}(f,x)\cap\Lambda_{\varepsilon}.\]
For every \(x\in\Lambda_{\varepsilon}\), it follows from [41, Lemmas 3.5 and 3.6] that
\[\underline{t}^{k}_{u}\leq\dim_{H}(\Lambda_{\varepsilon}\cap W^{u}_{\beta}(f, x))\leq\overline{\dim}_{B}(\Lambda_{\varepsilon}\cap W^{u}_{\beta}(f,x))\leq \overline{t}^{k}_{u},\]
where \(\underline{t}^{k}_{u}\), \(\overline{t}^{k}_{u}\) are the unique roots of
\[P_{\text{top}}(F,-t\log\|D_{x}F|_{E^{u}(x)}\|)=0,\quad P_{\text{top}}(F,-t\log m (D_{x}F|_{E^{u}(x)}))=0\]
respectively. Using the same arguments as in the proof of Theorem 6.2 and Theorem 6.3 in [3], one can prove that the sequences \(\{\underline{t}^{k}_{u}\}\) and \(\{\overline{t}^{k}_{u}\}\) are monotone. Set
\[\underline{t}_{u}:=\lim_{k\to\infty}\underline{t}^{k}_{u}\quad\text{and}\quad \overline{t}_{u}:=\lim_{k\to\infty}\overline{t}^{k}_{u},\]
then \(\underline{t}_{u}\), \(\overline{t}_{u}\) are the unique solutions of the following equations
\[P_{\text{var}}(f,-t\{\log\|D_{x}f^{n}|_{E^{u}(x)}\|\})=0,\quad P_{\text{top}} (f,-t\{\log m(D_{x}f^{n}|_{E^{u}(x)})\})=0\]
respectively. Hence, we have that
\[\underline{t}_{u}\leq\dim_{H}(\Lambda_{\varepsilon}\cap W^{u}_{\beta}(f,x)) \leq\underline{\dim}_{B}(\Lambda_{\varepsilon}\cap W^{u}_{\beta}(f,x))\leq \overline{\dim}_{B}(\Lambda_{\varepsilon}\cap W^{u}_{\beta}(f,x))\leq \overline{t}_{u}.\]
Since
\[\underline{t}_{u}=\sup\Big\{\frac{h_{\nu}(f)}{\lim_{n\to\infty}\frac{1}{n}\int\log\|D_{x}f^{n}|_{E^{u}(x)}\|d\nu}:\nu\in\mathcal{M}_{f}(\Lambda_{\varepsilon})\Big\}\]
and
\[\overline{t}_{u}=\sup\Big\{\frac{h_{\nu}(f)}{\lim_{n\to\infty}\frac{1}{n}\int\log m(D_{x}f^{n}|_{E^{u}(x)})d\nu}:\nu\in\mathcal{M}_{f}(\Lambda_{\varepsilon})\Big\}\]
using (i) and (ii) one can show that
\[\frac{h_{\mu}(f)-\varepsilon}{\lambda_{u}(\mu)+\varepsilon}\leq\text{Dim}( \Lambda_{\varepsilon}\cap W^{u}_{\beta}(f,x))\leq\frac{h_{\mu}(f)+\varepsilon }{\lambda_{u}(\mu)-\varepsilon} \tag{4.7}\]
for every \(x\in\Lambda_{\varepsilon}\), where Dim denotes either \(\dim_{H}\) or \(\underline{\dim}_{B}\) or \(\overline{\dim}_{B}\). Similarly, we obtain that
\[-\frac{h_{\mu}(f)-\varepsilon}{\lambda_{s}(\mu)-\varepsilon}\leq\text{Dim}( \Lambda_{\varepsilon}\cap W^{s}_{\beta}(f,x))\leq-\frac{h_{\mu}(f)+\varepsilon }{\lambda_{s}(\mu)+\varepsilon} \tag{4.8}\]
for every \(x\in\Lambda_{\varepsilon}\).
**Lemma 4.2**.: _Let \(\Lambda\) be a locally maximal hyperbolic set of a \(C^{1}\) diffeomorphism \(f\) such that \(f\) is topologically mixing on \(\Lambda\). Assume that the diffeomorphism \(f|_{\Lambda}\) possesses a \(\{\lambda_{u}(\mu),\lambda_{s}(\mu)\}\)-dominated splitting \(T_{\Lambda}M=E^{u}\oplus E^{s}\) with \(E^{u}\succeq E^{s}\) and \(\lambda_{u}(\mu)>0>\lambda_{s}(\mu)\). Then for every \(\gamma\in(0,1)\) there exists \(D_{\gamma}>0\) such that the holonomy maps of the stable and unstable foliations for \(f\) are \((D_{\gamma},\gamma)\)-Hölder continuous._
Proof.: For every \(x\in\Lambda\), \(n\in\mathbb{N}\) and each unit vector \(v\in E^{i}(x)\) (\(i=u,s\)),
\[e^{n(\lambda_{i}(\mu)-\varepsilon)}\leq\|D_{x}f^{n}(v)\|\leq e^{n(\lambda_{i}( \mu)+\varepsilon)}.\]
Fix a positive integer \(N\) and put \(F:=f^{N}\). Taking \(n=N\) above bounds both the norm and the minimal norm of \(D_{x}F|_{E^{i}}\) between \(e^{N(\lambda_{i}(\mu)-\varepsilon)}\) and \(e^{N(\lambda_{i}(\mu)+\varepsilon)}\), which implies that for every \(x\in\Lambda\),
\[1\leq\frac{\|D_{x}F|_{E^{u}}\|}{m(D_{x}F|_{E^{u}})}\leq e^{2N\varepsilon}, \quad 1\leq\frac{\|D_{x}F|_{E^{s}}\|}{m(D_{x}F|_{E^{s}})}\leq e^{2N\varepsilon}.\]
Notice that \(\Lambda\) is also a locally maximal hyperbolic set for \(F\) and
\[W^{u}_{\beta}(F,x)\cap\Lambda=W^{u}_{\beta}(f,x)\cap\Lambda\quad\text{and} \quad W^{s}_{\beta}(F,x)\cap\Lambda=W^{s}_{\beta}(f,x)\cap\Lambda\]
for every \(x\in\Lambda\).
Let \(\pi^{s}\) and \(\pi^{u}\) be the holonomy maps of stable and unstable foliations for \(f\), i.e. for any \(x\in\Lambda\), \(x^{\prime}\in W^{s}_{\beta}(f,x)\) and \(x^{\prime\prime}\in W^{u}_{\beta}(f,x)\) close to \(x\),
\[\pi^{s}:\ W^{u}_{\beta}(f,x)\cap\Lambda\to W^{u}_{\beta}(f,x^{\prime})\cap \Lambda\text{ with }\pi^{s}(y)=W^{s}_{\beta}(f,y)\cap W^{u}_{\beta}(f,x^{\prime})\]
and
\[\pi^{u}:\ W^{s}_{\beta}(f,x)\cap\Lambda\to W^{s}_{\beta}(f,x^{\prime\prime}) \cap\Lambda\text{ with }\pi^{u}(z)=W^{u}_{\beta}(f,z)\cap W^{s}_{\beta}(f,x^{\prime\prime}).\]
Therefore, \(\pi^{s}\) is also a map from \(W^{u}_{\beta}(F,x)\cap\Lambda\) to \(W^{u}_{\beta}(F,x^{\prime})\cap\Lambda\) and \(\pi^{u}\) is also a map from \(W^{s}_{\beta}(F,x)\cap\Lambda\) to \(W^{s}_{\beta}(F,x^{\prime\prime})\cap\Lambda\).
Let \(U\subset M\) be an open subset such that \(\Lambda=\bigcap_{n\in\mathbb{Z}}f^{n}(U)\), and let \(\mathcal{U}\subset\mathrm{Diff}^{1}(M)\) be a neighbourhood of \(f\) such that, for each \(g\in\mathcal{U}\), \(\Lambda_{g}=\bigcap_{n\in\mathbb{Z}}g^{n}(U)\) is a locally maximal hyperbolic set for \(g\) and there is a homeomorphism \(h_{g}:\Lambda\to\Lambda_{g}\) satisfying \(g\circ h_{g}=h_{g}\circ f\), with \(h_{g}\) \(C^{0}\)-close to the identity if \(g\) is \(C^{1}\)-close to \(f\). For \(g\in\mathcal{U}\), let \(T_{\Lambda_{g}}M=E^{u}_{g}\oplus E^{s}_{g}\) denote the hyperbolic splitting over \(\Lambda_{g}\). For \(i\in\{u,s\}\), the family \(\{W^{i}_{\beta}(g,z):\ z\in\Lambda_{g}\}\) depends continuously on \(g\) in the following sense: there is a family \(\{\theta^{i}_{g,x}:\ x\in\Lambda\}\), where \(\theta^{i}_{g,x}:\ W^{i}_{\beta}(f,x)\to W^{i}_{\beta}(g,h_{g}(x))\) is a \(C^{1}\) diffeomorphism with \(\theta^{i}_{g,x}(x)=h_{g}(x)\), such that if \(g\) is \(C^{1}\)-close to \(f\) then, for all \(x\in\Lambda\), \(\theta^{i}_{g,x}\) is uniformly \(C^{1}\)-close to the inclusion of \(W^{i}_{\beta}(f,x)\) in \(M\).
For any \(\gamma\in(0,1)\), let \(\mathcal{U}^{F}_{\gamma}\) be a small \(C^{1}\) neighborhood of \(F\) (recall \(F=f^{N}\)). Take \(G\in\mathcal{U}^{F}_{\gamma}\cap\mathrm{Diff}^{2}(M)\) such that for every \(x\in\Lambda_{G}\) (here \(\Lambda_{G}\) is the locally maximal hyperbolic set for \(G\)), every \(n\in\mathbb{N}\) and \(i=u,s\),
\[e^{nN(\lambda_{i}(\mu)-2\varepsilon)}\leq\|D_{x}G^{n}|_{E^{i}(x)}\|\leq e^{nN (\lambda_{i}(\mu)+2\varepsilon)}. \tag{4.9}\]
**Claim 4.1**.: _The following properties hold:_
* \(h_{G}|_{W^{u}_{\beta}(F,x)\cap\Lambda}\) _and_ \(\left(h_{G}|_{W^{u}_{\beta}(F,x)\cap\Lambda}\right)^{-1}\) _are_ \((C_{\gamma},\gamma)\)_-Hölder continuous for some_ \(C_{\gamma}>0\)_._
* _the stable and unstable foliations_ \[\{W^{s}(G,z):z\in\Lambda_{G}\},\quad\{W^{u}(G,z):z\in\Lambda_{G}\}\] _are_ \(C^{1}\) _and invariant for_ \(G\)_. Thus the holonomy maps_ \[\pi^{s}_{G}: W^{u}_{\beta}(G,h_{G}(x))\cap\Lambda_{G}\to W^{u}_{\beta}(G,h_{G}(x^ {\prime}))\cap\Lambda_{G}\text{ with}\] \[\pi^{s}_{G}(y)=W^{s}_{\beta}(G,y)\cap W^{u}_{\beta}(G,h_{G}(x^{ \prime})),\] _and_ \[\pi^{u}_{G}: W^{s}_{\beta}(G,h_{G}(x))\cap\Lambda_{G}\to W^{s}_{\beta}(G,h_{G}(x^ {\prime\prime}))\cap\Lambda_{G}\text{ with}\] \[\pi^{u}_{G}(z)=W^{u}_{\beta}(G,z)\cap W^{s}_{\beta}(G,h_{G}(x^{ \prime\prime}))\] _are Lipschitz continuous._
Proof.: (a) See Claim 3.1 in [41].
(b) Since \(G\) satisfies (4.9), we conclude
\[\frac{\|DG|_{E^{u}}\|\cdot\|DG|_{E^{s}}\|}{m(DG|_{E^{u}})}\leq e^{4N\varepsilon}e^{N(\lambda_{s}(\mu)+2\varepsilon)}=e^{N(\lambda_{s}(\mu)+6\varepsilon)}<1,\]
provided that \(\lambda_{s}(\mu)+6\varepsilon<0\). By Theorem 6.3 in [20], we have that the stable foliation is \(C^{1}\). Similarly, we obtain that the unstable foliation is also \(C^{1}\). Then the corresponding holonomy maps are uniformly \(C^{1}\) (see [35, pages 540-541] for more details), which implies the desired result.
We now complete the proof of Lemma 4.2. For any \(y\in W^{u}_{\beta}(F,x)\cap\Lambda\),
\[h_{G}(\pi^{s}(y)) =h_{G}\big{(}W^{s}_{\beta}(F,y)\cap W^{u}_{\beta}(F,x^{\prime}) \big{)}\] \[=W^{s}_{\beta}(G,h_{G}(y))\cap W^{u}_{\beta}(G,h_{G}(x^{\prime}))\] \[=\pi^{s}_{G}(h_{G}(y)).\]
For the above \(\gamma\), by Claim 4.1 there exists \(D_{\gamma}>0\) such that
\[\pi^{s}=h_{G}^{-1}\circ\pi^{s}_{G}\circ h_{G}\]
is \((D_{\gamma},\gamma)\)-Hölder continuous. Using the same arguments one can prove that \((\pi^{s})^{-1}\), \(\pi^{u}\) and \((\pi^{u})^{-1}\) are also \((D_{\gamma},\gamma)\)-Hölder continuous.
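For completeness, we spell out our reading of the composition step (with \(L\) a Lipschitz constant of \(\pi^{s}_{G}\) from Claim 4.1(b)): the Hölder exponents compose as
\[d\bigl(\pi^{s}(y),\pi^{s}(z)\bigr)\leq C_{\gamma}\,d\bigl(\pi^{s}_{G}(h_{G}(y)),\pi^{s}_{G}(h_{G}(z))\bigr)^{\gamma}\leq C_{\gamma}L^{\gamma}\bigl(C_{\gamma}\,d(y,z)^{\gamma}\bigr)^{\gamma}=C_{\gamma}^{1+\gamma}L^{\gamma}\,d(y,z)^{\gamma^{2}}.\]
Since \(\gamma\in(0,1)\) is arbitrary, so is the resulting exponent \(\gamma^{2}\), and relabeling \(\gamma^{2}\) as \(\gamma\) yields the constant \(D_{\gamma}\) in the statement of Lemma 4.2.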
By Lemma 4.2 and the fact that \(f\) is topologically mixing on \(\Lambda_{\varepsilon}\), one has that \(\dim_{H}(\Lambda_{\varepsilon}\cap W^{u}_{\beta}(f,x))\), \(\underline{\dim}_{B}(\Lambda_{\varepsilon}\cap W^{u}_{\beta}(f,x))\) and \(\overline{\dim}_{B}(\Lambda_{\varepsilon}\cap W^{u}_{\beta}(f,x))\) are independent of \(\beta\) and \(x\) (see the proof of Lemma 3.4 in [41] for more details). Let
\[A_{\varepsilon,x}=(\Lambda_{\varepsilon}\cap W^{u}_{\beta}(f,x))\times( \Lambda_{\varepsilon}\cap W^{s}_{\beta}(f,x))\]
be a product space. By the properties of dimension (see Theorem 6.5 in [33] for details), one has
\[\begin{split}&\dim_{H}(\Lambda_{\varepsilon}\cap W^{u}_{\beta}(f,x))+ \dim_{H}(\Lambda_{\varepsilon}\cap W^{s}_{\beta}(f,x))\\ \leq&\dim_{H}A_{\varepsilon,x}\\ \leq&\underline{\dim}_{B}A_{\varepsilon,x}\\ \leq&\overline{\dim}_{B}(\Lambda_{\varepsilon}\cap W ^{u}_{\beta}(f,x))+\overline{\dim}_{B}(\Lambda_{\varepsilon}\cap W^{s}_{ \beta}(f,x)).\end{split} \tag{4.10}\]
Let \(\Phi:A_{\varepsilon,x}\to\Lambda_{\varepsilon}\) be given by
\[\Phi(y,z)=W^{s}_{\beta}(f,y)\cap W^{u}_{\beta}(f,z).\]
It is easy to see that \(\Phi\) is a homeomorphism onto a neighborhood \(V_{x}\) of \(x\) in \(\Lambda_{\varepsilon}\). For any \(\gamma\in(0,1)\), by Lemma 4.2 there is \(E_{\gamma}>0\) such that \(\Phi\) and \(\Phi^{-1}\) are \((E_{\gamma},\gamma)\)-Hölder continuous (see Step 2 in the proof of Theorem A in [41] for more details). By Lemma 2.1 and the arbitrariness of \(\gamma\), one has
\[\operatorname{Dim}V_{x}=\operatorname{Dim}A_{\varepsilon,x},\]
where Dim denotes either \(\dim_{H}\) or \(\underline{\dim}_{B}\) or \(\overline{\dim}_{B}\). Since \(\{V_{x}:x\in\Lambda_{\varepsilon}\}\) is an open cover of \(\Lambda_{\varepsilon}\), one can choose a finite open cover \(\{V_{x_{1}},V_{x_{2}},\cdots,V_{x_{k}}\}\) of \(\Lambda_{\varepsilon}\). It follows from (4.10) that
\[\begin{split}&\dim_{H}(\Lambda_{\varepsilon}\cap W_{\beta}^{u}(f,x ))+\dim_{H}(\Lambda_{\varepsilon}\cap W_{\beta}^{s}(f,x))\\ \leq&\dim_{H}\Lambda_{\varepsilon}=\max_{1\leq i \leq k}\dim_{H}V_{x_{i}}\\ \leq&\overline{\dim}_{B}\Lambda_{\varepsilon}=\max _{1\leq i\leq k}\overline{\dim}_{B}V_{x_{i}}\\ \leq&\overline{\dim}_{B}(\Lambda_{\varepsilon}\cap W _{\beta}^{u}(f,x))+\overline{\dim}_{B}(\Lambda_{\varepsilon}\cap W_{\beta}^{s }(f,x)),\end{split}\]
for every \(x\in\Lambda_{\varepsilon}\). This together with (4.7) and (4.8) yields that
\[\lim_{\varepsilon\to 0}\operatorname{Dim}\Lambda_{\varepsilon}=\frac{h_{ \mu}(f)}{\lambda_{u}(\mu)}-\frac{h_{\mu}(f)}{\lambda_{s}(\mu)}=\operatorname {Dim}\mu,\]
where \(\operatorname{Dim}\) denotes either \(\dim_{H}\) or \(\underline{\dim}_{B}\) or \(\overline{\dim}_{B}\). This completes the proof of Theorem C.
### Acknowledgments
This work is partially supported by The National Key Research and Development Program of China (2022YFA1005802). Y. Cao is partially supported by NSFC (11790274). J. Wang is partially supported by NSFC (11501400, 12271386) and the Talent Program of Shanghai University of Engineering Science. Y. Zhao is partially supported by NSFC (12271386) and Qinglan project of Jiangsu Province.
2308.05231 | A review on the questions of spin and spin quantum correlations in the relativistic regime | Shrobona Bagchi | 2023-08-09T21:29:03Z | http://arxiv.org/abs/2308.05231v2

# A review on the questions of spin and spin quantum correlations in the relativistic regime
###### Abstract
Quantum correlations are one of the most important aspects of modern quantum information and computation theory. However, the majority of our understanding of quantum correlations lies in the field of non-relativistic quantum mechanics. To develop quantum information and computation tasks fully, one must inevitably take relativistic effects into account. In this regard, the spin is one of the central tools used to implement qubit operations in almost all quantum information processing tasks. It is therefore of paramount importance to fully understand and characterize the theory of spin in relativistic quantum mechanics and relativistic quantum information theory, where the spin states act as qubits. This area is still far from being settled. As a result, this article explores recent studies of the concepts of spin and spin quantum correlations in inertial frames, and some apparent paradoxes regarding these concepts. We mainly focus on the problem of characterizing the concept of spin, the reduced spin density matrices, and consequently the spin quantum correlations in inertial reference frames, together with the apparent paradoxes involved therein, which are yet to be verified experimentally. Another important aspect is the use of tools of quantum field theory to extend concepts from the non-relativistic domain to the relativistic one. In this regard, we analyze the development of the theory of relativistic secret sharing and of a correlation measure, namely the entanglement of purification. We also explore how these developments may be mapped to quantum information processing tasks and discuss future promises.
## 1 Introduction
Quantum correlations are an important part of modern quantum information theory [1, 2, 3]. This area is very well studied, developed and understood in non-relativistic quantum information using non-relativistic quantum mechanics [4]. The study of these correlations has helped in the development and implementation of various quantum information processing protocols such as quantum teleportation, quantum cryptography and quantum secret sharing. However, this theory is not well understood and developed in relativistic quantum information theory, which uses relativistic quantum mechanics or quantum field theory [5, 6, 7, 8, 9]. There are several less understood areas that require careful analysis and resolution. We handpick a few such areas, explore in detail the fundamental underlying issues, discuss some important quantum information tasks in relativistic quantum information, and end with conclusions and potential future directions. These discussions inevitably fall within the area of quantum information in high energy physics, since the effects of relativistic velocity or acceleration become relevant in high energy physics. At the end, we also discuss how the important concept of entanglement of purification has been generalized in the area of holography, reflect on the corresponding conjecture, and raise a few important issues regarding its operational significance in this area. This analysis should also be helpful for future satellite-based quantum communication.
In quantum mechanics and relativistic quantum mechanics, the 'spin' is usually thought of as an intrinsic angular momentum associated with elementary particles such as electrons and protons. It is understood to be a purely quantum mechanical property without any counterpart in classical physics: spin is an 'intrinsic' angular momentum of the particle, not due to the classical rotation of any internal component of the particle. The relativistic concept of spin is still undergoing revisions and is not yet fully understood. Spin is an essential ingredient of the majority of quantum information tasks in modern quantum information and computation theory and its applications. Important subtleties start to emerge as one goes to inertial and non-inertial frames, and several paradoxes and inconsistencies appear in terms of its definition, conceptualization and further generalizations, such as the reduced spin density matrices in inertial frames. These observations point to the fact that this is a partially developed concept, and its development will lead to robust implementations of various quantum information processing tasks in relativistic regimes, such as future satellite-based quantum communication. In the upcoming paragraphs, we briefly visit some of these areas, which are explained in detail in the main sections later.
The spin qubit is a well understood concept in non-relativistic quantum mechanics. It is understood very well through the implementation of the Stern-Gerlach experiment in the non-relativistic limit (velocities much smaller than the speed of light in vacuum). However, the definition and understanding of spin in the relativistic regime remain underdeveloped to this day. This has been analyzed in various works spread over roughly the last decade, and research on it is ongoing [10, 11, 12, 13, 14, 15, 16, 17]. One of the origins of this difficulty lies in the entangling of the spin degree of freedom with the momentum degree of freedom in Lorentz-boosted reference frames. The second difficulty arises from the fact that for a quantum particle moving in a superposition of velocities, as quantum mechanics allows, there is no single rest frame to transition to, wherein its spin would be defined and understood properly. Possible remedies for these problems have been proposed by various authors. They propose solutions to this problem and ways of experimentally observing the relativistic features of the spin, which in turn promises to open up the possibility of devising quantum information protocols using spin as a qubit in the special-relativistic regime.
Another important and recurring problem in relativistic quantum information, or quantum information in high energy physics, is the robust formulation of the reduced spin density matrix in the relativistic regime. Though the reduced density matrix for the spin degree of freedom is well defined in non-relativistic quantum mechanics, its definition and formulation run into problems when one tries to simply extend it to the relativistic regime. An apparent paradox involving the definition and formalism of the spin density matrix in the relativistic regime is given in [11]. It was shown that a model for particle detection, wherein a linear application of the Wigner rotations was applied to the state of a massive relativistic particle in a superposition of two counter-propagating momentum states, leads to a paradox. The paradoxical behavior is that the probability of finding the relativistic quantum particle at different positions depends on the reference frame, which is an unwanted feature in the theory. A solution to the paradox was given there. According to the proposed solution, the authors argue that we cannot in general linearly apply the Wigner rotations to a quantum state without considering the appropriate physical interpretation.
Again, in a similar vein, there is another problem of a related nature. An open problem in the field of relativistic quantum information is whether entanglement and the quantitative degree of violation of Bell's inequalities for massive relativistic particles depend on the frame of reference. At the heart of this question lies the fact that the spin degree of freedom gets entangled with the momentum degree of freedom in the relativistic regime. In a more advanced work [13], the authors show that the Bell's inequalities for a pair of quantum particles can be maximally violated in a special-relativistic regime, even without any post-selection of the momentum of the particles, via the use of the novel methodology of quantum reference frames. The authors claim that the use of quantum reference frames allows them to transform the problem to the rest frame of a particle, whose state can be in a superposition of relativistic momenta from the viewpoint of the laboratory frame of reference. In this work, the authors address several problems of defining the spin density matrix in the relativistic regime and show that when the relative motion of two particles is non-collinear, the appropriate measurements for violation of Bell's inequalities in the laboratory frame involve the "coherent Wigner rotations". The authors also show that the degree of violation of Bell's inequalities is independent of the choice of the newly introduced quantum reference frames, which is a desired feature in a theory.
After the full description of the existing line of work on spin density matrices in relativistic reference frames, we turn our attention to the use of tools of quantum field theory in implementing quantum information protocols in the high energy regime [18, 19, 20]. In this respect, we review a protocol of quantum secret sharing in the relativistic regime. Here, the authors develop a quantum secret sharing protocol that relaxes the usual assumptions and considers the effects due to accelerating motion.
Another developing area of research comprises the various extensions and further developments of quantum information correlation measures via quantum field theory and conformal field theory [21, 22]. One such quantity is the entanglement of purification, which defines a total correlation measure for a quantum state in an operational way [23]. The entanglement wedge cross-section has been developed to account for the counterpart of the entanglement of purification in quantum field theoretic and conformal field theoretic terms. It has been termed the holographic entanglement of purification by Tadashi Takayanagi and Koji Umemoto [23, 24, 25]. They suggest that it is a holographic counterpart of the entanglement of purification, which measures a bipartite correlation in a given mixed quantum state on an operational basis. We point out in the coming sections that a similar operational footing in quantum field theoretic terms will be a promising area of research in the future.
## 2 On the questions of spin and spin quantum correlations in relativistic quantum mechanics and relativistic quantum information
In this section, we briefly review a few key concepts needed to understand the basic analysis of spin in quantum mechanics and the problems associated with its formulation in the relativistic regime. These concepts include that of reference frames as defined in the theory of special relativity. Any physical system is described using a set of coordinates that completely specifies its reference frame. From the principles of special relativity, we know that all physical laws hold the same way in all inertial reference frames; the laws of physics transform covariantly between different reference frames. However, there is typically a reference frame which is the most convenient to use, namely the one in which the system is at rest. This reference frame is called the rest frame of the physical system. These classically defined reference frames are very well understood in the absence of the intricacies of certain quantum mechanical phenomena. However, when we start to analyze some quantum properties in detail using the traditionally defined reference frames, we are faced with problems. In this respect, it was shown in [12] that when the external degrees of freedom of the physical system, for example its momentum, are in a quantum superposition with respect to the laboratory frame of reference, no classical reference frame transformation, as prescribed by the special theory of relativity, can map the description of physics from the laboratory to the rest frame. This area of physics is not yet understood well enough. Subsequently, the concept of the quantum reference frame was introduced and leveraged to give an operational footing to the concept of spin in relativistic quantum mechanics [12]. It was claimed in [12] that such a formulation is able to solve some of the paradoxical features related to the transformation of spin in relativistic quantum mechanics. There are several claims along this direction by various authors. As a result, proper experimental verifications are needed to settle the correct theory of spin in relativistic quantum mechanics and relativistic quantum information theory. These concepts are explained in the later paragraphs.
One of the most important concepts in physics is that of spin in non-relativistic quantum mechanics, defined operationally via the Stern-Gerlach experiment. The spin of a Dirac spin-half particle in a two-dimensional Hilbert space is described by the \(2\times 2\) Pauli matrices \(\sigma_{i}\) (\(i=1,2,3\)). The Pauli matrices, together with the \(2\times 2\) identity matrix, generate an irreducible representation of the \(SU(2)\) group. The spin operator of a non-relativistic spin-half particle in quantum mechanics is very well understood, and there is a clear correspondence between quantum-mechanical operators and classical variables in the domain of non-relativistic quantum mechanics. This correspondence exists for all operators, such as position and momentum. In contrast, the connection between quantum-mechanical operators and classical variables in relativistic quantum mechanics is much more subtle and complex, and is not very clear in some cases to this day.
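As a quick sanity check (ours, not taken from the reviewed papers), the following short Python sketch numerically verifies the \(SU(2)\) commutation relations \([\sigma_{i},\sigma_{j}]=2i\epsilon_{ijk}\sigma_{k}\) underlying the non-relativistic spin algebra referred to above.

```python
import numpy as np

# Numerical check of the su(2) commutation relations of the Pauli matrices.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
paulis = [sx, sy, sz]

eps = np.zeros((3, 3, 3))  # Levi-Civita symbol
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[j, i, k] = 1.0, -1.0

for i in range(3):
    for j in range(3):
        comm = paulis[i] @ paulis[j] - paulis[j] @ paulis[i]
        expected = 2j * sum(eps[i, j, k] * paulis[k] for k in range(3))
        assert np.allclose(comm, expected)
print("[sigma_i, sigma_j] = 2i eps_ijk sigma_k verified for all i, j")
```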
In this respect, we review the concept of spin in relativistic quantum mechanics from the different perspectives provided by various authors to date. They involve the analysis of an apparent paradox caused by linear Wigner rotations, and the quantum reference frames. These are presented below.
### Apparent paradox of Wigner rotations for relativistic effects of the spin of a quantum particle
The intricacies of the conceptual and analytical foundations of the spin of a relativistic quantum particle were presented in [11]. In their analysis, the authors presented an apparent paradox involving the definition and formalism of the spin density matrix in the super-relativistic regime, i.e., in a boosted inertial frame. It was shown there that the method of particle detection presented in [11], together with the linear application of Wigner rotations (which correspond to momentum-dependent changes of the particle spin, owing to the fact that the spin and momentum degrees of freedom get entangled under Lorentz transformations in the super-relativistic scenario), applied to the state of a massive relativistic quantum particle in a superposition of two different momentum states, leads to a paradox. The paradoxical feature presented in [11] is that, with this formalism, the probability of finding the relativistic quantum particle at different positions depends on the reference frame, which should not be the case. As a simple solution to the paradox, the authors suggested that one cannot in general linearly apply the Wigner rotations to a quantum state without considering its appropriate physical interpretation.
We sketch the main steps of their analysis here. The initial state taken in this case is of the following form
\[|\Psi\rangle=\frac{1}{\sqrt{2}}\big[|p\hat{y},Z\rangle-|-p\hat{y},Z\rangle\big], \tag{1}\]
in the frame \(S_{0}\). Here, \(|p,\pm Z\rangle\) represents a state for the particle with four-momentum \((p^{0},\vec{p})\) and spin state pointing in the \(\pm z\) direction, i.e., the eigenvector of the Pauli matrix \(\sigma_{z}\) with eigenvalue \(\pm 1\), with regard to the reference frame \(S_{0}\). Here, the authors have used Wigner's definition for spin and have taken \(c=1\) without any loss of generality. Therefore, from the perspective of the rest frame, the quantum particle is in a superposition of momenta of opposite directions. Using the appropriate algebraic steps, the authors found that the probability density of finding the relativistic quantum particle around position \(y\) obeys the following expression
\[p_{0}(y)\propto\sin^{2}(\frac{py}{\hbar}) \tag{2}\]
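Under our reading of the state (1), Eq. (2) follows in one line: projecting onto position space along \(y\), the two momentum components interfere as
\[\psi(y)\propto e^{ipy/\hbar}-e^{-ipy/\hbar}=2i\sin(py/\hbar),\qquad p_{0}(y)=|\psi(y)|^{2}\propto\sin^{2}(py/\hbar).\]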
If one then makes a change of reference frame to a frame \(S_{1}\) that moves with velocity \(\beta\hat{z}\) in relation to \(S_{0}\), each momentum component of the state \(|\Psi\rangle\) undergoes a different spin transformation. The spin and momentum degrees of freedom get entangled in a non-trivial way when the linear Wigner rotations are applied; see [11] for details. In the reference frame \(S_{1}\), the momentum state of the particle changes, though its \(y\) component remains the same. The authors then analyze the \(y\) dependence of the particle wavefunction, omitting the \(x\) and \(z\) directions for simplicity without any loss of generality. With some reasonable approximations, the probability expression found with respect to the reference frame \(S_{1}\) is given by the following
\[p_{1}(y)\propto\cos^{2}(\frac{\phi}{2})\sin^{2}(\frac{py}{\hbar})+\sin^{2}( \frac{\phi}{2})\cos^{2}(\frac{py}{\hbar}), \tag{3}\]
where the angle \(\phi\) is related to the boost parameter of the reference frame \(S_{1}\)[11]. This new expression for the probability therefore points to a paradox that has crept into the calculation done in the traditional way. The authors in [11] argue that this is a paradox, since the probability of finding the particle around some position should not depend on the reference frame. This paradox therefore points towards a deficiency in the current state of the art of the theory of spin in quantum mechanics. Thereafter, the authors in [11] test this observation in a different way via quantum measurements using detectors. They consider measurements of the particle position using a detector that, by construction, responds only to the charge or the mass of the particle, but in no case depends on its spin. Using this formalism, and again making a small number of reasonable approximations relevant to an experimental setup, they again show the discrepancy between the probability expressions calculated with respect to different reference frames.
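To make the frame dependence concrete, a minimal Python sketch (ours; the numerical values of \(p/\hbar\) and the Wigner angle \(\phi\) below are hypothetical) evaluates Eqs. (2) and (3) on a grid and shows where the two frames disagree.

```python
import numpy as np

# Illustrative comparison of Eqs. (2) and (3), up to normalization.
k = 1.0          # p / hbar, arbitrary units (hypothetical value)
phi = 0.6        # Wigner rotation angle associated with the boost to S_1

y = np.linspace(0.0, 3.0, 7)
p0 = np.sin(k * y) ** 2
p1 = (np.cos(phi / 2) ** 2 * np.sin(k * y) ** 2
      + np.sin(phi / 2) ** 2 * np.cos(k * y) ** 2)

# p1 - p0 = sin^2(phi/2) * cos(2ky): the frames agree only where cos(2ky) = 0.
for yi, a, b in zip(y, p0, p1):
    print(f"y = {yi:4.2f}   p0 = {a:.3f}   p1 = {b:.3f}")
```

The difference \(p_{1}(y)-p_{0}(y)=\sin^{2}(\phi/2)\cos(2py/\hbar)\) vanishes only at isolated points, which is exactly the paradoxical frame dependence discussed above.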
To summarize, they have shown that the application of momentum-dependent linear Wigner rotations to the quantum state of a massive relativistic particle in a superposition of momentum states, together with a model for particle detection, leads to a paradox, since the probability of finding the particle at different positions would depend on the reference frame in that formalism. Considering the physical implementation of the quantum state, they have argued that the Wigner rotation depends on the preparation method, such that, with a change of the reference frame, the spin transformation of a quantum state of the relativistic quantum particle in a superposition of different momenta is not exactly equivalent to the linear application of the momentum-dependent Wigner rotation to each momentum component of the state. This, they say, solves the apparent paradox. Their work, together with a few previous works on the subject, shows that relativistic quantum transformations cannot in general be computed only by following a mathematical procedure, as in the traditional literature of relativistic quantum mechanics. The authors argue that the physical meaning of the transformations must always take precedence over their straightforward application in a given physical scenario.
Though they have proposed the above solution to the apparent paradox, they also stress that it may not be the only viable one. It may be possible that, by modeling the particle detection by some more complicated scheme, the paradox could be resolved while keeping the linearity property of the Wigner rotations. Consequently, different solutions to this problem were proposed by different authors; one of the main contenders, going by the name of quantum reference frames [12], is discussed in the next section. However, it is important to note that settling on a proper solution to this apparent paradox requires experimental verification under different conditions and approximations.
### Relativistic Stern-Gerlach Experiment and Quantum Reference Frames
To provide a consistent description of the relativistic effects on the spin of a quantum particle, more theories were proposed, one of which is based on the relatively recent proposals of quantum reference frames. Along this line, a relativistic treatment of the Stern-Gerlach experiment was proposed and termed the relativistic Stern-Gerlach experiment [12]. The theory of quantum reference frames was invoked there to give an operational interpretation to spin in relativistic quantum mechanics. It was noted in [12] that when the particle has relativistic velocities, the spin degree of freedom transforms in a momentum-dependent way, as observed by previous authors. Then, if a standard Stern-Gerlach measurement is performed on a particle in a pure quantum state moving in a superposition of relativistic velocities, the operational identification of the spin fails, because it was shown in [12] that no orientation of the Stern-Gerlach apparatus returns an outcome with unit probability. The question that then arises is whether it is possible to find 'covariant measurements' of the spin, and possibly momentum, which predict invariant probabilities in different Lorentzian reference frames for a quantum relativistic particle moving in a superposition of velocities [12]. This is therefore an alternative to the solution proposed in [11], described in the previous section. If such measurements can be constructed, then it would be possible to map the description of spin in the rest frame of the particle to the frame of the laboratory in an unambiguous way. This would enable one to derive the corresponding observables to be measured in the laboratory frame to verify the correct theory of the spin of a relativistic quantum particle.
The search for such covariant measurements that preserve probability values in different reference frames is motivated by potential applications where the spin degree of freedom is used as a qubit to encode and transmit quantum information in the relativistic regime. Earlier protocols of this kind are no longer valid in a relativistic context. This severely constrains the otherwise wide range of applicability of techniques involving spin as a quantum information carrier in the relativistic regime. It is then important to explore possible methods which can overcome this limitation. In the context of relativistic quantum information, this question has been extensively discussed in relation to Wigner rotations and has been related to the problem of identifying a covariant spin operator [11, 12, 13, 14, 15]. A variety of relativistic spin operators have been proposed to date, among them the Frenkel, the Pauli-Lubanski, the Pryce, the Foldy-Wouthuysen, the Czachor, the Fleming, the Chakrabarti, and the Fradkin-Good spin operators.
To remedy the above problems, the authors in [12] introduce the concept of 'superposition of Lorentz boosts', which allows them to make the relativistic quantum particle "jump" into the rest frame even if the particle is not in a momentum eigenstate but, in general, in a quantum state with a superposition of momenta. It is well known that in the rest frame the spin observables satisfy the \(SU(2)\) algebra and are operationally defined through the famous Stern-Gerlach experiment. The authors in [12] aim to make this same concept work in other inertial reference frames. In the work [12], the authors transform the set of spin observables in the rest frame to an isomorphic set of observables in the laboratory frame. The transformed observables are, as expected, in general entangled in the spin and momentum degrees of freedom. The new set again fulfills the \(SU(2)\) algebra and is operationally defined through an experiment which the authors label the 'relativistic Stern-Gerlach experiment'. In this experiment, the authors construct the interaction term and the measurement term between the spin-momentum degrees of freedom and the electromagnetic field in the laboratory frame, which gives the same probabilities as the Stern-Gerlach experiment in the rest frame, as desired and stated earlier in the paragraph. This set of observables in the laboratory frame allows the authors to partition the total Hilbert space into two subspaces corresponding to the two outcomes, which can be termed "spin up" and "spin down". Hence, with the techniques of quantum reference frames, the relativistic spin can effectively be described as a qubit in an operationally well-defined way, as claimed by the authors. Thus the quantum reference frames and the relativistic Stern-Gerlach experiment promise to be a robust candidate for the theory of the intrinsic spin of relativistic quantum particles and its transformations between different reference frames. However, the correctness of this theory is still open to experimental demonstration.
With the above background in mind, we now move on to the description of the relativistic Stern-Gerlach experiment as discussed in [12]. One considers an experiment performed in the laboratory reference frame, referred to as C. The particle is allowed to have any quantum state, and in particular to move in a superposition of momenta. This condition implies that there is a non-classical relationship between the two reference frames: the rest frame A and the laboratory frame C are not related by a standard boost transformation as in classical special relativity. The authors of [12] implement a method to generalize the boost transformation to this case of a relativistic quantum particle. As for the coordinates used in the mathematical analysis, \(x\) and \(z\) describe the external degrees of freedom of particle \(A\), and the intrinsic spin degrees of freedom are labeled \(\tilde{A}\). The state of the particle at time \(t=0\) is taken to be the following
\[|\Psi\rangle=\cos\theta|\psi^{+}\rangle+\sin\theta|\psi^{-}\rangle, \tag{4}\]
where again we have the following definitions
\[|\psi^{\pm}\rangle=|\psi_{z}\rangle_{A}|\phi_{x}^{\pm}\rangle_{A\tilde{A}}, \tag{5}\]
is a division of the total wave function into components in the \(x\) and \(z\) directions, in the rest frame and the lab frame as per the notation. It is assumed that the motion along the \(z\) direction is non-relativistic, without any loss of generality. Writing these wave functions in terms of superpositions of momentum states, we have the following expressions
\[|\psi_{z}\rangle_{A}=\int dp_{z}\psi_{z}|p_{z}\rangle_{A}, \tag{6}\]
with \(\psi_{z}\) denoting a Gaussian wave function in the momentum variable \(p_{z}\), centered around \(p_{z}=0\) with standard deviation \(s_{z}\). The other component is denoted as follows
\[|\phi_{x}^{\pm}\rangle_{A\tilde{A}}=\int d\mu(p_{x})\phi_{x}|p_{x},\Sigma_{p_{ x}}^{\pm}\rangle_{A\tilde{A}}, \tag{7}\]
where \(\phi_{x}^{\pm}\) is a general wavepacket expression and \(|\Sigma_{p_{x}}^{\pm}\rangle_{A\tilde{A}}\) are the eigenvectors, with eigenvalues \(\pm 1\), of the operator obtained via the Lorentz boost and the Pauli-Lubanski operator as defined in [12]. In the laboratory frame, it is possible to define the observables corresponding to the spin operators in the rest frame by transforming the spin, as defined in the rest frame, with a quantum reference frame (QRF) transformation; the resulting operators are called the manifestly covariant Pauli-Lubanski spin operators. After this, the authors engineer the following interaction Hamiltonian in the laboratory frame
\[H_{int}=\mu B_{z}\xi, \tag{8}\]
where \(B_{z}=B_{z}^{0}-\alpha z\), and \(\xi\) is the operator built from the components of the manifestly covariant Pauli-Lubanski spin operator, modified with parameters dependent on the boost parameters, as defined in [12]. Let us now see how the operator \(\xi\) comes into the picture. The authors in [12] note that in the laboratory frame C, when the particle A is in a superposition of momentum states, no spin measurement in a standard Stern-Gerlach experiment gives a result with probability one, for the following two reasons: the spin and momentum are entangled, and the relation between the laboratory and the rest frame is not a classical special-relativistic reference frame transformation. In order to devise measurements that give probability values consistent with those in the rest frame in all reference frames, the authors in [12] note that in the laboratory frame it is possible to define the observables corresponding to the spin operators in the rest frame by transforming the spin, as defined in the rest frame, with a quantum reference frame transformation; the resulting expression is the operator \(\xi\), the details of which can be found in [12].
Now let us look at the dynamics of the relativistic quantum particle generated by the Hamiltonian described above. It is an appropriate interaction Hamiltonian containing a magnetic field in the \(z\) direction in the laboratory frame. The state is evolved under the action of this Hamiltonian, whose form is written down in the interaction picture as a function of time. It was then shown in [12] that, under the effect of the interaction with the magnetic field, the Gaussian wavepacket along \(z\) gets split into two wavepackets moving in opposite directions according to the state of the spin. After this, appropriate projection operators are applied to the wavepacket, and the probabilities of obtaining the "spin up" or "spin down" outcome, denoted \(p^{\pm}\), are obtained. It was shown via this calculation in [12] that, for times at which the two wavepackets become distinguishable, the probabilities for obtaining up and down spins are \(\cos^{2}\theta\) and \(\sin^{2}\theta\), when irrelevant terms are neglected in the appropriate limit and subsequent approximation. The authors claim that in this way one can resolve the ambiguity in finding the correct expressions for the rest-frame probabilities in the relativistic Stern-Gerlach experiment.
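A minimal one-dimensional sketch (ours, assuming idealized Gaussian packets; it is not the quantum-reference-frame calculation of [12]) illustrates the readout step: once the two spin-correlated packets separate, the outcome probabilities reduce to the branch norms \(\cos^{2}\theta\) and \(\sin^{2}\theta\).

```python
import numpy as np

# After the spin-dependent interaction, the z-wavepacket splits into two
# Gaussians moving oppositely, each correlated with one spin outcome.
theta = 0.7                       # state parameter in cos(theta)|+> + sin(theta)|->
z = np.linspace(-20.0, 20.0, 4001)
dz = z[1] - z[0]
s_z = 1.0                         # wavepacket width (hypothetical)
d = 8.0                           # half-separation once packets are distinguishable

g_up = np.exp(-(z - d) ** 2 / (4 * s_z ** 2))   # branch tied to "spin up"
g_dn = np.exp(-(z + d) ** 2 / (4 * s_z ** 2))   # branch tied to "spin down"
g_up /= np.sqrt(np.sum(g_up ** 2) * dz)          # normalize each packet
g_dn /= np.sqrt(np.sum(g_dn ** 2) * dz)

# Spin states are orthogonal, so outcome probabilities are the branch norms.
p_up = np.sum((np.cos(theta) * g_up) ** 2) * dz
p_dn = np.sum((np.sin(theta) * g_dn) ** 2) * dz
print(p_up, np.cos(theta) ** 2)   # matches cos^2(theta) up to discretization
print(p_dn, np.sin(theta) ** 2)   # matches sin^2(theta) up to discretization
```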
In this work, the authors claim to have provided a correct operational description of the spin of a special-relativistic quantum particle, which has been elusive for a while. Such an operational description was initially difficult to obtain with standard traditional treatments due to the combined effect of special relativity and quantum mechanics, which makes the spin and momentum entangled and makes it impossible to jump to the rest frame with traditional tools. To remedy this problem, the authors in [12] introduced the concept and mathematical characterization of the 'superposition of Lorentz boosts' transformation to the rest frame of a quantum particle moving in a superposition of relativistic velocities from the point of view of the laboratory reference frame. As a result of their analysis based on quantum reference frames, the probabilities obtained in the relativistic Stern-Gerlach experiment are shown to be the same in the rest frame and in the laboratory frame, which was a challenging task to accomplish before. This approach is relatively new with respect to earlier approaches and proposed theoretical remedies. However, it should be emphasized that the theoretical treatment offered in this work is yet to undergo several experimental checks in different limits, experimental conditions and relevant approximations in order to be established as a correct theory for relativistic effects on the spin of a quantum particle.
### Other effects related to relativistic treatment of spin of a quantum particle
There are several other effects associated with the correct description of spin in relativistic quantum mechanics. An open question in the field of relativistic quantum information is the invariance of measures of entanglement and/or of the quantitative degree of violation of Bell's inequalities for massive relativistic particles in different frames of reference. Such questions can also be extended to other quantum information theoretic correlation measures. As before, at the core of this dilemma is the fact that spin gets entangled with the momentum degree of freedom at relativistic velocities. In [13], the authors claim to show that the Bell's inequalities for a pair of particles can be maximally violated in a special-relativistic regime, even in the case of no post-selection of the momentum of the particles, again via the use of the concept of quantum reference frames. They specifically show that, when the relative motion of two particles is non-collinear, the optimal measurements for violation of Bell's inequalities in the laboratory frame involve "coherent Wigner rotations" [13]. Thus, they also touch upon the debated concept of the appropriate application of Wigner rotations in this physical setup. In this formalism, the authors also show that the degree of violation of Bell's inequalities is independent of the choice of the quantum reference frame, which is a desired feature. As a result, this work attempts to settle some important open questions involving the fundamental concepts of spin and relativity. However, for it to be established as a correct theory or otherwise, several experimental checks need to be performed in a consistent and reproducible way. Experimental proposals to test these theories in the laboratory under different conditions are therefore the next important steps in the full development of this area of research.
### Experimental efforts to test relativistic theory of spin of quantum particle
Experimentally, there have recently been many efforts to measure the quantum spin correlations of elementary particles. One of these proposed experiments involves the study of quantum spin correlations of relativistic electron pairs for testing the non-locality of relativistic quantum mechanics. Finding the right expression and formalism for spin and spin quantum correlations in relativistic quantum mechanics is an important direction of research, both from the perspective of space communications and for testing the fundamentals of quantum physics and quantum gravity. Here we report on two attempted experiments to measure spin quantum correlations in relativistic scenarios.
An experiment investigating the quantum spin correlations of relativistic electrons is reported in [16]. The project presented there attempts the first measurement of the quantum spin correlation function for a pair of massive relativistic particles. This measurement is claimed to be the first attempt to verify the predictions of relativistic quantum mechanics in the domain of spin correlations. This is an interesting direction of research, since it has the capability of settling competing theories of spin quantum correlations experimentally, or even of pointing out unknown deficiencies in the foundations of relativistic quantum mechanics. As per their description of the proposed experiment, the measurement is carried out on a pair of electrons in the final state of Møller scattering. The measurement attempts to determine correlations of spin projections onto chosen directions for the final-state pair after the complete evolution under the chosen dynamics.
The detector consists of two Mott polarimeters, in which the spins of both Møller electrons are measured simultaneously. However, the results have not yet been linked to the theoretical predictions of the quantum reference frames or other competing theories of spin in relativistic quantum mechanics. This remains a promising future direction of research.
Another direction of research has been the study of quantum spin correlations of relativistic electron pairs for the purpose of testing the non-locality of relativistic quantum mechanics. The theory developed along this direction has been discussed in the previous sections. An experiment directed at testing the predictions offered by such a theory, for example that offered by the quantum reference frames, will therefore be extremely helpful in settling the correct theory of relativistic effects on quantum spin among the many competing theories. This will help advance the understanding of fundamental theory in nature. This project is a planned Polish-German project (QUEST) that will aim at precisely studying the relativistic quantum spin correlations of the Einstein-Podolsky-Rosen-Bohm type, through appropriate measurements and the corresponding probabilities for relativistic electron pairs. This experiment will also use the Møller scattering method and the Mott polarimetry technique.
## 3 Quantum Information With Quantum Field Theory: Relativistic Quantum Secret Sharing
In the previous section, we discussed the fundamental question of the concept of spin in relativistic quantum mechanics. We discussed why the concept of spin is an interesting topic, and how understanding its exact nature, together with experimental verification, will help in the development of intricate quantum technologies that use the spin as a qubit in quantum information processing tasks. However, besides understanding the concept of spin in relativistic quantum mechanics, one can also implement quantum information processing tasks in relativistic quantum information using tools from quantum field theory. One of these approaches has been to use cavity dynamics for the case of non-inertial motion in general. In this section, we report on a protocol of quantum secret sharing in the relativistic setting, focusing on the theory of a specific quantum information task called relativistic quantum secret sharing using tools from quantum field theory.
In (2,3)-threshold quantum secret sharing, the 'dealer', which is one of the parties taking part in the protocol, encodes the quantum secret in three quantum shares in a localized manner. The authors in [20] use the framework of accelerating cavities for this purpose, as it is a suitable choice for studying the effect of non-uniform motion on localized quantum fields. Accelerating cavities are a popular setting for studying relativistic effects on quantum information protocols. The authors of [20], however, take a somewhat different route within this framework: they formulate the evolution of the quantum field inside an accelerating cavity as a bosonic Gaussian quantum channel, which they then use to include the effects of non-uniform motion of the quantum shares.
The authors in [20] focus on a relativistic variant of a (2,3)-threshold quantum secret sharing protocol. In the relativistic protocol presented in [20], similar to the non-relativistic case, a dealer encodes the quantum secret into several quantum shares and distributes them to all the players. In this set up, the players are all located at different regions in the Minkowski spacetime and the dealer and the players are all stationary. Under such circumstances, during the dealer's distribution, the quantum shares experience non-uniform motion (non-inertial), as they are transmitted to spacetime points in the future light cone of the dealer. Then, a subset of players within the access structure collaborate to retrieve the quantum secret by sharing their individual shares. However, to reach the same spacetime point, the shares go
through phases of accelerating and decelerating motion while being transmitted, rendering the dynamics non-inertial in general, in contrast to the special-relativistic regime described in the previous sections. The authors investigate how the non-inertial motion of the shares affects the fidelity of the quantum secret sharing protocol, and claim to have solved continuous-variable quantum secret sharing in which the quantum shares move non-uniformly in Minkowski spacetime. The tools used in this approach mainly comprise those developed in continuous-variable quantum information, such as the formalism of Gaussian quantum channels and the dynamics of a quantum field inside a cavity [20]. The authors do not use spin as the qubit in this relativistic scenario, and yet are able to implement the relativistic quantum secret sharing protocol via the formalism of a quantum field inside a cavity which may itself be in non-inertial motion [20]. The authors in [12] specify that they use the framework of Gaussian quantum information to write the evolution of the quantum field inside the cavity, which is central to their implementation of the quantum secret sharing protocol, as a Gaussian quantum channel in the non-inertial regime. They use this channel to study the effect of the non-inertial motion of the shares on the fidelity of the quantum secret sharing. As a result, this work shows that various methods can be utilized to study the effects of non-inertial motion without directly referring to the transformation of spin in the relativistic regime.
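To make the channel picture above concrete: a single-mode bosonic Gaussian channel acts on the covariance matrix \(\sigma\) of a Gaussian state as \(\sigma\mapsto X\sigma X^{T}+Y\). The sketch below is illustrative only, not the specific channel derived in [12, 20]; the pure-loss parameters are assumptions chosen for the example.

```python
import numpy as np

def gaussian_channel(sigma, X, Y):
    """Action of a Gaussian channel on a covariance matrix: X sigma X^T + Y."""
    return X @ sigma @ X.T + Y

# Vacuum covariance of a single bosonic mode (convention: sigma_vac = identity).
sigma_vac = np.eye(2)

# A pure-loss channel with transmissivity eta, the textbook Gaussian channel.
eta = 0.9
X = np.sqrt(eta) * np.eye(2)
Y = (1.0 - eta) * np.eye(2)

print(gaussian_channel(sigma_vac, X, Y))  # the vacuum is a fixed point: identity
```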
## 4 Entanglement of purification and entanglement wedge cross-section
In this section, we discuss relativistic quantum information from another angle: the quantification and characterization, using quantum field theory in the relativistic regime, of correlation measures used in non-relativistic quantum information theory. The quantity that is usually used is the entanglement entropy, which quantifies the entanglement of pure quantum states. Using the AdS/CFT correspondence, the entanglement entropy has a holographic counterpart given by the area of a minimal surface [25]. This characterization therefore provides a relationship between spacetime geometry and quantum entanglement.
Entanglement is a purely quantum correlation. Mixed quantum states, however, contain both classical and quantum correlations, which together constitute the total correlation of the quantum state. The usual measure of the total correlation of a mixed quantum state is the mutual information, although its justification as a measure of total correlation is sometimes questioned. There have also been attempts at separating the quantum correlation from the classical correlation within the quantum mutual information, via a measure of quantum correlation called quantum discord. Another proposed approach quantifies the total correlation in a quantum state through the transformation of the entanglement of Bell pairs under local operations and classical communication. This measure is called the entanglement of purification, and many of its properties were studied in [24]. Later, a holographic counterpart of the entanglement of purification was conjectured in terms of the entanglement wedge cross-section; this quantity is called the holographic entanglement of purification. The definition of the entanglement of purification in non-relativistic quantum mechanics is as follows: given a mixed quantum state \(\rho_{AB}\), we first purify it to \(|\Psi\rangle_{AA^{\prime}BB^{\prime}}\), and the entanglement of purification of \(\rho_{AB}\) is then
\[E_{P}(\rho_{AB})=\min_{A^{\prime}B^{\prime}}E_{f}(AA^{\prime}:BB^{\prime}) \tag{9}\]
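where \(E_{f}\) denotes the entanglement of formation and the minimization runs over all purifications. Computing \(E_{P}\) is hard precisely because of this optimization; the mutual-information baseline it is contrasted with, \(I(A:B)=S(A)+S(B)-S(AB)\), is straightforward to evaluate. The following minimal sketch (not from the source; the Werner-state parameter \(p\) is an arbitrary illustrative choice) computes \(I(A:B)\) for a two-qubit mixed state.

```python
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -Tr(rho log2 rho), computed from the eigenvalues."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]          # discard numerical zeros
    return float(-np.sum(evals * np.log2(evals)))

def partial_trace(rho, keep):
    """Partial trace of a two-qubit density matrix; keep = 0 (A) or 1 (B)."""
    r = rho.reshape(2, 2, 2, 2)           # indices (a, b, a', b')
    return np.trace(r, axis1=1, axis2=3) if keep == 0 else np.trace(r, axis1=0, axis2=2)

# A Werner-like mixed state: p * Bell projector + (1 - p) * maximally mixed.
p = 0.5
bell = np.zeros((4, 1)); bell[0, 0] = bell[3, 0] = 1.0 / np.sqrt(2.0)
rho_ab = p * (bell @ bell.T) + (1.0 - p) * np.eye(4) / 4.0

I_ab = (von_neumann_entropy(partial_trace(rho_ab, 0))
        + von_neumann_entropy(partial_trace(rho_ab, 1))
        - von_neumann_entropy(rho_ab))
print("mutual information I(A:B) =", I_ab)
```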
In the work on holographic entanglement of purification, the authors considered the quantity \(E_{W}\), defined as the minimal cross-section of the entanglement wedge in AdS/CFT. They observed that its properties coincide with those of the entanglement of purification, which measures the total correlation between two subsystems of a mixed quantum state comprising both classical and quantum correlations. Based on their observations and calculations on the entanglement wedge cross-section, the authors conjectured that the entanglement wedge cross-section coincides with the entanglement of purification in holographic conformal field theories, and they gave a heuristic argument for this identification. Whether this conjecture is true remains an open question, and several works are ongoing to check it in mathematical terms.
Another very important and promising open direction along this line of research is finding an operational interpretation of the entanglement wedge cross-section, in a similar way to how the entanglement of purification was motivated and operationally grounded in non-relativistic quantum mechanics. The entanglement of purification was motivated operationally in terms of converting the purely quantum correlation of maximally entangled Bell pairs into the total correlation of an arbitrary quantum state using local operations and classical communication (LOCC). Therefore, another route to settling the conjecture could be to find a similar operational footing for the entanglement wedge cross-section.
## 5 Conclusions
In this section, we summarize what we have discussed in the previous sections, together with the open questions. Developing a fundamental understanding of nature and natural phenomena through robust mathematical constructs, followed by reproducible experimental verification, has been one of the strongest pillars of physics as we know it today. Many technological applications have stemmed from the robust structures that physical theories give to natural phenomena. We can therefore expect that further such developments will open up immense possibilities for future technologies, with high potential for solving persistent problems in people's lives. From the points of view of understanding nature, technological development, and even resource allocation, the development of fundamental theories of nature is thus an area of research that holds immense potential.
In view of the above motivation, we have covered here a few aspects of physics that are important for the development of quantum information theory in the relativistic regime, i.e., in high energy physics, where it is paramount to account for relativistic effects via relativistic quantum mechanics and quantum field theory. It is well known that spin is an ill-understood concept in relativistic quantum physics. Prior approaches to spin in quantum physics have been discussed, and some recent promising approaches have been presented. It has also been discussed how a robust formulation of the concept of spin is crucial for the development of quantum information in high energy physics. We also reviewed the problem of spin quantum correlations and presented the resources in the scientific literature that have been trying to verify these predictions experimentally in recent years. This issue is still unresolved, and a consistent resolution has the potential to open doors for applications of relativistic quantum information and their extension to space-based quantum technologies.
With the above open problem in mind, we also note that the tools of quantum field theory can be leveraged to develop quantum information protocols in the high energy arena, especially in the regime of non-inertial motion. Such a protocol, relativistic quantum secret sharing, has been discussed in detail. Similar techniques can be leveraged to develop such protocols further in high energy physics.
In the last section of this article, we covered the development of a definition of a total correlation measure in the language of quantum field theory and conformal field theory.
The concept of entanglement of purification, which is an active and open area of research, was discussed there. Another open area of research with high potential for future development was also pointed out: the operational characterization of quantum correlation measures defined in terms of the tools of quantum field theory and conformal field theory. This line of enquiry is based on the question of experimental verification in tabletop setups, and it can be an active area of future research that is challenging yet highly promising, with the potential for rich dividends.
## Acknowledgments
S.B. acknowledges funding from the Korea Institute of Science and Technology, and support from the National Research Foundation of Korea (2020M3E4A1079939, 2022M3K4A1094774) and the KIST institutional program (2E31531).
|
2301.09100 | Atypical plug formation in internal elastoviscoplastic fluid flows over
a non-smooth topology | An experimental and computational investigation of the internal flow of
elastoviscoplastic fluids over non-smooth topologies is presented in two
complementary studies. In the first study, we visualize the creeping flow of a
Carbopol gel over a cavity embedded in a thin slot using Optical Coherence
Tomography (OCT) and confocal microscopy. We measure the size and shape of the
plug as a function of Bingham and Weissenberg numbers. An asymmetry in the plug
shape is observed which is also evident in our second study -- numerical
simulations using adaptive finite element method based upon an augmented
Lagrangian scheme. We quantify the asymmetry and present the results as a
function of the product of the Weissenberg and Bingham numbers which collapse
onto a single curve for each of these geometries. These findings underscore the
theoretical underpinnings of the synergy between elasticity and plasticity of
these complex fluids. | Miguel E. Villalba, Masoud Daneshi, Emad Chaparian, D. Mark Martinez | 2023-01-22T10:49:27Z | http://arxiv.org/abs/2301.09100v3 | # Atypical plug formation in internal elastoviscoplastic fluid flows over a non-smooth topology
###### Abstract
An experimental and computational investigation of the internal flow of elastoviscoplastic fluids over non-smooth topologies is presented in two complementary studies. In the first study, we visualize the creeping flow of a Carbopol gel over a cavity embedded in a thin slot using Optical Coherence Tomography (OCT) and confocal microscopy. We measure the size and shape of the plug as a function of the Bingham and Weissenberg numbers. An asymmetry in the plug shape is observed, which is also evident in our second study--numerical simulations using an adaptive finite element method based upon an augmented Lagrangian scheme. We quantify the asymmetry and present the results as a function of the product of the Weissenberg and Bingham numbers, which collapse onto a single curve for each of these geometries. These findings underscore the theoretical underpinnings of the synergy between elasticity and plasticity of these complex fluids.
keywords: Complex fluid, Yield Stress, Elastoviscoplastic fluid, Particle image velocimetry
Footnote †: journal: Journal of Non-Newtonian Fluid Mechanics
## 1 Introduction
In this work, we examine the formation of a plug created by the flow of a yield stress fluid in a small cavity, a geometry found in a number of natural and industrial settings [1]. The primary motivation for this work stems from an industrial application, namely pressure screens, which are commonly found in the pulp and paper industry. Here, fouling of the screen apertures reduces both the capacity and efficiency of the screening process. Although jamming events for dry granular materials or colloidal suspensions are well investigated, we find that for non-Brownian fibre or rod-like suspensions these events are relatively unexplored. The essential difference between these two bodies of literature is that suspension flows can jam under dilute conditions. Key to unravelling this is understanding the interaction between the rheology of the suspension and the resulting frictional forces leading to a local jam.
We note that suspensions with high enough solid loadings or with sufficiently strong inter-particle interactions can adopt a yield stress due to the network structures formed by either mechanical or chemical interactions [2; 3; 4]. Yield-stress fluids are characterized by a transition from solid-like to fluid-like behaviour above a threshold, \(\tau_{y}\)--the yield stress. This implies a co-existence of a liquid and a solid phase, which may cause the material to plug small apertures in a pressure-driven flow [5; 6]. Besides the yield stress, these materials exhibit complex properties such as elasticity and thixotropy [7; 8; 9], which further complicate their flow features and our understanding of the solid-liquid transition. In this study, we focus on characterising the stagnant regions forming in the apertures and their link to the rheology of the material. To do so, we simplify the cross-flow geometry and reduce this problem to an idealized confined flow of a yield-stress fluid over a cavity, i.e. a sudden expansion-contraction in a thin slot.
A large body of numerical work has studied the flow behaviour of yield-stress fluids in expansions and contractions by exploiting constitutive rheological models such as the Bingham and Herschel-Bulkley models [5; 10; 11; 12; 13; 14; 15]. These studies predicted the plug regions to be symmetrical in the absence of inertia and showed how the growth and shrinkage of these regions depend on the yield stress of the material. However, recent experimental investigations demonstrated anomalous complexities, including the asymmetry of the yielding surfaces in the flows developed over the cavity [16; 17; 18; 19; 20]. In these cases, stagnant zones form in the corners of the expansions and contractions. These stagnant regions develop into an internal asymmetric plug over the cavity and are separated from the yielded regions by a discernible interface [19; 20]. These complexities arise from the complex rheology of practical yield-stress fluids, in terms of their elasticity/viscoelasticity or thixotropy, which is not captured by conventional (i.e. "simple") viscoplastic models. In this context, the flow over a cavity, the problem of interest in this study, can prove to be a fertile playground for better understanding the effect of the complex, non-ideal rheology of practical yield-stress fluids on the hydrodynamic features of the flow.
Asymmetry is not limited to the flow of yield-stress fluids over cavities and has also been reported previously in various configurations. The flows of Carbopol gels around obstacles have been a subject of experimental investigations for both unconfined [21; 22; 23; 24] and confined settings [6]. Interestingly, a fore-aft asymmetry was observed between the upstream plug and the downstream plug behind the obstacle [21] regardless of surface roughness [22] and the shape or number of obstacles [6; 25]. Daneshi et al. [6] showed that by increasing the yield stress or by slowing the flow, the extent of asymmetry intensifies. In general, the key finding of all these studies is that flow asymmetry is robust regardless of geometry or confinement even in the creeping flow regime. The common conjecture in these studies is that the complex rheology effects, such as elasticity and thixotropy, lead to the asymmetrical flow. However, characterization of this asymmetry and its link to the rheological parameters of the material remain largely unexplored.
Recent numerical works have also demonstrated asymmetry in the flow of yield-stress fluids by using novel elastoviscoplastic models [26; 27; 28; 29; 30]. The elastoviscoplastic models were first introduced by Saramito [31; 32] and were obtained by combining the Bingham/Herschel-Bulkley model [33] and the Oldroyd model [34], which account for the yielding and the elastic behaviour of the material, respectively. These elastoviscoplastic models can predict elastic creep and recoil below the yielding point as well as viscoelastic relaxation in the liquid regime. Crucially, Cheddadi et al. [26] compared the viscoplastic (Bingham model), viscoelastic (Oldroyd-B model) and elastoviscoplastic flows around a circular obstacle. They reproduced flow asymmetries similar to those observed in the experimental counterpart of their work for a practical yield-stress fluid, and demonstrated that at low Weissenberg numbers these flow asymmetries are only present in the elastoviscoplastic flows [26]. These findings challenge the common conjecture that elasticity is the only cause of asymmetry and suggest that a combination of the elasticity and plasticity of the fluid comes into play. More recently, Chaparian & Tammisola [30] quantified the asymmetry in the context of elastoviscoplastic fluid flows through wavy channels and shed more light on the characterization of the hydrodynamic complexities of these types of flows in terms of the fluid rheology. In this work, we aim to take this analysis further, more systematically, from both the experimental and numerical perspectives.
Here, we investigate the internal flow of yield-stress fluids around a cavity/aperture, in a regime where plug formation is possible. Our objective is to improve the understanding of the plug phenomenology and of the asymmetry in the flows and yield surfaces. In particular, we attempt to formulate the extent of the asymmetry in terms of the rheological properties of the working fluid and the flow parameters. An experimental flow visualization technique and a previously developed numerical method [30] have been adapted to study the 2D flow of Carbopol gels over a cavity. The effects of different rheological parameters on the formation and shape of the unyielded surfaces are examined, and a link between the governing parameters of the problem and the extent of asymmetry is proposed. In addition, we extend our work to the flow in a more complex geometry, namely a 3D flow confined in a Hele-Shaw cell aperture with a step. The objective is to examine whether our findings in the quasi-2D flow hold valid in a 3D cross-flow filtration geometry as well.
The paper is outlined as follows. Section 2 describes the fluids used in our experiment, their rheological properties, experimental set-ups, and the methods implemented to analyze the experimental data. Section 3 presents the experimental results and the numerical simulations of the 2D flow over a cavity. Finally,
concluding remarks are made in the last section.
## 2 Experimental Design & Materials
### Material and rheology
In this work, we use Carbopol gel, which is widely known as a model yield-stress fluid due to its optical transparency and negligible thixotropic behaviour [25]. This material has been widely used in many visualization experiments and rheological studies [7; 35]. Carbopol gels of two different concentrations, \(0.060\,\left(wt/wt\%\right)\) and \(0.075\,\left(wt/wt\%\right)\), were prepared according to the procedure explained in our previous works [6; 25]. The rheological curves and parameters of the fluids were measured using a rotational rheometer (MCR501, Anton Paar) with an angular resolution of \(0.01\,mrad\) and a torque resolution of \(0.1\,nNm\). A parallel-plate geometry with a diameter of \(50\,mm\) was used for the rheological measurements, with a ramp rate of \(0.05\,Pa/s\). To remove any effect of wall slip, sandpaper with an average roughness of \(35\,\,\mu m\) was glued onto the plates.
Shear stress ramp-up and ramp-down tests were performed to measure the flow curves of the two Carbopol gels used in this work. The results are shown in Fig. 1. The rheological behaviour of the Carbopol gels during the ramp-down test was well characterised by the Herschel-Bulkley constitutive law. The ramp-up and ramp-down curves overlap above the yielding point, which is suggestive of non-thixotropic behaviour in this region. The elastic response of the material manifests itself as a finite deformation at the start of the ramp-up curve and as elastic recoil at the end of the ramp-down curve. Note that the negative shear rates obtained at the end of the ramp-down curve are absent from Fig. 1 due to the logarithmic scale. The Herschel-Bulkley fits of the yield stress, \(\tau_{y}\), consistency, \(K\), and power-law index, \(n\), are given in Table 1. Moreover, we measured the shear storage and loss moduli, \(G^{\prime}\) and \(G^{\prime\prime}\), from a small-amplitude oscillatory rheometry test at a frequency of \(\omega=1\,\,Hz\) and a strain amplitude of \(\gamma=0.01\%\). These parameters are also reported in Table 1.
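For illustration, such a fit can be reproduced with a standard nonlinear regression. The sketch below is not the authors' code; it fits the Herschel-Bulkley law \(\tau=\tau_{y}+K\dot{\gamma}^{n}\) to synthetic data generated from the \(0.060\%\) gel parameters of Table 1.

```python
import numpy as np
from scipy.optimize import curve_fit

def herschel_bulkley(gdot, tau_y, K, n):
    """Herschel-Bulkley law above the yield point: tau = tau_y + K * gdot**n."""
    return tau_y + K * gdot**n

# Synthetic ramp-down data from the 0.060% gel fit of Table 1, plus 2% noise.
gdot = np.logspace(-3, 2, 50)                       # shear rate (1/s)
tau = herschel_bulkley(gdot, 0.20, 0.39, 0.57)      # stress (Pa)
rng = np.random.default_rng(0)
tau_noisy = tau * (1.0 + 0.02 * rng.standard_normal(gdot.size))

popt, _ = curve_fit(herschel_bulkley, gdot, tau_noisy, p0=[0.1, 1.0, 0.5])
print("tau_y = %.2f Pa, K = %.2f Pa s^n, n = %.2f" % tuple(popt))
```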
In addition to the Carbopol gels, we conducted a small number of experiments with two other fluids to benchmark our observations. A polyethylene oxide (PEO) solution of \(c=0.75\,(wt/wt\%)\) was prepared as a prototypical viscoelastic fluid with no yield stress. A glycerol-water solution of \(c=31\,(wt/wt\%)\) was prepared as a prototypical viscous Newtonian fluid.
Figure 1: Flow curves for two different Carbopol solutions measured by a stress controlled ramp-up and ramp-down test with a ramp rate of \(0.05\,\,Pa/s\). The dashed lines show the Herschel-Bulkley fits.
### Experimental Setup and methodology
In this work, we study two different types of flows: a 2D flow over a cavity and a 3D flow in a thin slot over an aperture (see Fig. 2). These topologies are embedded in two channels where the far field flow is unidirectional.
Direct visualization of the flow using state-of-the-art imaging devices was exploited to characterize the velocity fields and yielding surfaces developing around the cavity and aperture [6; 25]. For each of these geometries, the details of the experimental setup including the channel geometry and its fabrication as well as the visualization techniques employed to monitor the flow are explained in the following subsections.
#### 2.2.1 The 2D flow over a cavity set-up
The cavity in the channel (see Fig. 2b) consists of a thin (\(1\:mm\) thick) aluminium plate enclosed by two clear ultra-scratch-resistant cast acrylic plates (\(6.3\:mm\) thick). The aluminium plate separates the acrylic plates, creating a flow cell of thickness \(H=1\:mm\). The channel has a width \(W=64\:mm\) and length \(L_{t}=105\:mm\). The channel has one outlet, which is open to the atmosphere at the same height as the inlet. A cavity was machined in the center of one of the acrylic plates (see Fig. 2b) by means of a computer numerical control (CNC) milling machine. The cavity, which spans the whole width of the channel, has a depth of \(H=1\:mm\) and a length of \(L=4\:mm\). The surface of the cavity was roughened using sandpaper to inhibit the possibility of wall slip. The distance from the entrance of the channel to the cavity is long enough to ensure fully developed flow upstream of the cavity.
For this experiment, the flow of Carbopol gel is supplied to the cell via a syringe pump in the range of \(Q=0.01-0.3\:ml/min\). Table 2 summarizes the experimental conditions, including the flow rates and the corresponding range of mean velocities \(U_{c}\). Before imaging, the channel was flushed with ethanol and then distilled water. To start each experiment, we filled the channel with Carbopol gel at a fixed flow rate \(Q\). Then, after giving the flow enough time to reach steady state, the flow visualization commenced.
For imaging the flow, we used white polystyrene beads (Magspheres), with a mean diameter of \(4\:\mu m\) at a concentration of \(0.005\,(wt/wt\%)\), as the tracing particles. The particles were well mixed, and the samples were subsequently sonicated to ensure homogeneity. We visualized the flow in the longitudinal centre-plane of the cell to measure the cross-slot streaklines and velocity fields developing over the cavity (see bottom panel of Fig. 2b). Since the observation plane is in the centre of the channel, we expect that the effects of the side walls are negligible and the flow can be considered to be 2D.
Flow visualization was done using Optical Coherence Tomography (OCT). The OCT is a real-time imaging device based on interferometry with a broad-bandwidth light source. The OCT focuses a collimated beam of light with a center wavelength of \(1300\:nm\) onto the specimen using a 5x objective lens. The lateral scanning of the sample is performed by moving the vertical scanning beam laterally through a two-galvo mirror system. The device captures cross-sectional images from the backscattered light coming from the tracing beads. The depth of focus of the vertical imaging is approximately \(3.5\:mm\) in a medium with the same refractive index as air, while the lateral field of view is adjusted to be \(5\:mm\). The spatial resolution of vertical imaging of the sample is around \(3.5\:\mu m\), while its lateral resolution is around \(13\:\mu m\). The images of the flow were automatically recorded to a hard drive at a rate of \(10\:Hz\).
\begin{table}
\begin{tabular}{c c c c c c c} \hline \multirow{2}{*}{Fluid} & Concentration & \(\tau_{y}\) & \(K\) & \(n\) & \(G^{\prime}\) & \(G^{\prime\prime}\) \\ & \((wt/wt\%)\) & \((Pa)\) & \((Pa\,s^{n})\) & & \((Pa)\) & \((Pa)\) \\ \hline \multirow{2}{*}{Carbopol} & 0.060 & 0.20 & 0.39 & 0.57 & 4.2 & 0.7 \\ & 0.075 & 1.47 & 1.53 & 0.46 & 16 & 2.0 \\ \hline PEO & 0.75 & 0 & 1.02 & 0.49 & - & - \\ \hline Glycerol solution & 31 & 0 & - & - & - & - \\ \hline \end{tabular}
\end{table}
Table 1: List of the fluids used in our experiments and their rheological properties. Note that the parameters were obtained from a regression using the Herschel-Bulkley model. The goodness of fit (R-squared) was found to be greater than \(0.99\).
#### 2.2.2 The 3D flow over an aperture set-up
The aperture flow experiments were carried out in a narrow cell, whose schematic is shown in Fig. 2c. The cell was constructed using a thin Teflon PTFE sheet enclosed by two clear ultra-scratch-resistant cast acrylic plates (\(6.3\:mm\) thick). The acrylic plates include grooves and o-rings for liquid sealing. The acrylic plates and spacer were installed in an aluminium frame. The channel has a width of \(w=16.38\:mm\), a length of \(L_{t}=105\:mm\) and a height of \(H=1\:mm\). The channel geometry consists of a single aperture with a square step (see bottom panel of Fig. 2c). A Teflon sheet was cut to the shape of the aperture by means of a computer-controlled waterjet cutter. The details of the aperture geometry are shown in the lower panel of Fig. 2c, where the span of the opening is \(L=6\:mm\) and the step size is \(s=3\:mm\). Similar to the cavity of Fig. 2b, the ratio of the depth (\(s\)) to the total length (\(L\)) of the aperture is \(1/4\). The channel is wide and thin enough that the flow in the upstream region is unidirectional, while the flow over the aperture is expected to be 3D.
Before running each experiment, the channel was cleaned with distilled water and ethanol. Then, the channel was filled with the working fluid via a syringe pump at a flow rate of \(Q=1\:ml/min\). Subsequently, the working fluid was pumped into the cell at a lower constant flow rate for approximately \(20\:mins\) before the flow visualization commenced. The range of flow rates in these experiments is reported in Table 2.
For imaging the flow, we used fluorescent beads (Magspheres), with a mean diameter of \(2.9\:\mu m\) seeded at a concentration of \(0.005\:(wt/wt\%)\), as the tracing particles. Flow visualization was conducted using a swept-field laser-scanning confocal microscope with a 4X objective lens. The observation plane was fixed at the central horizontal \(x-y\) plane, but its lateral position was controlled by a motorized stage. Since the
Figure 2: A schematic diagram of the experimental set-up is shown in panel (a). The flow is generated by a syringe pump and monitored in a thin slot. The thin slot includes either a cross-wise cavity or a planar aperture. The schematic diagrams of the 2D flow in a cavity and the 3D flow through an aperture are depicted in panels (b) and (c), respectively. We used optical coherence tomography and fluorescent confocal microscopy to image the flow (in the \(x-y\) plane) around the cavity and the planar flow (i.e. \(x-y\) plane) around the aperture, respectively. The corresponding fields of view for these geometries are also shown in these two panels. Note that blue arrows show the incoming feed flow, while red arrows show the direction of the outgoing flow. For the cavity, \(L=4\:mm\) while for the aperture \(L=12\:mm\).
field of view of the lens is smaller than the size of the aperture, the full flow field was created by stitching together the images of smaller areas. The spatial and temporal resolutions of the device are 1.6 \(\mu m\) and 50 \(ms\), respectively.
### Flow and plug characterization
The incoming flow upstream of the cavity/aperture was studied previously in a similar geometry in Daneshi et al. (2016). They showed that the velocity upstream of the channel is unidirectional and fully developed, such that its cross-wise velocity profile matches the analytical one obtained from the Poiseuille flow of a Herschel-Bulkley fluid. Moreover, the fluid was pre-sheared before injection into the cell and along the tubes connecting the syringe to the channel, at a higher shear rate than the nominal one in the channel. This likely removes the shear history of the fluid and minimizes any effects of possible thixotropic behaviour of the fluid.
In this work, we characterize the position of the static yield surfaces and the flow field developing around the cavity/aperture. The streaklines of the flow of Carbopol over the cavity/aperture were produced by simply averaging the series of flow images. The corresponding velocity fields were obtained using a particle image velocimetry (PIV) technique implemented in a commercial analysis package (LaVision DaVis 8.0). A MATLAB code was used to post-process the PIV data and generate the velocity fields and contours.
The profiles of the static plugs (yield surfaces) were obtained using two methods similar to those developed in Daneshi et al. (2016). In the first method, the yield surfaces were extracted directly from the streakline images: by means of a light-intensity thresholding algorithm, the stationary tracer particles (seen as bright points) were separated from the moving tracers (seen as gray streaks). In the second method, the yielding surfaces were computed from a noise-based threshold on the velocimetry measurements. The two methods show good agreement with each other (see Fig. 3). Finally, using the plug profile data and velocity measurements, we determined the size and shape of the unyielded regions.
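A schematic version of this image-processing pipeline is sketched below (an illustrative NumPy sketch, not the authors' MATLAB implementation; the threshold values are arbitrary assumptions): the time average gives the streakline image, and pixels that stay bright with almost no temporal variation are classified as the static plug.

```python
import numpy as np

def streaklines_and_plug(frames, intensity_thresh=0.8, motion_thresh=0.05):
    """frames: (T, H, W) array of normalized image intensities.

    Streakline image: the time average, in which moving tracers smear into
    gray streaks while stationary tracers remain bright points.
    Plug mask: pixels that are bright on average but show almost no
    temporal fluctuation, i.e. immobilized tracer particles."""
    streak = frames.mean(axis=0)
    temporal_std = frames.std(axis=0)
    plug_mask = (streak > intensity_thresh) & (temporal_std < motion_thresh)
    return streak, plug_mask
```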
Sample images of the flow of Carbopol gels over a cavity are shown in Fig. 3a and Fig. 3b for the 0.060% (\(wt/wt\)) and 0.075% (\(wt/wt\)) gels, respectively. The figures on the left depict the streaklines of the flow obtained from averaging the flow images. The images on the right depict the corresponding flow fields with the normalized velocity contours. As is clear from these figures, a plugged region forms inside the cavity, where the bright tracer particles become immobilized and the velocity approaches zero. Outside this region there is flow, which is identified by the movement of the tracer particles. The static yield surfaces, which separate the plugged region from the flowing region, are highlighted in these figures by a dashed line.
## 3 Results and Discussions
In the first phase of this section (i.e. subsections 3.1 & 3.2), we present the experimental and numerical results in regard to the 2D flow over the cavity (i.e., Fig. 2b). Both experimental and computational results explore the flow and characterize the position and shape of the yield surfaces. The focus is on the effect of elasticity and plasticity of the fluid in the formation of fouling regions, in particular, the asymmetrical shape of the yield surfaces. In the second phase of this section (i.e. subsection 3.3), we look into experimental results regarding the 3D flow over a more complex geometry (i.e., Fig. 2c).
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline Experiment & \begin{tabular}{c} Carbopol \\ (\(wt/wt\%\)) \\ \end{tabular} & \(Q\) \((ml/min)\) & \(U_{c}\) \((mm/s)\) & \(B\) & \(Wi\) \\ \hline \multirow{2}{*}{Cavity} & 0.060 & 0.025-0.30 & 0.0065-0.13 & 9.04-1.64 & 0.27-5.4 \\ & 0.075 & 0.010-0.30 & 0.0026-0.08 & 14.4-3.19 & 0.08-2.4 \\ \hline \multirow{2}{*}{Aperture} & 0.060 & 0.010-0.30 & 0.010-0.30 & 7.01-1.05 & 0.14-3.9 \\ & 0.075 & 0.010-0.30 & 0.010-0.30 & 7.71-1.67 & 0.10-2.9 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Experimental matrix: list of geometries, Carbopol gels, and flow parameters, including the ranges of flow rates \(Q\), average velocities upstream of the flow cell \(U_{c}\), and the governing dimensionless groups \(B\) and \(Wi\).
Note that in this study the flow is inertialess, and the governing dimensionless groups include the Bingham number, which defines the ratio of the yield stress to the characteristic viscous stress,
\[B=\frac{\tau_{y}}{K}\left(\frac{H}{U_{c}}\right)^{n}, \tag{1}\]
and the Weissenberg number, which describes the ratio of the fluid relaxation time (\(\lambda\)) to the nominal characteristic time of the flow,
\[Wi=\frac{\lambda U_{c}}{L}. \tag{2}\]
Please note that \(H\) and \(L\) are defined in Fig. 2 for both geometries. We used \(L\) as the characteristic length in the Weissenberg number, as it is a more natural choice since the elastic stresses are dominated by normal stresses which are developed mostly over the cavity/aperture [36].
Under the assumption that Carbopol gels behave like a Kelvin-Voigt viscoelastic solid below the yielding point, the relaxation time can be estimated as \(\lambda=G^{\prime\prime}/\omega G^{\prime}\), where \(G^{\prime}\) and \(G^{\prime\prime}\) were measured from the oscillatory test in the linear regime (see Table 1).
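As a quick numerical check of Eq. (1) (a sketch, not from the paper), evaluating \(B\) with the 0.060% gel parameters of Table 1 at the slowest cavity velocity reproduces the upper end of the \(B\) range quoted in Table 2.

```python
# Bingham number for the 0.060% gel at the slowest cavity flow,
# using the Table 1 fit parameters (H in mm, U_c in mm/s; B is
# dimensionless as long as H and U_c share the same length unit).
tau_y, K, n = 0.20, 0.39, 0.57    # Pa, Pa s^n, dimensionless
H = 1.0                           # cavity depth / cell thickness, mm
U_c = 0.0065                      # slowest mean velocity, mm/s

B = (tau_y / K) * (H / U_c) ** n  # Eq. (1)
print(f"B = {B:.2f}")             # ~9.0, consistent with Table 2
```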
In this work, we quantify the extent of flow asymmetry by calculating the norm of the velocity difference,
\[\mathrm{Asymmetry}=\int|U_{r}-U_{l}|\ \mathrm{d}y \tag{3}\]
where \(U_{r}\) and \(U_{l}\) are the dimensionless horizontal velocity profiles along two symmetric lines separated by the same distance \(\Delta x\) from the symmetry line of the cavity or aperture. For the cavity \(\Delta x=\pm 1\,mm\) while
Figure 3: Representative experimental results for the flow of (a) \(0.060\%\) (\(wt/wt\)) and (b) \(0.075\%\) (\(wt/wt\)) Carbopol gels over a cavity. The images on the left panels show the streaklines of the flow, while those on the right panels present the dimensionless speed contours, \(U/U_{c}\). For the \(0.060\%\) (\(wt/wt\)) Carbopol, (i) \(U_{c}=0.006\)\(mm/s\), \(B=9\), (ii) \(U_{c}=0.03\)\(mm/s\), \(B=4.1\) and (iii) \(U_{c}=0.05\)\(mm/s\), \(B=2.76\). For the \(0.075\%\) (\(wt/wt\)) Carbopol, (i) \(U_{c}=0.003\)\(mm/s\), \(B=14\), (ii) \(U_{c}=0.03\)\(mm/s\), \(B=5\) and (iii) \(U_{c}=0.05\)\(mm/s\), \(B=3.6\). The yellow and white dashed lines highlight the yield surfaces.
for the aperture \(\Delta x=\pm 3\,mm\) from the centerline. Fig. 4 shows a graphical representation of our definition of asymmetry for the flow of a Carbopol gel, a glycerol solution and a PEO solution at a fixed flow rate. The figure includes the flow fields (left panel) and their corresponding horizontal velocity profiles (right panel) along the chosen lines. Note that the flow profiles measured along these two lines are identical in the case of the glycerol and PEO solutions (see Fig. 4b-c), whereas there is a clear difference between them for the Carbopol gel (see Fig. 4a). We discuss this flow asymmetry further in what follows.
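Numerically, Eq. (3) amounts to a one-dimensional quadrature of the two sampled velocity profiles; a minimal sketch (not the authors' post-processing code; the parabolic profile is purely illustrative) is:

```python
import numpy as np

def asymmetry(y, U_l, U_r):
    """Eq. (3): trapezoidal quadrature of |U_r - U_l| over the coordinate y.

    U_l and U_r are dimensionless streamwise velocity profiles sampled at
    the same y locations on either side of the symmetry line."""
    d = np.abs(U_r - U_l)
    return float(np.sum(0.5 * (d[1:] + d[:-1]) * np.diff(y)))

# A perfectly symmetric flow gives zero asymmetry:
y = np.linspace(0.0, 1.0, 101)
U = 1.5 * (1.0 - (2.0 * y - 1.0) ** 2)   # an illustrative parabolic profile
print(asymmetry(y, U, U))                 # 0.0
```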
### Plug Phenomenology and Asymmetry in Cavity Flows
In this section, the flow dynamics and the static plug morphology are characterized for the flow of Carbopol over a cavity by means of particle image velocimetry discussed in section 2.3.
The static yield surfaces are always located inside the gap, regardless of the average velocity and Carbopol concentration (see Fig. 3). As expected, at asymptotically small flow rates the yield surfaces approach the channel bottom wall (i.e. the top of the cavity). However, we observe an erosion of the plugged region upon either increasing the velocity or decreasing the gel concentration (i.e. the yield stress). This suggests that the position of the yield surfaces and the size of the plugged region developing in the cavity can be formulated as a function of the Bingham number. Fig. 5 presents the dimensionless size of the plug, i.e. the ratio of the area of the plugged region (\(A_{p}\)) to the cavity area (\(HL\)), versus the Bingham number for flows with different average velocities and Carbopol of two different concentrations. As demonstrated in this figure, the plug area grows with increasing Bingham number.
Interestingly, when plotted against \(B\), the plug area data for the two Carbopol gels roughly collapse onto a single curve. This implies that the area of the plug is only a function of \(B\) and does not noticeably
Figure 4: Representative results obtained for the flow of (a) \(0.060\%\) (\(wt/wt\)) Carbopol, (b) glycerol solution and (c) PEO solution around an aperture. The left panels represent the contours of speed, normalised by the far-field mean velocity \(U_{c}=Q/WH\), and streamlines on the midplane of the slot around the aperture. In all these cases, the flow rate is fixed at \(Q=1\,ml/min\). The extent of asymmetry is defined as the norm of the difference in the \(x-\)component of the velocity along two symmetric lines separated by the same distance \(\Delta x\) from the axis of symmetry of the aperture (see eq. 3). The velocity components in the \(x-\)direction along those lines, \(U_{l}\) and \(U_{r}\), are plotted in the right panels. Note that the flows of the glycerol solution and PEO solution are symmetrical, while there is a discernible asymmetry in the flow of Carbopol gel, which manifests itself in the velocity profiles depicted in (a) and also in the yield surface.
change with the power-law index of the gel. Similar behavior was observed by Daneshi et al. [6] for Carbopol flow around obstacles, considering the plug length as a function of \(B\).
Now, we turn our attention to the morphology of the plugs. In particular, we note asymmetry in the static plug regions of the flow of Carbopol gels over the cavity, where the yield surfaces drop deeper on the left half of the cavity. Remarkably, the flows in the experiment are virtually inertialess and, as such, one would expect a symmetric flow for viscoplastic materials. However, the flow asymmetry is clearly visible across the entire range of \(B\) and \(Wi\) numbers presented here for Carbopol samples whereas the flow was entirely symmetric for the PEO and glycerol solutions (see Fig. 4b-c).
The extent of asymmetry (calculated by Eq. 3) for Carbopol as a function of \(Wi\) is shown in Fig. 6a. The error bars displayed in the figure include the uncertainties from both the visualization resolution and the variation of the asymmetry upon moving the symmetry lines (see Fig. 4) by \(\pm 10\%\Delta x\). As is evident from this figure, the extent of asymmetry becomes more pronounced with increasing \(Wi\), similar to what was reported for viscoelastic polymers in the literature [37; 38]. However, the surprising feature of these results is that the asymmetry is significant even at very small \(Wi\) numbers, where for the viscoelastic fluid we used, the PEO solution, the flow is symmetrical.
We also note that at a fixed \(Wi\), the flow is more asymmetric for the \(c=0.075\%\) (\(wt/wt\)) Carbopol than for the \(0.060\%\) (\(wt/wt\)) Carbopol. These observations imply that, in addition to the elasticity (i.e. the Weissenberg number), the yield stress of the material plays a role. To better understand the role of these dimensionless numbers, we plot the asymmetry on the \(Wi-B\) plane, with the extent of asymmetry shown by the color of each symbol (see Fig. 6b). The extent of asymmetry decreases as we go to smaller Weissenberg numbers and larger Bingham numbers (bottom right of the \(Wi-B\) plane in panel (b)) for both Carbopol gels. Encouraged by the suggestion of Chaparian & Tammisola [30], we plot the extent of asymmetry versus \(Wi\times B\) in the inset of Fig. 6b. As can be seen, the data fit satisfactorily onto a master curve with this new scaling. This suggests that \(Wi\times B\), which accounts for both the elastic and the plastic behaviour of the material, can be used to explain the asymmetry. This holds valid for the solutions without yield stress, i.e. the glycerol solution and PEO, as well. Note that although the PEO solution is known as a viscoelastic fluid, the lack of asymmetry in its flow is attributed to the small Weissenberg numbers in the range of flow rates in our experiments. These results suggest that the asymmetry is triggered by the yield stress of the fluid at low \(Wi\) numbers.
### Numerical simulation of the 2D flow over the cavity using an elastoviscoplastic rheological model
In this study, we simulate the Stokes flow in the 2D cavity (Fig. 2b) using the same numerical scheme proposed by Chaparian & Tammisola [30] for an elastoviscoplastic rheological model introduced by Saramito
Figure 5: Variation of the dimensional area of the plugged region \(A_{p}\) developing inside the 2D cavity normalized by the area of the cavity (i.e. \(HL\)) plotted as function of \(B\). The red and blue symbols represent the data obtained for \(0.060\%\) (\(wt/wt\)) and \(0.075\%\) (\(wt/wt\)) Carbopol gels, respectively.
[31]:
\[\underbrace{\lambda\,\overset{\nabla}{\boldsymbol{\tau}}}_{\mu\dot{\boldsymbol{\gamma}}_{e}}+\underbrace{\left(1-\frac{\tau_{y}}{\|\boldsymbol{\tau}_{d}\|}\right)_{+}\boldsymbol{\tau}}_{\mu\dot{\boldsymbol{\gamma}}_{p}}=\mu\dot{\boldsymbol{\gamma}}, \tag{4}\]
This computational method is based on splitting the entire problem into viscoelastic and viscoplastic parts. The viscoplastic subproblem (i.e. the \(\dot{\boldsymbol{\gamma}}_{p}\) part of the constitutive equation) is handled by the augmented Lagrangian method [39]. The two subproblems are then superimposed (i.e. \(\dot{\boldsymbol{\gamma}}_{e}+\dot{\boldsymbol{\gamma}}_{p}\)) and iterated until equation (4) is satisfied up to numerical convergence.
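To illustrate the plastic part of this splitting (a schematic sketch, not the authors' FreeFEM++ implementation; the Frobenius norm is an assumption here and may differ from the paper's norm convention), the \((\cdot)_{+}\) operator in Eq. (4) can be applied pointwise to a stress field. It returns an exactly zero plastic strain rate below the yield stress, which is what produces the unyielded plugs.

```python
import numpy as np

def plastic_strain_rate(tau, tau_y, mu=1.0):
    """Pointwise plastic term of Eq. (4): (1 - tau_y/||tau||)_+ tau / mu.

    tau: array of shape (..., 2, 2) holding deviatoric stress tensors.
    Returns gamma_dot_p, exactly zero wherever ||tau|| <= tau_y (the plug)."""
    norm = np.linalg.norm(tau, axis=(-2, -1))                 # Frobenius norm
    factor = np.maximum(0.0, 1.0 - tau_y / np.maximum(norm, 1e-30))
    return factor[..., None, None] * tau / mu
```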
We use the open-source finite element environment FreeFEM++ [40] for discretization and meshing, which has been validated in our previous studies [41; 42; 43]. Anisotropic adaptive meshing is combined with this method to obtain smoother yield surfaces and ensure high resolution of the flow features [30; 39]. We shall mention
Figure 6: The quantified asymmetry of the 2D flow over the cavity (Fig. 2b). The results were reported for the flows of a glycerol solution, a PEO solution, and \(0.060\%\) (\(wt/wt\)) & \(0.075\%\) (\(wt/wt\)) Carbopol gels. Panel (a) shows the extent of asymmetry as a function of \(Wi\). Panel (b) recasts the data of (a) in \(Wi-B\) panel where the extent of asymmetry is presented by the color of the symbols. The extent of asymmetry versus \(Wi\times B\) is plotted in the inset of panel (b), where the data obtained for both Carbopol gels collapse into a master curve. Note that the extent of asymmetry calculated for PEO solution is less than \(\mathcal{O}(10^{-3})\), while the \(Wi\) estimated for this material is around \(0.075\) which is an order of magnitude larger than those for Carbopol gels and lie beyond the range of axes in panel (a) and (b), respectively.
that, to avoid numerical instabilities, although the simulated geometry is exactly the same as in Fig. 2b, we slightly round the sharp corners and carefully monitor that their effect remains locally restricted. Note that the level of asymmetry is measured according to the same metric introduced at the beginning of Section 3.
Four sample computations are shown in Fig. 7: two viscoplastic simulations (i.e. \(Wi=0\), panels (a) & (c)) and two elastoviscoplastic cases. As can be observed, the unyielded plugs grow with increasing Bingham number, which is intuitive. The important point here is the asymmetry in the flow field of the elastoviscoplastic counterparts: although the geometry is completely symmetric, the flow is not, which is attributed to the elasticity of the fluid. Please note that the Weissenberg numbers are small in both cases.
We visualized the asymmetry in two different ways in Fig. 8, by spanning the Weissenberg and Bingham numbers up to \(0.0125\) and \(5\), respectively, and by plotting as a function of \(Wi\times B\). Hence, the contours in panel (a) and the \(y-\)axis of panel (b) represent the level of asymmetry. The white curves sketched on top of the contours in panel (a) represent the \(Wi\times B=const.\) lines. Some general points can be drawn from this figure. Perhaps the most important one is that at a fixed Weissenberg number, increasing the Bingham number makes the flow more and more asymmetric. In other words, plasticity triggers the elastic effects as well. This is in the same direction as the conclusions made by Chaparian and co-workers in a series of studies of various physical problems, from elastoviscoplastic fluid flows in porous media and complex geometries [29; 44] to particle migration when the carrier fluid is elastoviscoplastic [44]. This is also consistent with our experimental results (see Fig. 6b). Another interesting point is that the level of asymmetry is almost the same when \(Wi\times B\) is constant: note that all the curves in Fig. 8b almost coincide on top of each other. Note also that the level of asymmetry grows almost linearly with \(Wi\times B\). A similar trend was observed in the experimental results discussed in the previous subsection.
### Plug Phenomenology and Asymmetry in a Blocked Aperture
Here, we present the results regarding the flow of Carbopol gels over a blocked aperture. Similar to the flow over the cavity, an unyielded region forms in the aperture, whose size changes with the mean velocity in the channel and the Carbopol concentration. However, the flow in this geometry is different from the 2D flow over the cavity: here, the flow is three-dimensional and the shear stresses along the depth of the channel are significant. The raw images of the flow and the velocity contours in this geometry are displayed in Fig. 9a for the \(0.060\%\) (\(wt/wt\)) Carbopol gel and in Fig. 9b for the \(c=0.075\%\) (\(wt/wt\)) gel. The static yield surfaces, which always lie below the span of the channel wall, develop in the aperture.
As expected, at higher flow rates, the plugged regions are eroded more and the unyielded regions shrink. Intuitively, it is also evident that the higher concentration Carbopol has larger plug zones at the same flow rate. Fig. 10 shows the plug area \(A_{p}\) normalized by the area of the aperture step (\(sL\)) as a function of the
Figure 7: Velocity contours for the cases: (a) \(B=10,Wi=0\); (b) \(B=10,Wi=0.0125\); (c) \(B=20,Wi=0\); (d) \(B=20,Wi=0.0025\). The yield surfaces are shown in cyan.
Bingham number. Again, when plotting the plug area as a function of \(B\), we obtain a collapse of the data for the two different gels. This is in line with the results obtained for the 2D flow over the cavity and implies that the plug size can be explained as a function of the Bingham number and does not change noticeably with the degree of shear thinning of the material.
Similarly to the 2D flow over the cavity, we observe that the flow asymmetries clearly increase with \(Wi\) (see Fig. 11a). This observation further highlights the robustness of the asymmetry in the flows of Carbopol gels over non-smooth geometries, regardless of the flow type. Fig. 11a and Fig. 11b confirm this observation quantitatively. The higher-concentration Carbopol has a more asymmetric flow, and the difference in flow asymmetries between the two Carbopol gels is greater in the aperture than in the flow over the cavity. In tandem with the 2D flow over the cavity, in Fig. 11b we see that the asymmetry at a fixed \(Wi\) increases with the Bingham number. The extent of asymmetry as a function of \(Wi\times B\) is shown in the inset of Fig. 11b for the aperture geometry. When plotted against \(Wi\times B\), the data obtained for both Carbopol gels lie on each other again -- the same as what we observed for the 2D flow over a cavity.
## 4 Concluding Remarks
In this study, we revisited a classic fluid mechanics problem, flow in a conduit with an abrupt expansion-contraction. Due to the yield stress of the material, an unyielded region forms inside the cavity. The position and shape of the yield surface and the flow characteristics in its vicinity are scrutinized in this work, from both experimental and numerical perspectives.
In the first phase of this study, we examined the two-dimensional flow over a cavity. High-resolution micro-PIV measurements were conducted to determine the flow field developing around the cavity and the position of the yield surface. We also complemented these experiments with computations (i.e. section 3.2) based on an elastoviscoplastic constitutive law describing the rheology of the fluid. The experiments show the growth of the dead zones developing in the cavity with the Bingham number, which is intuitive. A thorough analysis of the results reveals the asymmetry of the yield surfaces and of the flow field developing around the cavity. The surprising feature of our findings is that the asymmetry is absent in the low-\(Wi\)-number flow of a viscoelastic fluid, the PEO solution, while there is a markedly noticeable level of asymmetry for the flow of an elastoviscoplastic fluid, Carbopol gel, in similar ranges of flow rates and \(Wi\) numbers. These observations are robust in the computations as well, confirming the existence of asymmetry in the low-Weissenberg-number flow of elastoviscoplastic fluids. This implies that a combination of elastic and plastic effects magnifies the level of asymmetry.
The experiments were repeated in a more complex geometry, where there is a three-dimensional flow over an aperture in a thin slot. In contrast with the previous geometry, in this case, the stresses are not
Figure 8: Results of the numerical simulation where (a) shows the asymmetry contour in Weissenberg-Bingham plane and (b) shows asymmetry versus \(Wi\times B\).
restricted to 2D. Indeed, the confinement in the \(z-\)direction (see Fig. 2b) imposes a significant shear stress on the fluid which cannot be neglected compared to the normal and shear stresses developed in the \(x-y\) plane across the aperture. Despite these differences, similar trends in the flow asymmetries have been observed
Figure 10: The dimensional area of plugged region \(A_{p}\) developing inside the aperture normalized by the area of the step of the aperture (i.e. \(sL\)) versus \(B\). The red and blue symbols represent the data obtained for 0.060% (\(wt/wt\)) and 0.075% (\(wt/wt\)) Carbopol gels, respectively.
Figure 9: Representative results regarding the flows of (a) 0.060% (\(wt/wt\)) and (b) 0.075% (\(wt/wt\)) Carbopol gels over an aperture. The top series of each panel shows the streaklines of the flow and the bottom series of each panel shows the corresponding normalized speed contour (\(U/U_{c}\)). The yellow and white dashed lines highlight the yield surfaces. From left to right (from (\(i\)) to (\(iv\))) the upstream velocity increases from 0.01 to 0.28 \(mm/s\). Hence, the panels correspond to (a): 0.060% (\(wt/wt\)) Carbopol gel, (i) \(B=7\), (ii) \(B=1.8\), (iii) \(B=1.3\) and (iv) \(B=1.0\); and to (b): 0.075% (\(wt/wt\)) Carbopol gel, (i) \(B=7.7\), (ii) \(B=2.6\), (iii) \(B=1.9\) and (iv) \(B=1.6\).
in this case, which implies that the asymmetry scalings presented are universal and do not depend on the geometry, Carbopol concentration, or flow rate.
Asymmetry has been known as a characteristic of viscoelastic fluid flows in these types of geometries when the \(Wi\) number is sufficiently high. In particular, the asymmetry observed for the flow of viscoelastic fluids over a cavity is reminiscent of the "die swelling" effect [37] and can be attributed to the viscoelastic relaxation of the stresses developing as the fluid passes over the cavity. The elastic strains develop as the fluid elements pass through the expansion. This leads to an asymmetric stress field, where a high-stress zone forms near the expansion, in contrast to the flow downstream in the vicinity of the contraction. The scenario is more complex for the flow of yield-stress fluids, where asymmetry exists even at very low \(Wi\)
Figure 11: The extent of asymmetry associated with the 3D flow inside the aperture (Fig. 2c). The results are reported for the flows of a glycerol solution, a PEO solution, and 0.060% (\(wt/wt\)) & 0.075% (\(wt/wt\)) Carbopol gels. Panel (a) shows the extent of asymmetry as a function of \(Wi\). Panel (b) recasts the data of (a) in \(Wi-B\) panel where the extent of asymmetry is presented by the color of the symbols. The extent of asymmetry versus \(Wi\times B\) is plotted in the inset of panel (b), where the data obtained for both Carbopol gels collapse into a master curve. The extent of asymmetry calculated for PEO solution is around \(\mathcal{O}(10^{-3})\), hence, similar to Fig. 6, here the data obtained for PEO solution is absent in panels (a) and (b).
numbers. To characterize and delve into the anomalous asymmetry observed for the low-\(Wi\)-number flow of yield-stress fluids in these geometries, we quantified the asymmetry by measuring the average difference in the velocity of the fluid passing on either side of the geometry's symmetry line. As preliminarily introduced by Chaparian & Tammisola [30], when the level of asymmetry is plotted against \(Wi\times B\), a collapse of the data is achieved. This feature is obtained not only for the experimental and computational data regarding the two-dimensional flow over the cavity, but also for the experimental measurements concerning the 3D flow over an aperture. It suggests that \(Wi\times B\), rather than the \(Wi\) number, could be used to quantify the elastic behaviour of an elastoviscoplastic fluid. In other words, an elastoviscoplastic fluid manifests a more elastic character than its viscoelastic counterpart. Indeed, the additional plasticity of the fluid triggers the elastic behaviour. This fact has been reported previously in the context of flow over a single obstacle or arrays of obstacles [6; 26; 29], flow through a wavy channel [30], and particle migration in elastoviscoplastic fluids [44].
This aspect can be better understood by reviewing the nature of the \(Wi\) and \(B\) numbers. The viscoelastic relaxation time \(\lambda\) is associated with the time necessary for the stress of the fluid to decay when the motion is brought to a halt. In this case, the viscosity is responsible for the dissipation of work due to stress relaxation. Hence, for a viscoelastic fluid, the \(Wi\) number is the ratio of the relaxation time to the viscous hydrodynamic time scale: \(Wi=\lambda\tau_{c}/\mu=\lambda U_{c}/\ell\), where \(\tau_{c}\) is the characteristic viscous stress. However, in the case of an elastoviscoplastic material, not only does the viscosity dissipate the stored elastic energy, but the plasticity of the material also contributes to the stress relaxation. Therefore, the "effective" \(Wi\) number for an elastoviscoplastic fluid increases with the plasticity, or indeed the Bingham number. In other words, \(Wi\times B=\lambda\ \tau_{y}/\mu\), which is the ratio of the relaxation time to the "viscoplastic" time scale, seems more reasonable for characterizing how elastic the material is.
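To make the cancellation explicit, write \(B\) with the characteristic viscous stress \(\tau_{c}=\mu U_{c}/\ell\); the length and velocity scales then drop out of the product, leaving a purely material ratio:

\[Wi\times B=\frac{\lambda U_{c}}{\ell}\times\frac{\tau_{y}}{\mu U_{c}/\ell}=\frac{\lambda\,\tau_{y}}{\mu}.\]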
A lot still remains to be understood about the complex behaviours of elastoviscoplastic materials and the interplay of elasticity and plasticity in yield-stress fluids. Here, we shed some light on an important class of yield-stress fluid flows, i.e. internal flows over non-smooth geometries, which has many applications in various industries, e.g. filtration. The present study provides some fundamental base scalings from which to further the knowledge of elastoviscoplastic materials in more detail in complex hydrodynamic flows, and/or of the rheological behaviour of practical yield-stress fluids near the yielding point.
## Acknowledgment
The financial support through the NSERC Collaborative Research program, in conjunction with Advanced Fibre Technologies, is gratefully acknowledged.
|
2303.16881 | Systematic KMTNet Planetary Anomaly Search. IX. Complete Sample of 2016
Prime-Field Planets | As a part of the ``Systematic KMTNet Planetary Anomaly Search" series, we
report five new planets (namely, OGLE-2016-BLG-1635Lb, MOA-2016-BLG-532Lb,
KMT-2016-BLG-0625Lb, OGLE-2016-BLG-1850Lb, and KMT-2016-BLG-1751Lb) and one
planet candidate (KMT-2016-BLG-1855), which were found by searching $2016$
KMTNet prime fields. These $buried$ planets show a wide range of masses from
Earth--class to Super--Jupiter--class, and are located in both the disk and the
bulge. The ultimate goal of this series is to build a complete planet sample.
Because our work provides a complementary sample to other planet detection
methods, which have different detection sensitivities, our complete sample will
help us to obtain a better understanding of planet demographics in our Galaxy. | In-Gu Shin, Jennifer C. Yee, Weicheng Zang, Hongjing Yang, Kyu-Ha Hwang, Cheongho Han, Andrew Gould, Andrzej Udalski, Ian A. Bond, Michael D. Albrow, Sun-Ju Chung, Youn Kil Jung, Yoon-Hyun Ryu, Yossi Shvartzvald, Sang-Mok Cha, Dong-Jin Kim, Seung-Lee Kim, Chung-Uk Lee, Dong-Joo Lee, Yongseok Lee, Byeong-Gon Park, Richard W. Pogge, Przemek Mróz, Michał K. Szymański, Jan Skowron, Radosław Poleski, Igor Soszyński, Paweł Pietrukowicz, Szymon Kozłowski, Krzysztof A. Rybicki, Patryk Iwanek, Krzysztof Ulaczyk, Marcin Wrona, Mariusz Gromadzki, Fumio Abe, Richard Barry, David P. Bennett, Aparna Bhattacharya, Hirosane Fujii, Akihiko Fukui, Ryusei Hamada, Yuki Hirao, Stela Ishitani Silva, Yoshitaka Itow, Rintaro Kirikawa, Iona Kondo, Naoki Koshimoto, Yutaka Matsubara, Shota Miyazaki, Yasushi Muraki, Greg Olmschenk, Clément Ranc, Nicholas J. Rattenbury, Yuki Satoh, Takahiro Sumi, Daisuke Suzuki, Mio Tomoyoshi, Paul J. Tristram, Aikaterini Vandorou, Hibiki Yama, Kansuke Yamashita | 2023-03-29T17:51:33Z | http://arxiv.org/abs/2303.16881v1 | # Systematic KMTNet Planetary Anomaly Search. IX. Complete Sample of 2016 Prime-Field Planets
###### Abstract
As a part of the "Systematic KMTNet Planetary Anomaly Search" series, we report five new planets (namely, OGLE-2016-BLG-1635Lb, MOA-2016-BLG-532Lb, KMT-2016-BLG-0625Lb, OGLE-2016-BLG-1850Lb, and KMT-2016-BLG-1751Lb) and one planet candidate (KMT-2016-BLG-1855), which were found by searching 2016 KMTNet prime fields. These _buried_ planets show a wide range of masses from Earth-class to Super-Jupiter-class, and are located in both the disk and the bulge. The ultimate goal of this series is to build a complete planet sample. Because our work provides a complementary sample to other planet detection methods, which have different detection sensitivities, our complete sample will help us to obtain a better understanding of planet demographics in our Galaxy.
## 1 Introduction
To build a complete microlensing planet sample, we conduct a series of works called the "Systematic KMTNet Planetary Anomaly Search" based on a large microlensing survey archive obtained by the Korea Microlensing Telescope Network (KMTNet: Kim et al., 2016). We identify planet-like anomalies using the "AnomalyFinder" algorithm (Zang et al., 2021, 2022) instead of a traditional "by-eye" method, which can systematically identify almost all candidates showing anomalies in the light curve1. However, revealing the origin of an anomaly requires (preliminary) models, including possible degenerate solutions, to determine the mass ratio of the lens components (i.e., \(q\)). It also requires investigating the data around the anomaly to check whether or not the anomaly is caused by a false-positive signal. Thus, detailed analyses of all anomalous events found by the AnomalyFinder require significant resources and human effort.
Footnote 1: Although the AnomalyFinder (AF) detects anomalies using criteria optimized for the KMTNet data, some anomalous events can be omitted because the criteria are not yet perfect. For example, AF missed KMT-2021-BLG-2294Lb (Shin et al., 2023). Thus, the "by-eye" method can help us to improve the criteria and understand the completeness of the final planet sample.
Hence, for the KMTNet data obtained from 2016 to 2021, we conduct the work separately for each bulge season and for observing fields with different cadences, which are divided into the prime fields (high cadence: \(\Gamma=2.0\)-\(4.0\,\mathrm{hr}^{-1}\)) and sub-prime fields (low cadence: \(\Gamma=0.2\)-\(1.0\,\mathrm{hr}^{-1}\)). The KMTNet field information is described in Kim et al. (2018). We have already done the systematic searches for the 2018 prime field (Wang et al., 2022; Hwang et al., 2022; Gould et al., 2022), 2018 sub-prime fields (Jung et al., 2022), 2019 prime fields (Zang et al., 2021; Hwang et al., 2022; Zang et al., 2022), and 2019 sub-prime fields (Jung et al., 2023). In addition, Zang et al. (2023) present a complete sample of planets with the mass ratio \(q<10^{-4}\) discovered from all candidate events observed from 2016 to 2019.
This is the ninth work in the series to build the complete sample, conducted for the 2016 prime fields (i.e., BLG01, BLG41, BLG02, BLG42, BLG03, BLG43). The AnomalyFinder algorithm and candidate review identified 106 anomalous events (plus 14 events that were already published). Based on visual inspection and/or preliminary modeling, 79 were eliminated as binaries. For the remaining 13 new candidates with at least one solution with \(q<0.06\), we re-reduce the photometry to check for and remove systematics in the data sets. Based on further analysis with the best-quality data sets, 7 were eliminated because they had no reliable planetary solutions (i.e., \(q<0.03\)) with \(\Delta\chi^{2}<10.0\). We also investigate one additional 2016 prime-field event in detail, which was identified using the "by-eye" method and reported as a planet-like event, but was not in the final AnomalyFinder candidate list (see Appendix B). Then, we find 5 new planets and one planet candidate based on detailed analyses, which are OGLE-2016-BLG-1635Lb, MOA-2016-BLG-532Lb, KMT-2016-BLG-0625Lb, OGLE-2016-BLG-1850Lb, KMT-2016-BLG-1751Lb, and KMT-2016-BLG-1855. We note that these planetary systems are designated by the survey projects that first announced the events, as is traditional, even though the systems were discovered through the systematic search of the KMTNet data archive. We describe the observations of each survey in Section 2. Then, we describe the light-curve analysis for the planet candidates in Section 3. We note that, for the 9 non-planetary events, we report the analysis results in Appendix A for the record. In Section 4, we present analyses of the color-magnitude diagrams of the 5 planetary events. In Section 5, we present the properties of the planetary systems determined from the Bayesian analyses. Lastly, we summarize the results of this work in Section 6.
## 2 Observations
In Tables 1 and 2 (see Appendix A), we present observational information for the anomalous events, which have at least one solution with \(q<0.06\) found from preliminary modeling. For the anomalous events, we gather all available data taken from microlensing surveys for preliminary modeling. The KMTNet pipeline data are available from the KMTNet Alert System (Kim et al., 2018, [https://kmtnet.kasi.re.kr/](https://kmtnet.kasi.re.kr/)\(\sim\)ulens/). They were obtained using three identical 1.6 m telescopes equipped with wide-field (4 square degree) cameras. The telescopes are located at the Cerro Tololo Inter-American Observatory in Chile (KMTC), the South African Astronomical Observatory in South Africa
(KMTS), and the Siding Spring Observatory in Australia (KMTA), which are in well-separated time zones to achieve near-continuous observations. The "prime fields" of KMTNet are observed at high cadence (\(\Gamma\geq 2\,\mathrm{hr}^{-1}\)) in the \(I\) band (Johnson-Cousins _BVRI_ filter system). Also, for the KMTC observations, KMTNet regularly takes one \(V\)-band observation for every 10th \(I\)-band observation. We note that, for the KMTS observations, it takes one \(V\)-band observation for every 20th \(I\)-band observation.
The OGLE (Optical Gravitational Lensing Experiment: Udalski 2003; Udalski et al. 2015) data are available from the OGLE Early Warning System (Udalski et al. 1994, [http://ogle.astrouw.edu.pl/ogle4/ews/ews.html](http://ogle.astrouw.edu.pl/ogle4/ews/ews.html)) and were obtained using the 1.3 m Warsaw telescope with a \(1.4\,\mathrm{deg}^{2}\) camera located at Las Campanas Observatory in Chile. The OGLE observations were mainly made in \(I\) band. Also, they periodically observe in \(V\) band.
The MOA (Microlensing Observations in Astrophysics: Bond et al. 2001; Sumi et al. 2003) data are available on their alert system website ([http://www.massey.ac.nz/~iabond/moa/alerts/](http://www.massey.ac.nz/~iabond/moa/alerts/)), and were obtained using a 1.8 m telescope located at Mt. John University Observatory in New Zealand. The MOA observations were taken using the MOA-Red filter (hereafter referred to as the \(R\) band), which is roughly the sum of the Cousins \(R\) and \(I\) bands (wavelength range: 609-1109 nm, transmission range: 0.0-0.978).
The data of each survey were reduced by their own pipelines (KMTNet: Albrow et al. 2009, OGLE: Wozniak 2000, and MOA: Bond et al. 2001), which adopt/modify the difference image analysis technique (Tomaney & Crotts 1996; Alard & Lupton 1998). We note that, for the planet-like events listed in Tables 1 and 2, the KMTNet data are re-reduced using an optimized version of pySIS (Yang et al. in prep) to obtain the best-quality data sets for the analyses (hereafter, "TLC (tender loving care)" reductions). Also, we reduce the \(V\)-band data to determine the source color using the pyDIA package (Albrow 2017; Bramich et al. 2013). We also note that some events require re-reduced data obtained from the OGLE and MOA surveys for the detailed analyses. MOA did not alert the events KMT-2016-BLG-1751 and KMT-2016-BLG-0374; however, these events are located in the MOA fields, so the MOA team provided re-reduced data for them. OGLE-2016-BLG-1850 has a long baseline extending into the 2017 season, and the OGLE team provided re-reduced data for this event including the long baseline.
## 3 Light Curve Analysis
### Basics of the Analysis
We conduct detailed analyses of the 13 candidates with re-reduced data sets using the optimized pySIS package (Yang et al. in prep). The analysis of the 5 planetary events and one planet candidate is presented in this section, and the remaining 7 events are briefly presented in Appendix A. We follow the methodology of the light-curve analysis described in Shin et al. (2023). We briefly describe the analysis process, which consists of two steps, to introduce the terminology used in this work.
First, we conduct a grid search to find all possible solutions, in particular local minima having planetary mass ratios (i.e., \(q\lesssim 0.03\)). For the grid search, we start from the static 2L1S case, i.e., without motions of the lenses or source (STD), where \(n\mathrm{L}m\mathrm{S}\) indicates the number of lenses (\(n\)) and sources (\(m\)), respectively. To describe a microlensing light curve, the STD model requires seven parameters: \((t_{0},u_{0},t_{\mathrm{E}},s,q,\alpha,\rho_{*})\), which are respectively defined as the time at the peak of the light curve, the impact parameter, the Einstein timescale, the projected separation between the binary lens components in units of the angular Einstein radius (\(\theta_{\mathrm{E}}\)), the mass ratio of the lens components (i.e., \(q\equiv M_{\mathrm{secondary}}/M_{\mathrm{primary}}\)), the angle between the source trajectory and the binary axis, and the angular source radius (\(\theta_{*}\)) scaled by \(\theta_{\mathrm{E}}\) (i.e., \(\rho_{*}\equiv\theta_{*}/\theta_{\mathrm{E}}\)). We set \((s,q)\) as grid parameters for the grid search because they are the parameters most sensitive to the description of anomalies in the light curve. The ranges of \((s,q)\) are \(\log_{10}(s)\in[-1.0,1.0]\) and \(\log_{10}(q)\in[-5.5,1.0]\), with 100 grid points for each range. The five remaining parameters are optimized by \(\chi^{2}\) minimization using the Markov Chain Monte Carlo (MCMC) algorithm (Doran & Muller 2004). We note that \(\alpha\) is treated as a semi-grid parameter because it is also sensitive for describing the anomalies: we start from 21 seeds for the \(\alpha\) parameter within the range \(\alpha\in[0,2\pi]\).
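To make the grid stage concrete, the following Python sketch mirrors the grid/seed layout described above. It is a minimal illustration only: the `chi2` function is a hypothetical stand-in for a full 2L1S light-curve fit to the photometry, a simple Nelder-Mead minimizer replaces the MCMC optimization, and the grid is coarser than the 100 points per axis used in the actual analysis.

```python
import numpy as np
from scipy.optimize import minimize

def chi2(theta, log_s, log_q):
    """Hypothetical stand-in: a real implementation would compute the 2L1S
    magnification for (t0, u0, tE, alpha, rho) at fixed (s, q) and return
    sum(((model_flux - flux) / err)**2) over the data points."""
    t0, u0, tE, alpha, rho = theta
    return ((t0 - 7500.0)**2 + (u0 - 0.1)**2 + (tE - 20.0)**2 / 100.0
            + (alpha - 1.0)**2 + (rho - 1e-3)**2 + log_s**2 + (log_q + 3.0)**2)

# Grid in (log s, log q): the text uses 100 points per axis over
# log10(s) in [-1, 1] and log10(q) in [-5.5, 1] (coarser here for speed).
log_s_grid = np.linspace(-1.0, 1.0, 20)
log_q_grid = np.linspace(-5.5, 1.0, 20)
alpha_seeds = np.linspace(0.0, 2.0 * np.pi, 21)  # semi-grid parameter

best_chi2, best_par = np.inf, None
for log_s in log_s_grid:
    for log_q in log_q_grid:
        for alpha0 in alpha_seeds:
            x0 = [7500.0, 0.1, 20.0, alpha0, 1e-3]  # (t0, u0, tE, alpha, rho)
            res = minimize(chi2, x0, args=(log_s, log_q), method="Nelder-Mead")
            if res.fun < best_chi2:
                best_chi2, best_par = res.fun, (log_s, log_q, *res.x)

print(best_chi2, best_par)
```

In the real pipeline, a \(\chi^{2}\) map over the \((s,q)\) grid is retained, and its local minima are then refined in the second stage described next.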
Second, once we find all plausible models, we refine the model parameters for all cases by allowing all parameters to vary freely within physically possible ranges. During this second step, we re-scale the errors of the data sets based on the best-fit model so that each data point contributes \(\chi^{2}\sim 1.0\). The error re-scaling procedure is described in Yee et al. (2012). Briefly, \(e_{\mathrm{R}}=k\sqrt{e_{\mathrm{O}}^{2}+e_{\mathrm{S}}^{2}}\), where \(e_{\mathrm{R}}\) is the re-scaled error, \(k\) is the re-scaling factor, \(e_{\mathrm{O}}\) is the original error, and \(e_{\mathrm{S}}\) is the systematics term.
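A minimal sketch of this re-scaling step, assuming the residuals from the best-fit model are already in hand (the notation follows the equation above; the systematics term \(e_{\mathrm{S}}\) would be chosen per data set):

```python
import numpy as np

def rescale_errors(residuals, errors, e_sys=0.0):
    """Return (k, rescaled errors) such that each point contributes
    chi^2 ~ 1 on average: e_R = k * sqrt(e_O**2 + e_sys**2)."""
    e_mod = np.sqrt(np.asarray(errors)**2 + e_sys**2)
    k = np.sqrt(np.mean((np.asarray(residuals) / e_mod)**2))
    return k, k * e_mod

# Toy usage with synthetic residuals whose errors are underestimated:
rng = np.random.default_rng(1)
true_scatter, reported_err = 0.07, 0.05
res = rng.normal(0.0, true_scatter, 1000)
k, e_R = rescale_errors(res, np.full(1000, reported_err))
print(f"k = {k:.2f}")                               # -> ~1.4
print("chi2/dof after:", np.mean((res / e_R)**2))   # -> 1.0 by construction
```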
Based on the STD models, we consider higher-order effects if the solutions have a high chance of detecting them. Specifically, we first consider the annual microlens parallax (APRX) effect (Gould 1992) if the models show a relatively long timescale (i.e., at least \(t_{\mathrm{E}}\gtrsim 15\) days). Once we find the APRX effect, we also consider the lens-orbital (OBT) effect, because the OBT may affect the APRX measurements. Lastly, in cases where the APRX effect is detected, we test the "xallarap" effect (which is "parallax" spelled backward: Griest and Hu, 1992; Han and Gould, 1997; Paczynski, 1997; Poindexter et al., 2005), which reflects the accelerating orbital motion of a secondary source without a brightness contribution from that source. Because the xallarap effect can mimic the APRX effect, the xallarap test is required to confirm the APRX measurements.
From the detailed analysis, we claim the detection of planetary systems if the fiducial solutions satisfy both detection criteria: (1) the solution(s) should have \(q\lesssim 0.03\) and (2) the solution(s) should have \(\Delta\chi^{2}\lesssim 10\) compared to other non-planetary solution(s).
Lastly, we note that, to label the degenerate solutions that we find, we follow the unified notation of the \(s^{\dagger}\) formalism described in Hwang et al. (2022) and Ryu et al. (2022). We can also use this formalism to validate our solutions. Here, we briefly present the \(s^{\dagger}\) formalism used in the descriptions of each event in the following sections.
The separations (\(s^{\dagger}_{\pm}\)) caused by major and minor images (Gould and Loeb, 1992) are expected as
\[s^{\dagger}_{\pm}\equiv\frac{\sqrt{u_{\rm anom}^{2}+4}\pm u_{\rm anom}}{2}, \tag{1}\]
where \(u_{\rm anom}=(\tau_{\rm anom}^{2}+u_{0}^{2})^{1/2}\) is the offset of the source position from the host, obtained from the scaled time offset from the peak of the light curve, \(\tau_{\rm anom}\equiv(t_{\rm anom}-t_{0})/t_{\rm E}\). The expected \(s^{\dagger}_{\pm}\) can be compared to the empirical results. The comparison depends on the type of anomaly shape and the number of solutions. In general, a "bump"-shaped anomaly caused by the major-image perturbation should correspond to the \(s^{\dagger}_{+}\) expectation, while a "dip"-shaped anomaly caused by the minor-image perturbation should correspond to the \(s^{\dagger}_{-}\) expectation. Regarding the number of solutions (i.e., degenerate cases), if we have a unique solution, the empirical \(s\) should correspond to one of \(s^{\dagger}_{\pm}\). If we have two degenerate solutions, such as \(s_{\pm}\), the empirical solutions satisfy the relation
\[s^{\dagger}=\sqrt{s_{+}s_{-}}, \tag{2}\]
which should correspond to one of \(s^{\dagger}_{\pm}\) values. The \(\alpha\) can be also predicted as
\[\tan\alpha=\frac{u_{0}}{\tau_{\rm anom}}. \tag{3}\]
More specifically, \(\alpha=\tan^{-1}(u_{0}/\tau_{\rm anom})+j\pi\), where \(j=(0,1)\) for (major, minor) images, and where the range of \(\tan^{-1}\) is defined as \([0,\pi]\). We note that the \(\alpha\) expectation depends on the coordinate system of the modeling. Lastly, in the case of the dip-shaped anomaly, we can obtain the first order approximation of the \(q\) values. That is,
\[q=\left(\frac{\Delta t_{\rm dip}}{4t_{\rm E}}\right)^{2}\frac{s\sin^{2}\alpha }{u_{\rm anom}}=\left(\frac{\Delta t_{\rm dip}}{4t_{\rm E}}\right)^{2}\frac{s} {|u_{0}|}|\sin^{3}\alpha|. \tag{4}\]
We note that the predicted \(q\) generally matches the empirical \(q\) value within a factor of \(\sim 2\). This expectation is useful for judging whether an event warrants a detailed analysis (i.e., whether or not it is a planetary event), even if the expectation is not very accurate. The theoretical origins of the heuristic analysis and of such degeneracies are described in Gaudi and Gould (1997), Griest and Safizadeh (1998), and Zhang and Gaudi (2022).
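For convenience, the heuristic quantities of Equations (1)-(4) can be packaged as below; plugging in the OGLE-2016-BLG-1635 values quoted in the next subsection reproduces the expected \(s^{\dagger}_{\pm}\). This is only a sketch of the formalism as written above, with \(\alpha\) returned for the major-image branch (add \(\pi\) for minor-image, i.e., dip-shaped, anomalies).

```python
import numpy as np

def heuristic(t0, u0, tE, t_anom, dt_dip=None, s=None):
    """Heuristic analysis of Eqs. (1)-(4): expected s^dagger_(+/-), the
    trajectory angle alpha, and (for dips) the first-order mass ratio q."""
    tau = (t_anom - t0) / tE
    u = np.hypot(tau, u0)                        # u_anom
    s_dag_plus = (np.sqrt(u**2 + 4.0) + u) / 2.0
    s_dag_minus = (np.sqrt(u**2 + 4.0) - u) / 2.0
    alpha = np.arctan2(u0, tau)                  # major-image branch
    out = {"u_anom": u, "s_dag_plus": s_dag_plus,
           "s_dag_minus": s_dag_minus, "alpha": alpha}
    if dt_dip is not None and s is not None:     # Eq. (4), dip-shaped anomalies
        out["q"] = (dt_dip / (4.0 * tE))**2 * s / abs(u0) * abs(np.sin(alpha))**3
    return out

# OGLE-2016-BLG-1635 (next subsection): t0 = 7624.12, u0 = 0.028,
# tE = 21 d, t_anom = 7624.60  ->  s_dag_minus ~ 0.98, s_dag_plus ~ 1.02
print(heuristic(t0=7624.12, u0=0.028, tE=21.0, t_anom=7624.60))
```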
### OGLE-2016-BLG-1635
The light curve of OGLE-2016-BLG-1635 (which we identified as KMT-2016-BLG-0269) exhibits a bump-shaped anomaly at HJD\({}^{\prime}=7624.6\). In Figure 1, we present the light curve with degenerate (i.e., \(s_{\pm}\)) models. We also present the model parameters in Table 3. From a heuristic analysis, we find \(\tau_{\rm anom}=0.023\) and \(u_{\rm anom}=0.036\) based on \(t_{\rm anom}=7624.60\), \(t_{0}=7624.12\), \(u_{0}=0.028\), and \(t_{\rm E}=21\) days. As a result, we expect \(s^{\dagger}_{-}=0.98\) and \(s^{\dagger}_{+}=1.02\). The \(s^{\dagger}_{-}\) is consistent with \(s^{\dagger}=0.99\) for our solutions. Although degenerate solutions exist, the mass ratios of both cases are less than 0.03, which implies that the companion is a planet by our formal definition.
The timescale of this event is about 21 days, which implies that there is a possibility of measuring the APRX considering the empirical criterion \(t_{\rm E}\gtrsim 15\) days. Thus, we test the APRX models for this event. We find \(\chi^{2}\) improvements \(\Delta\chi^{2}=7.24\) and \(\Delta\chi^{2}=9.96\) for the \(s_{-}\) and \(s_{+}\) cases, respectively. The \(\Delta\chi^{2}\) values are too small to claim the APRX detection. Moreover, the APRX parameters are not converged for the \(s_{-}\) case. For the \(s_{+}\) case, the
APRX model favors values of \(|\pi_{\rm E}|>10\), which are not reliable because they are caused by overfitting systematics at the baseline. Hence, we conclude that the STD models should be the fiducial solutions for OGLE-2016-BLG-1635. We can only measure upper limits on \(\rho_{*}\) (i.e., \(3\sigma\) ranges) because the source does not cross the caustic, as shown in Figure 1.
### MOA-2016-BLG-532
In Figure 2, we present the light curve of MOA-2016-BLG-532 (which we identified as KMT-2016-BLG-0506), which shows a clear deviation from the 1L1S model with a finite source. Although the anomaly is neither obviously bump-shaped nor dip-shaped, we find that the heuristic analysis is valid. It yields \(\tau_{\rm anom}=-0.032\) and \(u_{\rm anom}=0.034\) from \(t_{\rm anom}=7636.20\) and \(t_{\rm E}=21\) days. Then, we expect \(s_{+}^{\dagger}=1.017\), which matches exactly with \(s^{\dagger}=1.017\) (derived from the modeling). The light curve can be well described by a 2L1S interpretation with both planet and binary cases (see light curves and geometries in Figure 2). In Table 4, we present the best-fit parameters. However, we find that the binary case shows worse fits by \(\Delta\chi^{2}=20.62\) and \(22.32\) for the \(s_{+}\) and \(s_{-}\) cases, respectively. The \(\Delta\chi^{2}\) amounts are larger than our criterion to claim the planet detection. Although we conclude that this event is caused by a planetary lens system, we report both cases because the crucial part of the light curve (HJD\({}^{\prime}\) = 7637.1-7637.4) for clearly distinguishing between planet and binary solutions is not covered.
For this event, \(\rho_{*}\) is measured. The signal of the finite source effect comes from the peak of the light curve, which cannot be properly described by the 1L1S interpretation (see the residual of Figure 2). The peak part can be described by 2L1S solutions (planet cases) by _touching_ the cusp of the central caustic (see Figure 2). As a result, \(\rho_{*}\) is well measured.
We also test the APRX effect because of the relatively long timescale of the event (i.e., \(t_{\rm E}\sim 21\) days). We find \(\chi^{2}\) improvements of \(\Delta\chi^{2}=21.57\) and \(\Delta\chi^{2}=12.73\) for the \(s_{+}\) and \(s_{-}\) cases, respectively. However, the APRX fits yield unreasonably large values for both cases, i.e., \(|\pi_{{\rm E},N}|>10\), which come from overfitting systematics at the baseline. This implies that the APRX measurement is not reliable. Thus, we do not adopt the results of the APRX models.
### KMT-2016-BLG-0625
As shown in Figure 4, the light curve of KMT-2016-BLG-0625 shows a clear bump-shaped anomaly at HJD\({}^{\prime}\sim 7662.95\). Based on the heuristic analysis, we find \(\tau_{\rm anom}=0.609\) and \(u_{\rm anom}=0.613\) from \(t_{\rm anom}=7662.95\) and \(t_{\rm E}=11.5\) days. Then, we can expect that \(s_{-}^{\dagger}=0.739\) and \(s_{+}^{\dagger}=1.352\), which are consistent with \(s_{-}=0.741\) and \(s_{+}=1.367\), respectively, among the solutions presented in Table 5. Also, we expect \(\alpha=0.12\) or \(3.26\) radians, which are consistent with \(\alpha=0.12\) and \(3.22\) of the \(s_{+}\) and \(s_{-}\) cases, respectively.
As shown in Table 5, we find four planetary solutions (\(s_{\pm}\) and \(s_{\pm}^{\prime}\)) that can explain the anomaly. Because of the gaps near the anomaly, the \(\Delta\chi^{2}\) values between the models (i.e., \(\Delta\chi^{2}=0.98\)-\(3.30\)) are too small to distinguish between them, although the model light curves show quite different features caused by the different caustic geometries presented in Figure 4. Although we cannot break the degeneracy of the planetary solutions, all cases indicate the companion of the lens system is a planet, i.e., \(q=\mathcal{O}(10^{-4})\) (see Table 5).
As shown in Figure 4, all planetary solutions produce the anomaly by crossing the caustic(s). As a result, we can measure the \(\rho_{*}\) despite the non-optimal coverage. We do not test for the APRX measurement because of the relatively short timescale (i.e., \(t_{\rm E}\sim 11\) days).
Because the bump-type planetary anomaly can often lead to a 2L1S/1L2S degeneracy (Gaudi, 1998), we check the 1L2S case for this event. In Table 5, we present the best-fit model of the 1L2S interpretation. We find that the 1L2S case is disfavored by \(\Delta\chi^{2}=7.35\). However, the \(\Delta\chi^{2}\) amount is not enough to conclusively resolve the 2L1S/1L2S degeneracy. Nevertheless, because we measure the \(\rho_{*}\) of the secondary source, we can measure the lens-source relative proper motion of the secondary source (\(\mu_{\rm rel,S_{2}}\)) to check the 1L2S model. We find (see Section 4) that \(\mu_{\rm rel,S_{2}}=0.83\pm 0.22\,{\rm mas\,yr^{-1}}\). By comparison, Gould (2022) found that for observed microlensing events with planetary-type anomalies, low proper motions have probabilities
\[p(\leq\mu_{\rm rel})=\frac{(\mu_{\rm rel}/2\sigma_{\mu})^{\nu+1}}{[(\nu+1)/2]!} \rightarrow\frac{\mu_{\rm rel}^{2}}{4\sigma_{\mu}^{2}}\to 2.8\times 10^{-2} \left(\frac{\mu_{\rm rel}}{1\,{\rm mas\,yr^{-1}}}\right)^{2}, \tag{5}\]
where \(\sigma_{\mu}=3\,{\rm mas\,yr^{-1}}\) and \(\nu=1\). See also Equation (9) of Jung et al. (2023). Applying this formula to the 1L2S solution, we find \(p=1.9\%\). This would, in itself, be a reasonably strong argument against the 1L2S solution. When combined with the fact that this solution is disfavored by \(\Delta\chi^{2}=7.35\), we consider it to be decisive. Therefore, we
reject the 1L2S solution and conclude that KMT-2016-BLG-0625 is caused by a planetary lens system. However, we note that the mass ratio \(q\) varies by a factor \(\sim 3\) over the four degenerate solutions.
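As a quick numerical check of Equation (5) with the proper motion quoted above:

```python
# Eq. (5) for the putative second source of KMT-2016-BLG-0625:
mu_rel = 0.83               # mas/yr (derived in Section 4)
p = 2.8e-2 * mu_rel**2      # valid for sigma_mu = 3 mas/yr and nu = 1
print(f"p = {p:.1%}")       # -> 1.9%, as quoted above
```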
### OGLE-2016-BLG-1850
The light curve of OGLE-2016-BLG-1850 (which we identified as KMT-2016-BLG-1307) shows a dip-shaped anomaly at HJD\({}^{\prime}\sim 7663\). Based on the heuristic analysis, we can expect \(s_{-}^{\dagger}=0.812\) and \(q=0.9\times 10^{-4}\) (based on \(\tau_{\rm anom}=0.126\) and \(u_{\rm anom}=0.419\) that are found from \(t_{\rm anom}=7663.15\) and \(t_{\rm E}=63.0\) days), which corresponds well with the empirical values: \(s^{\dagger}=0.813\) and \(q\sim 1.0\times 10^{-4}\).
In Figure 5, we present the observed light curve with zoom-ins of the anomaly. We also present the best-fit model light curves of the STD and APRX cases shown in Table 6. We find that both STD models (i.e., the inner and outer cases) can describe the planetary anomaly, as shown in Figure 6. However, the STD cases show a very long timescale (\(t_{\rm E}\sim 210\) days), which implies that the light curve is likely to be affected by a strong APRX effect. As expected, we find that the STD model cannot properly describe the 2017 baseline. Thus, we consider the APRX effect. We then find a substantial \(\chi^{2}\) improvement of \(\Delta\chi^{2}\gtrsim 100\), which mostly comes from the better fit of the 2017 baseline (see Figure 5). Also, all APRX solutions can describe the planetary anomaly well, as shown in Figure 6. In Figure 7, we present the caustic geometries of all cases for comparison. We note that OGLE-2016-BLG-1850 is a non-caustic-crossing event. As a result, we cannot precisely measure \(\rho_{*}\) (only upper limits are available).
In Figure 8, we present the distributions of the APRX measurements, which are well converged. However, tests are required before we conclude that the APRX models should be the fiducial solutions for this event. First, because the lens-orbital motion can affect the APRX measurements (especially, the uncertainty of the APRX measurement), we test the lens-orbital effect (OBT). We conduct OBT+APRX models for each APRX case. We find that the OBT+APRX models show negligible \(\chi^{2}\) improvements, which are \(\Delta\chi^{2}\lesssim 0.5\) for the inner cases and \(\Delta\chi^{2}\lesssim 3.0\) for the outer cases, respectively. We also find that there is no effect on the uncertainties of the APRX measurements.
Second, to check the APRX models, we add xallarap to the models by introducing five parameters: the North and East components of the xallarap vector (\(\xi_{{\rm E},N}\), \(\xi_{{\rm E},E}\)), the phase angle (\(\phi\)), the inclination of the orbit (\(i\)), and the orbital period (\(P\)). We find that the xallarap cases show \(\chi^{2}\) improvements of \(\Delta\chi^{2}=17.0\)-22.1 compared to the APRX cases, which are marginal amounts, insufficient to firmly claim that the xallarap models are the fiducial solutions for this event. Moreover, although the best-fit model favors \(\log_{10}(P)=0.2\), as shown in Figure 9, we find that the xallarap models at \(\log_{10}(P)=0.0\) show \(\Delta\chi^{2}\lesssim 6.0\) compared to the best-fit xallarap model of each case. These clues imply that the asymmetry of the light curve is due to the APRX effect rather than the xallarap effect. Thus, we conclude that the fiducial solutions for this event are the APRX models.
### KMT-2016-BLG-1751
In Figure 10, we present the observed light curve of KMT-2016-BLG-1751, which shows a clear planetary anomaly (i.e., a dip feature) at the peak of the light curve. Based on the heuristic analysis (\(\tau_{\rm anom}\sim 0.00\) and \(u_{\rm anom}=0.11\) from \(t_{\rm anom}=7501.00\) and \(t_{\rm E}=10.0\) days), we expect \(s_{-}^{\dagger}=0.946\), which is well matched to both \(s^{\dagger}=\sqrt{s_{-}s_{+}}=0.944\) and \(s^{\dagger}=\sqrt{s_{-}^{\prime}s_{+}^{\prime}}=0.947\). We also expect \(q\simeq 0.003\) (for both \(s^{\dagger}\) cases), which agrees with the \(q\) values presented in Table 7 to within a factor of \(\sim 2\).
We find that several solutions can explain the anomaly because the coverage of the anomaly (HJD\({}^{\prime}=7500.8\)-7502.4) is non-optimal. Thus, despite including the MOA data, the gap in the anomaly produces degenerate solutions. In Table 7, we present the model parameters of the solutions. In Figures 11 and 12, we also present the \(s\)-\(q\) parameter space with the location of each solution and their caustic geometries. The competing solutions show relatively small \(\Delta\chi^{2}\) values compared to the best-fit solution (i.e., the \(s_{+}\) case): 8.53, 5.70, 8.78, and 10.78 for the \(s_{+}^{\prime}\), \(s_{-}\), \(s_{-}^{\prime}\), and \(s_{-}^{\prime\prime}\) cases, respectively. For the \(s_{\pm}\) and \(s_{\pm}^{\prime}\) cases, we obtain a best-fit value for \(\rho_{*}\). However, as might be expected from the geometries, we find that the measurements are consistent with zero at \(3\sigma\). Thus, in these cases, we effectively have only an upper limit on \(\rho_{*}\), so we will apply a \(\rho_{*}\) weight function in the Bayesian analysis in Section 5. For the \(s_{-}^{\prime\prime}\) case, \(\rho_{*}\) is measured from the caustic crossing. However, the \(s_{-}^{\prime\prime}\) solution does not satisfy our \(\chi^{2}\) criterion (i.e., \(\Delta\chi^{2}<10.0\)). Thus, we remove the \(s_{-}^{\prime\prime}\) case from our fiducial solutions for determining the lens properties of this event. However, because the \(\Delta\chi^{2}\) of this case is very close to the \(\chi^{2}\) criterion, we present the parameters and figures of this solution for completeness. Lastly, because of the short timescale (i.e., \(t_{\rm E}\sim 10\) days), we do not conduct APRX modeling for this event.
### KMT-2016-BLG-1855

In Figure 13, we present the observed light curve of KMT-2016-BLG-1855 with the best-fit model curve and caustic geometry. The observed light curve exhibits anomalies at the peak. We find that the anomaly can be described by a source approaching a diamond-shaped central caustic, which is in the regime of a Chang-Refsdal lens (C-R: Chang & Refsdal, 1979). The best-fit model shows \(\frac{1}{q}=0.023\pm 0.012\), which satisfies our mass-ratio criterion for claiming a planet detection. However, we find that there exist possible solutions caused by the close/wide degeneracy (Griest & Safizadeh, 1998), the offset degeneracy (Zhang & Gaudi, 2022; Zhang et al., 2022), and the 2L1S/1L2S degeneracy (Gaudi, 1998). We also check for the \(\alpha\)-degeneracy (i.e., the degeneracy in the angle of the source trajectory), which can occur for C-R lenses. These solutions are denoted \(n(\pi/2)\), where \(n=(1,2,3)\). In Tables 8 and 9, we present the model parameters of the best-fit and degenerate models. In Figures 14 and 15, we present all possible solutions with their caustic geometries and residuals for comparison.
We find a total of 7 degenerate solutions, including the 1L2S case. For the 2L1S cases, we find the initial parameters of the A, B, C, and D solutions based on the grid search. We also find initial parameters for their paired offset-degeneracy solutions (i.e., A\({}^{\prime}\), B\({}^{\prime}\), C\({}^{\prime}\), and D\({}^{\prime}\)) using the heuristic analysis: \(s^{\prime}={(s_{\pm}^{\dagger})}^{2}/s_{\pm}\), where the subscripts \(+\) and \(-\) indicate the \(s>1\) and \(s<1\) cases, respectively (a helper implementing this pairing is sketched below). Note that we transform our coordinate system from the "secondary" to the "primary" component to conduct the heuristic analysis, because the analysis is valid in the "primary" coordinate system. Then, we refine the model parameters to check the degeneracy (note that we restore the coordinates for direct comparison).
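The pairing reduces to a one-line helper (the usage values below are hypothetical, not taken from the tables):

```python
def offset_partner(s, s_dagger):
    """Offset-degeneracy partner separation: s * s' = (s^dagger)**2."""
    return s_dagger**2 / s

# Hypothetical usage: a solution with s = 0.30 and s^dagger = 1.05
print(offset_partner(0.30, 1.05))   # -> 3.675
```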
For the A and A\({}^{\prime}\) pair, the heuristic analysis predicts \(s^{\prime}=3.680\). The paired offset-degeneracy solution of the A case (i.e., A\({}^{\prime}\)) is consistent with the \(3(\pi/2)\) C-R case, which has an empirical value of \(s=3.780\pm 0.088\). This A-family degeneracy can be resolved (see below). For the B and B\({}^{\prime}\) pair, the heuristic analysis predicts \(s^{\prime}=3.600\), which is consistent with the empirical \(s=3.708\pm 0.121\) of the B\({}^{\prime}\) case. This B family is a C-R lensing case, which shows large uncertainties in the set of \((t_{\rm E},s,q)\) parameters. For the C and C\({}^{\prime}\) pair, the heuristic analysis predicts \(s^{\prime}=1.162\), which is consistent with the empirical value of \(s=1.161\pm 0.042\) of the C\({}^{\prime}\) case. Indeed, the C family is caused by the close/wide degeneracy. For the D case, the heuristic analysis expects \(s^{\prime}=2.951\). However, we find that the paired offset solution evolves toward the B case. Indeed, the caustic geometry of the D case is asymmetric, which differs from the C-R lens case. Thus, because the source trajectory is not perpendicular to the binary axis, the paired solution from the heuristic analysis cannot describe the peak of the light curve, and we would not necessarily expect it to (Gaudi & Gould, 1997). In all cases, the \(\rho_{*}\) measurements are uncertain and give only upper limits on \(\rho_{*}\), as expected from the non-caustic-crossing geometries.
All the 2L1S models nominally have long timescales (\(t_{\rm E}\)), but they also have \(q>1.0\) (i.e., they approach the secondary, less-massive, lens component). For these cases, the actual timescale (\(t_{\rm E}^{\prime}\)) of the event should be scaled by \(t_{\rm E}^{\prime}=t_{\rm E}\sqrt{q}\) as shown in Tables 8 and 9. Hence, given that \(t_{\rm E}^{\prime}\sim 15\) days, it is not surprising that we do not detect the APRX effect (i.e., \(\Delta\chi^{2}_{\rm STD-APRX}=2.7\)). Thus, we conclude that the STD models are the fiducial solutions for this event.
As shown in Figures 14 and 15, all cases describe the peak anomaly well. Although they nominally have \(\Delta\chi^{2}>10\) compared to the best-fit case, we find that the \(\chi^{2}\) differences mostly come from the baseline part (HJD\({}^{\prime}>7600\)). The best-fit case has a wide caustic, which creates a very shallow bump peaking at HJD\({}^{\prime}\sim 7717\), \(\Delta I\sim 0.01\) magnitudes above the baseline observations (see the blue dashed line). However, systematics may exist in the baseline data at this level, especially considering the dispersion of the baseline data (i.e., \(\Delta I\sim 0.65\) magnitudes). Thus, we compute \(\Delta\chi^{2}\) without the baseline data at HJD\({}^{\prime}>7600\) (note that \(t_{0}+\sim 1.5\,t_{\rm E}\sim 7595.25\)), because the \(\chi^{2}\) contributions at the baseline cannot be considered reliable. After this cut, the \(\Delta\chi^{2}\) values for all cases (except the A\({}^{\prime}\) case) are less than 9, as shown in Tables 8 and 9. Hence, we cannot claim to resolve most of the degenerate solutions.
The 1L2S model is completely degenerate with the 2L1S models and cannot be excluded based on physical considerations. First, the finite source effect is not measured; the \(\rho_{*}\) distributions of both sources reach zero within \(3\sigma\). Moreover, because of the severe extinction (\(A_{I}=5.97\); Gonzalez et al., 2012), additional information to conclusively resolve the degeneracy, such as the source color (see Section 4), is not available for this event.
Thus, we treat KMT-2016-BLG-1855 as a planet candidate, and we strongly counsel against cataloging it as a "planet".
## 4 CMD Analysis
For the five planetary events, we measure the angular source radius (\(\theta_{*}\)) using the conventional method described in Yoo et al. (2004), i.e., the color-magnitude diagram (CMD) analysis. The \(\theta_{*}\) measurement is important. If we measure
\(\rho_{*}\) from the finite-source effect, we can determine \(\theta_{\rm E}=\theta_{*}/\rho_{*}\). Furthermore, even if we cannot measure \(\rho_{*}\), \(\theta_{*}\) is required in order to apply the \(\rho_{*}\) distributions as constraints in the Bayesian analysis.
We proceed with this analysis based on multi-band observations (\(I\) and \(V\) bands) taken from the KMTNet survey (i.e., KMTC). We align the KMTNet instrumental color and magnitudes to the OGLE-III scales by cross-matching field stars. We note that the position of the red giant clump centroid (RGC) is determined based on the OGLE-III CMD (Szymanski et al., 2011). In Figure 16, we present the CMDs of the five planetary events for the best-fit cases with the positions of the RGC, source, and blend. We also present all information from the CMD analysis, including \(\theta_{*}\), \(\theta_{\rm E}\), and \(\mu_{\rm rel}\), in Table 10. We note that the intrinsic color of the RGC is adopted from Bensby et al. (2011), and the de-reddened magnitude of the RGC is adopted from Nataf et al. (2013). The de-reddened colors and magnitudes of the source and blend are determined by assuming that they experience the same amount of stellar extinction as the RGC. Lastly, we determine \(\theta_{*}\) using the surface brightness-color relation adopted from Kervella et al. (2004).
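A minimal sketch of this standard calibration (Yoo et al. 2004): the source's offset from the observed clump centroid is added to the intrinsic clump values. The observed centroid and source positions below are hypothetical placeholders, and the intrinsic clump magnitude is field-dependent (Nataf et al. 2013).

```python
# Intrinsic clump values: color from Bensby et al. (2011),
# magnitude (field-dependent) from Nataf et al. (2013).
VI_RGC0, I_RGC0 = 1.06, 14.34

# Hypothetical observed (OGLE-III-calibrated) CMD positions:
VI_RGC, I_RGC = 2.15, 16.40   # clump centroid
VI_S, I_S = 2.35, 19.60       # source

# De-reddened source color/magnitude, assuming the source suffers
# the same extinction as the clump:
VI_S0 = VI_RGC0 + (VI_S - VI_RGC)
I_S0 = I_RGC0 + (I_S - I_RGC)
print(VI_S0, I_S0)            # -> ~1.26, ~17.54
```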
Note that we proceed differently for the special case of the putative second source in the 1L2S solution for KMT-2016-BLG-0625. We find \(I_{\rm S,0}=19.345\pm 0.010\) using the method of Yoo et al. (2004). Then, we derive \(I_{\rm S_{2},0}=25.112\pm 0.231\) based on the \(q_{\rm flux}\) value of the 1L2S model. We convert the de-reddened \(I\)-band magnitude of the second source to an absolute \(I\)-band magnitude (\(M_{I}\)) by adopting \(M_{I,\rm RGC}=-0.12\pm 0.09\) and \(I_{\rm RGC,0}=14.335\) from Nataf et al. (2013): \(M_{I,\rm S_{2}}=10.656\pm 0.248\). We can estimate the radius of the second source, \(R_{\rm S_{2}}\sim 0.208\,R_{\odot}\), based on studies of stellar properties (Pecaut et al., 2012; Pecaut & Mamajek, 2013). Thus, we find that the angular radius of the second source is \(\theta_{*,\rm S_{2}}\sim 0.128\,\mu\)as, which yields \(\mu_{\rm rel,S_{2}}=0.83\pm 0.22\,{\rm mas\,yr^{-1}}\). Note that we adopt the distance to the second source (\(D_{\rm S_{2}}\sim 7.59\,{\rm kpc}\)) from Nataf et al. (2013).
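The arithmetic for the second source can be checked directly from the numbers above (a sketch with rounded constants):

```python
import numpy as np

# Absolute magnitude of the putative second source:
I_S2_0 = 25.112                      # de-reddened I-band magnitude
I_RGC_0, M_I_RGC = 14.335, -0.12     # Nataf et al. (2013)
M_I_S2 = I_S2_0 - (I_RGC_0 - M_I_RGC)
print(f"M_I,S2 = {M_I_S2:.3f}")      # -> 10.657, cf. 10.656 above

# Angular radius from the physical radius and adopted distance:
R_S2_Rsun, D_S2_kpc = 0.208, 7.59
R_sun_m, kpc_m = 6.957e8, 3.0857e19
theta_rad = R_S2_Rsun * R_sun_m / (D_S2_kpc * kpc_m)
theta_muas = theta_rad * (180.0 / np.pi) * 3600.0e6
print(f"theta_*,S2 = {theta_muas:.3f} uas")   # -> ~0.127, cf. ~0.128 above
```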
For KMT-2016-BLG-1855, the field is highly extincted (\(A_{I}=5.97\); Gonzalez et al., 2012), so it is not possible to measure the source color in the \(V\) band from the KMTNet data. We construct an \(I\)-\(H\) CMD for this event by cross-matching the OGLE-III catalog (Szymanski et al., 2011) to VVV DR2 (Minniti et al., 2017), and we convert the KMT pyDIA \(I\) magnitude of the source to the OGLE-III system. This suggests that the source is a red clump giant. However, the clump is extended in both color and magnitude in the CMD. Both the lack of a color measurement and the uncertainty in the clump magnitude would make \(\theta_{*}\) highly uncertain. However, there are no meaningful constraints on \(\rho_{*}\), so we do not calculate a value for \(\theta_{*}\) because it has no bearing on the analysis.
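The cross-matching step can be done with astropy's nearest-neighbour sky matching; the coordinate arrays below are hypothetical placeholders rather than actual OGLE-III or VVV entries.

```python
import astropy.units as u
from astropy.coordinates import SkyCoord

# Hypothetical RA/Dec arrays (deg) standing in for the two catalogs:
ogle = SkyCoord(ra=[268.1000, 268.2000] * u.deg, dec=[-28.9000, -28.8000] * u.deg)
vvv = SkyCoord(ra=[268.1001, 268.2500] * u.deg, dec=[-28.9001, -28.7500] * u.deg)

# Nearest VVV neighbour for each OGLE star, with a match-radius cut:
idx, sep2d, _ = ogle.match_to_catalog_sky(vvv)
good = sep2d < 0.5 * u.arcsec
print(idx, sep2d.to(u.arcsec), good)
```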
We also measure the astrometric offsets between the baseline objects and the sources to check whether or not the blend light can be used as a constraint. For all planetary events, we find that the blend is separated by \(>0.3^{\prime\prime}\), so it is dominated by a star that is not the lens.
## 5 Planet Properties
### Bayesian Formalism
To determine the lens properties, two additional observables are required simultaneously: the angular Einstein ring radius (\(\theta_{\rm E}\)) and the amplitude of the microlens parallax vector (\(|\pi_{\rm E}|\)), which are measured from the finite-source and microlens-parallax effects, respectively. However, events for which both observables are simultaneously measured are relatively rare. Indeed, we can measure only one of these observables for each of the five planetary events presented in this work. Thus, we estimate the lens properties using Bayesian analysis. We follow the Bayesian formalism described in Shin et al. (2023) to generate the Galactic prior. Then, we apply the measured observable as a constraint on the Galactic prior. In Table 11, we present the applied constraints and the lens properties for each event. Regarding the notation of the constraints, \(t_{\rm E}\) indicates a Gaussian weight function constructed from the best-fit value of the \(t_{\rm E}\) parameter and its uncertainty; \(\theta_{\rm E}\) indicates a Gaussian weight adopted from the measured \(\theta_{\rm E}\) if \(\rho_{*}\) is securely measured; \(\rho_{*}\) indicates a weight function built from the \(\Delta\chi^{2}\) distribution as a function of \(\rho_{*}\) if the \(\rho_{*}\) measurement is uncertain; and, lastly, \(\boldsymbol{\pi}_{\rm E}\) indicates a constraint using the 2D APRX distributions described in Ryu et al. (2019). In Table 11, we present various lens properties for each event because each event has degenerate solutions, which yield different lens properties. For ease of cataloging, we present "adopted" values for each property following the method described in Jung et al. (2023), i.e., weighted averages.
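These constraints enter simply as multiplicative weights on samples drawn from the Galactic prior. The sketch below illustrates the mechanics with hypothetical prior draws and measurement values; the actual priors and constraints are those described above and in Table 11.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical Galactic-prior draws: (tE [days], thetaE [mas]) per simulated lens.
tE_samp = rng.lognormal(np.log(20.0), 0.8, 100_000)
thetaE_samp = rng.lognormal(np.log(0.4), 0.7, 100_000)

# Gaussian weights from hypothetical measured constraints:
tE_obs, tE_err = 21.0, 1.0
thetaE_obs, thetaE_err = 0.35, 0.05
w = np.exp(-0.5 * ((tE_samp - tE_obs) / tE_err)**2)
w *= np.exp(-0.5 * ((thetaE_samp - thetaE_obs) / thetaE_err)**2)

# Weighted posterior median of, e.g., thetaE:
order = np.argsort(thetaE_samp)
cdf = np.cumsum(w[order])
cdf /= cdf[-1]
print("posterior median thetaE:", thetaE_samp[order][np.searchsorted(cdf, 0.5)])
```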
### OGLE-2016-BLG-1635
For the Bayesian analysis of this event, we apply constraints obtained from \(t_{\rm E}\) (i.e., the Gaussian weight) and \(\rho_{*}\) weight functions on the Galactic prior, because the \(\rho_{*}\) measurements are uncertain and the APRX is not measured. Note that we can evaluate the effect of the \(\rho_{*}\) weight on the posterior before conducting the Bayesian analysis. If the
lower limit on the relative lens-source proper motion, \(\mu_{\rm rel,+3\sigma}\equiv\theta_{*}/(\rho_{*,+3\sigma}t_{\rm E})\), is \(\lesssim 1\,{\rm mas}\,{\rm yr}^{-1}\), the effect is minor. As expected (see the \(\mu_{\rm rel}\) column of Table 10), the effects of \(\rho_{*}\) are minor for both solutions.
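This criterion is a one-line computation (units: \(\theta_{*}\) in mas, \(t_{\rm E}\) in days; the example values are hypothetical):

```python
def mu_rel_3sigma(theta_star_mas, rho_star_3sigma, tE_days):
    """Lower limit mu_rel,+3sigma = theta_* / (rho_(*,+3sigma) * tE), mas/yr."""
    return theta_star_mas / (rho_star_3sigma * tE_days / 365.25)

# Hypothetical example: theta_* = 5e-4 mas, rho_(*,+3sigma) = 0.01, tE = 21 d
print(mu_rel_3sigma(5e-4, 0.01, 21.0))   # -> ~0.87 mas/yr, i.e., a minor effect
```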
The Bayesian results indicate that the lens system of this event consists of an M-dwarf host star (\(M_{\rm host}\sim 0.4\,M_{\odot}\)) and a super-Jupiter-mass planet with a mass of \(M_{\rm planet}\sim 11.5\,M_{\rm J}\), which is close to the mass limit of planetary objects. The planet orbits the host with a projected separation of \(a_{\perp}\sim 1.31\) au or \(\sim 3.82\) au, which is beyond its snow line. The planetary system is located in the Galactic bulge at a distance of \(\sim 6.6\) kpc from us. Hence, this event is caused by a typical microlensing planetary system: a giant planet orbiting an M-dwarf host beyond the snow line (Ida & Lin, 2005; Kennedy & Kenyon, 2008).
### MOA-2016-BLG-532
For this event, the \(\rho_{*}\) values are measured. Thus, we apply the \(t_{\rm E}\) and \(\theta_{\rm E}\) constraints in the Bayesian analyses. The lens system of this event consists of a late-type M-dwarf host star (\(M_{\rm host}\sim 0.1\,M_{\odot}\)) and a super-Neptune-mass planet (\(M_{\rm planet}\sim 7.2\,M_{\rm N}\)) orbiting with a projected separation of \(a_{\perp}\sim 0.56\) au or \(\sim 1.36\) au. The planetary system is located at a distance of \(D_{\rm L}\sim 7.4\) kpc from us. Similarly to OGLE-2016-BLG-1635, this event is also caused by a typical microlensing planet.
### KMT-2016-BLG-0625
Despite the non-optimal coverage, we can measure \(\rho_{*}\) for this event. Thus, we apply the \(t_{\rm E}\) and \(\theta_{\rm E}\) constraints in the Bayesian analyses. For this event, the Bayesian results for the lens system span a wide range of properties because of the degenerate solutions (i.e., due to the different \(q\) and \(\rho_{*}\) of each solution; see Table 5). The host star is an M dwarf with a mass in the range \(M_{\rm host}\sim 0.2\)-\(0.3\,M_{\odot}\). For the \(s_{-}\) case (the best-fit solution), the planet could be a Neptune-mass planet with a mass of \(M_{\rm planet}\sim 1.36\,M_{\rm N}\) orbiting the host with a projected separation of \(a_{\perp}\sim 1.3\) au. For the remaining cases, the planet could be a super-Earth-mass planet with a mass in the range \(M_{\rm planet}\sim 2.0\)-\(9.0\,M_{\oplus}\) orbiting the host with a projected separation in the range \(a_{\perp}\sim 0.9\)-\(1.9\) au. The planetary system belongs to the Galactic bulge, with a distance in the range \(D_{\rm L}\sim 6.1\)-\(6.7\) kpc.
### OGLE-2016-BLG-1850
For this event, we detect the APRX effect in the light curve. However, the \(\rho_{*}\) measurements are uncertain. Thus, we apply the \(t_{\rm E}\), \(\rho_{*}\) weight, and \(\mathbf{\pi}_{\rm E}\) constraints in the Bayesian analyses. The \(\mathbf{\pi}_{\rm E}\) constraints have major effects on the posteriors, while the \(\rho_{*}\) constraints have only minor effects, as expected from \(\mu_{\rm rel,+3\sigma}\lesssim 1\,{\rm mas}\,{\rm yr}^{-1}\) (see Table 10).
The planetary system of this event consists of an M-dwarf host star (\(M_{\rm host}\sim 0.2\)-\(0.3\,M_{\odot}\)) and a super-Earth-mass planet. We find that the planet mass of the inner cases (\(M_{\rm planet}\sim 9\,M_{\oplus}\)) is smaller than that of the outer cases (\(M_{\rm planet}\sim 11\,M_{\oplus}\)). The planet orbits the host with a projected separation of \(a_{\perp}\sim 1.4\)-\(1.5\) au, beyond its snow line. The system is located at a distance of \(D_{\rm L}\sim 2\) kpc from us, i.e., in the disk, as expected considering the strong microlens parallax effect.
### KMT-2016-BLG-1751
For this event, we conduct Bayesian analyses for \(s_{\pm}\) and \(s^{\prime}_{\pm}\) solutions by applying \(t_{\rm E}\) and \(\rho_{*}\) weights as constraints. We find that the lens system consists of an M-dwarf host (\(M_{\rm host}\sim 0.18\,M_{\odot}\)) and a Jupiter-class planet (\(M_{\rm planet}\sim 0.7\)-\(1.2\,M_{\rm J}\)), which is located at the distance of \(\sim 7.05\) kpc from us. The planet orbits the host with the projected separation of \(a_{\perp}\sim 1.2\)-\(1.4\) au.
We note that, as mentioned in Section 3.6, the \(s^{\prime\prime}_{-}\) case is removed from our fiducial solutions. Thus, although we conduct the Bayesian analysis for this case, we do not include its lens properties in Table 11. However, for completeness, we present the lens properties of this case here. The Bayesian analysis applying the \(t_{\rm E}\) constraint indicates that the lens system consists of an M-dwarf host star (\(M_{\rm host}=0.18^{+0.28}_{-0.11}\,M_{\odot}\)) and a super-Neptune-mass planet (\(M_{\rm planet}=2.5^{+4.25}_{-1.57}\,M_{\rm N}\)) with a projected separation of \(1.28^{+0.54}_{-0.44}\) au. The system is located at a distance of \(7.04^{+0.54}_{-1.38}\) kpc, and the relative lens-source proper motion is \(\mu_{\rm rel}=7.56^{+3.45}_{-2.68}\,{\rm mas}\,{\rm yr}^{-1}\). These results are similar to the lens properties of our fiducial solutions, because the \(t_{\rm E}\) value of the \(s^{\prime\prime}_{-}\) case is similar to theirs, except for the planet mass, which reflects the smallest \(q\) value of the \(s^{\prime\prime}_{-}\) model. On the other hand, the Bayesian analysis applying both the \(t_{\rm E}\) and \(\rho_{*}\) weights yields an extreme lens system caused by the unusually small \(\theta_{\rm E}\). That is, the lens system could consist of a very low-mass host (\(M_{\rm host}=0.02^{+0.05}_{-0.01}\,M_{\odot}\)) and a sub-Neptune-mass planet (\(M_{\rm planet}=0.31^{+0.71}_{-0.19}\,M_{\rm N}\)) with a projected separation of \(0.31^{+0.06}_{-0.07}\) au. The system could be located at a distance of \(8.17^{+1.05}_{-1.04}\) kpc. The relative lens-source proper motion is
\(\mu_{\rm rel}=1.59^{+0.21}_{-0.34}\,{\rm mas\,yr^{-1}}\), which is inconsistent with the typical value for bulge-lens/bulge-source microlensing events (5-10 mas yr\({}^{-1}\)). If the lens star were resolved by future adaptive optics imaging, this could definitively rule out the \(s_{-}^{\prime\prime}\) solution.
## 6 Summary
We found 5 new planetary systems and one planet candidate through a systematic anomaly search of the 2016 prime fields of the KMTNet data archive. These "buried" planets have various properties. For OGLE-2016-BLG-1635, the planetary system consists of an M-dwarf host (\(M_{\rm host}\sim 0.4\,M_{\odot}\)) and a super-Jupiter-mass planet (\(M_{\rm planet}\sim 11.5\,M_{\rm J}\)), which orbits the host with a projected separation of 1.3 or 3.8 au. The system is located at a distance of \(\sim 6.6\) kpc from us. For MOA-2016-BLG-532, the lens system indicates that a super-Neptune-mass planet (\(M_{\rm planet}\sim 7.2\,M_{\rm N}\)) orbits a late M-dwarf host star (\(M_{\rm host}\sim 0.1\,M_{\odot}\)) with a projected separation of 0.6 or 1.4 au. The planetary system is located at a distance of \(\sim 7.4\) kpc from us. For KMT-2016-BLG-0625, because of the degenerate solutions, the planetary system consists of an M-dwarf host star with a mass in the range 0.1-0.3 \(M_{\odot}\) and a planet with a mass in the range 2.0 \(M_{\oplus}\)-1.4 \(M_{\rm N}\). The system is located at a distance in the range 6.1-6.7 kpc. For OGLE-2016-BLG-1850, the planetary system consists of an M-dwarf host star (\(M_{\rm host}\sim 0.3\,M_{\odot}\)) and a super-Earth-mass planet (\(M_{\rm planet}=9\)-11 \(M_{\oplus}\)) with a projected separation of \(\sim 1.5\) au. The system is located at a distance of 2 kpc. For KMT-2016-BLG-1751, we adopt the lens properties of the \(s_{\pm}\) and \(s^{\prime}_{\pm}\) cases, which indicate that a Jupiter-class planet (\(M_{\rm planet}=0.7\)-1.2 \(M_{\rm J}\)) orbits an M-dwarf host (\(M_{\rm host}\sim 0.18\,M_{\odot}\)). The system is located at a distance of \(\sim 7.05\) kpc.
Our goal in the series including this work is to build a complete planet sample discovered by the microlensing method for the 2016-2021 KMTNet data archive. In Table 12, we present all microlensing planets discovered on the KMTNet prime fields in 2016, which are published planets that are recovered by the AnomalyFinder and five newly discovered in this work. The horizontal line separates planets expected to be part of the final statistical sample and those whose mass ratios are likely too uncertain or too large to be included.
As discussed in Clanton & Gaudi (2014a,b) and Shin et al. (2019), each planet detection method has a different detection sensitivity, which provides complementary planet samples for studying planet demographics and planet frequency in our Galaxy. This series of works is therefore an important step toward a complete microlensing planet sample. Indeed, although the sample size of microlensing planets is relatively small compared to other methods, such as radial velocity and transits, the microlensing planet sample is less biased with respect to host mass because, in principle, the microlensing method can detect any foreground object regardless of its host brightness. Thus, a complete microlensing planet sample can help us to obtain a better understanding of planet demographics in our Galaxy.
This research has made use of the KMTNet system operated by the Korea Astronomy and Space Science Institute (KASI), and the data were obtained at three host sites of CTIO in Chile, SAAO in South Africa, and SSO in Australia. I.-G.S., S.-J.C., and J.C.Y. acknowledge support from N.S.F Grant No. AST-2108414. Work by C.H. was supported by grants of the National Research Foundation of Korea (2017R1A4A1015178 and 2019R1A2C2085965). Y.S. acknowledges support from BSF Grant No. 2020740. W.Z. and H.Y. acknowledge support by the National Science Foundation of China (grant No. 12133005). The MOA project is supported by JSPS KAKENHI Grant Number JP24253004, JP26247023, JP23340064, JP15H00781, JP16H06287, JP17H02871 and JP22H00153. The computations in this paper were conducted on the Smithsonian High Performance Cluster (SI/HPC), Smithsonian Institution ([https://doi.org/10.25572/SIHPC](https://doi.org/10.25572/SIHPC)).
## Appendix A Non-Planetary Events
We report on the analyses of binary-lens events that were found by the AnomalyFinder as candidate planetary events. From the initial analyses, we find that the light curves of these events could be described by both binary-lens and planetary-lens interpretations. However, based on analyses using the TLC reductions, we find that these events disfavor planetary solutions (\(q<0.03\)) by \(\Delta\chi^{2}>15\). Thus, we cannot claim secure detections of planets. In Table 2, we present the observational information for these events.
### OGLE-2016-BLG-0987
The light curve of OGLE-2016-BLG-0987 (which we identified as KMT-2016-BLG-0020) shows deviations from the 1L1S interpretation (\(\Delta\chi^{2}_{\rm 1L1S-2L1S}=159.68\)). From the 2L1S modeling, we find several 2L1S solutions that can explain the deviations. Among the solutions, four are binary-lens cases and three are planetary-lens cases. The best-fit solution is a binary-lens case with \((s,q)=(0.492\pm 0.013,0.108\pm 0.005)\). However, the lowest-\(\chi^{2}\) planetary solution, \((s,\,q(\times 10^{-4}))=(0.702\pm 0.081,\,59.198\pm 49.265)\), shows \(\Delta\chi^{2}=26.60\) compared to the best-fit solution. This \(\Delta\chi^{2}\) does not satisfy our criterion for claiming a planet detection. Also, the planetary solutions cannot describe the subtle bump feature at HJD\({}^{\prime}\sim 7528\). Thus, we conclude that OGLE-2016-BLG-0987 should be removed from the planet sample. We note that, although this event is not likely to be caused by a planetary lens system, the best-fit solution indicates that the companion could be a low-mass object such as a brown dwarf.
### MOA-2016-BLG-123
For this event (which we identified as KMT-2016-BLG-0106), we find seven local solutions based on the analysis using the TLC reductions. However, none of the local minima satisfy our \(q\) criterion (i.e., \(q<0.03\)) for claiming a planet detection. The best-fit solution indicates that the event was caused by a binary lens system, i.e., \((s,\,q)=(2.671\pm 0.089,1.113\pm 0.099)\). Among the local minima, the model with the lowest \(q\) value (\(q=0.052\pm 0.003\)) is disfavored by \(\Delta\chi^{2}=122.70\) compared to the best-fit solution. Thus, we conclude that this event should be removed from the planet sample.
### OGLE-2016-BLG-0558
For this event (which we identified as KMT-2016-BLG-0157), we found two solutions showing planet-like mass ratios (\(q\sim 0.03\)) in the initial analysis. We therefore refine the solutions based on the TLC reductions. The analysis using the TLC reductions clearly shows localized solutions (i.e., the \(s_{\pm}\) cases) with \((s,\,q)=(0.580\pm 0.009,0.048\pm 0.003)\) and \((2.158\pm 0.027,0.057\pm 0.003)\) for the \(s_{-}\) and \(s_{+}\) cases, respectively. However, the mass ratios do not satisfy our criterion (\(q<0.03\)) for claiming a planet detection, although the companion is likely to be a low-mass object such as a brown dwarf. Hence, we remove this event from the planet sample.
### KMT-2016-BLG-0374
We find plausible solutions within the planetary regime (\(q<0.03\)) from the initial analysis. However, based on the analysis using the TLC reductions, we find that the best-fit solutions are binary-lens cases with \((s,q)=(6.89,0.55)\) and \((0.20,0.22)\) for the \(s_{+}\) and \(s_{-}\) cases, respectively. We also find that the planet-like models are disfavored by \(\Delta\chi^{2}=19.74\) and \(18.60\) for the \(s_{+}\) and \(s_{-}\) cases, respectively. The planet-like solutions cannot satisfy the criterion for planet detection. Thus, we remove this event from our planet sample.
### KMT-2016-BLG-0446
The AnomalyFinder detected a subtle deviation in the light curve at HJD\({}^{\prime}\sim 7631.0\)-\(7636.0\) based on the pipeline data. The anomaly can be explained by planetary models. However, the TLC reductions reveal that the anomaly is a false positive. We then find that the light curve can be explained by the 1L1S interpretation rather than any 2L1S interpretation. Thus, we remove KMT-2016-BLG-0446 from our sample.
### OGLE-2016-BLG-1722
We find that the best-fit solution of OGLE-2016-BLG-1722 (which we identified as KMT-2016-BLG-1716) is caused by a binary lens with mass ratio \(q=1.247\pm 0.204\) (i.e., \(1/q\sim 0.80\)) for the \(s_{-}\) case (a competing \(s_{+}\) solution also exists). The best-fit light curves are produced by an approach to a diamond-shaped caustic. Thus, a four-fold degeneracy exists (i.e., four solutions with different source trajectories for different \(\alpha\) values).
We also find alternative planetary solutions with mass ratio \(q=(26.095\pm 8.013)\times 10^{-4}\) for the \(s_{-}\) case. However, \(\Delta\chi^{2}_{\rm(planet-binary)}=29.23\), which does not satisfy our criterion (\(\Delta\chi^{2}<10\)) for claiming a planet detection. Indeed, these planetary models clearly show worse fits in their residuals. Hence, we remove OGLE-2016-BLG-1722 from our planet-candidate sample for full analysis.
### OGLE-2016-BLG-0974
The best-fit solution of OGLE-2016-BLG-0974 (which we identified as KMT-2016-BLG-1863) is a binary-lens model with \((s,q)=(0.277\pm 0.005,0.306\pm 0.031)\). There also exists an \(s_{+}\) solution, \((s,q)=(5.626\pm 0.215,0.782\pm 0.133)\), with
\(\Delta\chi^{2}=6.96\) caused by the close/wide degeneracy. We find that the solutions having the lowest \(\chi^{2}\) in the planetary regime (\(q<0.03\)) are disfavored by \(\Delta\chi^{2}=78.89\) and \(73.52\) for the \(s_{-}:[s,\,q]=[0.576\pm 0.007,(161.679\pm 8.552)\times 10^{-4}]\) and \(s_{+}:[s,\,q]=[1.657\pm 0.025,(176.374\pm 10.013)\times 10^{-4}]\) cases, respectively. Thus, we conclude that OGLE-2016-BLG-0974 is caused by a binary lens system rather than a planetary lens system.
## Appendix B OGLE-2016-BLG-0185
We also present the analysis of OGLE-2016-BLG-0185, which was identified by eye as a planet candidate but not selected as anomalous by the AnomalyFinder process. We conduct a detailed analysis based on the TLC reductions for this event. We find that the best-fit solution is a binary-lens case with \((s,q)=(4.834\pm 0.201,3.480\pm 1.110)\). This is equivalent to \(\frac{1}{q}=0.287\pm 0.085\), which clearly implies a binary-lens origin. We also search for a planetary model. The best planetary model that satisfies our mass-ratio criterion has \((s,q)=(0.724\pm 0.037,0.013\pm 0.004)\), but it is disfavored by \(\Delta\chi^{2}=46\) compared to the best-fit model. Thus, we conclude that this event was caused by a binary lens system.
Although OGLE-2016-BLG-0185 turned out to be a binary lens event, it is still an important test case for verifying the AnomalyFinder process and assessing possible failure modes. In fact, the AnomalyFinder algorithm did identify a series of possible anomalies in this event, but the human operator rejected them as "fake." In OGLE-2016-BLG-0185, the anomaly occurs over the peak of the event, but because the event occurs early in the microlensing season, it is only sparsely covered, and the primary deviation from a point lens occurs only in the KMTA datasets. In addition, the event has a short timescale. Hence, because of the \(\chi^{2}\) likelihood estimation, a point-lens fit is biased toward the points at the peak (which have the smallest error bars) and the baseline points (which dominate in number), so the fit normalized the flux levels of the KMTA data such that the peak points (due to the anomaly) lay on the point-lens light curve. As a result, the "anomalies" identified by the AnomalyFinder lay in the rising and falling parts of the light curve and were caused by the bad flux normalization rather than by the actual anomaly (see Figure 17).
OGLE-2016-BLG-0185 is qualitatively similar to KMT-2021-BLG-2294, which was also missed by the AnomalyFinder process (Shin et al., 2023). Both are short-timescale events (\(t_{\rm E}=10.8\pm 1.3\) and \(7.1\pm 0.3\) days, respectively) with anomalies that occurred at the peak of the events. On the other hand, the reasons the anomalies were missed are distinctly different: for OGLE-2016-BLG-0185, the wrong anomaly was identified, whereas for KMT-2021-BLG-2294, the anomaly did not meet the detection threshold. The latter case is acceptable from the perspective of constructing a statistical sample of events. The failure for OGLE-2016-BLG-0185 is more concerning, but it could be compensated for by adding an additional criterion to the AnomalyFinder algorithm that checks for outliers in the flux normalization.
|
2305.12969 | WISP Searches on a Fiber Interferometer under a Strong Magnetic Field | A novel table-top experiment is introduced to detect photon-axion conversion:
WISP Searches on a Fiber Interferometer (WISPFI). The setup consists of a
Mach-Zehnder-type interferometer with a fiber placed inside an external
magnetic field, where mixing occurs and is detected by measuring changes in
amplitude. Hollow-core photonic crystal fibers (HC-PCF) will be used to achieve
resonant mixing that is tuneable by regulating the gas pressure in the fiber.
An unexplored axion mass-range (28 meV to 100 meV) can be probed reaching the
two-photon coupling expected for the QCD axion. | Josep Maria Batllori, Yikun Gu, Dieter Horns, Marios Maroudas, Johannes Ulrichs | 2023-05-22T12:25:31Z | http://arxiv.org/abs/2305.12969v3 | # WISP Searches on a Fiber Interferometer under a Strong Magnetic Field
###### Abstract
A novel table-top experiment is introduced to detect photon-axion conversion: WISP Searches on a Fiber Interferometer (WISPFI). The setup consists of a Mach-Zehnder-type interferometer with a fiber placed inside an external magnetic field (14 T), where mixing occurs and is detected by measuring changes in phase/amplitude. We will use hollow-core photonic crystal fibers (HC-PCF) to achieve resonant mixing that is tuneable by regulating the gas pressure in the fiber. An unexplored axion mass-range (50 meV-100 meV) can be probed reaching the two-photon coupling expected for the QCD axion.
Keywords: Axions, Interferometer, Hollow-core fibers. PACS: 12.30.-c, 12.30.Jb
## I Introduction
Axions are weakly interacting pseudoscalar particles introduced to solve the strong CP problem in quantum chromodynamics (QCD) and have been identified as a candidate for Cold Dark Matter (CDM) [1; 2; 3; 4]. The QCD axion with mass \(m_{a}\) inherits a non-vanishing, model-dependent two-photon coupling strength \(g_{a\gamma\gamma}\propto m_{a}\). This two-photon coupling provides a rich phenomenology that can be explored both experimentally and observationally. While cosmological and astrophysical searches are sensitive to a wide range of the axion parameter space, laboratory experiments searching for axions as CDM (so-called haloscopes) have achieved the best sensitivity so far and start to rule out the benchmark QCD axion models for a narrow mass range from 2.81 µeV to 3.31 µeV [5]. However, these results depend upon the local density of CDM, which is poorly constrained and could be substantially smaller than the average at similar galactocentric distances [6]. On the other hand, laboratory experiments that do not rely on axions forming CDM (e.g., light-shining-through-wall experiments [4]) are less sensitive, and none of the existing (or projected) experiments achieve sufficient sensitivity to probe the QCD axion (for an overview see, e.g., [7]).
In this paper, we introduce a new experimental setup called WISPFI (WISP searches on a Fiber Interferometer) that focuses on photon-axion conversion in a waveguide by measuring the photon reduction in the presence of a strong external magnetic field [8]. In this novel approach, light guiding over long distances can be achieved together with resonant detection in a spatially confined region inside the bore of a strong magnet. The basic idea of WISPFI is to use a Mach-Zehnder-type interferometer (MZI) in which a laser beam is split into two arms, with one arm used as a reference and the other placed inside a strong magnetic field which induces photon-to-axion conversion (see Fig. 1; further details are given in a later section). A phase shift and an amplitude reduction can then be measured in the presence of a non-vanishing photon-axion coupling \(g_{a\gamma\gamma}\). The measurable effect of axion-photon mixing relies on the Primakoff effect. The resulting conversion probability [9] is \(P_{\gamma\to a}\propto g_{a\gamma\gamma}^{2}(BL)^{2}\ll 1\), where \(g_{a\gamma\gamma}\) is the axion-photon coupling coefficient and \(BL\) is the product of the transversal magnetic field \(B\) and the length \(L\) over which the photon beam passes through the external magnetic field. As a comparison, in light-shining-through-wall experiments, the signal rate depends on the product of the photon-to-axion and axion-to-photon conversion probabilities and therefore scales as \(P_{\gamma\to a\rightarrow\gamma}\propto g_{a\gamma\gamma}^{4}(BL)^{4}\ll P_{\gamma\to a}\).
## II Photon-Axion Mixing in Hollow-Core Photonic Crystal Fibers (HC-PCF)
The photon-to-axion conversion probability assuming a mode propagating in the z-direction [9] is:
\[P_{\gamma\to a}=\sin^{2}(2\theta)\sin^{2}\left(k_{\mathrm{osc}}z \right), \tag{1}\]
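As a numerical illustration of Eq. (1), the following sketch evaluates the conversion probability for given values of the mixing angle and the oscillation wavenumber. Both are treated as free inputs here (their dependence on \(g_{a\gamma\gamma}\), \(B\), and the effective photon mass in the gas-filled fiber is not reproduced in this snippet), and the numbers are placeholders rather than WISPFI design values.

```python
import numpy as np

def conversion_probability(theta, k_osc, z):
    """Photon-to-axion conversion probability of Eq. (1):
    P = sin^2(2*theta) * sin^2(k_osc * z).
    theta : mixing angle [rad]
    k_osc : oscillation wavenumber [1/m]
    z     : propagation distance along the fiber [m]
    """
    return np.sin(2.0 * theta) ** 2 * np.sin(k_osc * z) ** 2

# Illustrative numbers only: a small mixing angle over a 1 m fiber section.
z = np.linspace(0.0, 1.0, 5)              # positions along the fiber [m]
print(conversion_probability(1e-6, 2 * np.pi, z))
```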
Figure 1: Schematic view of the experimental setup of WISPFI considering a partial-free space MZI for detecting photon-axion oscillations. In red, the laser beam in free space is shown, while in black and blue the light beams propagating through the SMF and HC-PCF, respectively are represented. The various acronyms correspond to: electro-optical modulator (EOM), lenses (L), Faraday isolator (FI), beam–splitter (BS), mirrors (M), temperature-controller pad (TCP), fiber stretcher (FS), voice-coil (VC), half-waveplate (HWP), polarized beam-splitter (PBS), photo-detector (PD), and low-pass filter (LPF). |
2310.18994 | Investigation of correlation effects in FeSe and FeTe by LDA + U method | Correlation effects are observed to be strong in iron chalcogenide superconductors
in experimental and theoretical investigations. We present a comparative study
of the influence of the Coulomb interaction and Hund's coupling on the electronic
structure of FeSe and FeTe. The calculation is based on density functional
theory (DFT) within the local density approximation (LDA+U) framework employed in
the TB-LMTO ASA code. We found that the correlation effects are orbital selective
because the strength of the interorbital hybridization among the different Fe-3d
orbitals, mediated via the chalcogen (Se/Te-p) orbitals, differs between the two
compounds; however, the Coulomb interaction is screened significantly by the Te-p
bands in FeTe. Similarly, the orbital selection differs between the two compounds
because of the difference in the chalcogen height.
difference in the chalcogen height. | H. Lohani, P. Mishra, B. R. Sekhar | 2023-10-29T12:31:10Z | http://arxiv.org/abs/2310.18994v1 | # Investigation of correlation effects in FeSe superconductor by LDA+U method
###### Abstract
Correlation effects are observed to be strong in iron chalcogenide superconductors in both experimental and theoretical investigations. We present a comparative study of the influence of the Coulomb interaction and Hund's coupling on the electronic structure of FeSe and FeTe. The calculation is based on density functional theory (DFT) within the local density approximation (LDA+U) framework employed in the TB-LMTO ASA code. We found that the correlation effects are orbital selective because the strength of the interorbital hybridization among the different Fe-3d orbitals, mediated via the chalcogen (Se/Te-p) orbitals, differs between the two compounds; however, the Coulomb interaction is screened significantly by the Te-p bands in FeTe. Similarly, the orbital selection is different in the two compounds because of the difference in the chalcogen height.
## 1 Introduction
Iron-based superconductors, particularly members of the FeSe\({}_{1-x}\)Te\({}_{x}\) family, attract much attention due to their strong electron correlations, unlike other superconductors. A recent advancement in this field is the synthesis of single-layer films of FeSe on SrTiO\({}_{3}\) substrates exhibiting superconductivity (T\({}_{c}\)=80 K), which turns insulating with the addition of one more layer[1]. This unusual behavioral difference between single- and double-layer films of FeSe is a signature of strong electron correlation, which has been experimentally observed[2, 3]. Superconductivity in the FeSe\({}_{1-x}\)Te\({}_{x}\) compounds was first reported by Hsu _et al._[4] in the FeSe (x = 0) compound, which exhibits a T\({}_{c}\) around 8 K that rises up to 37 K under pressure (7 GPa)[5]. On the other hand, the other extreme composition of this family, Fe\({}_{1.068}\)Te, though not a superconductor, shows a spin density wave (SDW) ordering at 67 K[12] with an accompanying structural transition from tetragonal to monoclinic. With Se doping, superconductivity emerges in FeTe with a simultaneous suppression of the SDW, and T\({}_{c}\) reaches a maximum of 15 K for the x = 0.5 composition. The Fe content is also critical for superconductivity: excess Fe favors spin localization, destroying superconductivity[14, 15]. Both FeSe and FeTe have a tetragonal crystal structure belonging to the space group P4/nmm. It consists of a square planar sheet of Fe atoms, each tetrahedrally coordinated with anion (Se/Te) atoms. However, the height of the anion atom above the Fe square plane differs between these two compounds, and this plays a pivotal role in determining their electronic properties[10, 13].
A recent ARPES study on FeSe\({}_{1-x}\)Te\({}_{x}\) compositions by Ieki _et al._[39] has clearly shown the strong electronic correlation in these compounds, where a small quasiparticle weight in FeTe transforms into a sharp one with increasing Se content. Other ARPES results [32, 37] on these compounds have shown significant band renormalization, which was supported by calculations based on the local density approximation (LDA). Tamai _et al._[35] found a mass renormalization factor of m\({}^{*}\)/m = 20 in their ARPES study of FeSe\({}_{0.42}\)Te\({}_{0.58}\). This is close to the values observed in highly correlated systems such as transition metal oxides. Also, our angle-integrated valence band photoemission study on FeSe\({}_{1-x}\)Te\({}_{x}\)[19] revealed significant spectral weight shifts in the near-E\({}_{f}\) region with Se doping, leading to the formation of a pseudogap. Furthermore, a temperature-dependent orbital-selective spectral weight transfer was also reported by us[19]. Although such manifestations of strong Coulomb correlation were also shown in many other photoemission studies[20, 36], these experimental observations are not well addressed by LDA-based electronic structure calculations. However, the results of some recent calculations[17, 18, 31], where Coulomb correlations were included using the LDA+DMFT framework, are very close to the experimental findings. In this paper, we present calculations showing the evolution of the electronic structure upon incorporation of different values of the Coulomb interaction U and the intra-atomic exchange J, based on the LDA+U scheme, in FeSe and FeTe. We observed multi-orbital correlation effects in the Fe-3d states, which are more prominent in FeSe than in FeTe. We discuss our results with reference to the difference in the geometry of the anion tetrahedra in the two compounds.
## 2 Details of Calculation
Our band structure calculations were based on the first-principles Langreth-Mehl-Hu gradient-corrected, von Barth-Hedin-parametrized LDA [27] energy and potential. The lattice parameters used in the calculations were taken from experimental data[21, 22] published earlier by others. Correlation effects of the Fe-3d orbitals have been examined by employing different values of the Coulomb interaction parameter U and the Hund's coupling J[28]. Empty spheres were introduced to make the total volume of the spheres equal to the volume of the unit cell within the permissible limit of the atomic sphere approximation (ASA). Fe-4s, 4p, 3d; Se-4s, 4p, 4d; and Te-5s, 5p, 5d orbitals were used as the basis set for the valence energy region. A mesh of 12\(\times\)12\(\times\)8 k-points was used for sampling the irreducible part of the Brillouin zone. The height of the anion atom above the Fe square plane was relaxed by minimizing the total energy using the Quantum Espresso code[29]. We checked our calculation parameters by comparing our results with LDA results reported earlier[36, 30].
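The anion-height relaxation described above could, for instance, be scripted as follows with ASE driving Quantum ESPRESSO. This is a minimal sketch, not our production setup: the lattice constants, the internal Se coordinate, the pseudopotential filenames, and the cutoffs are illustrative placeholders.

```python
# Minimal sketch (assumed setup) of relaxing the Se height in FeSe with
# ASE driving Quantum ESPRESSO, analogous to the relaxation described
# above. All numerical inputs below are illustrative placeholders.
from ase import Atoms
from ase.calculators.espresso import Espresso
from ase.constraints import FixAtoms
from ase.optimize import BFGS

a, c, z = 3.77, 5.52, 0.27  # tetragonal P4/nmm cell (approximate values)
fese = Atoms("Fe2Se2",
             scaled_positions=[(0.75, 0.25, 0.0), (0.25, 0.75, 0.0),
                               (0.25, 0.25, z), (0.75, 0.75, 1.0 - z)],
             cell=[a, a, c], pbc=True)
fese.set_constraint(FixAtoms(indices=[0, 1]))  # keep the Fe plane fixed

fese.calc = Espresso(
    pseudopotentials={"Fe": "Fe.pbe-n-rrkjus_psl.UPF",  # placeholder names
                      "Se": "Se.pbe-n-rrkjus_psl.UPF"},
    input_data={"system": {"ecutwfc": 50, "ecutrho": 400,
                           "occupations": "smearing",
                           "smearing": "mv", "degauss": 0.01}},
    kpts=(12, 12, 8))  # the 12x12x8 mesh quoted above

BFGS(fese).run(fmax=0.02)               # minimize forces on the Se atoms
print(fese.get_scaled_positions()[2:])  # relaxed internal Se coordinates
```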
## 3 Results and Discussion
### DOS
Fig (1a) and (1b) show the density of states (DOS) for FeSe and FeTe, respectively, over the binding energy (BE) range 4.0 to -7.0 eV. The valence band (VB) DOS comprises the region from E\({}_{f}\) to -6.0 eV BE, while the unoccupied DOS extends from E\({}_{f}\) to 3.0 eV BE. The near-E\({}_{f}\) states from 2.8 to -2.4 eV are predominantly Fe-3d derived for both FeSe and FeTe. These Fe-3d states, separated into lower and upper bands, exhibit a clear pseudogap feature just above E\({}_{f}\) (0.24 eV) in the case of FeSe, whereas it is less prominent in the case of FeTe. The states from -2.2 to -6.0 eV originate from hybridized Fe-3d and anion (Se/Te)-p states. Interestingly, for FeSe there exists a sharp gap at -2.3 eV which is not present in FeTe. Due to the smaller electronegativity of Te compared to Se, the hybridized states between the Fe-3d and anion-p orbitals are placed at lower BE in FeTe in comparison to FeSe. The DOS have also been calculated after downfolding the valence orbitals of the anion atom (Fig (1c) and (1d)) in order to gain deeper insight into the role of the anion orbitals. The gap at -2.3 eV present in Fig (1a) is due to the splitting of the bonding and antibonding bands of the hybridized Fe-3d and Se-p states, and it becomes less prominent when the valence orbitals of the Se atom are downfolded, as shown in Fig (1d). Similarly, the pseudogap feature across E\({}_{f}\) is also associated with the hybridization between the Fe-3d and Se-p orbitals. Unlike in FeSe, the anion orbitals do not play any major role in modifying the DOS in FeTe, as is clear from Fig (1b) and (1d). This indicates a weak hybridization between the Fe-3d and Te-p orbitals in FeTe. The role of the anion orbitals is linked to the structural geometry of FeSe and FeTe. The insets of Fig (1c) and (1d) show the geometry of the anion tetrahedra in FeSe and FeTe, respectively. This tetrahedral geometry depends on two important parameters: first, the height of the anion above the Fe square plane (z), which is 1.64 Å in FeTe and 1.46 Å in FeSe, and second, the anion-Fe-anion angle (\(\alpha\)). The larger z height in FeTe reduces the interorbital hopping among the Fe-3d orbitals mediated via the anion p orbitals. Similarly, the value of \(\alpha\) is 99.9\({}^{\circ}\) in FeTe, which increases to the perfect-tetrahedron value of 109.4\({}^{\circ}\) in FeSe. The large value of \(\alpha\) and the small anion height (z) make the hybridization between the Fe-3d and anion-p orbitals stronger in FeSe than in FeTe. This difference in hybridization strength is reflected in the DOS plots of Fig (1a) and (1c).
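To make these two geometry parameters concrete, the following sketch computes the anion-Fe-anion angles of an idealized FeX\({}_{4}\) tetrahedron from the in-plane lattice constant \(a\) and the anion height \(z\), with Fe at the origin, two anions above at \((\pm a/2,0,z)\) and two below at \((0,\pm a/2,-z)\). The in-plane lattice constants below are illustrative assumptions; the exact angle also depends on which anion pair is considered.

```python
import numpy as np

def anion_tetrahedron_angles(a, z):
    """Anion-Fe-anion angles of an idealized FeX4 tetrahedron: Fe at the
    origin, two anions above at (+-a/2, 0, z), two below at (0, +-a/2, -z),
    with in-plane lattice constant a and anion height z (in Angstrom)."""
    up1, up2 = np.array([a / 2, 0, z]), np.array([-a / 2, 0, z])
    dn1 = np.array([0, a / 2, -z])
    def angle(u, v):
        return np.degrees(np.arccos(u @ v / (np.linalg.norm(u) * np.linalg.norm(v))))
    return angle(up1, up2), angle(up1, dn1)  # same-side and cross-side angles

# z values as quoted in the text; lattice constants are illustrative.
print(anion_tetrahedron_angles(a=3.77, z=1.46))  # FeSe: near the ideal 109.47 deg
print(anion_tetrahedron_angles(a=3.82, z=1.64))  # FeTe: stronger distortion
```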
Coulomb correlation effects are important in bands of narrow width, especially the Fe-3d states. The changes in the Fe-3d states under the influence of different values of U have therefore been calculated and are shown for FeSe and FeTe in Panels (a) and (b) of Fig 2, respectively. In FeSe, the Fe-3d states start localizing with the application of U, and noticeable changes occur at higher values of U. For the U = 4.0 eV case, the pseudogap feature disappears around E\({}_{f}\), and only two peaks are observed in the VB compared to the U = 0.0 eV case. These two peaks merge and shift towards higher BE for U = 5.0 eV. In FeTe, the Fe-3d states also become localized under the application of U. However, the shift towards higher BE at large values of U (3.5 and 5.0 eV) is smaller, and more states remain in the vicinity of E\({}_{f}\) at smaller values of U (1.0 and 2.0 eV). In addition, the narrowing of the lower and upper bands of the Fe-3d states with increasing U is weaker. This indicates that the effect of the Coulomb correlation is weak in FeTe. A possible reason for this is the presence of Te-p states at lower BE, which screen U strongly [23]. On the basis of previous reports [23, 17, 31], the values U = 4.0 and 3.5 eV were chosen to study the evolution of the Fe-3d states under the influence of J in FeSe and FeTe, as shown in Fig 3(a) and 3(b), respectively. It is observed that the Fe-3d states are modified significantly even by the introduction of a small value of J = 0.1 eV in FeSe. The Hund's coupling shifts all the Fe-3d states towards lower BE, with the simultaneous appearance of a pseudogap slightly above E\({}_{f}\). With further increase in the value of J, there are no substantial changes in the DOS. In FeTe, the Fe-3d states are also shifted towards lower BE; in particular, the states near E\({}_{f}\) increase gradually with the incorporation of J, although the changes are smaller compared to FeSe. These results show that the Hund's coupling J is a key factor in the formation of the Fe-3d DOS.
In order to highlight the correlation effects, the DOS of the different Fe-3d orbitals are plotted for different values of U and J in FeSe and FeTe in Panels (a) and (b) of Fig 4, respectively. In FeSe, in the absence of U and J, the near-E\({}_{f}\) states and the pseudogap feature arise from the d\({}_{yz/xz}\) and d\({}_{x^{2}-y^{2}}\) orbitals. The states originating from the d\({}_{xy}\) orbital show the largest splitting, with two peaks at -1.7 and 1.6 eV BE in the DOS. Additionally, a clear gap is observed around E\({}_{f}\) in the d\({}_{3z^{2}-r^{2}}\) states, which are quite localized at -0.8 eV. Application of the Coulomb interaction (U = 4.0 eV) results in localization of the states derived from all four orbitals, and the states shift towards higher BE. This shift is largest for the d\({}_{x^{2}-y^{2}}\) states. The major effect of U is observed in the d\({}_{yz/xz}\) states, where the broad states near E\({}_{f}\) transform into two clear peaks at higher BE. Hence the pseudogap feature vanishes across E\({}_{f}\). Application of a small Hund's coupling J = 0.1 eV restores the d\({}_{yz/xz}\) states near E\({}_{f}\), and no significant changes are seen on further increasing J from 0.1 to 1.2 eV. This nature of the pseudogap, which occurs over the full range of Hund's coupling but is absent for J = 0.0 at U = 4.0 eV, is consistent with the previous work of Ansgar _et al._[18], where this pseudogap is attributed to a resonance in the self-energy caused by spin fluctuations. In FeTe, in the absence of U and J, the near-E\({}_{f}\) states and the pseudogap are also formed by the d\({}_{yz/xz}\) and d\({}_{x^{2}-y^{2}}\) states, as in FeSe, but a gap is present at -0.7 eV in the d\({}_{yz/xz}\) states, and more of these states lie across E\({}_{f}\). The d\({}_{3z^{2}-r^{2}}\) and d\({}_{x^{2}-y^{2}}\) states shift towards higher BE on switching on U = 3.5 eV, and the shift is quite small in comparison to FeSe. These states show an incremental shift towards E\({}_{f}\) with increasing J in this case. These changes in the DOS are summarized in Table 1, where the electron occupancies of the different Fe-d orbitals are tabulated for quantitative analysis. The occupancies of the d\({}_{yz/xz}\) and d\({}_{x^{2}-y^{2}}\) orbitals show opposite behavior in FeSe and FeTe under the influence of the Coulomb correlation energy. On the other hand, a remarkable enhancement is observed in the occupancy of the d\({}_{x^{2}-y^{2}}\) orbital after introducing J = 0.1 eV in FeSe; it is almost twice that of the J = 0.0 eV case. The changes in the occupancies of the Fe-3d orbitals due to the effects of U and J presented here summarize the orbital-selective effects in the Fe-3d orbitals of FeSe and FeTe. The individual occupancies of the four orbitals differ, and these differences are further enhanced by the Coulomb interaction and the Hund's coupling. This results in an orbital-selective correlation effect that crucially depends on the individual band filling factors[43].
### Band
Fig 5 shows the near-E\({}_{f}\) band structure of FeSe with U = 0.0 (5a) and U = 4.0 eV (5b), and of FeTe with U = 0.0 (5c) and U = 3.5 eV (5d). Initially, when U and J = 0.0 eV, three hole-like and two electron-like bands are observed at the \(\Gamma\) and M points, respectively, in both FeSe and FeTe. However, the outer hole-like bands are quasidegenerate in FeSe. In order to show the contributions of the different Fe-3d orbitals, fat bands were calculated for both FeSe and FeTe; these are plotted in Fig 5(e-h) for FeSe with U and J = 0.0 eV. The fatness of the bands indicates that the innermost hole-like band has d\({}_{yz/xz}\) character, while the outer two have d\({}_{xy}\) and d\({}_{x^{2}-y^{2}}\) orbital character in both compounds. Similarly, the electron-like bands are composed mainly of d\({}_{x^{2}-y^{2}}\) with a small contribution from the d\({}_{yz/xz}\) orbital in FeSe, while in FeTe the inner electron-like band (-0.23 eV) has d\({}_{yz/xz}\) and the outer one (-0.47 eV) has d\({}_{x^{2}-y^{2}}\) orbital character. Another major difference is that the Te-p bands are intermixed with the Fe-d bands, in contrast to FeSe, where the Fe-d bands are well separated from the Se-p bands. These Te-p bands screen the effect of the Coulomb interaction U in FeTe; hence the value of U is smaller in FeTe than in FeSe. After applying the Coulomb interaction, the degeneracy of the d\({}_{xy}\) and d\({}_{x^{2}-y^{2}}\) hole-like bands at the \(\Gamma\) point is lifted, as shown in Fig (5b). The d\({}_{x^{2}-y^{2}}\) band moves towards lower BE while the other two, d\({}_{xy}\) and d\({}_{yz/xz}\), move towards higher BE at the \(\Gamma\) point. The same bands move in the opposite direction in FeTe on applying U = 3.5 eV. On the other hand, the separation between the two electron-like bands at the M point is enhanced in FeTe, unlike in FeSe, where both electron-like bands are quasidegenerate and shift towards lower BE under the influence of U = 4.0 eV.
The evolution of the near-Fermi bands in FeSe and FeTe with different values of J is shown in Panels (a) and (b) of Fig 6, respectively. In FeSe, at the \(\Gamma\) point, the d\({}_{x^{2}-y^{2}}\) band, which was crossing E\({}_{f}\), moves below E\({}_{f}\), while the d\({}_{yz/xz}\) band, which was not crossing E\({}_{f}\), crosses it after applying J = 0.1 eV. Similarly, the degeneracy of the two electron-like bands at the M point is also lifted. In FeTe, a gradual shift of the hole-like d\({}_{x^{2}-y^{2}}\) band at the \(\Gamma\) point towards E\({}_{f}\) and a gradual decrease in the separation between the two electron-like bands at the M point are observed under the influence of the Hund's coupling.
In a recent ARPES report on FeSe, two hole-like bands were observed around the \(\Gamma\) point and two bands, one electron-like and the other hole-like, at the M point from 40 meV below E\({}_{f}\)[16]. Another ARPES study on FeTe\({}_{1-x}\)Se\({}_{x}\) for the x = 0, 0.2, 0.3, 0.4 and 0.45 compounds by Ieki _et al._[39] reported three clear hole-like bands at the \(\Gamma\) point (\(\alpha\), \(\alpha^{\prime}\) and \(\beta\)), which evolve with Se doping, and a shallow electron pocket at the M point for x = 0.45. The \(\alpha^{\prime}\) and \(\beta\) bands cross E\({}_{f}\), whereas \(\alpha\) lies around 20 meV below E\({}_{f}\). Similar results have also been observed experimentally in Fe\({}_{1.04}\)Te\({}_{0.66}\)Se\({}_{0.34}\)[37], FeSe\({}_{0.42}\)Te\({}_{0.58}\)[35], Fe\({}_{1.03}\)Te\({}_{0.7}\)Se\({}_{0.3}\)[42] and FeTe\({}_{0.55}\)Se\({}_{0.45}\)[40]. On the other hand, in Fe\({}_{1.02}\)Te [48] and Fe\({}_{1-x}\)Te/Se [32] only two hole-like bands are observed at the \(\Gamma\) point. The band renormalization factor also varies strongly at different points of the Brillouin zone. For example, in FeSe\({}_{0.42}\)Te\({}_{0.58}\), Tamai _et al._[35] observed m*/m = 20 for the electron-like band at the M point, whereas it is just 6 for one of the hole-like bands at the \(\Gamma\) point. It is clear from Fig (5a) and (5c) that initially, when U and J are not incorporated, all three hole-like bands cross E\({}_{f}\) in both compounds, and the electron-like bands are placed at higher BE (-0.29 eV in FeSe; -0.23 and -0.47 eV in FeTe) at the M point. Only when the correlation effects are taken into account does one of the hole-like bands move below E\({}_{f}\) while the other two cross E\({}_{f}\), and the electron-like band approaches E\({}_{f}\), as is clear from Fig 6: in FeTe, at U = 3.5 and J = 0.8 eV, the innermost hole-like band appears 0.18 eV below E\({}_{f}\) at the \(\Gamma\) point, and the electron-like band lies closer to E\({}_{f}\) by 0.1 eV compared to the U and J = 0.0 eV case. This trend qualitatively matches the above-mentioned experimental findings, although there is a difference in the absolute energy scale. The orbital character of these near-E\({}_{f}\) bands revealed by photoemission studies using polarized light sources [37, 40] also agrees with our results. These results signify the importance of electronic correlations in these compounds; however, further calculations are required to handle the correlation effects in a better way and to reduce the difference between the experimental findings and the calculated results based on the LDA+U scheme.
From the transfer integral values calculated by Miyake _et al._ for FeSe and FeTe [23], it is clear that the value is large between the d\({}_{xy}\) and the nearest-neighbour (nn) d\({}_{3z^{2}-r^{2}}\) orbital. This builds a strong interorbital hybridization between the d\({}_{xy}\) and d\({}_{3z^{2}-r^{2}}\) orbitals, which leads to a localization of the d\({}_{3z^{2}-r^{2}}\) states at higher BE with a clear gap from E\({}_{f}\). The d\({}_{xy}\) orbitals point towards the nn Fe site, so this orbital has the largest bandwidth and shows two well-separated peaks, one in the valence band and the other in the conduction band. On the other hand, the transfer integral between the d\({}_{xy}\) and the nn d\({}_{x^{2}-y^{2}}\) orbital is small, but for the second nn it depends on the height of the anion; it is large in FeSe in comparison to FeTe. Thus the chalcogen-p orbitals enhance the interorbital hybridization between the d\({}_{xy}\) and d\({}_{x^{2}-y^{2}}\) orbitals, which is reflected in a prominent pseudogap structure in the d\({}_{x^{2}-y^{2}}\) states of FeSe. Moreover, the interorbital hybridization between the d\({}_{xy}\) and the second-nn d\({}_{yz/xz}\) orbital is also mediated via the anion-p orbitals; hence it is also large in FeSe and contributes to the pseudogap feature. On the contrary, the larger height of the Te anion allows finite transfer integrals between the d\({}_{yz/xz}\) and the nn d\({}_{3z^{2}-r^{2}}\) and d\({}_{x^{2}-y^{2}}\) orbitals by breaking the mirror-plane symmetry in FeTe. These interorbital hybridizations are responsible for the gap in the d\({}_{yz/xz}\) states at -0.8 eV, which is absent in FeSe (Fig (4)). When the Coulomb interaction is introduced, it reduces the interorbital hopping, and mainly the d\({}_{yz/xz}\) and d\({}_{x^{2}-y^{2}}\) states, which have a large number of states near E\({}_{f}\), are affected. As a consequence, electrons are transferred from the in-plane d\({}_{x^{2}-y^{2}}\) and d\({}_{xy}\) to the out-of-plane d\({}_{yz/xz}\) and d\({}_{3z^{2}-r^{2}}\) orbitals, which localizes them at higher BE in FeSe. In FeTe, the transfer occurs from the d\({}_{xy}\) and d\({}_{yz/xz}\) to the d\({}_{x^{2}-y^{2}}\) orbitals, as is clear from Fig 2 and Fig 4. On the other hand, the application of J blocks the fluctuations in the occupancies of the different Fe-3d orbitals, as is clearly seen from the occupation table, where a small value of J = 0.1 eV redistributes the electrons among the different d orbitals. A clear orbital-selective effect (an increase in the occupation of d\({}_{yz/xz}\) and d\({}_{x^{2}-y^{2}}\)) is seen under the influence of the Hund's coupling. The crystal field splitting is large in FeTe because the value of \(\alpha\) deviates strongly from the ideal tetrahedron value, unlike in FeSe [45]; this increases the energy difference between the d\({}_{x^{2}-y^{2}}\) and the d\({}_{yz/xz}\) and d\({}_{xy}\) orbitals. The Hund's coupling therefore promotes a gradual transfer of electrons from the highly occupied d\({}_{x^{2}-y^{2}}\) (at J = 0.0 eV) to the d\({}_{xz/yz}\) and d\({}_{xy}\) orbitals with increasing J, contrary to FeSe, where a small value of J is sufficient to transfer the electrons among these orbitals in order to reduce their Coulomb repulsion energy. Thus the different strengths of the interorbital hybridization and the crystal field splitting, which are mainly governed by the anion height, change the electron occupancies and the band structures of the individual Fe-3d orbitals. The Hund's coupling promotes this differentiation and acts like a band decoupler, as previously studied by Medici _et al._[46, 47].
This could be the origin of the orbital selective correlation effects seen in iron chalcogenide compounds.
## 4 Conclusion
We have presented a systematic study of the effects of the Coulomb interaction and the Hund's coupling on the Fe-3d states in FeSe and FeTe. In both compounds, the states around E\({}_{f}\) originate predominantly from the Fe-3d orbitals and show a pseudogap feature just above E\({}_{f}\), whereas the hybridized states between the Fe-3d and chalcogen-p orbitals lie at higher BE. This hybridization crucially depends on the chalcogen height above the Fe plane and is weak in FeTe, where the Te anion sits higher than the Se anion does in FeSe. The Coulomb interaction localizes the Fe-3d states and shifts them towards higher BE in both compounds; however, this interaction is strongly screened by the Te-p bands in FeTe. This effect is observed to be significant in the d\({}_{yz/xz}\) and d\({}_{x^{2}-y^{2}}\) states of FeSe. Electrons in these localized states become itinerant again under the influence of J, and clear orbital-selective changes are seen in the electronic structure. As with U, the effect of the Hund's coupling is more prominent in FeSe than in FeTe. The orbital-selective nature of the correlation effects is linked to the different values of the interorbital hybridization among the different Fe-d orbitals, which is mediated via the chalcogen-p orbitals. The strengths of these interorbital hybridizations are mainly governed by the geometry of the anion tetrahedra: the height of the anion above the Fe plane (z) and the anion-Fe-anion angle \(\alpha\). The difference in the anion tetrahedra geometry leads to the different orbital-selective natures of the correlation effects in the two compounds.
|
2308.13453 | Learning to Intervene on Concept Bottlenecks | While deep learning models often lack interpretability, concept bottleneck
models (CBMs) provide inherent explanations via their concept representations.
Moreover, they allow users to perform interventional interactions on these
concepts by updating the concept values and thus correcting the predictive
output of the model. Up to this point, these interventions were typically
applied to the model just once and then discarded. To rectify this, we present
concept bottleneck memory models (CB2Ms), which keep a memory of past
interventions. Specifically, CB2Ms leverage a two-fold memory to generalize
interventions to appropriate novel situations, enabling the model to identify
errors and reapply previous interventions. This way, a CB2M learns to
automatically improve model performance from a few initially obtained
interventions. If no prior human interventions are available, a CB2M can detect
potential mistakes of the CBM bottleneck and request targeted interventions.
Our experimental evaluations on challenging scenarios like handling
distribution shifts and confounded data demonstrate that CB2Ms are able to
successfully generalize interventions to unseen data and can indeed identify
wrongly inferred concepts. Hence, CB2Ms are a valuable tool for users to
provide interactive feedback on CBMs, by guiding a user's interaction and
requiring fewer interventions. | David Steinmann, Wolfgang Stammer, Felix Friedrich, Kristian Kersting | 2023-08-25T15:54:22Z | http://arxiv.org/abs/2308.13453v3 | # Learning to Intervene on Concept Bottlenecks
###### Abstract
While traditional deep learning models often lack interpretability, concept bottleneck models (CBMs) provide inherent explanations via their concept representations. Specifically, they allow users to perform interventional interactions on these concepts by updating the concept values and thus correcting the predictive output of the model. Traditionally, however, these interventions are applied to the model only once and discarded afterward. To rectify this, we present concept bottleneck memory models (CB2M), an extension to CBMs. Specifically, a CB2M learns to generalize interventions to appropriate novel situations via a two-fold memory with which it can learn to detect mistakes and to reapply previous interventions. In this way, a CB2M learns to automatically improve model performance from a few initially obtained interventions. If no prior human interventions are available, a CB2M can detect potential mistakes of the CBM bottleneck and request targeted interventions. In our experimental evaluations on challenging scenarios like handling distribution shifts and confounded training data, we illustrate that CB2M are able to successfully generalize interventions to unseen data and can indeed identify wrongly inferred concepts. Overall, our results show that CB2M is a great tool for users to provide interactive feedback on CBMs, _e.g._, by guiding a user's interaction and requiring fewer interventions.
## 1 Introduction
Deep learning models often represent black-box models that make it difficult for human users to understand their decision processes Adadi & Berrada (2018); Cambria et al. (2023); Saeed & Omlin (2023) and to interact with them Schramowski et al. (2020); Teso et al. (2023). One recent branch within eXplainable Artificial Intelligence (XAI) focuses on the potential of so-called concept bottleneck models (CBMs) Koh et al. (2020); Stammer et al. (2021) to tackle such issues.
These are designed to be inherently human-interpretable. They perform inference (_e.g._, for bird image classification, _cf._ Fig. 1 (top)) by transforming the initial raw input into a set of human-understandable concepts (_e.g._, wing shape and color) with a bottleneck network and provide a final task prediction based on the activation of these concepts with a predictor network. The concept activations thereby serve as an inherent explanation of the model's decision Teso et al. (2023). Arguably even more valuable, these concept activations can be used as a means for humans to perform targeted interactions, _e.g._, for querying further explanations Abid et al. (2022) or correcting the model's concept predictions Koh et al. (2020).
A recent surge of research has focused on the benefits of leveraging interactions in AI models in general Ouyang et al. (2022); Miller (2019), and also CBMs in particular Teso et al. (2023). Multiple such approaches focus on leveraging interactions for mitigating errors of the predictor network Bontempelli et al. (2021); Stammer et al. (2021). However,
little work has focused on mitigating potential errors of the initial bottleneck network. Moreover, although _interventional interactions_ on a CBM's concept activations are a natural tool for this purpose, they have received little attention since their introduction by Koh et al. (2020).
One likely reason for this is that interventions according to Koh et al. (2020) represent a single-use tool for updating model performance by adding human-provided concept labels to an increasing number of randomly selected concepts. For sustainably improving a model's performance, however, this approach is inefficient and potentially demands a large number of repeated user interactions, and providing such repeated feedback has been identified to lead to a loss of focus in human users Amershi et al. (2014).
In this work, we therefore argue to harvest the rich information present in previously collected interventions in a multi-use approach. Specifically, suppose a user corrects a model's inferred concepts through a targeted intervention; the intervention then carries information about where the model did not perform well, which can be used to improve predictions in similar future situations (Fig. 1 bottom). In this context, we introduce Concept Bottleneck Memory Models (CB2M) as a novel and model-agnostic extension to CBMs. CB2M are based on adding a two-fold memory of interventions to the CBM architecture, which allows keeping track of previous model mistakes as well as previously applied interventions. This memory enables a CB2M to reapply interventions when the CBM repeats mistakes, thus automatically correcting them without the need for additional human feedback. This ultimately overcomes the issue of one-time interventions of standard CBMs and enables the model to learn more effectively from provided human feedback. Human feedback can, however, be unavailable, and obtaining it is costly. CB2M mitigates this issue by its ability to detect potential model mistakes prior to initial human feedback: its memory module can be used to select specific data points for human inspection and thus guide human feedback to where it is really needed.
We illustrate the full potential of CB2M in our experimental evaluations on several challenging tasks, such as handling distribution shifts and confounding factors across several datasets. In summary, we make the following contributions: (i) We identify the potential of extracting generalizable knowledge from human interventions as a means of correcting concept bottleneck models. (ii) We introduce CB2M, a flexible extension to any CBM-like architecture for handling such interactive interventions. (iii) Our experimental evaluations show that CB2M can truly learn from interventions by generalizing them to previously unseen examples. (iv) We further show that CB2M are also able to detect model mistakes without the need for initial human knowledge, thus allowing targeted interventions to be queried from a user.
The rest of the paper proceeds as follows: In Sec. 2, we provide a brief background followed by the introduction of CB2M. We present our experimental evaluations in Sec. 3. Afterwards, we relate CB2M to other work in Sec. 4 before concluding the paper together with potential future research directions in Sec. 5.
## 2 Concept Bottleneck Memory Models (CB2M)
Let us first introduce the background notations on CBMs and interventions before introducing CB2M for detecting model mistakes and generalizing interventions to novel, unseen examples.
Figure 1: **Reusing a CBM intervention can correct model mistakes for multiple examples.** Top: CBMs generate a human interpretable concept representation to solve the final task. Human users can correct these concept predictions via targeted interventions (blue) influencing the final prediction. Bottom: Human interventions hold valuable information that can be reused in suitable situations to automatically correct model errors without further human interactions.
### Background
A CBM which solves the task of transforming inputs \(\mathcal{X}\) to outputs \(\mathcal{Y}\) consists of two parts. The bottleneck model \(g:x\to c\) transforms an input \(x\in\mathcal{X}\) into its concept representation \(c\). Afterward, the predictor network \(f:c\to y\) uses this representation to generate the final target output \(y\in\mathcal{Y}\). The correct, _i.e._, ground-truth values for \(c\) and \(y\) are written as \(c^{*}\) and \(y^{*}\), respectively. We refer to overall model (task) accuracy as \(\text{Acc}_{f}\) and to concept accuracy as \(\text{Acc}_{g}\).
Human interactions with the concept representations are called interventions. An intervention \(i\in\mathcal{I}\) is a set of tuples \(i=\{(c^{\prime}_{j},j)|j\in\mathcal{J}_{i}\}\), with updated concept values \(c^{\prime}_{j}\) and concept indices \(j\). \(\mathcal{J}_{i}\) is the set of all indices for intervention \(i\). Applying an intervention to a sample \(x\) overwrites the predicted concept values with those of the intervention, which we denote as \(x|i\).
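In code, applying an intervention \(x|i\) amounts to overwriting entries of the predicted concept vector; a minimal sketch (the function and variable names are ours, not those of the actual implementation):

```python
import numpy as np

def apply_intervention(c_pred, intervention):
    """Overwrite predicted concept values with intervened ones (x|i).
    intervention: set of (c_j_prime, j) tuples as defined above."""
    c = np.array(c_pred, dtype=float, copy=True)
    for c_j_prime, j in intervention:
        c[j] = c_j_prime
    return c

c_hat = [0.9, 0.2, 0.7]              # concepts predicted by g(x)
i = {(1.0, 1), (0.0, 2)}             # human-corrected concept values
print(apply_intervention(c_hat, i))  # -> [0.9, 1.0, 0.0]
```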
As CBMs consist of two processing modules, the bottleneck and predictor networks, errors can occur in either, with different consequences on how to handle these Bontempelli et al. (2021). If the bottleneck makes an error, this error will most likely also negatively influence the predictor. On the other hand, it is also possible that the predictor makes a wrong final prediction despite having received a correct concept representation. In the latter case, the concept space is either insufficient to solve the task, or the predictor network is susceptible to, _e.g._, some spurious correlations. Where other works have investigated handling an insufficient concept space through additional (unsupervised) concepts Sawada & Nakamura (2022), or correcting a predictor with spurious correlations Stammer et al. (2021), CB2M focuses on mitigating errors that originate from the bottleneck model. This is achieved by utilizing interventions on the concept space. Let us discuss this in more detail in the following.
### Concept Bottleneck Memory Models
Let us now introduce Concept Bottleneck Memory Models (CB2M) as a flexible extension for any CBM architecture. The bottleneck and predictor networks of the CBM remain unchanged but are extended by a two-fold memory module \(\mathcal{M}\), which consists of a _mistake memory_\(\mathcal{M}^{m}\) coupled with an _intervention memory_\(\mathcal{M}^{i}\). The _mistake memory_ operates on encodings \(x_{e}\), _i.e._, the input of the last layer of the bottleneck network. It measures the similarity between two data points \(x\) and \(x^{\prime}\) via the Euclidean distance of their encodings, \(d(x_{e},x^{\prime}_{e})=\|x_{e}-x^{\prime}_{e}\|\). The _intervention memory_ directly keeps track of known interventions and associates them with elements of the _mistake memory_, meaning that a memorized intervention \(i\) can be used to correct the memorized mistake of \(x_{e}\). We denote an associated encoding and intervention as \(\alpha(x_{e},i)\). Overall, this joint memory can be used to detect model mistakes (orange in Fig. 2) or to enable the automatic reuse of interventions (green in Fig. 2), which we explain in detail in the following paragraphs.
By extending the vanilla CBM with a memory, CB2M can be used for two distinct tasks (_cf._ Fig. 2): (i) detecting potential model mistakes and (ii) generalizing interventions to new examples. Besides the general advantage of knowing when an AI model has made an incorrect prediction, this knowledge is even more relevant for CBMs as this can be utilized to focus human attention toward obtaining a targeted intervention. Thus, with the ability to handle task (i)
Figure 2: **Overview of CB2M to detect mistakes or generalize interventions.** A vanilla CBM (grey) is extended with a two-fold memory (orange and green). The memory compares encodings of new samples to known mistakes to (i) detect model errors or (ii) automatically correct the model via reuse of interventions.
CB2M is especially relevant when humans want to provide interventional feedback to a CBM. Furthermore, after humans have intervened on a CBM, they have, in fact, provided valuable knowledge also for future situations. Thus, an applied intervention carries information on how to correct a particular model mistake. We claim that this information should not be discarded as in the original work of Koh et al. (2020), but be reused when similar mistakes occur again. This is where task (ii) of CB2M comes into play.
Detecting Wrongly Classified Instances.Intuitively, if a data point is similar to other examples where the model has made mistakes, the model will more likely repeat these mistakes on the new data point. Therefore, we utilize the _mistake memory_\(M_{m}\) to keep track of previous mistakes (_cf._ Alg. 1 for pseudo-code). First, the memory is filled with encodings of datapoints, for which the model did not initially generate the correct output and for which the concept accuracy is smaller than a threshold \(t_{a}\in[0,1]\). This leads to: \(\mathcal{M}^{m}=\{x_{e}:f(g(x))\neq y^{*}\wedge Ac_{g}(x)<t_{a}\}\). For a new unseen instance \(\hat{x}\), we then compare its encoding \(\hat{x}_{e}\) with the mistakes in the memory \(\mathcal{M}^{m}\) (Alg. 1, lines 4 - 8). If we find \(k\) mistakes with a distance to \(\hat{x}_{e}\) smaller than \(t_{a}\), we consider a model to be making a known mistake. Formally, we thus predict a model mistake for a new unseen instance \(\hat{x}\) if:
\[\forall j\in\{1,\ldots,k\}:\exists x_{e,j}\in\mathcal{M}^{m}:d(\hat{x}_{e},x_{e,j})\leq t_{d} \tag{1}\]
This mistake memory can initially be filled with known model mistakes, for example, based on the validation set. However, once the CB2M is in use, the memory of mistakes will continuously be updated via interactive feedback, and new encodings will be added. This can constantly improve detection during deployment as corrective interventions can immediately be requested after detecting a potentially misclassified sample.
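A minimal sketch of this detection rule (Eq. 1) could look as follows; encoding extraction and memory bookkeeping are omitted, and all names and the toy numbers are ours rather than taken from the paper's implementation:

```python
import numpy as np

def predict_mistake(x_enc, mistake_memory, k=2, t_d=5.0):
    """Flag a sample as a likely model mistake (Eq. 1): True if at least
    k memorized mistake encodings lie within Euclidean distance t_d."""
    if len(mistake_memory) == 0:
        return False
    dists = np.linalg.norm(np.asarray(mistake_memory) - x_enc, axis=1)
    return int((dists <= t_d).sum()) >= k

# Toy example: memory filled with encodings of known validation mistakes.
memory = [np.array([0.0, 0.0]), np.array([0.2, 0.1]), np.array([9.0, 9.0])]
print(predict_mistake(np.array([0.1, 0.0]), memory, k=2, t_d=0.5))  # True
print(predict_mistake(np.array([5.0, 5.0]), memory, k=2, t_d=0.5))  # False
```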
Generalization of Interventions.Next to detecting model errors with the _mistake memory_, we can use both the _mistake memory_ and the _intervention memory_ together to generalize interventions. As initially introduced in Koh et al. (2020), interventions for correcting predicted concept activations only apply to a single sample. However, we claim that these interventions also contain valuable information for further samples and should thus be reused, thereby reducing the need for additional future human interactions. Intuitively, if an intervention is applicable for one example, it is likely also relevant for similar inputs, at least to a certain degree.
To achieve such intervention generalization from one sample to several, we utilize both parts of the CB2M memory. Specifically, whenever an intervention \(i\) is applied to a model, we store it in the _intervention memory_\(\mathcal{M}^{i}\) and keep the encoding of the original input point in the _mistake memory_\(\mathcal{M}^{m}\). We also keep track of the corresponding entries \(\alpha(x_{e},i)\). When the model receives a new sample \(\hat{x}\), we next check for similar encodings in the _mistake memory_\(\mathcal{M}^{m}\) according to Eq. 1. Here, we use \(k=1\) and only consider the most similar mistake and its interventions. If there is indeed a mistake encoding \(x_{e}\) within distance \(t_{d}\) of \(\hat{x}_{e}\), we apply its associated intervention \(i\) (with \(\alpha(x_{e},i)\)) to the new data point \(\hat{x}\). If there is no similar mistake, we let the model perform its prediction as usual.
The threshold \(t_{d}\) is crucial for intervention generalization, as it directly controls the necessary similarity to reapply memorized interventions. Selecting a suitable value for \(t_{d}\) differs from the mistake prediction use case as we want to generalize as many interventions as possible under the constraint that the generalized interventions remain valid. To this end, we call an intervention \(i\) for a sample \(x\)_valid_ if the class prediction after intervening is not worse than before. We write this as \(val(x,i):f(g(x))=y^{*}\implies f(g(x|i))=y^{*}\). Given that, we want to maximize \(t_{d}\), while the following remains true:
\[\forall x,x^{\prime}\in\mathcal{X}:d(x_{e},x^{\prime}_{e})\leq t_{d}\Rightarrow \forall i\in\mathcal{I}:val(x,i)\Rightarrow val(x^{\prime},i) \tag{2}\]
We can also express this in terms of full dataset accuracy, where our dataset accuracy after applying interventions should be larger than (or equal to) the accuracy without interventions:
\[Acc_{f}(\mathcal{X}|\mathcal{M})\geq Acc_{f}(\mathcal{X}). \tag{3}\]
Here \(\mathcal{X}|\mathcal{M}\) is the dataset \(\mathcal{X}\) with applied interventions from the memory \(\mathcal{M}\):
\[\begin{split}\mathcal{X}|\mathcal{M}=&\{x|i:x\in\mathcal{X}:\exists x^{\prime}_{e}\in\mathcal{M}^{m}:\exists i\in\mathcal{M}^{i}:d(x_{e},x^{\prime}_{e})\leq t_{d}\wedge\alpha(x^{\prime}_{e},i)\}\\ &\cup\{x:x\in\mathcal{X}:\neg\exists x^{\prime}_{e}\in\mathcal{M}^{m}:d(x_{e},x^{\prime}_{e})\leq t_{d}\}\end{split} \tag{4}\]
Thus, we want to find the largest \(t_{d}\) that still satisfies Eq. 3. To do that, we can set up the memory \(\mathcal{M}\) based on the validation set by adding all model mistakes to \(\mathcal{M}^{m}\) and simulating corresponding interventions with ground-truth labels for \(\mathcal{M}^{i}\). The selection of \(t_{d}\) is then done on the training set. This results in \(\mathcal{M}^{m}=\{x_{e}:x\in\mathcal{X}_{val}\wedge f(g(x))\neq y^{*}\}\) and \(\mathcal{M}^{i}=\{i:i\in\mathcal{I}\wedge x_{e}\in\mathcal{M}^{m}\wedge\alpha (x_{e},i)\wedge\forall j\in\mathcal{J}_{i}:c^{\prime}_{j}=c^{*}_{j}\}\).
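Putting both memories together, intervention generalization reduces to a single nearest-neighbor lookup (\(k=1\)) over the memorized mistakes; a simplified sketch with our own naming, omitting the selection of \(t_{d}\) described above:

```python
import numpy as np

def generalize_intervention(x_enc, c_pred, mistakes, interventions, t_d):
    """If the closest memorized mistake lies within t_d, reapply its
    associated intervention to the predicted concepts (k = 1 lookup)."""
    if not mistakes:
        return c_pred
    dists = [np.linalg.norm(m - x_enc) for m in mistakes]
    j = int(np.argmin(dists))
    if dists[j] > t_d:
        return c_pred                    # no similar mistake: keep g(x)
    c = np.array(c_pred, dtype=float, copy=True)
    for c_val, idx in interventions[j]:  # intervention associated via alpha
        c[idx] = c_val
    return c

mistakes = [np.array([0.0, 0.0])]
interventions = [{(1.0, 0)}]             # correct concept 0 to value 1.0
print(generalize_intervention(np.array([0.1, 0.1]), [0.3, 0.8],
                              mistakes, interventions, t_d=0.5))
```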
### General Perspective
Assumptions About Human Feedback. A CB2M leverages human feedback to improve upon the CBM via interventions. To this end, it is assumed that the feedback provided by humans is correct. This is a common assumption in work on CBMs Koh et al. (2020); Chauhan et al. (2022) and (inter)active learning in general Settles (2009); Berg et al. (2019). As this assumption does not always hold true, we examine it more closely in the context of CBMs. Often, it is easier for humans to provide concept information than to provide information on the complete task. For example, when considering bird species classification (_cf._ Fig. 1), it is easier to identify the bird's color than its species. This phenomenon occurs when concepts are "low-level" and human-understandable. In other domains, such as the medical one, providing correct concept labels may require expert domain knowledge, but it is still possible, and easier, to infer concept labels than class labels. Even if users are able to provide correct feedback, _e.g._, concept labels, this does not imply that they will. A user with malicious intentions could actively provide wrong concept labels to mislead the system. This potential danger has to be considered when incorporating human feedback, _i.e._, also in the context of CB2M. Recent work has begun tackling this issue Ju et al. (2022).
Handling Bottleneck Errors via CB2M. As stated above, while CBMs have many advantageous properties compared to standard deep models, their structure can also make error analysis more difficult due to separate processing via the bottleneck and predictor networks Marconato et al. (2023). CB2M stands in line with research that tackles error sources in CBMs separately Bontempelli et al. (2021). Where several previous works have tackled mitigating errors in the predictor network Sawada and Nakamura (2022); Stammer et al. (2021); Teso et al. (2023), CB2M is designed to tackle errors of the bottleneck network via interventions. These approaches, overall, stand in contrast to techniques that consider the errors of the whole model Jiang et al. (2018); Hendrycks and Gimpel (2017). However, our evaluations particularly support the claim of Bontempelli et al. Bontempelli et al. (2021) on the advantages of dividing the treatment of different error sources in CBMs. As such, CB2M should be seen as an important tool for this goal, where other methods are required for specific errors of the predictor network.
## 3 Experimental Evaluations
To evaluate the potential of CB2M in intervention generalization and mistake detection, we provide evidence via several evaluations. First, we evaluate the ability of CB2M to detect the existence of similar data points. Afterward, we continue to more challenging scenarios and investigate CB2M in the context of unbalanced and confounded data as well as data affected by distribution shifts. Let us first describe the experimental setup.
### Setup
**Data:** The Caltech-UCSD Birds (CUB) dataset Wah et al. (2011) consists of 11788 images of 200 different bird classes. We use the data splits provided by Koh et al. (2020), resulting in training, validation, and test sets with 40, 10, and 50% of the total images. Additionally, we add 4 more folds of training and validation sets to perform 5-fold validation. Images in the dataset are annotated with 312 concepts (_e.g._, beak-color:black, beak-color:brown, etc.), which can be grouped into concept groups (one group for all beak-color:_ concepts). We follow the approach of previous work Koh et al. (2020); Chauhan et al. (2022) and use only concepts that occur for at least 10 classes and then perform majority voting on the concept values for each class. This results in 112 concepts from 28 groups.
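One possible reading of this preprocessing (class-wise majority voting followed by keeping concepts present in at least 10 classes) is sketched below; the exact filtering order in the original pipeline may differ, and all names are ours:

```python
import numpy as np

def preprocess_concepts(C, y, min_classes=10):
    """C: (n_samples, n_concepts) binary concept labels, y: class ids.
    Keep concepts that are (majority-)present in >= min_classes classes,
    then replace each sample's concepts by its class majority vote."""
    classes = np.unique(y)
    # class-wise majority vote per concept
    maj = np.stack([C[y == cl].mean(axis=0) >= 0.5 for cl in classes])
    keep = maj.sum(axis=0) >= min_classes            # concept filter
    class_to_row = {cl: r for r, cl in enumerate(classes)}
    C_voted = maj[[class_to_row[cl] for cl in y]][:, keep]
    return C_voted.astype(int), keep

# Toy example: 4 samples, 2 classes, 3 concepts, filter at >= 1 class.
C = np.array([[1, 0, 1], [1, 1, 1], [0, 0, 0], [0, 1, 0]])
y = np.array([0, 0, 1, 1])
print(preprocess_concepts(C, y, min_classes=1))
```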
We further provide evidence based on the MNIST LeCun and Cortes (1998), confounded ColorMNIST (C-MNIST) Rieger et al. (2020) and SVHN Netzer et al. (2011) datasets. For all three, we train the model for the parity MNIST task
as in Mahinpei et al. (2021). Here, the digit in the image is considered the concept, and the class label describes whether the digit is even or odd. Furthermore, rather than evaluating on the original MNIST dataset, we focus on an unbalanced version of this task. In this setting, we remove 95% of the training data of one class (for the results in the main paper, the digit "9"; for other digits _cf._ App. A.4). We refer to App. A.3 for results on the original MNIST dataset, indicating that current base models yield very high performance and make additional interventions unnecessary. Lastly, for C-MNIST, each digit is colored in the same color during model training (the confounder), but the coloring is random at test time.
We use the standard train and test splits for these datasets and create validation sets with 20% of the training data. As for CUB, we generate 5 training and validation folds in total. When considering human interventions, we follow the common assumption that humans provide the correct concept values as long as the requested concepts are present in the input (_e.g_., visible in an image).
**Models:** For CUB, we use the same model setup as Koh et al. (2020), instantiating the bottleneck model with the Inception-v3 architecture Szegedy et al. (2016) and the predictor network with a simple multi-layer perceptron. For the MNIST variants and SVHN, we use MLPs for the bottleneck and predictor networks as in Mahinpei et al. (2021). All CBMs are trained with the independent scheme, as this has been shown to be the most effective training scheme with respect to interventions Shin et al. (2023). Further training details can be found in App. A.1. We use CB2M as described in Sec. 2.2 to enable the generalization of interventions and the detection of model mistakes. CB2M parameters are tuned for generalization and detection separately on the training and validation sets. For all detection experiments, the memory of CB2M is filled with wrongly classified instances of the validation set according to the parameters. For generalization experiments, we simulate human interventions on the validation set and use CB2M to generalize them to the test set. Whenever we consider interventions on a subset of concepts or concept orderings, we use ECTP Shin et al. (2023).
**Metrics:** We evaluate CB2M and the base model with several different metrics. We use both concept and class accuracy of the underlying CBM (with and without CB2M) to observe improvements in the final task and to investigate the intermediate concept representation. We evaluate the detection of model mistakes using AUROC and AUPR, in line with related work Ramalho and Miranda (2019). To observe how interventions improve model performance, we propose normalized relative improvement (NRI), which measures improvement independent of different baseline values. NRI measures the percentage of the maximum possible improvement in class accuracy achieved:
\[\text{NRI}=\frac{\Delta}{\Delta_{\text{max}}}=\frac{\text{Acc}_{f}-\text{Acc}_{f,\text{base}}}{\text{Acc}_{f,\text{max}}-\text{Acc}_{f,\text{base}}} \tag{5}\]
where \(\text{Acc}_{f}\) (\(\text{Acc}_{f,\text{base}}\)) refers to the model accuracy after (before) applying interventions and \(\text{Acc}_{f,\text{max}}\) is the maximum accuracy achievable through interventions. To apply this metric, \(\text{Acc}_{f,\text{max}}\) has to be known. It can be estimated, _e.g_., by the accuracy of the predictor network given ground-truth concept information on the validation set.
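In code, NRI is a one-liner; the example numbers below reuse the CBM/CB2M class accuracies of Parity MNIST (unbalanced, full set) from Tab. 1, while the accuracy ceiling of 95.0 is a hypothetical value used purely for illustration.

```python
def normalized_relative_improvement(acc_base, acc_after, acc_max):
    """NRI from Eq. (5): the fraction of the maximally achievable
    class-accuracy improvement that interventions actually realized."""
    return (acc_after - acc_base) / (acc_max - acc_base)

# Parity MNIST (unbalanced), full set: 91.2 (CBM) -> 94.0 (CB2M);
# with an assumed ceiling of 95.0, NRI = 2.8 / 3.8 ~= 0.74
print(normalized_relative_improvement(91.2, 94.0, 95.0))
```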
### Results
Beyond One-Time Interventions.First, we analyze how well CB2M generalizes interventions to unseen data points. If a standard CBM receives a new input similar to a previous datapoint with a corresponding intervention, that intervention is not used at all. CB2M, on the other hand, allows the reuse of information provided in previous interventions. To evaluate the generalization in these cases, we provide results on the CUB dataset where, due to its
\begin{table}
\begin{tabular}{l l|c c|c c} \hline \hline & & \multicolumn{2}{c|}{Concept Acc. (\(\uparrow\))} & \multicolumn{2}{c}{Class Acc. (\(\uparrow\))} \\ Dataset & Setting & CBM & CB2M & CBM & CB2M \\ \hline \multirow{2}{*}{CUB} & Identified & \(86.4\pm 2.7\) & \(\mathbf{99.0}\pm 0.7\) & \(5.0\pm 1.7\) & \(\mathbf{88.7}\pm 5.4\) \\ & Full set & \(94.7\pm 0.6\) & \(\mathbf{98.7}\pm 3.5\) & \(64.8\pm 2.7\) & \(\mathbf{69.1}\pm 5.5\) \\ \hline \multirow{2}{*}{Parity MNIST (unbalanced)} & Identified & \(85.3\pm 2.6\) & \(\mathbf{98.7}\pm 0.4\) & \(22.5\pm 5.7\) & \(\mathbf{93.7}\pm 1.9\) \\ & Full set & \(97.5\pm 0.2\) & \(\mathbf{98.0}\pm 0.3\) & \(91.2\pm 0.1\) & \(\mathbf{94.0}\pm 1.2\) \\ \hline \multirow{2}{*}{Parity C-MNIST} & Identified & \(82.2\pm 0.6\) & \(\mathbf{95.5}\pm 1.2\) & \(20.1\pm 7.1\) & \(\mathbf{85.9}\pm 4.7\) \\ & Full set & \(87.1\pm 0.0\) & \(\mathbf{88.4}\pm 0.4\) & \(68.6\pm 0.3\) & \(\mathbf{74.9}\pm 2.1\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: **CB2M generalizes interventions to unseen data points. Performance of CBM and CB2M on data points identified for generalized interventions and on the full datasets. Over all datasets, generalizing interventions with CB2M improves concept accuracy substantially. This further results in a drastic improvement of class accuracy on identified data points and a noticeable improvement on the full set. (Best values bold, average and standard deviation over augmented test set versions (CUB) or 5 runs (other)).**
small number of images per class, we expand the dataset by applying the following data augmentations to the images: color jitter, blurring, blackout, as well as salt&pepper and speckle noise. In this way, these augmented images act as a proxy for real-world similar inputs, _e.g._, images with different lighting conditions. We thus fill CB2M with simulated human interventions on the test set and evaluate how they generalize to a novel augmented test set version. Tab. 1 shows that CB2M substantially improves upon the base CBM, both on instances identified for intervention generalization and on the full set.
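These augmentations can be realized, for instance, with standard `torchvision` transforms plus two small custom noise functions; all parameter values below are our own assumptions, as the exact settings are not restated here.

```python
import torch
import torchvision.transforms as T

def speckle(img, std=0.1):
    """Multiplicative speckle noise; assumes float tensors in [0, 1]."""
    return (img * (1 + std * torch.randn_like(img))).clamp(0, 1)

def salt_and_pepper(img, p=0.02):
    """Random salt & pepper corruption; assumes float tensors in [0, 1]."""
    mask = torch.rand_like(img)
    img = img.clone()
    img[mask < p / 2] = 0.0        # pepper
    img[mask > 1 - p / 2] = 1.0    # salt
    return img

# one augmentation per test-set copy; parameters are illustrative only
augmentations = [
    T.ColorJitter(brightness=0.5, hue=0.3),
    T.GaussianBlur(kernel_size=9),
    T.RandomErasing(p=1.0),        # erases a random rectangle ("blackout")
    T.Lambda(salt_and_pepper),
    T.Lambda(speckle),
]
```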
Next, we evaluate CB2M under more challenging settings, namely training with highly unbalanced or confounded training data. As seen in Tab. 1, the base CBM struggles to learn the underrepresented digit in the unbalanced Parity MNIST dataset. On the confounded Parity C-MNIST dataset\({}^{1}\), we observe that the standard CBM is strongly influenced by the confounding factor, which negatively impacts the bottleneck performance during test time. By generalizing from few human interventions, CB2M, on the other hand, can substantially improve performance compared to the vanilla CBM. In both cases, the reapplied interventions reach a concept accuracy close to 100%, showing that the interventions successfully correct the bottleneck errors. Correcting the concept representation on the instances identified for reapplied interventions also substantially boosts the class accuracy on these instances. Overall, we can conclude that CB2M is able to generalize interventions successfully. This holds true for naturally similar inputs and scenarios like unbalanced and confounded data.
Footnote 1: For this dataset, we assume that we have access to some human interventions on unconfounded data.
Asking for Interventions.Next, we go from the generalization of interventions to the second use-case for which CB2M can be deployed, namely for detecting model mistakes prior to human feedback. For this, we compare CB2M to two baselines. The _random_ baseline for mistake detection simply marks random samples as mistakes. In contrast, _softmax_ based detection of mistakes uses the softmax probability of the strongest activated class as a proxy to predict whether the model made a mistake Hendrycks & Gimpel (2017). Whereas the _softmax_ baseline uses information from the end of the model, _i.e._, after the predictor network, CB2M estimates model errors based only on the bottleneck network. While detecting mistakes of the whole model covers all potential model errors, we hypothesize that detecting mistakes of the bottleneck network is more suitable for interventions, as they are tied to the bottleneck network. We compare CB2M to the baselines on CUB and the Parity MNIST (unbalanced) datasets. Additionally, we evaluate the detection on Parity C-MNIST, where the methods have access to a small number of unconfounded data points.
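The two detection strategies compared here can be sketched as scoring functions: the _softmax_ baseline scores a sample by the (inverse) confidence of the final prediction, while a CB2M-style detector scores it by the proximity of its bottleneck encoding to known mistakes in memory. The Euclidean metric in the second function is an assumption of this sketch.

```python
import numpy as np

def softmax_mistake_scores(class_probs):
    """Softmax baseline (Hendrycks & Gimpel, 2017): a low maximum class
    probability signals a likely mistake of the full model."""
    return 1.0 - class_probs.max(axis=1)

def cb2m_mistake_scores(encodings, mistake_encodings):
    """CB2M-style detection sketch: closeness of a bottleneck encoding to
    known mistakes in memory signals a likely bottleneck error."""
    d = np.linalg.norm(encodings[:, None, :] - mistake_encodings[None, :, :],
                       axis=-1)
    return -d.min(axis=1)  # higher score = closer to a known mistake

# AUROC/AUPR can then be computed with sklearn.metrics.roc_auc_score /
# average_precision_score against the true mistake labels.
```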
First, in Tab. 2, we show that the mistake detection of CB2M performs on par with the _softmax_ baseline for CUB and Parity MNIST (unbalanced) while even outperforming it for Parity C-MNIST. On the latter, CB2M is able to make better use of the access to a small number of unconfounded samples than _softmax_. Once model mistakes have been detected, CBMs provide a straightforward way to improve a model via human feedback. Hence, we next evaluate the effect of interventions applied after detection of CB2M and the baselines on model performance.
In Tab. 3, we report the normalized relative improvement (NRI) on the test set to evaluate the improvement of interventions applied to the detected mistakes. Both for CUB and Parity MNIST (unbalanced), interventions can improve model performance on the identified instances to close to 100%. This results in similar NRIs for all methods on the identified instances. More important, however, is the effect observed on the full dataset. Here, we can see that interventions after random selection only have a small effect. Interventions applied after the softmax baseline already cause a larger improvement, while interventions on samples detected by CB2M yield the largest improvement. Based on this, we can conclude that the mistakes detected via CB2M are indeed more suited for interventions than those selected by the baselines.
Often, intervening on a few concepts is already sufficient because they carry most of the relevant information. As human interactions are expensive, we want to avoid unnecessary human feedback and only ask for interventions on
\begin{table}
\begin{tabular}{l l l|c c c} \hline \hline Dataset & Confounded & Metric & Random & Softmax & CB2M \\ \hline CUB & No & AUROC (\(\uparrow\)) & \(51.1\pm 0.7\) & \(83.7\pm 1.1\) & \(\mathbf{84.8}\pm 0.7\) \\ & & AUPR (\(\uparrow\)) & \(77.3\pm 0.4\) & \(94.0\pm 0.6\) & \(\mathbf{94.6}\pm 0.3\) \\ \hline Parity MNIST & No & AUROC (\(\uparrow\)) & \(50.5\pm 0.1\) & \(\mathbf{90.7}\pm 1.7\) & \(88.7\pm 0.4\) \\ (unbalanced) & & AUPR (\(\uparrow\)) & \(91.2\pm 0.1\) & \(\mathbf{98.8}\pm 0.3\) & \(98.5\pm 0.1\) \\ \hline Parity C-MNIST & Yes & AUROC (\(\uparrow\)) & \(50.3\pm 0.7\) & \(65.7\pm 0.3\) & \(\mathbf{83.4}\pm 0.8\) \\ & & AUPR (\(\uparrow\)) & \(69.0\pm 0.6\) & \(79.8\pm 0.3\) & \(\mathbf{91.5}\pm 0.4\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: **CB2M detects wrongly classified instances.** AUROC and AUPR values on the test set. For the confounded Parity C-MNIST, CB2M can even achieve substantially better detection than the baselines. (Best values bold, average and standard deviations over 5 runs.)
the relevant concepts. As shown in Shin et al. (2023); Chauhan et al. (2022), selecting the concepts for intervention can reduce the required human interactions by a large amount. In Fig. 3, we show the increase in performance when applying interventions after CB2M detection for a progressive number of concepts. One can observe that interventions on a few concept groups (_e.g._, 10) already yield a large portion of the maximum improvement (around 60%). Applying interventions beyond 19 concept groups barely shows further improvements.\({}^{2}\) Even though the other results presented here are based on "full" interventions for brevity, this shows that interventions on all concepts are not needed to achieve the benefits of CB2M.
Footnote 2: Performing interventions on all available concepts does not necessarily improve class accuracy to 100%, as interventions cannot provide values for invisible concepts.
Generalization under Distribution Shift.Lastly, we want to evaluate the benefits of CB2M when the base CBM is subject to a distribution shift. To that end, we first train a CBM on Parity MNIST and then evaluate it on Parity SVHN. As seen in Tab. 4, the base model does not perform well under the shift, with a class accuracy barely over 50% (which equals random guessing). Nevertheless, if we add some human-generated interventions to CB2M, we can still improve the model performance under the distribution shift. However, it is notable that the improvement is smaller than in the other use cases shown in previous sections. This is most likely because model encodings on SVHN are less concentrated due to the drastic distribution shift. Still, we believe these results show the potential of CB2M in other settings like online learning.
## 4 Related Work
In the following, we highlight research related to CB2M. First, we discuss works around CBMs before highlighting some related works on model error detection.
Concept Bottleneck Models.Concept bottleneck models as a general network architecture were popularized recently by Koh et al. (2020). The two-stage model first generates an intermediate concept representation before
\begin{table}
\begin{tabular}{l|c c c c} \hline \hline & \multicolumn{2}{c}{Concept Acc. (\(\uparrow\))} & \multicolumn{2}{c}{Class Acc. (\(\uparrow\))} \\ Setting & CBM & CB2M & CBM & CB2M \\ \hline Identified & \(63.1\pm 1.2\) & \(\mathbf{87.3\pm 0.1}\) & \(39.9\pm 0.3\) & \(\mathbf{60.8\pm 0.4}\) \\ Full set & \(68.0\pm 0.9\) & \(\mathbf{75.3\pm 0.4}\) & \(51.0\pm 0.1\) & \(\mathbf{57.3\pm 0.2}\) \\ \hline \hline \end{tabular}
\end{table}
Table 4: **CB2M generalization under distribution shift.** The CBM is trained on Parity MNIST and evaluated on SVHN. Despite the low base model performance, CB2M can still generalize human interventions on SVHN. (Best values bold, standard deviations over 5 runs.)
Figure 3: **Less is more: Intervening on a subset of all concepts already yields large improvements. Interventions on identified instances by CB2M on CUB achieve large portions of the total improvement by interventions on less than 50% of concept groups. Average and standard deviation over 5 runs, concept ordering computed with ECTP.**
\begin{table}
\begin{tabular}{l c c c} \hline \hline Setting & Random & Softmax & CB2M \\ \hline \multicolumn{4}{c}{CUB} \\ Identified & \(95.4\pm 0.6\) & \(\mathbf{96.3\pm 0.6}\) & \(95.9\pm 0.5\) \\ Full Set & \(34.3\pm 5.7\) & \(70.1\pm 3.1\) & \(\mathbf{75.5\pm 4.5}\) \\ \hline \multicolumn{4}{c}{Parity MNIST (unbalanced)} \\ Identified & \(\mathbf{100.0\pm 0.0}\) & \(\mathbf{100.0\pm 0.0}\) & \(\mathbf{100.0\pm 0.0}\) \\ Full Set & \(13.2\pm 4.2\) & \(62.1\pm 4.9\) & \(\mathbf{69.6\pm 4.1}\) \\ \hline \multicolumn{4}{c}{Parity C-MNIST} \\ Identified & \(\mathbf{100.0\pm 0.0}\) & \(\mathbf{100.0\pm 0.0}\) & \(\mathbf{100.0\pm 0.0}\) \\ Full Set & \(60.0\pm 9.8\) & \(87.3\pm 0.8\) & \(\mathbf{89.7\pm 6.1}\) \\ \hline \hline \end{tabular}
\end{table}
Table 3: **Interventions based on CB2M detection successfully improve model performance.** NRI of interventions on identified instances and the full test set. Samples for intervention identified by the three methods. As expected, interventions improve performance on selected instances for all three methods. More importantly, using CB2M leads to considerably larger improvements on the full dataset. (Best values bold, standard deviations over 5 runs.)
generating the final task output. Since their introduction, CBMs have been extended in various ways. To reduce the dependency on fully supervised concept information, CBM-AUC Sawada & Nakamura (2022) combines explicit concept supervision with the unsupervised concept learning found in self-explaining neural networks Alvarez-Melis & Jaakkola (2018). Similarly, post-hoc CBMs Yuksekgonul et al. (2022) and label-free CBMs Oikarinen et al. (2023) incorporate concepts from concept libraries (_e.g._, with CAV Kim et al. (2018)) to require less concept supervision. Stammer et al. (2022) introduce a method for learning concepts with weaker supervision based on discretizing prototype representations.
The internal concept representation is the key factor for CBM interpretability. However, a CBM sometimes encodes more than the intended concept information in its internal representation Margeloiu et al. (2021). This problem of concept leakage undermines the inherent interpretability properties, as the information provided by a concept becomes unclear Havasi et al. (2022); Mahinpei et al. (2021). As a result, GlanceNets Marconato et al. (2022) and concept embedding models Zarlenga et al. (2022) have been introduced to solve leakage problems in different ways, and CBMs have been extended to drop concept predictions when not enough knowledge is available Lockhart et al. (2022). As various CBM architectures exist, the flexibility of our presented CB2M is desirable. The only requirements to combine CB2M with other CBM architectures are access to the model encodings and the ability to apply interventions.
There has also been some research on the intervention procedure of CBMs. By overwriting the internal concept representation of a model, humans can provide corrective feedback to the CBM. The intervention procedure introduced in Koh et al. (2020) is quite simple, selecting concepts randomly and providing interventions to all examples. Since then, Shin et al. (2023) presented several heuristics to order concepts for intervention, drastically reducing the number of concepts necessary to correct the model. Similarly, SIUL Sheth et al. (2022) uses Monte Carlo Dropout to estimate the importance of concepts and, therefore, the need for interventions. Interactive CBMs Chauhan et al. (2022) extend this idea even further by providing a policy to optimize concept selection over several examples under consideration of intervention costs. Still, these works only consider the ordering of concepts for interventions, whereas we explicitly consider the example selection. Even more importantly, none of these methods changes the fact that vanilla CBM interventions only have a one-time effect. They can, however, be combined with CB2M to improve the intervention procedure even further.
Uncertainty Estimation for Error Detection.One use case of CB2M is to detect potential model mistakes (which can then be improved via interventions). Detecting data points where the model performs poorly is often touched upon under the topic of uncertainty estimation. While the construction of uncertainty-aware networks can often provide benefits in terms of mistake detection Gawlikowski et al. (2021), our work is more related to methods that do not make particular assumptions about the model architecture. This is essential to ensure that our method can be combined with different CBM architectures.
One popular approach to detect model mistakes is using the softmax probabilities of the model Hendrycks & Gimpel (2017). One can predict whether a model makes a mistake by setting a threshold on the softmax probability of the most likely class. As a different example, TrustScore Jiang et al. (2018) measures the agreement between the model output and a KNN classifier for the same task to detect model mistakes. While these methods can also be used to detect model mistakes, they are not specifically tailored to CBMs. As a CBM consists of two different networks, errors can occur in either. Both the softmax baseline and TrustScore only look at the model prediction as a whole, while our method is able to specifically detect mistakes related to the bottleneck network (which can then be corrected via interventions). In contrast, neighborhood uncertainty classifiers Ramalho & Miranda (2019) learn a neural network on top of a KNN of latent model representations to predict uncertainty. This method also uses a latent representation of the model. However, we do not learn a neural network on top of the similarity information, keeping our technique simpler and more flexible. That way, it can even be adapted when more details about model mistakes arrive during model deployment, which happens every time a human provides an intervention.
## 5 Conclusion
In this work, we have introduced CB2M, a flexible extension to CBM models. We have shown that the two-fold memory of CB2M can be used to generalize interventions to previously unseen datapoints, thereby overcoming the issue of current one-time intervention approaches without the necessity of further human interactions. Additionally, we have demonstrated that CB2M can be utilized to detect model mistakes prior to any human interactions, allowing humans to be queried for interventional feedback in an efficient, targeted manner. CB2M can be combined with any concept model architecture to greatly improve interventions and make human interactions more efficient.
There remains room to improve CB2M in the future. First, one could make the memory of CB2M differentiable Plötz & Roth (2018). This would allow parameters like \(t_{d}\) to be learned directly instead of relying on heuristics. A differentiable memory could therefore enhance the coverage of intervention generalization. Here, we present CB2M as generalizing interventions only from the closest mistake, _i.e._, using \(k=1\). To further boost the performance gains of CB2M, it is also conceivable to combine interventions from multiple mistakes. A simple way to achieve this could be, _e.g_., weighted averaging, but more complex aggregation methods are also possible. Such aggregation techniques would furthermore allow condensing the information in the memory into prototypes, keeping the memory smaller and easier to understand. As shown, CB2M excels particularly when the model makes systematic errors, like with confounded data or when the model fails to learn a specific class. However, if the model performance is already very good and the only mistakes are a few singular outliers (potentially close to the decision boundary), CB2M does not work as well. If there are no similar model mistakes, detection solely based on mistakes in the memory is not sufficient. A possible direction for applying CB2M in such circumstances could be to integrate correctly classified instances into the memory, allowing for better inlier detection. As is crucial for all work on interactive machine learning, we encourage future research in the context of malicious human interactions, specifically how to prevent them from misusing the memory of CB2M. Finally, an interesting future direction is the combination of CB2M with other concept-based models, for example CEM Zarlenga et al. (2022), post-hoc CBMs Yuksekgonul et al. (2022) or even tabular CBMs Zarlenga et al. (2023).
**Acknowledgments** This work benefited from the Hessian Ministry of Science and the Arts (HMWK) projects "The Third Wave of Artificial Intelligence - 3AI", "The Adaptive Mind" and Hessian.AI, the "ML2MT" project from the Volkswagen Stiftung as well as from the ICT-48 Network of AI Research Excellence Center "TAILOR" (EU Horizon 2020, GA No 952215).
|
2306.05945 | Improving Estimation of the Koopman Operator with Kolmogorov-Smirnov
Indicator Functions | It has become common to perform kinetic analysis using approximate Koopman
operators that transform high-dimensional time series of observables into
ranked dynamical modes. Key to a practical success of the approach is the
identification of a set of observables which form a good basis in which to
expand the slow relaxation modes. Good observables are, however, difficult to
identify {\em a priori} and sub-optimal choices can lead to significant
underestimations of characteristic timescales. Leveraging the representation of
slow dynamics in terms of Hidden Markov Model (HMM), we propose a simple and
computationally efficient clustering procedure to infer surrogate observables
that form a good basis for slow modes. We apply the approach to an analytically
solvable model system, as well as on three protein systems of different
complexities. We consistently demonstrate that the inferred indicator functions
can significantly improve the estimation of the leading eigenvalues of the
Koopman operators and correctly identify key states and transition timescales
of stochastic systems, even when good observables are not known {\em a priori}. | Van A. Ngo, Yen Ting Lin, Danny Perez | 2023-06-09T15:01:43Z | http://arxiv.org/abs/2306.05945v1 | # Improving Estimation of the Koopman Operator with Kolmogorov-Smirnov Indicator Functions
###### Abstract
It has become common to perform kinetic analysis using approximate Koopman operators that transform high-dimensional time series of observables into ranked dynamical modes. Key to the practical success of the approach is the identification of a set of observables which form a good basis in which to expand the slow relaxation modes. Good observables are, however, difficult to identify _a priori_ and sub-optimal choices can lead to significant underestimations of characteristic timescales. Leveraging the representation of slow dynamics in terms of a Hidden Markov Model (HMM), we propose a simple and computationally efficient clustering procedure to infer surrogate observables that form a good basis for slow modes. We apply the approach to
an analytically solvable model system, as well as on three protein systems of different complexities. We consistently demonstrate that the inferred indicator functions can significantly improve the estimation of the leading eigenvalues of the Koopman operators and correctly identify key states and transition timescales of stochastic systems, even when good observables are not known _a priori_.
## 1 Introduction
Elucidating the kinetics describing rare structural or chemical reactions is crucial to understand many biophysical and biochemical systems [1, 2, 3, 4, 5]. Even when long fully-resolved trajectories are available, e.g., via extensive molecular dynamics (MD) simulations, extracting a reliable representation of kinetics in terms of a handful of physical observables can be elusive due to the high dimensionality and complexity of most application-relevant systems, particularly in the absence of an intuitive reaction coordinate. A powerful approach aimed at tackling such problems exploits the correspondence between the spectral properties of the so-called Koopman operator and those of the dynamics generator. It allows one to efficiently reduce high-dimensional timeseries into a compact and tractable representation (see Sec. 2). Various techniques and spectral analysis methods based on this mathematical formalism include time-lagged independent component analysis (TICA) for improving Markov State Modeling (MSM) [6, 7, 8, 9], the variational approach for Markov processes (VAMP) incorporated in MSM-builder [10, 11] and PyEMMA [12], as well as different variants of the extended dynamical mode decomposition (EDMD) approach [13]. These methods underscore that analyzing the spectral properties of the Koopman operator is a powerful way to understand and characterize the dynamics of complex systems.
In essence, an estimated Koopman operator based on EDMD (or VAMP or TICA) is an approximation of the dynamics generator of measurable observables in an infinite-dimensional Hilbert space [14, 15, 16]. This approximate Koopman operator returns the expectation value at a time \(t+\tau\) from a value of an observable at time \(t\)[17, 18]. In practice, a finite set
of timeseries of observables is used to obtain an estimation of the Koopman operator in a desirably small subspace relevant to slow kinetics. The eigenvalues and eigenvectors of such approximate Koopman operator can then be used to describe slow relaxation modes together with their corresponding characteristic timescales. The accuracy of the approximation can however strongly depend on the choice of observables.
In some cases, an intuitive reaction coordinate can be easily determined. For instance, the two backbone torsional angles of the alanine dipeptide are sufficient reaction coordinates to describe its slow dynamics. In cases where intuitive reaction coordinates cannot be determined, many possible observables, including root mean square displacement (RMSD), all possible torsional angles, native contacts, and backbone distances, can be considered [18]. Depending on the target system's complexity, these generic observables may not form a good basis to extract transition timescales, requiring more elaborate schemes to define better observables [19]. Identifying a compact yet sufficiently-complete set of observables that is able to reliably approximate the true relevant eigenfunctions of the Koopman operator remains an outstanding challenge, although guidelines are gradually emerging [18, 20].
To understand this challenge better, we can define the properties of an optimal set of observables. Mathematically, optimal observables should effectively act as basis functions to approximate the eigenfunctions corresponding to a "slow" subspace that describes rare transitions (e.g., protein folding/unfolding) or slowest relaxation modes of the exact Koopman operator [21], which are of course _a priori_ unknown. While general eigenfunctions can be extremely complex, those that represent slow transition between \(M\) metastable states have simplified features: they can be shown to be collectively approximated by linear combinations of \(M\) indicator functions, each of which takes non-zero values over one of the metastable sets and zero otherwise [22]. Even when a good basis to extract such indicator functions is not available, it can be shown that these functions can be inferred by representing the evolution of non-ideal observables in terms of a Hidden Markov Model (HMM) with \(M\) hidden states [23].
However, fitting an HMM to high-dimensional data given an unknown number \(M\)
of hidden states is generally non-trivial, requiring iterative methods such as Expectation Maximization [24, 25, 26] or Sequential Monte Carlo methods [27], all of which are considerably more complex than the methods based on the traditional linear Koopman approaches. Note that the task of choosing the "right" observables can be alleviated by using non-linear optimization methods such as neural networks [28, 29] or kernel methods [30, 31]. These methods often require very large amounts of data and careful regularization to avoid over-fitting, and hence require more expertise and careful application/fine-tuning than linear Koopman operator methods, which possess an appealing simplicity of use and interpretation.
In this study, we propose a simple and scalable alternative to traditional HMM-inference algorithms based on the two-sample Kolmogorov-Smirnov (KS) [32, 33] test and agglomerative clustering [34], hereinafter referred to as _KS clustering_. This KS clustering is used to identify good "surrogate" observables that conceptually correspond to indicator functions over hidden/metastable states. The key idea is that the _statistics_ of the time evolution of even imperfect observables should contain information that can be used to distinguish the metastable states a system visits, hence allowing one to infer a good basis for slow eigenfunctions from imperfect observables.
It is worth noting an important trade-off in this algorithm: while it produces accurate characteristic timescales, the resulting surrogate observables are not explicit functions of the degrees of freedom of the target dynamical system, and so the eigenfunctions that are produced cannot directly be interpreted mechanistically. The KS clustering, however, possesses an advantage over methods where HMM membership functions are defined in the observable space, since it can implicitly construct surrogate functions that cannot be explicitly expressed in the observable space.
This article is organized as the follows. In Sections 2.1 and 2.2, we provide theoretical background for the representation of the Koopman operator that can be estimated from time series of observables obtained from stochastic systems. Section 2.3 describes key concepts in HMM with multiple states that can be included in indicator functions. Section 2.4 explains
further why indicator functions with the KS clustering algorithm work best in a reduced observable space for defining hidden states. Section 2.5 illustrates the computation of the Koopman operator with an HMM of two hidden states. In Section 3, we demonstrate numerical results for the two-state HMM (Sec. 3.1) and apply the KS clustering to three protein systems (Sec. 3.2). Finally, we discuss some implications of the results in Section 4.
## 2 Theoretical background
### Koopman representation of dynamical systems
Throughout this manuscript, we consider Markovian stochastic dynamics driving the evolution of a thermal system whose microscopic state is denoted as \(\omega\) in a state space \(\Omega\). For example, for a three-dimensional \(N\)-atom molecular system with \(\Omega=\mathbb{R}^{3N}\) evolving under overdamped dynamics, the microscopic state can be fully characterized by \(\omega=\left(x_{1},\ldots,x_{N},y_{1}\ldots,y_{N},z_{1},\ldots,z_{N}\right)\), which are the Cartesian coordinates of all atoms. In the stochastic setting, an ensemble of trajectories at time \(t\) is characterized by \(\rho(t,\omega)\) as a joint probability density function in the continuous state space \(\Omega\).
The dynamics is prescribed by an infinitesimal generator \(\mathcal{L}\). To fix ideas, we consider overdamped Langevin dynamics with Gaussian white noise, \(\mathcal{L}=-\sum_{i=1}^{3N}\left(\partial_{\omega_{i}}V\left(\omega\right) \right)\partial_{\omega_{i}}+2k_{B}T\partial_{\omega_{i}}^{2}\), where \(V(\omega)\) is the potential describing the interactions between the atoms. For simplicity, we consider systems with detailed balance, which guarantees reversibility [35]. Note that the formalism itself is not specific to Langevin dynamics, but is applicable to general reversible dynamics. The infinitesimal generator \(\mathcal{L}\) uniquely defines the evolution of the probability density function,
\[\frac{\partial}{\partial t}\rho\left(t,\omega\right)=\mathcal{L}^{\dagger}\rho \left(t,\omega\right), \tag{1}\]
where \(\mathcal{L}^{\dagger}\) is the adjoint of \(\mathcal{L}\). For reversible dynamics, \(\mathcal{L}=-\mathcal{L}^{\dagger}\). We assume that the stochastic system is ergodic such that a unique stationary distribution, \(\rho_{\rm stat}\), exists and
satisfies \(\mathcal{L}^{\dagger}\rho_{\text{stat}}(\omega)=0\).
Let's consider an observable \(O\) that is a real-valued function of \(\omega\) and define the so-called stochastic Koopman operator \(\mathcal{K}_{t}\)[17] as
\[\left(\mathcal{K}_{t}O\right)\left(\omega\right)\triangleq\mathbb{E}\left[O \left(\omega_{t}\right)\left|\omega_{0}=\omega\right],\forall\omega\in\Omega, \tag{2}\]
where \(\omega_{t}\) and \(\omega_{0}\) denote the stochastic processes measured at time \(t\) and \(0\), respectively. Using the semi-group notation, the finite-time Koopman operator \(\mathcal{K}_{t}=e^{t\mathcal{L}}\) maps the current value of an observable \(O\) to its expectation over the probability distribution induced by the process \(O_{t}\) at a later time \(t\), given the initial value \(O_{0}\). A function \(\phi\) is defined as a Koopman eigenfunction if it satisfies \(\left(\mathcal{K}_{t}\phi\right)=e^{\lambda t}\phi\) or equivalently \(\mathcal{L}\phi=\lambda\phi\).[14, 15, 16, 36] In this study, we consider systems with point spectra, that is, systems with countable eigenvalues \(\lambda_{i}\), \(i=1,\ldots\), which can be ordered by their moduli and whose corresponding eigenfunctions \(\phi_{i}\) satisfy \(\mathcal{L}\phi_{i}=\lambda_{i}\phi_{i}\).
### Data-driven estimation of the Koopman operator
The linearity of \(\mathcal{K}_{t}\) and the correspondence between its eigenvalues/eigenfunctions and those of the generator \(\mathcal{L}\) discussed in Sec. 2.1 can be leveraged to create powerful data-driven methods to efficiently learn the characteristics of dynamics from timeseries of observables. Methods such as TICA,[6] VAMP[17, 21] and EDMD[13] provide linear finite-dimensional approximations to the Koopman operator acting in the space of selected observables. An estimated Koopman operator can be obtained by minimizing the \(L^{2}\)-norm of the difference between the left- and right-hand sides of Eq. (2) using a least-squares minimization over pairs of configurations separated by a lag time \(t\).
For brevity, we consider discrete-time Markov processes below, noting that it is straightforward to generalize the analysis to continuous-time Markov processes. In the discrete-time setting, an approximation to the Koopman operator[13, 37] given a sample path \(\{\omega_{t}\}\)
\(t=0,1,\ldots\), and a vector-valued observable \(O\) is given by:
\[K_{k}\equiv C(k)\cdot C^{-1}(0), \tag{3}\]
where \(C(k)\) with \(k\in\mathbb{N}\) is the \(k\)-lag correlation of the observable
\[C(k):=\frac{1}{T}\sum_{s=0}^{T-1}O(\omega_{s+k})\otimes O(\omega_{s}), \tag{4}\]
where the dyadic (outer) product is denoted by \(\otimes\).
The eigenvalues of \(K_{k}\) are approximations of \(e^{\lambda_{i}k}\), and its eigenvectors \(\Phi_{i}\) can be used to approximate the true Koopman eigenfunctions \(\phi_{i}\) defined in Sec. 2.1[13, 37] as \(\phi_{i}(\omega_{s})\simeq O(\omega_{s})\cdot\Phi_{i}\), where \(\cdot\) denotes the inner product of the two vectors.[13] This simple and elegant procedure can be shown to converge to the exact eigenvalues and eigenfunctions in the limit of infinitely long timeseries of a set of observables \(O\) which linearly span a Koopman invariant subspace containing the corresponding set of Koopman eigenfunctions \(\left\{\phi_{i}\right\}_{i}\).[37, 38] Based on the Rayleigh-Ritz method, Wu and Noé[17] showed that the data-driven estimation of the Koopman operator is a variational problem when using methods such as TICA[6, 7, 8, 9] and EDMD,[13] with the approximate characteristic timescales approaching the actual values from below in the infinite-data limit. When observables are chosen as the first \(M\) Koopman eigenfunctions, \(\left\{\phi_{i}\right\}_{i=1}^{M}\), the variational bound is tight, thus recovering the optimal estimates of the Koopman eigenvalues and eigenfunctions. In practice, selecting a finite number of observables is the only option. This selection procedure is often system-specific, relying on educated guesses or _a priori_ knowledge of an intuitive reaction coordinate. The quality of the estimate can depend sensitively on the choice of observables, as we now show.
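For concreteness, the estimator of Eqs. (3)-(4) and the resulting implied timescales can be sketched in a few lines of Python; the finite-trajectory truncation at the boundaries and the zero-mean assumption on the observables are implementation choices of this sketch.

```python
import numpy as np

def edmd_koopman(O, k):
    """Estimate K_k = C(k) C(0)^{-1} (Eq. 3) from a timeseries O of shape
    (T, d) of zero-mean observables, with C(k) as in Eq. (4)."""
    T = O.shape[0] - k
    C0 = (O[:T].T @ O[:T]) / T        # C(0): instantaneous correlation
    Ck = (O[k:k + T].T @ O[:T]) / T   # C(k): k-lag correlation
    return Ck @ np.linalg.inv(C0)     # use a pseudo-inverse if C(0) is singular

def implied_timescales(O, k):
    """Characteristic timescales tau_i = -k / log|lambda_i| from the
    eigenvalues of the estimated Koopman operator, slowest first."""
    evals = np.linalg.eigvals(edmd_koopman(O, k))
    mags = np.sort(np.abs(evals))[::-1]
    return -k / np.log(mags)
```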
### Hidden Markov Chain as an effective model for describing systems with multiple metastable states
In the following, we focus on the problem of characterizing the kinetics of systems with metastable states, which are defined to have distinguishable statistics; this property is essential to the approximation of the Koopman operator discussed in the previous sections. Let's consider a system with \(M\) such metastable states and denote the \(i^{\text{th}}\) metastable state, \(i=1\ldots M\), by \(\Omega_{i}\), with \(\Omega=\cup_{i=1}^{M}\Omega_{i}\) and \(\Omega_{i}\cap\Omega_{j}=\varnothing\) if \(i\neq j\). Conceptually, a metastable state is such that a typical trajectory relaxes to the quasi-stationary distribution (QSD) within the state much faster than it leaves the state [39]. Many systems in biology, chemistry, and materials science exhibit strong metastability, which makes their study using direct simulation methods such as molecular dynamics challenging due to the long waiting times between state-to-state transitions. This setting implies the existence of a slow subspace containing \(M\) slow eigenvalues, well separated, or statistically distinguishable, from the rest of the spectrum. As discussed above, indicator functions of the form:
\[\mathbf{1}_{\Omega_{i}}(\omega):=\left\{\begin{array}{ll}1,&\text{if }\omega\in\Omega_{i},\\ 0,&\text{else},\end{array}\right. \tag{5}\]
would form an excellent approximate basis for the _global_ eigenfunctions of the slow subspace of the Koopman operator \(\mathcal{K}\).
Define the _local_ infinitesimal operator \(\mathcal{L}_{i}^{\dagger}\) as the operator describing the dynamics on \(\Omega_{i}\), applying absorbing boundary conditions on the boundary \(\partial\Omega_{i}\) of \(\Omega_{i}\), as in Ref. [39]. Then, the largest eigenvalue and eigenfunction pair satisfies
\[\mathcal{L}_{i}^{\dagger}\rho_{1}^{(i)}\left(\omega\right)=\lambda_{1}^{(i)} \rho_{1}^{(i)}\left(\omega\right),\quad i=1\ldots M, \tag{6}\]
where \(\lambda_{1}^{(i)}<0\) implies decaying dynamics and \(\rho_{1}^{(i)}\left(\omega\right)=0\) if \(\omega\notin\Omega_{i}\). The eigenfunction \(\rho_{1}^{(i)}(\omega)\) is referred to as the quasi-stationary distribution (QSD) on state \(\Omega_{i}\). Note that
the \(\lambda_{1}^{(i)}\) are now _local_ quantities, in contrast to the discussion in the preceding sections that focus on the eigenvalues of the _global_ generator. Specifically, the eigenvalue \(\lambda_{1}^{(i)}\) quantifies the expected timescale \(-1/\lambda_{1}^{(i)}\), over which a system residing in state \(\Omega_{i}\) would finally escape [39]. The metastability of the state can be quantified by the ratio of the expected escaping time to the relaxation time to the QSD, i.e., \((\lambda_{2}^{(i)}-\lambda_{1}^{(i)})/\lambda_{1}^{(i)}\). In the following, we assume that all states are sufficiently metastable so that \((\lambda_{2}^{(i)}-\lambda_{1}^{(i)})/\lambda_{1}^{(i)}\gg 1\), \(\forall i\).
Let's suppose we periodically observe a trajectory of the system on some timescale
\[-1/(\lambda_{2}^{(i)}-\lambda_{1}^{(i)})\ll\tau\ll-1/\lambda_{1}^{(i)} \tag{7}\]
We can set the timescale \(\tau=1\) by choosing an appropriate unit of time. With a probability \(\sim 1-\exp(\lambda_{1}^{(i)})\lesssim 1\), we would observe that the system remains in state \(\Omega_{i}\); then, by construction, the next observed configurations would be sampled from a distribution approaching the QSD on state \(\Omega_{i}\). Alternatively, with a probability \(\sim\exp(\lambda_{1}^{(i)})\ll 1\), we would observe that the system escaped to another state \(\Omega_{j}\neq\Omega_{i}\), where its configurations will be sampled from the QSD on state \(\Omega_{j}\). Crucially, when observed on the timescale \(\tau=1\) that is large compared to the internal relaxation time within basins, the time series of any observable should be well approximated by a sequence of i.i.d. random variables drawn from a distribution specific to the QSD of the metastable state where the system is currently trapped. In other words, such a time series will be well approximated by discrete-time Hidden Markov Model (HMM) with \(M\) hidden states, where the approximation becomes increasingly good as each state becomes increasingly metastable and statistically distinguishable (see Ref. [23] for an in-depth analysis). In this representation, transitions between hidden states are described by a discrete-time stochastic matrix \(\mathbb{P}\left\{S_{t+1}=i|S_{t}=j\right\}\) with \(t=0,1,\ldots\) being the observation times and \(S_{t}\) being a hidden state realized in random processes. In HMM, we cannot directly measure \(S_{t}\), but rather, we measure an observable \(O_{t}\) according to an observation model \(\mathbb{P}\left\{O_{t}|S_{t}\right\}\). In our case, the observation model depends on the QSD measured on each state, i.e., \(O_{t}=O(\omega_{t})\), where \(O\) is a prescribed deterministic function of the system's state, and \(\omega_{t}\) is distributed
according to the QSD of the states in which \(S_{t}\) resides.
This discussion highlights an important property of generic physical observables: if hidden metastable states exist as described above, the statistical distribution of generic observables should contain useful information that can be used to implicitly reconstruct indicator functions over hidden states. We now show how the statistics of the observable variations following transitions between states dictate how well the corresponding slow timescales can be estimated from the optimized Koopman operator.
### Kolmogorov-Smirnov Clustering
The challenge in practice is to find a set of observables that can approximate the indicator functions over metastable states as characterized in Sec. 2.3. Our approach to this challenge is based on a simple characteristic of HMMs: if during two time intervals \([t_{1},t_{1}+\Delta t]\) and \([t_{2},t_{2}+\Delta t]\) the dynamical system is in the same metastable state and delays between observations obey Eq. (7), then the distributions of observables measured over these time intervals should be statistically equivalent; and if the system is in different metastable states during the two intervals, the corresponding distributions should presumably be statistically distinguishable from one another. The "statistical distance" between distributions measured in different time intervals can be quantified using the two-sample Kolmogorov-Smirnov (KS) test [32]. Namely, \(D_{1,2}=\sup_{x}|F_{1}(x)-F_{2}(x)|\) (Figure 1a) is the KS statistic measuring the maximum difference between two empirical cumulative distributions \(F_{1}(x)\) and \(F_{2}(x)\), which are computed from the data collected in two time intervals \([t_{1},t_{1}+\Delta t]\) and \([t_{2},t_{2}+\Delta t]\), respectively. A so-called distance matrix contains all \(D_{i,j}\) values, which are computed for all pairs of intervals. Based on this distance matrix, indicator functions can be constructed (see below).
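A minimal sketch of the distance-matrix construction for a one-dimensional observable follows, using `scipy.stats.ks_2samp`; for multi-dimensional data, each entry would instead be the maximum of the per-dimension KS statistics, as described below.

```python
import numpy as np
from scipy.stats import ks_2samp

def ks_distance_matrix(x, dt):
    """Pairwise two-sample KS statistics D_ij between non-overlapping
    time intervals of length dt of a 1D observable timeseries x."""
    windows = [x[i:i + dt] for i in range(0, len(x) - dt + 1, dt)]
    n = len(windows)
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            D[i, j] = D[j, i] = ks_2samp(windows[i], windows[j]).statistic
    return D
```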
In conventional applications, the KS statistics are used to reject the null hypothesis that the two samples were drawn from the same underlying distribution. In the present context, the KS statistics of the distributions corresponding to every pair of intervals are
instead used as a statistical distance measure that allows for the clustering of all intervals into a number of different groups. Specifically, each group will contain intervals that are statistically similar to one another, while intervals that are very statistically different will be assigned to different groups. This can be done with any clustering method that can operate on a user-provided pairwise distance matrix; in the following, this was accomplished via hierarchical agglomerative clustering [40], which returns a hierarchy of clusters, i.e., clusters being merged in a bottom-up fashion until a preset number of clusters or a critical inter-cluster distance threshold is reached.
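With the distance matrix in hand, the clustering step reduces to a call to scikit-learn's agglomerative clustering on precomputed distances; the choice of average linkage is an assumption of this sketch (older scikit-learn versions name the `metric` argument `affinity`).

```python
from sklearn.cluster import AgglomerativeClustering

def cluster_intervals(D, n_clusters):
    """Group time intervals into candidate metastable states by hierarchical
    agglomerative clustering on the precomputed KS distance matrix D."""
    model = AgglomerativeClustering(
        n_clusters=n_clusters, metric="precomputed", linkage="average"
    )
    return model.fit_predict(D)  # one cluster label per interval
```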
Assuming that \(M\) different groups are identified by such clustering, we can build \(M\) surrogate timeseries conceptually corresponding to indicator functions over metastable states (rigorously, only \(M-1\) such timeseries are needed, since the \(M^{\text{th}}\) one can be expressed as a linear combination of the \(M-1\) others, up to an additive constant). Surrogate indicator function \(l_{i}\) with \(1\leq i\leq M\) can be created by assigning a value of 1 to a given time interval when it was deemed a member of cluster \(i\) and a value of zero otherwise. This procedure can easily be generalized to multiple observables using multi-dimensional generalizations of the KS test [41], or by defining the distance between two multi-dimensional distributions as the maximal distance between any corresponding pair of one-dimensional distributions, which we use in the following.
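The cluster labels are then expanded into per-frame surrogate indicator timeseries, a minimal sketch of which is given below.

```python
import numpy as np

def surrogate_indicators(labels, dt, n_frames, n_clusters):
    """Expand per-interval cluster labels into per-frame indicator
    timeseries chi_i(t) in {0, 1}; only M-1 of them are independent."""
    chi = np.zeros((n_frames, n_clusters))
    for i, lab in enumerate(labels):
        chi[i * dt:(i + 1) * dt, lab] = 1.0
    return chi
```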
The overall algorithm is illustrated in Fig. 1. One first identifies a set of base observables
Figure 1: (Panel A) Illustration of the KS test. The test statistics corresponds to the maximum difference between two cumulative distribution functions measured. (Panel B) Schematic illustration of the proposed KS-clustering approach.
(1) which are processed using a conventional linear EDMD procedure (2). The original time-series are then compressed into a lower-dimensional space via projection into the eigenvectors corresponding to the slowest relaxation modes (3). In this reduced space, the KS clustering is applied to identify indicator functions over the KS clusters (4), which are then used to construct surrogate time-series (5). The projected descriptors from step (3) are combined with the surrogate functions from step (5) and used as input for a final EDMD analysis (6) to compute slow modes and corresponding timescales (7).
### A two-state HMM with Gaussian observation noise
To illustrate the arguments presented in Sec. 2.1-2.4, we consider an analytically solvable HMM with two discrete states \(S\in\{1,2\}\) and a single observable \(O\in\mathbb{R}\) (note that this analysis can be generalized to a general \(M\)-state HMM). We use the standard notation that the upper-case symbols with a subscript time stand for random processes, and lower-case symbols stand for dummy variables or sample paths of random processes. The Markov transition between the hidden states is characterized by a Markov matrix \(\mathbf{M}\) whose entries are \(M_{ij}=\mathbb{P}\left\{S_{t+1}=i|S_{t}=j\right\}\):
\[\mathbf{M}:=\begin{bmatrix}1-p_{+}&p_{-}\\ p_{+}&1-p_{-}\end{bmatrix}. \tag{8}\]
That is, with a probability \(p_{+}\) (resp. \(p_{-}\)) the hidden state jumps from 1 to 2 (resp. 2 to 1) in a single step. We note that the stationary distribution of the hidden state \(\pi:=\left[\pi_{1},\pi_{2}\right]^{T}=\left[p_{-}/\left(p_{-}+p_{+}\right),p_{+}/\left(p_{-}+p_{+}\right)\right]^{T}\) satisfies \(\mathbf{M}\cdot\pi=\pi\). As we are interested in systems whose hidden states are metastable on the observation timescale, we have \(p_{+}\), \(p_{-}\ll 1\). We consider a univariate Gaussian observation model, where the observation \(O_{t}\) at time \(t\), which is a random variable (modeling the quasi-stationary distribution), depends only on the current
hidden state \(S_{t}\):
\[\rho\left(O_{t}=\omega|S_{t}=s\right)=\frac{1}{\sqrt{2\pi\sigma_{s}^{2}}}e^{- \frac{\left(\omega-\mu_{s}\right)^{2}}{2\sigma_{s}^{2}}} \tag{9}\]
where \(\left(\mu_{1},\sigma_{1}\right)\) and \(\left(\mu_{2},\sigma_{2}\right)\) fully characterize the observation model. We remark that the only timescale of the process is the autocorrelation time of the hidden states, which is \(-1/\log(1-p_{-}-p_{+})\). Without loss of generality, we impose a zero-mean condition on the observable, that is, \(\lim_{T\rightarrow\infty}\frac{1}{T}\sum_{i=0}^{T-1}\omega_{i}=\pi_{1}\mu_{1 }+\pi_{2}\mu_{2}=0\).
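A minimal kinetic Monte Carlo sampler of this two-state HMM is sketched below, using the parameter values of Sec. 3.1 as defaults; the function name and random-number conventions are ours.

```python
import numpy as np

def sample_two_state_hmm(T, p_plus=1e-4, p_minus=1e-4,
                         mu=(-0.5, 0.5), sigma=(0.1, 0.1), seed=0):
    """Kinetic Monte Carlo sampling of the two-state HMM of Eqs. (8)-(9):
    hidden-state jumps with probabilities p_+/p_-, Gaussian emissions."""
    rng = np.random.default_rng(seed)
    s = np.empty(T, dtype=int)
    s[0] = 0
    p_flip = (p_plus, p_minus)  # escape probabilities of states 1 and 2
    for t in range(1, T):
        s[t] = 1 - s[t - 1] if rng.random() < p_flip[s[t - 1]] else s[t - 1]
    obs = rng.normal(np.take(mu, s), np.take(sigma, s))
    return s, obs
```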
Our goal is to analytically express the result of an EDMD procedure given an infinitely long timeseries \(O_{t}\)'s. This corresponds to substituting the analytical expression
\[C(k):=\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\sum_{s,s^{\prime}\in\{1,2\}}\omega_{2}\,\omega_{1}\,\mathbb{P}\left\{\omega_{2}|s^{\prime}\right\}\,\mathbb{P}\left\{\omega_{1}|s\right\}\,\left[\mathbf{M}^{k}\right]_{s^{\prime},s}\,\pi\left(s\right)\,\mathrm{d}\omega_{1}\,\mathrm{d}\omega_{2}. \tag{10}\]
into Eq. 3. It is elementary to show that
\[C(0)=\pi_{1}\left(\mu_{1}^{2}+\sigma_{1}^{2}\right)+\pi_{2}\left(\mu_{2}^{2}+ \sigma_{2}^{2}\right) \tag{11}\]
and for \(k\in\mathbb{Z}_{\geq 1}\),
\[C(k)=\left(1-p_{+}-p_{-}\right)^{k}\left(\mu_{1}-\mu_{2}\right)^{2}\pi_{1}\pi _{2}. \tag{12}\]
Consequently, the estimated Koopman operator using this single observable with \(k\)-lag is
\[K_{k}=\gamma\left(1-p_{+}-p_{-}\right)^{k}, \tag{13}\]
where
\[\gamma:=\frac{\left(\mu_{1}-\mu_{2}\right)^{2}\pi_{1}\pi_{2}}{\pi_{1}\left( \mu_{1}^{2}+\sigma_{1}^{2}\right)+\pi_{2}\left(\mu_{2}^{2}+\sigma_{2}^{2} \right)}. \tag{14}\]
The only characteristic timescale of the process is then estimated as
\[\tau_{k}=-\frac{k}{\log K_{k}}=\frac{1}{-\log\left(1-p_{+}-p_{-}\right)-\frac{1}{k}\log\gamma}. \tag{15}\]
For general \(\sigma_{1}\), \(\sigma_{2}\neq 0\) and a finite lag \(k<\infty\), \(\gamma<1\) leads to an underestimation of the timescale \(\tau_{k}\). This result is consistent with the variational principle [17], which states that sub-optimal observables result in underestimations of the characteristic timescales; \(-\log(\gamma)\) can be seen as a noise-to-signal metric which discounts the timescale estimation by EDMD. While in principle the correct timescale is recovered in the limit \(k\rightarrow\infty\), approaching this limit could require extremely long trajectories. In contrast, using zero-mean timeseries of the indicator function over the hidden states as a basis results in an unbiased estimation of \(\tau_{k}=-1/\log\left(1-p_{+}-p_{-}\right)\) for any value of \(k\).
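Equations (14)-(15) can be evaluated directly to quantify this discount; the short sketch below uses the parameters of Sec. 3.1 (\(p_{+}=p_{-}=10^{-4}\), \(\Delta\mu=1\)) and shows the estimate approaching the true timescale of \(\approx 5000\) steps only at very large lags.

```python
import numpy as np

def tau_k(k, p_plus, p_minus, gamma):
    """Eq. (15): EDMD timescale estimate from a single noisy observable."""
    return 1.0 / (-np.log(1.0 - p_plus - p_minus) - np.log(gamma) / k)

for sigma in (0.1, 0.5):
    gamma = 1.0 / (4.0 * sigma**2 + 1.0)  # Eq. (14) with pi_1 = pi_2 = 1/2, dmu = 1
    taus = [tau_k(k, 1e-4, 1e-4, gamma) for k in (1, 100, 10000)]
    print(sigma, [f"{t:.0f}" for t in taus])
# sigma = 0.5 gives ~1.4, ~140, ~3713 steps: severe underestimation at small lags
```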
## 3 Numerical Results
### Two-state HMM with Gaussian noise
We now use the two-state HMM discussed above to numerically demonstrate that the approximated indicator functions inferred from the KS clustering significantly improve the timescale estimation. For each set of parameters considered, a reference trajectory of \(5\times 10^{5}\) steps is first generated using a standard kinetic Monte Carlo procedure. In the following, the transition probabilities between the hidden states were fixed at \(p_{+}=p_{-}=10^{-4}\), and the conditional means of the observation were set to \(\mu_{1}=-0.5\) and \(\mu_{2}=0.5\), ensuring that the long-time observable mean tends to 0. In the first parameter set, we consider \(\sigma_{1}=\sigma_{2}=0.1\), corresponding to a well-separated observation model (see time series in Fig. 2A and empirical distribution in Fig. 2B). In the second parameter set, we consider \(\sigma_{1}=\sigma_{2}=0.5\); in this case the stationary distribution is unimodal (see time series in Fig. 2E and empirical distribution in Fig. 2F); this is representative of a situation where the base observables are
poor at distinguishing the two states (which is a very common occurrence in practice).
In both parameter sets, the ground-truth characteristic timescale is \(-1/\log\left(1-p_{+}-p_{-}\right)\approx 5000\) discrete time steps. The discounting factors \(\gamma\) from Eq. (14) are \(1/1.04\) and \(1/2\), respectively. These two cases are thus representative of good and bad descriptors, where the convergence of EDMD with respect to the lag time would be fast and slow as shown by the timescales (dashed red curve) in Fig. 3A and B, respectively. Even for the "good" case (Fig. 3A), the slowest characteristic timescale obtained from \(\omega_{t}\) (raw data used for step (2) in Fig. 1B) is severely underestimated for lag times less than 1000 steps. The estimate is even worse in the "bad" case (Fig. 3B), where a 20% underestimation still persists even for a lag of 10,000 steps.
We now demonstrate that the use of the KS indicator functions can greatly improve the estimated timescales. To obtain the KS indicator functions (see Sec. 2.4), we applied the procedure (steps 1-6) described in Fig. 1b. At step (4), the data were split into time
Figure 2: Illustration of the KS-clustering approach using timeseries generated with the Hidden Markov Models defined in Eqs. (8) and (9) with \(p_{+}=p_{-}=10^{-4}\) and \(\mu_{1}=-0.5\), \(\mu_{2}=0.5\). Algorithmic parameter \(\Delta t=500\) for the KS clustering procedure. Panels A-D: \(\sigma_{1}=\sigma_{2}=0.1\); Panels E-H: \(\sigma_{1}=\sigma_{2}=0.5\). Panels A and E are the time series generated by a standard kinetic Monte Carlo sampling procedure, and the histograms of the time series are shown in panels B and F. The red-colored indicator functions based on the KS clustering are illustrated in Panels C and G, and the green-colored indicator functions (often called discretized trajectories) based on a standard \(k\)-means clustering in Panels D and H [2, 12, 42].
intervals of \(\Delta t=500\), from which the inter-interval KS distance matrix was computed. This matrix was then used as an input to the hierarchical agglomerative clustering algorithm [40] with \(M=2\) clusters. The KS indicator functions are plotted in Fig. 2C and G for both cases. Note that since the dimension of the test system is one, steps (1-3) can be skipped. As a point of comparison, we also considered a standard MSM approach using k-means clustering with 2 clusters; the corresponding discretized trajectories are illustrated in Fig. 2D and H. For the "good" case, Figures 2C and D show consistent indicator functions obtained from both clustering algorithms. Fig. 3A shows that both of the clustering algorithms yield accurate timescales at all lagtimes. When the distributions corresponding to the two hidden states are not well separated (c.f. Fig. 2E and F), Fig. 3B shows that the accuracy of the estimated timescale obtained from the standard MSM with k-means clustering is basically identical to the direct EDMD approach, while the KS-clustering approach accurately estimates the slow timescale at all lagtimes. This results from the fact that the KS clustering uses observable statistics over extended periods of time to differentiate metastable states, while a k-means approach operates on each sample separately, and hence cannot distinguish distributions that significantly overlap with one another.
To further examine the effects of \(\mu_{i}\) and \(\sigma_{i}\) on the estimation of timescales, we fixed \(p_{+}=p_{-}=10^{-4}\) and varied \(\Delta\mu=\mu_{2}-\mu_{1}=2\mu_{2}\) and \(\sigma_{1}=\sigma_{2}=\sigma\). Equation (14) then simplifies to \(\gamma=1/\left[4(\sigma/\Delta\mu)^{2}+1\right]\). Figure 4 shows that the results from direct EDMD on \(\omega_{t}\) are underestimated in a way that is in good agreement with the predictions of Eq. (15). In contrast, the use of the indicator functions obtained from the KS clustering recovers reasonable timescales even at short lagtimes and when the noise amplitude is very large compared to \(\Delta\mu\), performing on par with the exact indicator functions. The very good agreement between the results obtained from the inferred indicator functions and the actual hidden states (which are not normally accessible) indicates that the fluctuations in predicted timescales result from the finite trajectory lengths.
The KS clustering method requires selecting two tunable parameters: the length of the
bins \(\Delta t\) used to partition the timeseries and the number of target clusters \(M\) used in the clustering algorithm. The impact of these two parameters is shown in Fig. 5. Figures 5A and C show that \(\Delta t\) around 500 steps produces accurate results, while larger values lead to an overestimation of the characteristic timescale. This behavior can be rationalized by considering the \(\Delta t\gg\tau\) limit. In this case, multiple visits to different metastable states are averaged out within each interval. This results in an underestimation of the transition probability and a corresponding overestimation of the transition timescales. In order to be accurate, on the one hand, the algorithm requires that most time intervals contain sections of trajectory remaining in the same hidden states, whose timescales satisfy Eq. 7. On the other hand, \(\Delta t\) should not be too small (e.g., \(<50\) in our cases), so as to maximize the statistical power of the KS test for accurately detecting transitions between metastable states. We therefore recommend \(\Delta t\simeq\tau_{M}/10\), where \(\tau_{M}\) is the timescale of the fastest mode in the target "slow" subspace. Of course, \(\tau_{M}\) is not known _a priori_, so \(\Delta t\) can be estimated using the EDMD preprocessing step discussed above. This estimate can be validated and
Figure 3: Timescales (Left axis) computed for the two-state Hidden Markov Models. The dashed and solid black or red curves are the data obtained via EDMD from \(\omega_{t}\) and \(\chi_{\text{ind}}\), respectively. Panel A: \(\sigma_{1}=\sigma_{2}=0.1\); Panel B: \(\sigma_{1}=\sigma_{2}=0.5\). Label \(\omega_{t}\) indicates the estimated timescales obtained from the raw data via EDMD without clustering. Label \(\chi_{\text{ind}}\) indicates the timescales obtained from the indicator functions computed via the KS clustering applied to \(\omega_{t}\). An ensemble of 100 time-series was used. Each timeseries has \(5\times 10^{5}\) steps.
readjusted after carrying out the whole procedure.
Figures 5B and D show the effect of the number of clusters/hidden states \(M\). These results suggest that the estimated transition timescales can be slightly overestimated when \(M\) is larger than the number of hidden states (2 in this case), although the dependence is rather weak. Consider the extreme limit where \(M\) is equal to the total number of time intervals, so that each interval is assigned to its own cluster. In this case, it is
Figure 4: Timescale (\(\tau_{k}\)) computed for different \(\sigma_{1}=\sigma_{2}=\sigma\) and lagtime \(k\) as functions of \(\Delta\mu=\mu_{2}-\mu_{1}=2\mu_{2}\), fixing \(p_{1}=p_{2}=10^{-4}\). The dashed green line is the ground-truth timescale, \(\tau_{\rm truth}=5000\) steps. Here, we used the exact indicator functions (\(\tilde{\chi}_{ind}\)), generated by the kinetic Monte Carlo procedure, for comparison with the indicator functions \(\chi_{ind}\) generated by the KS clustering; \(M=2\) clusters were used for all calculations.
possible for EDMD to create a spurious linear combination of indicator functions whose characteristic timescale approaches the trajectory length. We again recommend using the EDMD preprocessing step discussed above to estimate the number of slow characteristic timescales prior to the KS-clustering procedure.
### Application to Protein Dynamics
To illustrate the performance of the KS clustering on more complex systems, we consider three small proteins, NTL9, BBA, and Trp-Cage (Fig. 6A-C), each of which was simulated
Figure 5: Dependence of timescales on \(\Delta t\) and number of clusters (\(M\)), which is equal to the number of indicator functions, \(\chi_{\rm ind}\). Panels A-B: \(\sigma_{1}=\sigma_{2}=0.1\); Panels C-D: \(\sigma_{1}=\sigma_{2}=0.5\); Panels A and C: \(M=2\); Panels B and D: \(\Delta t=500\). Other parameters are the same as in Figure 2. An ensemble of 10 timeseries was used to compute the confidence intervals. Each timeseries has \(5\times 10^{5}\) steps.
for at least \(200\,\mu\mathrm{s}\)[19]. We describe each trajectory by the time series of the torsional angles of each amino acid in the proteins, indexed with \(i\)[43], yielding 190, 152, and 85 observables, respectively. We then applied EDMD to estimate the Koopman operator for the three systems. The trajectories were then projected onto the \(M\) slowest eigenvectors of the Koopman operator. The resulting \(M\) timeseries were used as input features to the procedure (steps 1-6) shown in Figure 1B. Note that when \(M=1\), no surrogate indicator function is introduced and the results are identical to those of a conventional EDMD approach.
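The EDMD step can be summarized by the following sketch, which assumes a linear dictionary built directly from the observables; the least-squares estimator and the implied-timescale formula are standard, but the function name and interface are illustrative.

```python
# Sketch of the EDMD preprocessing: estimate the Koopman matrix at a given
# lagtime by least squares, then keep the M slowest eigenmodes.
import numpy as np

def edmd_slow_modes(features, lag, M):
    X, Y = features[:-lag], features[lag:]         # snapshot pairs at the lagtime
    K, *_ = np.linalg.lstsq(X, Y, rcond=None)      # Koopman matrix estimate
    evals, evecs = np.linalg.eig(K)
    order = np.argsort(-np.abs(evals))             # slowest modes first
    evals, evecs = evals[order], evecs[:, order]
    timescales = -lag / np.log(np.abs(evals[:M]))  # implied timescales (in steps)
    # Eigenfunctions evaluated along the trajectory; the real part is kept for
    # simplicity (complex pairs correspond to oscillatory modes).
    projected = features @ np.real(evecs[:, :M])
    return timescales, projected
```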
Figure 6 shows the projection of the original trajectory into the slowest eigenmode (middle row) of the Koopman operator inferred from the original data (blue) and from the augmented data. While the data remains relatively noisy when projected based on the
Figure 6: Protein systems. A: Ribosomal Protein L9 (NTL9) (PDB: 2HBB), B: Beta-beta-alpha Fold (BBA) (PDB: 1FME), and C: Trp-Cage (PDB: 2JOF). Lagtime = 100 steps, i.e., 200, 40, and 40 ns for NTL9, BBA, and Trp-Cage, respectively; \(\Delta t=2100\) steps, i.e., 4.2, 0.84, and 0.84 \(\mu\)s for NTL9, BBA, and Trp-Cage, respectively. Middle Panels: timeseries of the slowest component for each protein system. Bottom Panels: the longest timescale as a function of the number of selected components.
torsional angles only (blue), a nearly piecewise-constant character becomes evident when EDMD is augmented by the surrogate indicator functions (black), which is a clear indication that improved eigenfunctions are produced (see Sec. 2.5). This strongly suggests that the KS clustering properly identifies the metastable states of the system. This improved identification dramatically affects the estimated timescales compared to the original EDMD (which corresponds to \(M=1\)), sometimes by up to an order of magnitude (cf. bottom row of Fig. 6). The slowest characteristic timescale is also observed to be relatively insensitive to the choice of \(M\), beyond a very large initial jump as soon as at least one surrogate indicator function is added (i.e., when \(M\geq 2\)).
As a more stringent test of the approach, we repeated the same procedure using a single base observable, the RMSD with respect to the native state. Using only the RMSD in an EDMD approach produces poor estimates of the timescales, namely 0.25, 0.05, and 0.1 \(\mu\)s for NTL9, BBA, and Trp-Cage, respectively. When combined with indicator functions (\(M=2\), see SI) for NTL9, BBA, and Trp-Cage, the timescales increase to 22.6, 2, and 3 \(\mu\)s, respectively. These estimated timescales from the RMSDs are close to the estimates from a much richer set of observables [19] and to the timescales shown in Fig. 6. This suggests that the KS clustering is much less sensitive to the details of the observable definition than the conventional EDMD approach, insofar as the observable distributions in the different states are statistically distinguishable.
## 4 Conclusion
EDMD-type methods are powerful kinetic analysis tools that are particularly appealing due to their formal and practical simplicity. The results of this type of analysis are however often sensitive to the ability of a given set of observables to serve as a good basis for the slow subspace of the generator. This would even be true of non-linear generalizations of EDMD, or even of conventional MSM approaches, when the observable distributions corresponding to different hidden states are not separable in the space spanned by the observables. Rationally enriching the observable space is an outstanding challenge that has yet to find a simple solution.
The main objective of this work was to propose a strategy that retains the simplicity of linear EDMD-type methods, while improving the ability of the method to extract accurate kinetic information from relatively poor observables. The KS-clustering approach proposed here does not require the distributions to be separable in the observable space, but only to be sufficiently statistically distinct that those corresponding to different hidden states can be identified using a two-sample KS test. The surrogate indicator functions obtained by clustering using the KS metric were shown to provide reliable estimates of known characteristic timescales, even when the distributions of observables over the different metastable states strongly overlap in the given observable space. It is, however, worth noting that the approximate eigenfunctions obtained by KS clustering are not explicit functions of the observables, which is certainly a limitation in terms of their direct interpretability. However, the availability of timeseries "labeled" in terms of putative hidden states can potentially be correlated with known observables to interpret the nature of the transitions between hidden states. When enriched with surrogate indicator functions over the implicit metastable set, the simple linear EDMD procedure is shown to produce very accurate timescale estimations at any lagtime, in contrast to conventional linear approaches that can require very long lagtimes to produce accurate estimates. The procedure is generic and scalable, and provides a simple tool to improve linear approaches at low cost.
## Acknowledgement
This work has been authored by employees of Triad National Security, LLC which operates Los Alamos National Laboratory (LANL) under Contract No. 89233218CNA000001 with the U.S. Department of Energy/National Nuclear Security Administration. The work has been
supported by the LDRD (Laboratory Directed Research and Development) program at LANL under project 20190034ER (Massively-Parallel Acceleration of the Dynamics of Complex Systems: a Data-Driven Approach). V.A.N. was partially supported by a Director's Postdoctoral Fellowship, 20170692PRD4, for this work. V.A.N. is supported by Oak Ridge National Laboratory, which is managed by UT-Battelle under Contract No. DE-AC05-00OR22725 with the U.S. Department of Energy. This research used resources of the Oak Ridge Leadership Computing Facility (OLCF). We also thank DE Shaw for making their MD data available for this study.
Supporting information contains a PDF with one figure, as well as Python scripts and pandas pickles, which are made available at [https://gitlab.com/ngoav/the-ks-clustering](https://gitlab.com/ngoav/the-ks-clustering).
|
2301.10451 | Knowledge-augmented Graph Neural Networks with Concept-aware Attention
for Adverse Drug Event Detection | Adverse drug events (ADEs) are an important aspect of drug safety. Various
texts such as biomedical literature, drug reviews, and user posts on social
media and medical forums contain a wealth of information about ADEs. Recent
studies have applied word embedding and deep learning-based natural language
processing to automate ADE detection from text. However, they did not explore
incorporating explicit medical knowledge about drugs and adverse reactions or
the corresponding feature learning. This paper adopts the heterogeneous text
graph which describes relationships between documents, words and concepts,
augments it with medical knowledge from the Unified Medical Language System,
and proposes a concept-aware attention mechanism which learns features
differently for the different types of nodes in the graph. We further utilize
contextualized embeddings from pretrained language models and convolutional
graph neural networks for effective feature representation and relational
learning. Experiments on four public datasets show that our model achieves
performance competitive to the recent advances and the concept-aware attention
consistently outperforms other attention mechanisms. | Shaoxiong Ji, Ya Gao, Pekka Marttinen | 2023-01-25T08:01:45Z | http://arxiv.org/abs/2301.10451v3 | Knowledge-augmented Graph Neural Networks with Concept-aware Attention for Adverse Drug Event Detection
###### Abstract
Adverse drug events (ADEs) are an important aspect of drug safety. Various texts such as biomedical literature, drug reviews, and user posts on social media and medical forums contain a wealth of information about ADEs. Recent studies have applied word embedding and deep learning-based natural language processing to automate ADE detection from text. However, they did not explore incorporating explicit medical knowledge about drugs and adverse reactions or the corresponding feature learning. This paper adopts the heterogeneous text graph which describes relationships between documents, words and concepts, augments it with medical knowledge from the Unified Medical Language System, and proposes a concept-aware attention mechanism which learns features differently for the different types of nodes in the graph. We further utilize contextualized embeddings from pretrained language models and convolutional graph neural networks for effective feature representation and relational learning. Experiments on four public datasets show that our model achieves performance competitive to the recent advances and the concept-aware attention consistently outperforms other attention mechanisms.
_Keywords--_ Adverse Drug Event Detection, Graph Neural Networks, Knowledge Augmentation, Attention Mechanism
## 1 Introduction
Pharmacovigilance, i.e., drug safety monitoring, is a critical step in drug development (Wise et al., 2009). It detects adverse events and safety issues through post-market assessment; it therefore supports safe drug development and shows significant promise for better healthcare service delivery. A drug-related negative health outcome is referred to as an Adverse Drug Event (ADE) (Donaldson et al., 2000). Given the significant harm caused by ADEs, it is essential to detect them for pharmacovigilance purposes.
Clinical trials are the common way to detect ADEs. However, some ADEs are hard to investigate through clinical trials due to their long latency (Sultana et al., 2013). Additionally, regular trials cannot cover all aspects of drug use. Through the voluntary Post-marketing Drug Safety Surveillance System (Li et al., 2014), users report their experiences with drug usage and related safety issues. Nevertheless, the system suffers from several limitations such as incomplete reporting, under-reporting, and delayed reporting.
Recent advances in automated pharmacovigilance are based on collecting large amounts of text about adverse drug events from various platforms, such as medical forums (e.g., AskaPatient), biomedical publications, and social media, and training Natural Language Processing (NLP) models to automatically detect whether a given textual record contains information about adverse drug reactions, which is usually framed as a binary classification task. Text mentions of adverse drug events include a plethora of drug names and adverse reactions. Figure 1 shows an example annotated with concepts from the Unified Medical Language System (UMLS). To understand the drug information and corresponding adverse reactions, the NLP model needs to capture abundant medical knowledge and be able to do relational reasoning.
Early studies used rule-based methods (Xu et al., 2010; Sohn et al., 2014) with manually built rules or applied machine learning algorithms such as conditional random fields (Nikfarjam et al., 2015; Wang et al., 2022), support vector machine (Bollegala et al., 2018), and neural networks (Cocos et al., 2017; Huynh et al., 2016). These approaches can process text with manual feature engineering or enable automated feature learning with
deep learning methods, allowing for automated ADE detection. However, they are limited in capturing rich contextual information and relational reasoning.
Graphs are expressive and can represent various data. For example, nodes in a graph for a collection of texts can represent various entities, such as words, phrases, and documents, while edges represent relationships between them. Such text graphs together with graph neural networks are widely used in NLP applications such as sentiment classification and review rating (Yao et al., 2019; Lin et al., 2021; Zhang et al., 2020). Recently, graphs have been used for text representation with graph boosting (Shen et al., 2020) or contextualized graph embeddings (Gao et al., 2022) for ADE detection. Other works have applied knowledge graph embeddings and link prediction to ADE prediction in drug-effect knowledge-graphs (Kwak et al., 2020; Joshi et al., 2022). However, medical knowledge plays an important role in ADE detection from text, and so far there are no studies that incorporate medical knowledge in a text graph and learn concept-aware representations that inject medical concepts (e.g., the UMLS concepts as illustrated in Figure 1) into the text embeddings.
Our previous model CGEM (Gao et al., 2022) applied a heterogeneous text graph, embodying word, concept, and document relations for the ADE corpus, to learn contextualized graph embeddings for ADE detection. Here, we extend this work by showing how the graph can be augmented with medical knowledge from the UMLS metathesaurus (Bodenreider, 2004). In addition, we deploy a concept-aware self-attention that applies different feature learning to the various types of nodes. We name our model KnowCAGE (Knowledge-augmented Concept-Aware Graph Embeddings). Our contributions are summarized as follows:
* We introduce medical knowledge, i.e., the UMLS metathesaurus, to augment the contextualized graph embedding model for representation learning on drug adverse events.
* A concept-aware self-attention is devised to learn discriminative features for concept (from the medical knowledge), word, and document nodes.
* Experimental results on four public datasets from medical forums, biomedical publications and social media show that our approach outperforms recent advanced ADE detection models in most cases.
## 2 Related Work
Recent advances on adverse drug event detection use word embeddings and neural network models to extract text features and capture drug-effect interactions. Many studies deploy recurrent neural networks to capture sequential dependencies in text. For example, Cocos et al. (2017) utilized a Bidirectional Long Short-Term Memory (BiLSTM) network and Luo (2017) proposed to learn sentence- and segment-level representations based on LSTM. To process entity mentions and relations for ADE detection and extraction, pipeline-based systems (Dandala et al., 2019) and joint learning methods (Wei et al., 2020) are two typical approaches.
Several recent publications studied graph neural networks for ADE detection. Kwak et al. (2020) built a drug-disease graph to represent clinical data for adverse drug reaction detection. GAR (Shen et al., 2021) uses graph embedding-based methods and adversarial learning. CGEM (Gao et al., 2022) combines contextualized embeddings from pretrained language models with graph convolutional neural networks.
Some other studies also adopted other neural network architectures such as capsule networks and self-attention mechanism. Zhang et al. (2020) proposed the gated iterative capsule network (GICN) using CNN and a capsule network to extract the complete phrase information and deep semantic information. The gated iterative unit in the capsule network enables the clustering of features and captures contextual information. The attention mechanism prioritizes representation learning for the critical parts of a document by assigning
Figure 1: An example of a text mentioning an adverse drug event from Karimi et al. (2015). The recognition of drugs and adverse reactions requires medical knowledge and relational reasoning.
them higher weight scores. Ge et al. (2019) employed multi-head self-attention and Wunnava et al. (2020) developed a dual-attention mechanism with BiLSTM to capture semantic information in the sentence.
Another direction of related work is knowledge augmentation for deep learning models. Many publications adopt knowledge graphs to guide representation learning in various applications (Ji et al., 2022). For example, Ma et al. (2018) injected commonsense knowledge into a long short-term memory network, and Liang et al. (2022) enhanced the graph convolutional network with affective knowledge to improve aspect-based sentiment analysis. Knowledge injection is also used for other applications such as hate speech detection (Pamungkas et al., 2021), mental healthcare (Yang et al., 2022), and personality detection (Poria et al., 2013).
## 3 Methods
This section introduces the proposed graph-based model with knowledge augmentation, i.e., Knowledge-augmented Concept-Aware Graph Embeddings (KnowCAGE), as illustrated in Figure 2. The model consists of four components: 1) Knowledge-augmented Graph Construction, 2) Heterogeneous Graph Convolution, 3) Concept-aware Attention, and 4) Ensemble-based ADE classification layers. Following TextGCN (Yao et al., 2019), we construct a heterogeneous text graph, which contains three types of nodes: words, documents and concepts, and we augment it with medical knowledge from the Unified Medical Language System metathesaurus. A heterogeneous graph convolutional network is then used to encode the text graph and learn rich representations. We use the contextualized embeddings from pretrained BERT (Bidirectional Encoder Representations from Transformers) (Devlin et al., 2019) to represent the node features in the heterogeneous text graph. The adjacency matrix and feature matrix obtained from the embedding layers are inputs to graph neural network encoders which take into account the relationships and information between and within the nodes. Considering the different types of nodes, we use a concept-aware self-attention, inspired by entity-aware representation learning (Yamada et al., 2020), which treats the different types of nodes differently, allowing the most significant content to have the largest contribution to the final prediction. To boost the prediction of ADE even further, we follow the BertGCN model (Lin et al., 2021) and apply an ensemble classifier with contextualized embeddings on one hand and the graph networks on the other, and learn a weight coefficient to balance these two prediction branches.
### Knowledge-augmented Graph Construction
We first build the heterogeneous text graph for the whole document collection by using the external knowledge source - UMLS - to augment the word/document graph with concept information. Representing text in a heterogeneous graph can provide different perspectives for text encoding and improve ADE detection. In the UMLS metathesaurus, different words or phrases are assigned different Concept Unique Identifiers (CUI), where each CUI represents one concept class. Every concept class has an attribute "preferred name", which is a short description or a synonym of this concept. Our model uses UMLS to obtain the "preferred name" of each word in the dataset and adds the words in the "preferred name" to the graph as concept nodes. Therefore, the augmented graph also contains concept nodes in addition to the word and document nodes. The number
Figure 2: An illustration of the model architecture with knowledge-augmented graph embeddings and concept-aware representations
of total nodes \(n=n_{d}+n_{w}+n_{c}\), where \(n_{d}\), \(n_{w}\) and \(n_{c}\) are the numbers of documents, words and concepts, respectively. There are five types of edges, i.e., word-word, word-concept, document-concept, concept-concept and document-word edges. The weights of document-word and document-concept edges are calculated as the term frequency-inverse document frequency (TF-IDF), while the weights of the other edges are defined as the positive point-wise mutual information (PMI) of the respective words or concepts. Specifically, the weight between node \(i\) and node \(j\) is computed as:
\[\mathbf{A}_{ij}=\left\{\begin{array}{ll}\text{PMI}(i,j),&\text{PMI}>0;i,\text { j: word/concept}\\ \text{TF-IDF}_{\text{ij}},&\text{i: document, j: word/concept}\\ 0,&\text{otherwise}\end{array}\right.\]
We use pretrained contextualized embeddings from language models. Given the dimension of embeddings denoted as \(d\), the pooled output of contextualized document encoding are denoted as \(\mathbf{H}_{doc}\in\mathbb{R}^{n_{d}\times d}\). We initialize word and concept nodes with a zero matrix to get the initial feature matrix which is used as input to the graph neural network:
\[\mathbf{H}^{[0]}=\left(\begin{array}{c}\mathbf{H}_{doc}\\ \mathbf{0}\end{array}\right), \tag{1}\]
where \(\mathbf{H}^{[0]}\in\mathbb{R}^{(n_{d}+n_{w}+n_{c})\times d}\) and \([0]\) denotes the initial layer.
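A sketch of the positive-PMI edge-weight computation is shown below; the sliding-window size and the helper names are assumptions rather than the authors' exact settings, and the TF-IDF weights of the document edges can be obtained analogously (e.g., with scikit-learn's TfidfVectorizer).

```python
# Illustrative computation of positive-PMI weights between word/concept nodes,
# counting co-occurrences inside fixed-length sliding windows over the corpus.
import math
from collections import Counter
from itertools import combinations

def pmi_weights(token_lists, window=20):
    n_windows, single, pair = 0, Counter(), Counter()
    for tokens in token_lists:                # one token list per document
        for s in range(max(1, len(tokens) - window + 1)):
            win = set(tokens[s:s + window])
            n_windows += 1
            single.update(win)
            pair.update(frozenset(p) for p in combinations(sorted(win), 2))
    weights = {}
    for key, n_ij in pair.items():
        i, j = tuple(key)
        pmi = math.log(n_ij * n_windows / (single[i] * single[j]))
        if pmi > 0:                           # keep only positive PMI, as above
            weights[(i, j)] = pmi
    return weights
```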
### Heterogeneous Graph Convolution
We adopt graph neural networks over the heterogeneous text graph to learn complex relations between words, concepts, and documents. Specifically, given the initial input features \(\mathbf{H}^{[0]}\) obtained from pretrained language models and the adjacency matrix \(\mathbf{A}\), we update the representations via graph convolution. A forward pass of the \(i\)-th layer of a convolutional graph network can be denoted as:
\[\mathbf{H}^{[i+1]}=f\left(\mathbf{\hat{A}}\mathbf{H}^{[i]}\mathbf{W}^{[i]} \right), \tag{2}\]
where \(\mathbf{\hat{A}}\) is the normalized adjacency matrix, \(\mathbf{H}^{[i]}\) are the hidden representations of the \(i\)-th layer, \(\mathbf{W}^{[i]}\) is the weight matrix, and \(f(\cdot)\) is an activation function. The KnowCAGE framework can adopt various types of convolutional graph neural networks. Our experimental study chooses three representative models, i.e., Graph Convolutional Network (GCN) (Kipf and Welling, 2017), Graph Attention Network (GAT) (Velickovic et al., 2018), and Deep Graph Convolutional Neural Network (DGCNN) (Zhang et al., 2018). GCN is a spectral-based model with a fixed number of layers, where different weights are assigned to layers and the update of node features incorporates information from the node's neighbors. It employs convolutional architectures to get a localized first-order representation. Graph attention layers in GAT assign different attention scores to a node's distant neighbors and prioritize the importance of different types of nodes. DGCNN concatenates the hidden representations of each layer to capture rich substructure information and adopts a SortPooling layer to sort the node features.
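As an illustration, a minimal PyTorch sketch of one graph-convolution layer implementing Eq. (2) with the usual symmetric normalization is given below; it is a generic GCN layer, not the authors' exact implementation.

```python
# One heterogeneous graph-convolution layer: H' = f(A_hat H W), Eq. (2).
import torch
import torch.nn as nn

class GraphConvLayer(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.weight = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, A_hat, H):
        # H: (n_d + n_w + n_c, d) node features; A_hat: sparse (n, n).
        return torch.relu(torch.sparse.mm(A_hat, self.weight(H)))

def normalize_adjacency(A):
    # A_hat = D^{-1/2} (A + I) D^{-1/2}, the standard GCN normalization.
    A = A + torch.eye(A.shape[0])
    d = A.sum(dim=1).pow(-0.5)
    return (d.unsqueeze(1) * A * d.unsqueeze(0)).to_sparse()
```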
### Concept-aware Attention Mechanism
Different types of nodes have various impacts on the prediction of adverse drug events. Inspired by the contextualized entity representation learning from the knowledge supervision of knowledge bases (Yamada et al., 2020), we propose to use a concept-aware attention mechanism that distinguishes the types of nodes, especially the concept nodes, and better captures important information related to the positive or negative ADE classes.
Two types of nodes may not have the same impact on each other. Thus, we use different transformations for different types of nodes in the concept-aware attention mechanism in order to learn concept-aware attentive representations. We obtain key and value matrices \(\mathbf{K}\in\mathbb{R}^{l\times d_{h}}\) and \(\mathbf{V}\in\mathbb{R}^{l\times d_{h}}\) similarly to the key and value in the self-attention of the Transformer network (Vaswani et al., 2017). The concept-aware attention has nine different query matrices \(\mathbf{Q}\) for concept nodes \(c\), word nodes \(w\) and document nodes \(d\), i.e., \(\mathbf{Q}_{ww}\), \(\mathbf{Q}_{cc}\), \(\mathbf{Q}_{dd}\), \(\mathbf{Q}_{cw}\), \(\mathbf{Q}_{wc}\), \(\mathbf{Q}_{wd}\), \(\mathbf{Q}_{dw}\), \(\mathbf{Q}_{dc}\), and \(\mathbf{Q}_{cd}\in\mathbb{R}^{l\times d_{h}}\). Then, we use \(\mathbf{Q}\), \(\mathbf{K}\) and \(\mathbf{V}\) to compute the attention scores. For example, for the \(i\)-th document and \(j\)-th concept nodes, i.e., \(\mathbf{x}_{i}\) and \(\mathbf{x}_{j}\in\mathbb{R}^{d_{h}}\), we calculate the attention score as:
\[\alpha_{ij}=\text{Softmax}\left(\frac{(\mathbf{K}\mathbf{x}_{j})^{\top}\mathbf{ Q}_{cd}\mathbf{x}_{i}}{\sqrt{l}}\right) \tag{3}\]
The concept-aware representation \(\mathbf{h}_{i}\in\mathbb{R}^{l}\) for the \(i\)-th document is obtained as:
\[\mathbf{h}_{i}=\sum_{j=1}^{n_{c}}\alpha_{ij}\mathbf{V}\mathbf{x}_{j} \tag{4}\]
We can obtain the representations of word and concept nodes in the same way. These concept-aware representations are fed to the graph network as the node features in the next iteration of model updating.
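The following hedged PyTorch sketch illustrates Eqs. (3)-(4) for the document-over-concept case only; the remaining eight query matrices follow the same pattern, and the initialization details are assumptions.

```python
# Concept-aware attention: documents attend over concept nodes (Eqs. 3-4).
import math
import torch
import torch.nn as nn

class ConceptAwareAttention(nn.Module):
    def __init__(self, d_h, l):
        super().__init__()
        self.K = nn.Linear(d_h, l, bias=False)      # shared key projection
        self.V = nn.Linear(d_h, l, bias=False)      # shared value projection
        self.Q_cd = nn.Linear(d_h, l, bias=False)   # query for doc-over-concept
        self.l = l

    def forward(self, docs, concepts):
        # docs: (n_d, d_h), concepts: (n_c, d_h)
        scores = self.Q_cd(docs) @ self.K(concepts).T / math.sqrt(self.l)
        alpha = torch.softmax(scores, dim=-1)       # (n_d, n_c), Eq. 3
        return alpha @ self.V(concepts)             # (n_d, l),  Eq. 4
```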
### Classification Layers and Model Training
We apply two linear layers and a softmax function over the concept-aware document embeddings \(\mathbf{h}_{i}\) to compute the probability \(\mathbf{p}_{g}\) of each class, representing the presence or absence of ADE mentions in the document. Besides, the interpolation of the prediction probabilities of the two classifiers is adopted to combine the predictions of the graph-based modules and of the pretrained language model (Lin et al., 2021). We use a similar classification module to process the contextualized embeddings from the pretrained language model (the upper branch in Fig. 2) and denote the corresponding classification probabilities by \(\mathbf{p}_{c}\). A weight coefficient \(\lambda\in[0,1)\) is introduced to balance the results from graph-based encoding and contextualized models:
\[\mathbf{p}=\lambda\mathbf{p}_{g}+(1-\lambda)\mathbf{p}_{c}. \tag{5}\]
This interpolation strategy can also be viewed as a weighted ensemble of two classifiers.
ADE detection is a binary classification task and the classes are highly imbalanced in most datasets. To complicate the matter further, most datasets contain only a small number of samples, making downsampling to balance the classes inappropriate. This study applies a weighted binary cross-entropy loss function to alleviate this problem. The weighted loss function is denoted as:
\[\mathcal{L}=\sum_{i=1}^{N}[-w_{+}y_{i}\log(p_{i})-w_{-}(1-y_{i})\log(1-p_{i})], \tag{6}\]
where \(w_{+}=\frac{N_{1}}{N_{0}+N_{1}}\) and \(w_{-}=\frac{N_{0}}{N_{0}+N_{1}}\) are the weights for positive and negative samples, respectively, \(N_{0}\) and \(N_{1}\) are the numbers of negative and positive samples in the training set, and \(y_{i}\) is the ground-truth label of a document. The Adam optimizer (Kingma and Ba, 2015) is used for model optimization.
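A compact sketch of the ensemble prediction (Eq. 5) and the weighted loss (Eq. 6) is given below, assuming PyTorch tensors; it mirrors the equations directly rather than the authors' full training loop.

```python
# Ensemble interpolation (Eq. 5) and weighted binary cross-entropy (Eq. 6).
import torch

def ensemble_probs(p_g, p_c, lam=0.5):
    return lam * p_g + (1 - lam) * p_c            # Eq. 5

def weighted_bce(p, y, n_pos, n_neg):
    w_pos = n_pos / (n_pos + n_neg)               # w_+ as defined in Eq. 6
    w_neg = n_neg / (n_pos + n_neg)               # w_- as defined in Eq. 6
    eps = 1e-8                                    # numerical safety for log
    return torch.sum(-w_pos * y * torch.log(p + eps)
                     - w_neg * (1 - y) * torch.log(1 - p + eps))
```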
## 4 Experimental Setup
Our goal is to conduct experiments on four ADE datasets and answer the following research questions.
**RQ1:**: How does the proposed model perform in ADE detection on texts from various sources, compared to other methods?
**RQ2:**: How does the heterogeneous graph convolution with knowledge augmentation improve the accuracy of ADE detection?
**RQ3:**: Does the concept-aware attention improve the accuracy of the heterogeneous graph convolution to detect ADE?
**RQ4:**: What is the impact of pretraining domains and contextualized language representation on the performance of the method in ADE detection?
In this section we will describe the setup of the experiments and in the next section we will present the results of the experiments.
### Data and Preprocessing
We used four datasets from medical forums, biomedical publications and social media, as summarized in Table 1, for evaluation. We preprocess the data by removing stop words, punctuation, and numbers. For the data collected from Twitter, we use the tweet-preprocessor Python package 1 to remove URLs, emojis, and some reserved words for tweets.
Footnote 1: [https://pypi.org/project/tweet-preprocessor/](https://pypi.org/project/tweet-preprocessor/)
**TwiMed (TwiMed-Twitter and TwiMed-Pub) 2** The TwiMed dataset (Alvaro et al., 2017) includes two sets collected from different domains, i.e., TwiMed-Twitter from social media and TwiMed-Pub from biomedical publications. In each document, people from various backgrounds annotate diseases, symptoms, drugs, and their
| Dataset | Documents | ADE | non-ADE |
| --- | --- | --- | --- |
| SMM4H | 2,418 | 1,209 | 1,209 |
| TwiMed-Pub | 1,000 | 191 | 809 |
| TwiMed-Twitter | 625 | 232 | 393 |
| CADEC | 7,474 | 2,478 | 4,996 |

Table 1: A statistical summary of the datasets
relationships. A document annotated as outcome-negative is regarded as an adverse drug event. Models are tested using 10-fold cross-validation.
**SMM4H**3 This dataset from the Social Media Mining for Health Applications (#SMM4H) shared tasks (Magge et al., 2021) is collected from Twitter and contains descriptions of drugs and diseases. We use the official validation set to evaluate the model performance for a fair comparison with the baseline models developed in the SMM4H shared task.
Footnote 3: [https://healthlanguageprocessing.org/smm4h-2021/task-1/](https://healthlanguageprocessing.org/smm4h-2021/task-1/)
**CADEC**4 The CSIRO Adverse Drug Event Corpus contains patient-reported posts from a medical forum called AskaPatient (Karimi et al., 2015). It includes extensive annotations on drugs, side effects, symptoms, and diseases. We use 10-fold cross-validation to evaluate the model's performance.
Footnote 4: [https://data.csiro.au/collection/csiro:10948](https://data.csiro.au/collection/csiro:10948)
### Baselines and Evaluation
We compare the performance of our method with two sets of baseline models: 1) models designed for ADE detection and 2) pretrained contextualized models, and report Precision (P), Recall (R), and F1-score.
Customized models for ADE detection are as follows. **CNN-Transfer**(Li et al., 2020) (CNN-T for short) used a convolutional neural network (CNN) for feature extraction and exploited adversarial transfer learning to boost the performance. **HTR-MSA**(Wu et al., 2018) adopted CNN and Bidirectional Long Short-Term Memory (BiLSTM) networks to learn hierarchical text representations for tweets. It also employed multi-head self-attention. **ATL**(Li et al., 2020) applied adversarial transfer learning to ADE detection, exploiting corpus-shared features. **MSAM**(Zhang et al., 2019) used the BiLSTM network to learn semantic representations of sentences and the multi-hop self-attention mechanism to boost the classification performance. **IAN**(Alimova and Solovyev, 2018) interactively learned attention representations through separate modeling of targets and context. **ANNSA**(Zhang et al., 2021) proposed a sentiment-aware attention mechanism to obtain word-level sentiment features and used adversarial training to improve the generalization ability. **CGEM**(Gao et al., 2022), a predecessor of our work, developed a contextualized graph-based model that utilizes contextualized language models and graph neural networks, and also devised an attention classifier to improve the performance.
The previously mentioned ADE detection baselines did not use the SMM4H dataset in their experiments. Therefore, we compare our model with pretrained language models. We use the base version of pretrained models for a fair comparison. Yaseen and Langer (2021) combined the LSTM network with the BERT text encoder (Devlin et al., 2019) for ADE detection. We denote it as BERT-LSTM. Pimpalkhute et al. (2021) introduced a data augmentation method and adopted the RoBERTa text encoder with additional classification layers (Liu et al., 2019) for ADE detection, denoted as RoBERTa-aug. Kayastha et al. (2021) utilized the domain-specific BERTweet (Nguyen et al., 2020) that is pretrained with English Tweets using the same architecture as BERT-base and classified ADE with a single-layer BiLSTM network, denoted as BERTweet-LSTM.
### Hyper-parameters
Table 2 shows the hyper-parameters we tuned in our experiments, where LR is the learning rate. When the number of iterations exceeds a certain threshold, the learning rate scheduler decays the learning rate by the parameter \(\gamma\). In our experiment, we set \(\gamma\) and the iteration milestone to 0.1 and 30, respectively.
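Assuming a PyTorch training setup, this schedule corresponds to a standard MultiStepLR, as sketched below with the stated values; the optimizer and model here are only placeholders.

```python
# Learning-rate schedule described above: decay by gamma = 0.1 once the
# iteration count passes the milestone of 30.
import torch

model = torch.nn.Linear(768, 2)  # placeholder classifier head
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer, milestones=[30], gamma=0.1)

for step in range(100):
    # ... forward pass, loss computation, and backward pass would go here ...
    optimizer.step()
    scheduler.step()
```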
## 5 Results
### Comparison with Baselines in Different Domains (RQ1)
We compare our model's predictive performance with baseline models on the TwiMed (Table 3), SMM4H (Table 4) and CADEC (Table 5) datasets. The results of the baselines are taken directly from the original papers.
| Hyper-parameters | Choices |
| --- | --- |
| LR for text encoder | \(2e^{-5}\), \(3e^{-5}\), \(1e^{-4}\) |
| LR for classifier | \(1e^{-4}\), \(5e^{-4}\), \(1e^{-3}\) |
| LR for graph-based models | \(1e^{-3}\), \(3e^{-3}\), \(5e^{-3}\) |
| Hidden dimension for GNN | 200, 300, 400 |
| Weight coefficient \(\lambda\) | 0, 0.1, 0.3, 0.5, 0.7, 0.9 |

Table 2: Choices of hyper-parameters
However, some of the baselines did not conduct experiments on all four datasets. Our proposed model outperforms the baseline models in most cases, demonstrating its effectiveness for ADE detection on texts from various domains (RQ1). Our model can better balance precision and recall scores, leading to higher F1 scores. Table 5 shows that our model consistently outperforms the baselines. Our proposed model can capture rich features to identify a document containing ADEs.
### Usefulness of the Knowledge Augmented Graph Convolution (RQ2)
Here we investigate in more detail how the heterogeneous graph convolution with knowledge augmentation can help with ADE detection (RQ2). In Table 3, most models such as HTR-MSA, IAN, CNN-T and ATL perform worse on TwiMed-Twitter dataset, showing that it is difficult to process informal tweets with colloquial language. However, the graph-based encoder in our model helps in effectively encoding information from the informal text, resulting in a better ability to capture the relationships between different entities, improving performance in most cases. Table 4 compares our model with several pretrained BERT-based models. Our model
| Models | P (%) | R (%) | F1 (%) |
| --- | --- | --- | --- |
| BERTweet-LSTM (Kayastha et al., 2021) | 81.2 | 86.2 | 83.6 |
| RoBERTa-aug (Pimpalkhute et al., 2021) | 82.1 | 85.7 | 84.3 |
| BERT-LSTM (Yaseen and Langer, 2021) | 77.0 | 72.0 | 74.0 |
| CGEM (Gao et al., 2022) | **86.7** | 93.4 | 89.9 |
| KnowCAGE (GCN) | 86.6 | 93.1 | 89.7 |
| KnowCAGE (GAT) | 85.2 | **96.8** | 90.6 |
| KnowCAGE (DGCNN) | 86.6 | 95.9 | **91.0** |

Table 4: Results on the SMM4H dataset. Scores are reported for the best performing run, following the setup of the baselines. The results of the baselines are from the corresponding publications. **Bold** text indicates the best performance.
| Models | TwiMed-Pub P (%) | TwiMed-Pub R (%) | TwiMed-Pub F1 (%) | TwiMed-Twitter P (%) | TwiMed-Twitter R (%) | TwiMed-Twitter F1 (%) |
| --- | --- | --- | --- | --- | --- | --- |
| HTR-MSA (Wu et al., 2018) | 75.0 | 66.0 | 70.2 | 60.7 | 61.7 | 61.2 |
| IAN (Alimova and Solovyev, 2018) | 87.8 | 73.8 | 79.2 | 83.6 | 81.3 | 82.4 |
| CNN-T (Li et al., 2020) | 81.3 | 63.9 | 71.6 | 61.8 | 60.0 | 60.9 |
| MSAM (Zhang et al., 2019) | 85.8 | 85.2 | 85.3 | 74.8 | **85.6** | 79.9 |
| ATL (Li et al., 2020) | 81.5 | 67.0 | 73.4 | 63.7 | 63.4 | 63.5 |
| CGEM (Gao et al., 2022) | 88.4 | 85.0 | 86.7 | 84.2 | 83.7 | 83.9 |
| KnowCAGE (GCN) | 88.8 | **85.8** | **87.3** | 84.1 | 84.0 | 84.0 |
| KnowCAGE (GAT) | **89.6** | 83.4 | 86.4 | **84.8** | 84.1 | **84.4** |
| KnowCAGE (DGCNN) | 88.7 | 83.7 | 86.1 | 83.5 | 84.1 | 83.8 |

Table 3: Results for the two TwiMed datasets, TwiMed-Pub and TwiMed-Twitter. Scores are the mean of 10-fold cross-validation, following the setup of the baselines. The results of the baselines are from the corresponding publications. **Bold** text indicates the best performance.
| Models | P (%) | R (%) | F1 (%) |
| --- | --- | --- | --- |
| HTR-MSA (Wu et al., 2018) | 81.8 | 77.6 | 79.7 |
| CNN-T (Li et al., 2020) | 84.8 | 79.4 | 82.0 |
| ATL (Li et al., 2020) | 84.3 | 81.3 | 82.8 |
| ANNSA (Zhang et al., 2021) | 82.7 | 83.5 | 83.1 |
| KnowCAGE (GCN) | 86.2 | 90.2 | 88.2 |
| KnowCAGE (GAT) | 83.9 | 92.0 | 87.8 |
| KnowCAGE (DGCNN) | **86.1** | **92.9** | **89.4** |

Table 5: Results for the CADEC dataset. Scores are the mean of 10-fold cross-validation, following the setup of the baselines. The results of the baselines are from the corresponding publications. **Bold** text indicates the best performance.
differs from the pretrained models by additionally employing GNN architectures on top of the pretrained embeddings, and the results suggest the GNN can further improve the model's performance on this task. Compared with another graph-based model, CGEM, our method applies knowledge augmentation to incorporate concept information into the graph learning, which is also seen to improve the performance of ADE detection in most cases.
Finally, we examine three graph architectures to study which one is most suitable for the ADE detection task. For datasets containing more training samples (i.e., the SMM4H and CADEC datasets), DGCNN performs better. When the number of training samples is small, GCN and GAT achieve better performance. Hence, we conclude that the graph-based encoding method improves the performance. However, we also notice that none of the examined graph architectures consistently outperforms the others on all three datasets from different domains.
### Effectiveness of the Concept-Aware Attention (RQ3)
We examine the effectiveness of the concept-aware attention by comparing it with two other attention mechanisms, i.e., a simple dot-product attention (Gao et al., 2022) and structured self-attention (Lin et al., 2017). Table 6 shows that the concept-aware attention consistently achieves the best F1 score on the four datasets. The concept-aware attention distinguishes the different types of nodes in the heterogeneous graph and makes the overall model better utilize the knowledge augmentation from the UMLS, which answers the third research question (RQ3).
### Effect of Pretraining Domains (RQ4)
We use four pretrained contextualized language models to obtain node embeddings. They use the BERT base model architecture but are pretrained with different strategies or corpora collected from different domains. The pretrained language models include: (1) RoBERTa (Liu et al., 2019), a BERT model optimized with more robust approaches; (2) BioBERT (Lee et al., 2020), a domain-specific model trained with biomedical corpora including PubMed abstracts and PubMed Central full-text articles; (3) ClinicalBERT (Alsentzer et al., 2019), a domain-specific model trained on clinical notes from the MIMIC-III database (Johnson et al., 2016); and (4) PubMedBERT (Gu et al., 2021), a domain-specific model trained from scratch using abstracts from the PubMed biomedical literature. Figure 3 shows that RoBERTa performs slightly worse than the other models on the TwiMed-Pub dataset. For the other three datasets, RoBERTa performs better than its counterparts. One explanation is the discrepancy between different subdomains. SMM4H, TwiMed-Twitter and CADEC, from social media and forums, contain more informal social text and non-medical terms, while ClinicalBERT, BioBERT and PubMedBERT are pretrained with clinical notes or biomedical articles. Hence, we conclude that the choice of a specific pretrained model is critical for the accuracy of the ADE detection task. Our findings show that domain-specific pretraining can improve the performance to some extent, and that RoBERTa can be the first choice when processing informal text, which is the first answer to RQ4.
| Datasets | dot-product P (%) | dot-product R (%) | dot-product F1 (%) | structured P (%) | structured R (%) | structured F1 (%) | concept-aware P (%) | concept-aware R (%) | concept-aware F1 (%) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| SMM4H | 83.4 | 97.8 | 90.0 | 84.4 | 94.9 | 89.3 | 86.6 | 95.9 | **91.0** |
| TwiMed-Pub | 87.9 | 84.5 | 86.2 | 88.9 | 82.9 | 85.8 | 88.8 | 85.8 | **87.3** |
| TwiMed-Twitter | 84.5 | 82.2 | 83.4 | 83.0 | 81.8 | 82.4 | 84.8 | 84.1 | **84.4** |
| CADEC | 83.8 | 91.8 | 87.6 | 83.5 | 89.3 | 86.3 | 86.1 | 92.9 | **89.4** |

Table 6: Comparison of the choices of attention mechanisms
Figure 3: The effect of contextualized text embeddings pretrained from different domains and with different pretraining strategies.
### Effect of Weight Coefficient (RQ4)
This section studies the weight coefficient \(\lambda\) that balances the pretrained embedding-based and GNN-based classifiers, and answers how the contextualized language representation affects the performance of ADE detection (RQ4). When \(\lambda\) is zero, only the pretrained embedding-based classifier is used. Figure 4 shows that the F1 score first increases and then drops to some extent. In terms of the F1 score, the best choices of \(\lambda\) for the four datasets are 0.5 (SMM4H), 0.7 (TwiMed-Pub), 0.7 (TwiMed-Twitter) and 0.3 (CADEC). This study reveals that the combination of pretrained embeddings and graph learning on the heterogeneous graph boosts the performance, which is the second answer to RQ4.
## 6 Conclusion
The automated detection of adverse drug events from social media content or biomedical literature requires the model to encode text information and capture the relations between drugs and adverse effects efficiently. This paper utilizes knowledge-augmented contextualized graph embeddings to learn contextual information and capture relations for ADE detection. We equip different graph convolutional networks with pretrained language representations over the knowledge-augmented heterogeneous text graph and develop a concept-aware attention to optimally process the different types of nodes in the graph. Comparing our model with other baseline methods, experimental results show that graph-based embeddings incorporating concept information from the UMLS can inject medical knowledge into the model and that the concept-aware attention can learn richer concept-aware representations, leading to better detection performance.
## Acknowledgements
We acknowledge the computational resources provided by the Aalto Science-IT project and CSC - IT Center for Science, Finland. This work was supported by the Academy of Finland (Flagship programme: Finnish Center for Artificial Intelligence FCAI, and grants 315896, 336033) and EU H2020 (grant 101016775). We thank Volker Tresp, Zhen Han, Ruotong Liao and Zhiliang Wu for valuable discussions.
|
2310.06572 | Deep Learning reconstruction with uncertainty estimation for $\gamma$
photon interaction in fast scintillator detectors | This article presents a physics-informed deep learning method for the
quantitative estimation of the spatial coordinates of gamma interactions within
a monolithic scintillator, with a focus on Positron Emission Tomography (PET)
imaging. A Density Neural Network approach is designed to estimate the
2-dimensional gamma photon interaction coordinates in a fast lead tungstate
(PbWO4) monolithic scintillator detector. We introduce a custom loss function
to estimate the inherent uncertainties associated with the reconstruction
process and to incorporate the physical constraints of the detector.
This unique combination allows for more robust and reliable position
estimations and the obtained results demonstrate the effectiveness of the
proposed approach and highlight the significant benefits of the uncertainty
estimation. We discuss its potential impact on improving PET imaging quality
and show how the results can be used to improve the exploitation of the model,
to bring benefits to the application and how to evaluate the validity of the
given prediction and the associated uncertainties. Importantly, our proposed
methodology extends beyond this specific use case, as it can be generalized to
other applications beyond PET imaging. | Geoffrey Daniel, Mohamed Bahi Yahiaoui, Claude Comtat, Sebastien Jan, Olga Kochebina, Jean-Marc Martinez, Viktoriya Sergeyeva, Viatcheslav Sharyy, Chi-Hsun Sung, Dominique Yvon | 2023-10-10T12:31:29Z | http://arxiv.org/abs/2310.06572v1 | Deep Learning reconstruction with uncertainty estimation for \(\gamma\) photon interaction in fast scintillator detectors
###### Abstract
This article presents a physics-informed deep learning method for the quantitative estimation of the spatial coordinates of gamma interactions within a monolithic scintillator, with a focus on Positron Emission Tomography (PET) imaging. A Density Neural Network approach is designed to estimate the 2-dimensional gamma photon interaction coordinates in a fast lead tungstate (PbWO\({}_{4}\)) monolithic scintillator detector. We introduce a custom loss function to estimate the inherent uncertainties associated with the reconstruction process and to incorporate the physical constraints of the detector.
This unique combination allows for more robust and reliable position estimations, and the obtained results demonstrate the effectiveness of the proposed approach and highlight the significant benefits of the uncertainty estimation. We discuss its potential impact on improving PET imaging quality and show how the results can be used to improve the exploitation of the model, to bring benefits to the application, and how to evaluate the validity of a given prediction and the associated uncertainties. Importantly, our proposed methodology extends beyond this specific use case, as it can be generalized to other applications beyond PET imaging.
keywords: Deep Learning, Neural Networks, Uncertainty quantification, Event reconstruction algorithms, Gamma detector, PET Imaging

Footnote †: journal: Engineering Applications of Artificial Intelligence
## 1 Introduction
Gamma photon detection is used in numerous industrial, medical and security applications. It is often based on scintillating crystals coupled with a light collection and readout system. The scintillator is either pixelated, consisting of an array of small individual crystals, or
continuous, made of a large monolithic block. The choice between the two technologies is often based on a trade-off between sensitivity and resolution performances. For pixelated detectors, the spatial localization of the detection is provided by the crystal impacted by the gamma photon, whereas for continuous detectors, dedicated algorithms must be used to derive spatial information. Several algorithms have been proposed, based either on prior knowledge of the physics of detection or on a machine learning approach [18]. The main objective of this work is the development of a physics-informed deep learning method for a quantitative estimation of the spatial coordinates of the gamma interaction within a monolithic scintillator, including uncertainties. The application framework is nuclear medicine imaging, more specifically the detection of 511 keV gamma photons in Positron Emission Tomography (PET).
PET imaging is a powerful _in vivo_ functional imaging modality mainly used in oncology, neurology and cardiology. It is based on the administration to the patient of a biomarker labeled with a radionuclide that decays through positron emission, followed by the detection in coincidence, outside the body, of pairs of 511 keV gamma photons resulting from the annihilation of the emitted positrons with electrons of the surrounding media. The data acquisition process is followed by the tomographic reconstruction of a three-dimensional image of the biomarker distribution within the body. The quality of the reconstructed image highly depends on the performances of the gamma photon detectors in terms of sensitivity, spatial resolution, and temporal resolution. The detector spatial resolution has a direct impact on the contrast recovery of small structures in the image. A higher detection efficiency translates into a higher number of detected coincidences, resulting in a better signal-to-noise ratio (SNR) in the PET image. The improvement of the coincidence resolving time (CRT), characterizing the capability of a pair of detectors to resolve the difference between the times of interaction of the two 511 keV gamma photons detected in coincidence, also helps increase the SNR in the image [23]. This principle is referred to as time-of-flight (ToF) PET.
State-of-the-art clinical PET systems are based on pixelated detectors made of LSO (lutetium oxyorthosilicate) or LYSO (lutetium-yttrium oxyorthosilicate) crystals, with a pixel pitch between 3.5 and 5 mm. These PET systems have, at best, a CRT of 210 ps [26], an intrinsic spatial resolution in the reconstructed image of 3.5 mm [26] (about 3 mm at the detector level), and an absolute sensitivity of \(\sim\) 20 counts/sec/kBq (i.e. \(\sim\) 2%) [15]. Major efforts are being made worldwide in nuclear instrumentation research groups to improve these parameters, in particular to decrease the CRT below 100 ps, ideally down to 10 ps [20]. There is a trend toward the development of PET detectors based on monolithic crystals for their higher sensitivity, since there are no intercrystal gaps [14]. Based on the scintillation light distribution readout, dedicated neural networks have recently been proposed to provide a single 3-dimensional gamma photon interaction position within the monolithic crystal (see, for example [5; 8; 12; 16; 18]). All these studies demonstrate the benefit of using a machine learning approach for an accurate position estimation.
In this study, we address the reconstruction of the 2-dimensional gamma photon interaction coordinates for a fast lead tungstate (PbWO\({}_{4}\)) monolithic scintillator detector [27], a task that is not straightforward due to the specificity of our acquisition system, which is not a conventional pixelated photodetector. Our system indeed relies on the use
of a micro-channel plate photomultiplier tube (MCP-PMT), which necessitates a dedicated process to reconstruct the gamma photon interaction parameters from the acquired signal, as we describe in this paper. Moreover, we aim not only to perform the reconstruction but also to associate uncertainties with it. We exploit and validate these uncertainties to assess the reliability of the network predictions. This approach is new in this field, and the methodology we propose can be extended to other applications in signal processing or sensor data analysis. In the future, these spatial coordinate uncertainties could be used during the tomographic reconstruction and potentially improve the quality of the PET image.
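To make the idea of a density network concrete, the sketch below shows a generic two-coordinate Gaussian output head trained with a negative log-likelihood, assuming PyTorch. This is not the authors' custom loss, which additionally encodes the physical constraints of the detector (described later in the paper); it only illustrates how a network can predict positions together with per-event uncertainties.

```python
# Generic density-network head: predict (x, y) and a per-axis log-variance,
# and train with the Gaussian negative log-likelihood.
import torch
import torch.nn as nn

class DensityHead(nn.Module):
    def __init__(self, in_dim):
        super().__init__()
        self.mu = nn.Linear(in_dim, 2)        # predicted (x, y) position
        self.log_var = nn.Linear(in_dim, 2)   # predicted log-variance per axis

    def forward(self, h):
        return self.mu(h), self.log_var(h)

def gaussian_nll(mu, log_var, target):
    # -log N(target; mu, sigma^2), summed over the two coordinates.
    nll = 0.5 * (log_var + (target - mu) ** 2 / torch.exp(log_var))
    return nll.sum(dim=-1).mean()
```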
The paper is organized as follows. In section 2, we describe the gamma photon detector based on PbWO\({}_{4}\), the Monte Carlo simulation used to generate the training and testing datasets, and the preprocessing of the detector raw data. Section 3 focuses on the methodology of the deep learning approach used for this study, including the description of a baseline method for performance comparisons and the definition and the architecture of the selected Density Neural Network. Results are presented in section 4. These results and the methodology are discussed in section 5.
Figure 1: **Left:** Schematic diagram of the ClearMind detection module. A 511 keV gamma-ray interaction in the crystal produces scintillation and Cherenkov photons that are converted by the photocathode to photoelectrons. These photoelectrons are then multiplied by the MCP-PMT and induce signals on the transmission lines (TLs). Signals from the left and right ends of each TL are amplified by 40 dB amplifiers and digitized by a SAMPIC module. **Right:** Transmission-line Printed Circuit Board (PCB). The \(x\) and \(y\) axes correspond to the coordinate system that we use to locate the interaction position.
## 2 Materials
### Detector description
The ClearMind gamma detector (Figure 1) is composed of an MCP-PMT sealed by a monolithic PbWO\({}_{4}\) crystal, acting both as the gamma conversion crystal and as the optical window of the MCP-PMT. A high quantum efficiency photoelectric layer is deposited on its inner face. The direct deposition of a photocathode with a refraction index superior to that of the PbWO\({}_{4}\) crystal allows us to avoid total reflection at the crystal/photocathode interface, thus maximizing the photon collection efficiency of the module [27]. The use of this "scintronic" crystal as the entrance window of an MCP-PMT makes it possible to optimize the time resolution, thanks to the excellent electron transit time spread (\(\sim\)60 ps FWHM) to the detection anodes provided by this type of photodetector.
The PbWO\({}_{4}\) crystal, homogeneously doped, has a surface of \(59\ \mathrm{mm}\times 59\ \mathrm{mm}\), a thickness of 5 mm, and is provided by CRYTUR [9]. The photocathode deposition and the integration of the device into an MCP-PMT structure are handled by the PHOTEK company, based on its MAPMT-253 design [21]. We developed a signal readout system for this device using 32 transmission lines, as shown in Fig. 1 [10]. The signals are read out at both ends of the transmission lines, amplified, and then sampled by a SAMPIC WaveShape Recorder [7].
Typically, a 511 keV energy deposit in the crystal produces 185 optical photons, emitted mostly isotropically. Out of these, \(\sim 20\%\) are collected by the \(53\ \mathrm{mm}\times 53\ \mathrm{mm}\) photocathode and generate photoelectrons that are collected and amplified by the MCP-PMT photodetector.
Many processes involved in the signal formation are random in nature. For example, the photon direction and time of production, the photoelectron production probability, the gain of the Micro-Channel Plate used for electron multiplication, the transit time of the electrons propagating through the MCP-PMT, and the noise of the readout amplifiers are best described using parametrized random variables. These are necessary to describe the pulse shapes produced by a single photoelectron (SPE).
Each SPE induces a signal on typically three readout lines. Thus the typical 30 SPE signals pile up at the output of the transmission lines (Figure 2) and build the event signal registered by the SAMPIC module.
### Simulation
In order to create a sufficiently large and unbiased database to train and test our reconstruction algorithms, we developed a detailed simulation of the ClearMind detector [24]. The knowledge of the hidden physical processes available in Monte Carlo simulations is necessary to provide the ground truth (target) for the training of the Machine Learning models. It is also very convenient for assessing the intrinsic performance of our algorithms.
This simulation is based on the Gate v9.0 [17; 22] / Geant4 v7.0 [2; 3; 4] software, allowing us to simulate in full detail the interaction of the particle with matter as well as optical photon generation and tracking. Furthermore, we have developed specialized software to simulate the photodetector and the analog and digital electronic components. Necessary parameters have
been extracted from dedicated measurements [10; 11]. This simulation includes the following main parts of the detector response.
1. The gamma interaction in the crystal accounts for three processes: photoelectric conversion, Compton scattering and Rayleigh diffusion. The first two processes produce relativistic electrons that emit visible photons through two mechanisms: Cherenkov radiation and scintillation (\(\sim\)20 and \(\sim\)165 photons for 511 keV \(\gamma\)-quanta, respectively).
2. Each optical photon is propagated individually by the simulation program. During the propagation all main physical effects are taken into account: photon absorption inside the PbWO\({}_{4}\) crystal, reflection or absorption at the crystal borders for the different types of crystal surface (polished, ground, absorbing), and escape of photons from the crystal into the air.
3. Photocathode simulation includes the Fresnel reflection of visible photons at the photocathode boundaries, absorption of photons by the photocathode and extraction of the generated photoelectrons as a function of the photon wavelength. As a result we compute, assuming a photocathode of nominal efficiency, that on average 30 photoelectrons are produced for a 511 keV \(\gamma\)-ray photoelectric conversion in the crystal, and that 75% of events contain at least one Cherenkov photon converted into a photoelectron.
4. We then simulate the propagation and the multiplication of individual photoelectrons generated by the photocathode in the MCP-PMT and parametrize the main PMT response features: time response, PMT gain and gain fluctuation, signal sharing between different output anodes.
Figure 2: Set of pulses as registered by the SAMPIC waveshape recorder for a 511 keV energy deposit. For clarity, only the pulses registered on one side of the transmission lines are shown (half of the set).
5. Finally, we simulate the signal readout through the transmission lines with realistic signal shapes, taking into account the possible overlap of several photoelectron pulses, electronics noise and digitization sampling.
Most of the simulation parameters are adjusted to the results obtained by the characterization of the first prototype using a pulsed laser in the single-photon regime. More details about the simulation can be found elsewhere [24, 25].
### Waveform preprocessing and input data shaping
Depending on the energy deposited in the crystal, the SAMPIC module records from 2 to 64 pulse shapes, typically about 30 for a deposited energy of 511 keV. This corresponds to about 10 kBytes of raw data per event. This volume of data is to be compared with the volume of the quantities to be reconstructed, namely the properties of the gamma interaction in the crystal (3D position, time, deposited energy, interaction multiplicity): typically 25 bytes per event. The acquired data are therefore redundant, but also intricate. The parameters to be reconstructed are encoded in a complex way and are mixed within the pulse shapes.
Thus reconstructing the properties of the gamma interaction in the crystal is a complex task. We first use our knowledge of the physics of the detector to calculate from the raw data a set of statistical variables, so-called "parameter observables", highly correlated to the parameters of the gamma-ray interaction to be reconstructed, as well as a second set of "bias observables" which monitor the known undesirable instrumental effects (saturation of the acquisition electronics, edge effects, etc.). The data volume of the observables is 100 bytes per event. These observables are then used as inputs of shallow, fully connected neural networks, whose training takes only a few minutes. The development cycle is fast, at the cost of a loss of information that is difficult to anticipate.
In the following paragraphs, we explain the parameter observables developed to correlate with the features targeted by the neural networks presented in this paper: the gamma interaction position \(x_{\rm pos}\) (_position along the transmission lines_) and \(y_{\rm pos}\) (_perpendicular to the transmission lines_).
First, for each transmission line \(l\) that triggered the SAMPIC acquisition, the digitized pulse shapes \(F_{l,{\rm Left}}(t_{j})\) and \(F_{l,{\rm Right}}(t_{j})\) are acquired at times \(t_{j}\) at the Left and Right ends of the line, where the index \(j\) corresponds to the sampling time. Examples of acquired pulse shapes are shown in Figure 3.
We first calculate the electric charges collected (integral of the pulse shape current over time) at both ends of the 32 transmission lines, \(C_{l}\).
We compute two interaction observables expected to be correlated to \(y_{\rm pos}\):
* We select the line with the largest collected charge and its two neighbors. We fit a parabola to these three charge values; the position of the maximum of this parabola is the first observable.
* The second observable is the median of the distribution of line numbers weighted by the charge collected on these lines: \(\mathrm{Med}_{L}=\mathrm{median}(l,C_{l})\). It is common that one line carries a large fraction of the total charge collected over the 32 readout lines registered in an event. In order to extract as much information as possible from the surrounding line charge values, we developed an "upgraded" median algorithm documented in section 7. This is the "median" algorithm we will use for all the observable calculations.
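As an illustration, a minimal weighted-median sketch is given below; the refined version that interpolates between neighbouring values is the C++ routine of section 7. The array names are hypothetical.

```python
import numpy as np

def weighted_median(values, weights):
    """Plain weighted median: value at which the cumulative weight
    reaches half of the total weight (no neighbour interpolation)."""
    order = np.argsort(values)
    v, w = np.asarray(values)[order], np.asarray(weights)[order]
    cum = np.cumsum(w)
    return v[np.searchsorted(cum, 0.5 * cum[-1])]

# Med_L observable: median of line numbers l weighted by the line charges C_l
# med_L = weighted_median(line_numbers, line_charges)
```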
Quantifying observables expected to correlate to \(x_{\mathrm{pos}}\) is more complex. It turns out that, on some lines, we can identify a few pulses separated in time in the recorded pulse shapes. We denote each identified pulse by the index \(p\). When this happens, we quantify the pulse detection times (\(T_{l,p,\mathrm{Left}}\) and \(T_{l,p,\mathrm{Right}}\)) and the total charge \(C_{l,p}\) of each of these pulses. The time difference \(\Delta t_{l,p}=T_{l,p,\mathrm{Left}}-T_{l,p,\mathrm{Right}}\) is correlated to the position along the line of the pulse induced by this photoelectron. We then calculate the three following observables:
* the _median_ of the \(\mathrm{Med}_{\Delta t}=\mathrm{median}(\Delta t_{l,p},C_{l,p})\) distribution (all lines and pulses), weighted by the integrated charge of each pulse. The details of the algorithm for the weighted median are given in Section 7;
* the _mean_ of the same distribution \(\mathrm{Mean}_{\Delta t}=\mathrm{mean}(\Delta t_{l,p},C_{l,p})\);
* \(\Delta t_{\mathrm{lmax}}\), the time difference \(\Delta t\) of the largest pulse of the line (greatest charge collected).
Figure 3: Waveforms registered on one triggered line \(l\). Red and green lines are the left \(F_{l,\mathrm{Left}}(t_{j})\) and right \(F_{l,\mathrm{Right}}(t_{j})\), time shifted, registered pulse shapes. The black line shows the time difference waveform \(F_{l,\mathrm{Left}}(t_{j})-F_{l,\mathrm{Right}}(t_{j})\). We identify on this line three pulse clusters, at 4.6 ns, 6 ns and 8.5 ns. For each of them, the time difference curve shows a bipolar shape, correlated to the position of each photoelectron charge induction along the readout line.
We also used the shape of the digitized pulses \(F_{l,p,\text{Left}}(t_{j})\) and \(F_{l,p,\text{Right}}(t_{j})\). The differences \(\text{Diff}_{l,p}(t_{j})=F_{l,p,\text{Left}}(t_{j})-F_{l,p,\text{Right}}(t_{j})\) show bipolar shapes depending on the position of the pulse injection along the line. We integrate the **F**irst and **S**econd components of the bipolar shape, \(\text{Int}_{l,p,F}\) and \(\text{Int}_{l,p,S}\), and then compute \(\text{Bipol}_{l,p}=(\text{Int}_{l,p,F}-\text{Int}_{l,p,S})/C_{l,p}\) for each pulse. The saved observables are then:
* the _weighted median_ of the distribution \(\text{Med}_{\text{Bipol}}=\text{median}(\text{Bipol}_{l,p})\) (see section 7 for the details)
* the _mean_ of the distribution \(\text{Mean}_{\text{Bipol}}=\text{mean}(\text{Bipol}_{l,p})\).
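For concreteness, a possible computation of the per-pulse Bipol quantity is sketched below; the exact lobe-splitting convention (here, the first sign change after the dominant lobe) is our assumption, not a detail given above.

```python
import numpy as np

def bipol_observable(f_left, f_right, charge, dt):
    """Bipol_{l,p}: normalized difference of the integrals of the first and
    second lobes of the bipolar Left-Right waveform of one pulse."""
    diff = f_left - f_right
    k = int(np.argmax(np.abs(diff)))                 # sample of the dominant first lobe
    after = np.sign(diff[k:]) != np.sign(diff[k])
    zc = k + (int(np.argmax(after)) if after.any() else len(after))  # first sign change
    int_first = diff[:zc].sum() * dt
    int_second = diff[zc:].sum() * dt
    return (int_first - int_second) / charge
```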
Along with these observables, we also compute parameter and bias observables relevant for reconstructing the other properties of gamma interactions in the crystal (see section 7). Some of them are also expected to be relevant for the uncertainty estimation. These **23 observables are the inputs of the following neural networks**.
An alternative approach would be to use the full waveform data as inputs for convolutional deep neural networks. However, preliminary tests showed that the complexity of the data required deeper network structures, involving training times of days on our GPU card and thus slower development cycles. We are considering this option for future work.
## 3 Methodology
### Baseline method
The simplest approach to gamma-interaction reconstruction uses the fact that the highest density of detected photons corresponds to the coordinates of the interaction point. This suggests using the transmission line with the maximum detected charge as the reference line. To reconstruct the coordinate across lines, \(y_{R}\), we calculate the weighted average of the line centers for this line and its two neighbors:
\[y_{R}=\frac{\sum_{l=i-1}^{i+1}y_{l}C_{l}}{\sum_{l=i-1}^{i+1}C_{l}}\;, \tag{1}\]
where \(y_{l}\) is the y-coordinate of the line center, \(C_{l}\) is the charge of line \(l\) (only the negative signal part is used for the charge calculation), \(i\) is the reference line number.
The coordinate along the lines, \(x_{R}\) is reconstructed as
\[x_{R}=\frac{(t_{\text{Right}}-t_{\text{Left}})}{2}\times v_{\text{signal}}\;, \tag{2}\]
where \(t_{\text{Right}}\) and \(t_{\text{Left}}\) are the times measured at the right and left ends of line \(i\) respectively and \(v_{\text{signal}}\) is the signal propagation speed, assumed to be 35% of the speed of light in the simulation. This approach, based only on expert knowledge of the physical processes, is used as a reference for comparison with the performance of our approach based on neural networks.
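A minimal sketch of this baseline, assuming per-line arrays of charges, line-center coordinates and end times (all names are ours):

```python
import numpy as np

V_SIGNAL = 0.35 * 299.792458  # assumed propagation speed, mm/ns (35% of c)

def baseline_reconstruction(y_line, charge, t_left, t_right):
    """Baseline estimates of Eqs. (1) and (2)."""
    i = int(np.argmax(charge))                        # reference line
    lo, hi = max(i - 1, 0), min(i + 2, len(charge))   # reference line and its neighbors
    y_r = np.average(y_line[lo:hi], weights=charge[lo:hi])
    x_r = 0.5 * (t_right[i] - t_left[i]) * V_SIGNAL
    return x_r, y_r
```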
### ML approach
This section describes our approach based on a supervised Machine Learning algorithm that takes as input the 23 preprocessed variables described in sections 2.3 and 7 in order to predict the interaction position. We use a Neural Network model that is able to provide both the prediction and an estimation of the uncertainty associated with this prediction, thanks to the paradigm of Density Neural Networks. In contrast to the baseline method, the use of a Neural Network is mainly based on the exploitation of simulated data to build a model that links the observable quantities to the reconstruction of the gamma interaction position. Moreover, the baseline method is not able to provide uncertainties associated with the reconstruction, which is a key difference with our approach.
#### 3.2.1 Density Neural Network
The conventional approach to performing a regression with a Neural Network is to define a loss function that measures the discrepancy between the predictions of the neural network and the expected values. With classical notations, let \(\mathcal{D}=\{(x_{i},y_{i}),i=1,2,\ldots,N\}\) be the database and \(\mu_{\theta}(x)\) the prediction of the expected value by the neural network parameterized by \(\theta\). Denoting by \(||\bullet||\) the Euclidean norm, a usual loss function is the Mean Square Error:
\[\text{MSE}(\theta)=\frac{1}{N}\sum_{i=1}^{N}||y_{i}-\mu_{\theta}(x_{i})||^{2}. \tag{3}\]
The learning procedure consists in finding the parameters \(\theta^{*}\), among the space of possible parameters \(\Theta\), that minimize the Mean Squared Error. It corresponds to solving the following problem:
\[\theta^{*}=\underset{\theta\in\Theta}{\text{argmin}}\;\text{MSE}(\theta). \tag{4}\]
The Density Neural Network approach proposes to make the assumption that the expected outputs \(y_{i}\) are a realization of a probability distribution such as the normal distribution \(y_{i}\sim\mathcal{N}(\mu_{\theta}(x_{i}),\sigma_{\theta}^{2}(x_{i}))\) or a mixture of normal distributions [6]. However, this method can be adapted to other distributions and we propose to use a distribution that considers some specific physical constraints of our application.
The main physical constraint on the interaction position is that this interaction must be located within the limits of the detector. Therefore, we decide to use a truncated normal distribution in order to ensure this constraint.
Beforehand, in order to avoid confusion with the \((x,y)\) coordinates of the position of the interaction in the detector, we will denote by \(s\) the input data from the preprocessing described in Section 2.3. So formally, our neural network is now expected to output four values for each input \(s_{i}\): the two means \(\mu_{\theta,x}(\text{s}_{\text{i}}),\mu_{\theta,y}(\text{s}_{\text{i}})\) and the two scale parameters \(\sigma_{\theta,x}^{2}(\text{s}_{\text{i}}),\sigma_{\theta,y}^{2}(\text{s}_{\text{i}})\) of the assumed truncated Gaussian law. We have assumed independent normal laws on \(x_{i}\) and \(y_{i}\). In this case, the hypothesis that each output \(x_{i},y_{i}\) follows a truncated normal distribution in \([a,b]\)
leads to the two following distributions relative to the two coordinates with \(z=x\) or \(z=y\):
\[p(z_{i}|\mu_{\theta,z}(\mathbf{s_{i}}),\sigma^{2}_{\theta,z}(\mathbf{ s_{i}})) = \underbrace{\frac{1}{\sqrt{2\pi\sigma^{2}_{\theta,z}(\mathbf{s_{i} })}}\exp\left(-\frac{(z_{i}-\mu_{\theta,z}(\mathbf{s_{i}}))^{2}}{2\sigma^{2}_{ \theta,z}(\mathbf{s_{i}})}\right)}_{\text{Classical Gaussian Likelihood}} \tag{5}\] \[\times \underbrace{\frac{1}{\Phi\left(\frac{b-\mu_{\theta,z}(\mathbf{s_{i }})}{\sigma_{\theta,z}(\mathbf{s_{i}})}\right)-\Phi\left(\frac{a-\mu_{\theta,z }(\mathbf{s_{i}})}{\sigma_{\theta,z}(\mathbf{s_{i}})}\right)}}_{\text{With truncation in $[a,b]$}}\]
where \(\Phi\) is the Cumulative Distribution Function of the standard normal distribution, and \(a\) and \(b\) are the truncation boundaries of the truncated normal distribution. In our application, \(a\) and \(b\) represent the limits of the detector: \(a=-30\) mm and \(b=30\) mm. In our case, the condition \(a\leq(x_{i},y_{i})\leq b\) is assumed to always hold: an interaction occurs inside the detector; otherwise it is a simulation error and we should not consider it. So, by using equation 5, we derive the likelihood of the parameters on the whole dataset:
\[L(\theta)=\prod_{i}\prod_{z\in\{x,y\}}p(z_{i}|\mu_{\theta,z}( \mathbf{s_{i}}),\sigma^{2}_{\theta,z}(\mathbf{s_{i}})) \tag{6}\]
Maximizing this likelihood is equivalent to minimizing the negative log-likelihood, which is easier to handle and leads to the following loss function:
\[l(\theta) = -\log(L(\theta)) \tag{7}\] \[= \sum_{i}\sum_{z\in\{x,y\}}\log\left(\Phi\left(\frac{b-\mu_{\theta,z}(\mathbf{s_{i}})}{\sigma_{\theta,z}(\mathbf{s_{i}})}\right)-\Phi\left(\frac{a-\mu_{\theta,z}(\mathbf{s_{i}})}{\sigma_{\theta,z}(\mathbf{s_{i}})}\right)\right)\] \[+ \frac{1}{2}\log(2\pi\sigma^{2}_{\theta,z}(\mathbf{s_{i}}))+\frac{(z_{i}-\mu_{\theta,z}(\mathbf{s_{i}}))^{2}}{2\sigma^{2}_{\theta,z}(\mathbf{s_{i}})}\]
We want to highlight that the conventional problem with the MSE defined in equation 3 is a special case of equation 7 with \(a=-\infty\), \(b=+\infty\) (no truncation) and where the scale parameters \(\sigma^{2}_{\theta,x}(\mathbf{s_{i}})\) and \(\sigma^{2}_{\theta,y}(\mathbf{s_{i}})\) are assumed to be constant (homoscedasticity hypothesis). Finally, the optimization problem now consists in finding the parameters \(\theta^{*}\) that minimize this new loss function:
\[\theta^{*}=\underset{\theta\in\Theta}{\text{argmin}}\ l(\theta) \tag{8}\]
We can solve this problem by using a conventional gradient descent optimization in order to train the neural network and find optimal parameters.
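A minimal TensorFlow sketch of this loss is shown below, assuming the network outputs are packed as [\(\mu_x\), \(\mu_y\), \(\sigma_x\), \(\sigma_y\)] (this packing convention is ours):

```python
import math
import tensorflow as tf

A, B = -30.0, 30.0  # detector limits in mm

def std_normal_cdf(u):
    # Phi(u) expressed through the error function
    return 0.5 * (1.0 + tf.math.erf(u / tf.sqrt(2.0)))

def truncated_gaussian_nll(y_true, y_pred):
    """Negative log-likelihood of Eq. (7), summed over the x and y coordinates."""
    mu, sigma = y_pred[:, :2], y_pred[:, 2:]
    trunc = tf.math.log(std_normal_cdf((B - mu) / sigma)
                        - std_normal_cdf((A - mu) / sigma))
    gauss = 0.5 * math.log(2.0 * math.pi) + tf.math.log(sigma) \
            + 0.5 * tf.square((y_true - mu) / sigma)
    return tf.reduce_mean(tf.reduce_sum(trunc + gauss, axis=-1))
```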
This approach brings two main components:
* the use of a Density Neural Network allows the estimation of an uncertainty associated to the network output through the prediction of the terms \(\sigma^{2}_{\theta,x}(\mathbf{s_{i}})\) and \(\sigma^{2}_{\theta,y}(\mathbf{s_{i}})\). These
variances are specific to each example and represent a higher or lower uncertainty on the predicted positions \(\mu_{\theta,x}(\mathbf{s_{i}})\) or \(\mu_{\theta,y}(\mathbf{s_{i}})\) according to the inputs \(\mathbf{s_{i}}\) of the neural network. For instance, in our application, we can expect that an interaction closer to the edge of the detector is less precisely located than an interaction at the center of the detector. We highlight that this uncertainty is self-estimated by the neural network and consequently must be empirically validated; we bring elements for this validation in section 4.3. This uncertainty can be seen as aleatoric uncertainty [19], due to the randomness of the phenomena involved during the detection and measurements (Cherenkov and scintillation production, loss of optical photons, randomness in the pulse shape creation...).
* the use of a truncated normal hypothesis allows us to take the physical constraints into account. In section 4, we show that this property is essential to obtain better performances for interactions close to the edges of the detector, compared to the use of the classical MSE approach.
#### 3.2.2 Architecture and training
We develop two neural networks: one neural network associated to the conventional Mean Squared Error loss function and one neural network associated to our custom loss function with the truncated normal hypothesis. For both architectures, all of the hidden layers are similar: we use six hidden fully-connected layers with 256 neurons each and a tanh activation function. The difference is the output layer:
* the output layer for the neural network associated with the MSE loss has two neurons (one for each coordinate), with a scaled tanh activation function, \(\frac{b-a}{2}\tanh\), in order to ensure that the prediction is located inside the detector.
* the output layer for the neural network associated with the truncated normal loss function has four output neurons. The first two output \(\mu_{\theta,x}(\mathbf{s_{i}})\) and \(\mu_{\theta,y}(\mathbf{s_{i}})\), and their activation function is the previous scaled tanh function. The last two output \(\sigma_{\theta,x}(\mathbf{s_{i}})\) and \(\sigma_{\theta,y}(\mathbf{s_{i}})\). Their activation function is \(\mathrm{softplus}(\alpha)+\epsilon\), where \(\alpha\) is the output before applying the activation function and \(\epsilon\) is a small constant that ensures that \(\sigma_{\theta,x}(\mathbf{s_{i}})\) and \(\sigma_{\theta,y}(\mathbf{s_{i}})\) are not too close to 0 and avoids divergence issues. In our case, we choose \(\epsilon=10^{-6}\) mm.
The neural networks and their training are implemented using the Tensorflow [1] library. The optimizer is the Adam algorithm. The training is stopped by an early stopping procedure that monitors the validation loss; we randomly use 20% of our data as validation data. The code has been executed on a NVIDIA RTX Quadro 5000. The training time is about a dozen minutes.
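A Keras sketch of the described architecture, to be paired with the loss above (layer sizes follow the text; the builder name and packing order are ours):

```python
import tensorflow as tf
from tensorflow.keras import layers

A, B, EPS = -30.0, 30.0, 1e-6

def build_density_network(n_inputs=23):
    s = tf.keras.Input(shape=(n_inputs,))
    h = s
    for _ in range(6):                      # six hidden layers, 256 tanh neurons
        h = layers.Dense(256, activation="tanh")(h)
    # Means constrained inside the detector by the scaled tanh
    mu = layers.Dense(2, activation=lambda t: 0.5 * (B - A) * tf.tanh(t))(h)
    # Strictly positive scales: softplus + epsilon
    sigma = layers.Dense(2, activation=lambda t: tf.nn.softplus(t) + EPS)(h)
    return tf.keras.Model(s, layers.Concatenate()([mu, sigma]))

# model = build_density_network()
# model.compile(optimizer="adam", loss=truncated_gaussian_nll)
```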
#### 3.2.3 Generated datasets
We used three datasets to train and test our supervised Machine Learning algorithms. All of them use the detector modeling explained in section 2, corresponding to a PbWO\({}_{4}\) scintillation crystal of 59 mm \(\times\) 59 mm. The inputs are the 23 observables described in sections 2.3 and 7 from the 32 transmission lines, and the outputs are the X and Y coordinates of the gamma interaction, which are saved thanks to the Geant4 simulation.
1. For the training set, we simulate a gamma photon source shaped as a 6 cm cube. The energy spectrum has been adjusted to increase the probability of high energy deposits in the PbWO\({}_{4}\) and thus generate an approximately flat deposited energy spectrum: we generate 7 times more 1.2 MeV photons than 300 keV photons. The gamma rays impinge on the PbWO\({}_{4}\) crystal perpendicularly and uniformly over the entire surface of the optical window. This training set contains 450 000 events.
2. The first test dataset simulates a grid of \(9\times 9\) 511 keV gamma-ray point sources, regularly spaced by 7 mm. Again the gamma rays impinge on the PbWO\({}_{4}\) crystal perpendicularly to the surface of the optical window. This test dataset contains 300 000 events.
3. The second test dataset simulates a 6 cm cube-shaped gamma photon source, mono-energetic at 511 keV. The gamma rays impinge on the PbWO\({}_{4}\) crystal perpendicularly and uniformly over the whole surface of the optical window. This test dataset contains 600 000 events. We designed this test dataset with a high number of examples in order to conduct an accurate analysis of the performance of the neural network regarding the uncertainty estimation.
## 4 Results
### Evaluation on the grid of sources - First test dataset
The positions of the grid of sources are represented in figure 4. This configuration helps visualize directly some properties of the different reconstruction algorithms. The results are presented in figure 5.
Figure 4: Simulated sources - Expected positions
They show that all the algorithms are able to reconstruct the sources in the central area of the detector, between -20 mm and +20 mm in both directions. However, the baseline method and the conventional Neural Network associated with an MSE loss function are not able to give predictions at the edge of the detector. These predictions are even incorrectly attributed to other positions, at -25 mm and +25 mm, for the conventional MSE approach. This "folding" of the predictions implies a lower confidence in the reconstruction in this area, as we show in section 4.2. On the contrary, the truncated Gaussian likelihood is able to provide positions close to the edge and avoids the "folding" effect. This property shows a first advantage of the use of this custom loss function.
The bottom right figure introduces the benefits of the uncertainty evaluation. To produce this figure, we weight each event \(i\) according to the predicted uncertainty terms \(\sigma_{\theta,x}(\mathbf{s_{i}})\) for the X position and \(\sigma_{\theta,y}(\mathbf{s_{i}})\) for the Y position. The weights are computed using the following equation:
\[w_{i}=\frac{(\sigma_{\theta,x}(\mathbf{s_{i}})^{2}+\sigma_{\theta,y}(\mathbf{ s_{i}})^{2})^{-1}}{F}, \tag{9}\]
Figure 5: Grid reconstruction by the different methods
where \(F\) is a normalization factor such that \(\sum_{i}w_{i}\) is equal to the number of detected events in the considered bin of the histogram.
\[F=\sum_{i\in\text{bin}}{(\sigma_{\theta,x}(\mathbf{s_{i}})^{2}+\sigma_{\theta,y }(\mathbf{s_{i}})^{2})^{-1}} \tag{10}\]
This weighting penalizes the events with a high predicted uncertainty. The result shows a reduction of the spreading effect, making the reconstruction more accurate over the detector. We quantify this gain in accuracy in section 4.2.
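A numpy sketch of this per-bin weighting of Eqs. (9)-(10), with array names of our own choosing:

```python
import numpy as np

def bin_weights(sigma_x, sigma_y, bin_idx):
    """Inverse-total-variance weights, renormalized so that the weights of
    each histogram bin sum to the number of events in that bin."""
    inv_var = 1.0 / (sigma_x**2 + sigma_y**2)
    w = np.empty_like(inv_var)
    for b in np.unique(bin_idx):
        sel = bin_idx == b
        w[sel] = inv_var[sel] * sel.sum() / inv_var[sel].sum()
    return w
```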
### Evaluation on the uniform simulation - Second test dataset
For the global evaluation of the performances, we use the simulation of a uniform distribution of sources (cube source) in front of the detector. The direct reconstruction provided by the algorithms is shown in figure 6. The results show the same properties as for the grid simulation, especially the inability of the baseline method and the conventional MSE to reconstruct the edges of the detector. We can see a partial "folding" effect on the reconstruction with the
Figure 6: Reconstruction of the uniform simulation
truncated Gaussian likelihood. This effect is attenuated by applying the weighting as defined in equation 9.
Figure 7 shows, for each true position, the average 2D distance to the reconstructed position. A high average 2D distance in this case corresponds to a high spread of the reconstructions. At the center of the detector, the baseline method provides less spread reconstructions, between 2 and 3 mm, than the conventional MSE and our approach with the truncated Gaussian likelihood, between 3 and 4 mm. However, the area of high spread, over 6 mm, is larger for the baseline method than for the machine learning methods. We show later that this defect strongly affects the precision of the reconstruction, even at the center of the detector.
The use of the weighting helps the truncated Gaussian likelihood give more importance to the most certain events, leading to a spread between 1 and 2 mm at the center of the detector and between 2 and 4 mm in the intermediate area. The area with the highest spread, over 6 mm, is thinner than for the other algorithms.
Figure 8 shows for each predicted position the average 2D distance to the true position. A high
Figure 7: 2D error reconstruction according to the true position, corresponding to the spread of the reconstruction - The color scale has been thresholded at 6 mm for visualisation purposes
average 2D distance in this case corresponds to a low precision of the reconstructions. Black areas correspond to areas with no predicted position, due to the inability of the algorithms to reconstruct some positions, as shown in figure 6. The baseline method gives results of very low accuracy, mostly over 6 mm, even at the center of the detector. By combining this with the previous information, we can give the following interpretation:
* if the algorithm provides a localisation at the center of the detector, we cannot trust the algorithm, since the reconstruction error is over 5 mm, as shown in figure 8;
* if the true position is at the center of the detector, we can trust the algorithm, because the reconstruction error is between 2 and 3 mm, as shown in figure 7. However, in a real application we cannot access the true position, so we cannot exploit this information.
The conventional MSE shows a better precision than the baseline method, with a reconstruction error ranging from 3 mm at the center of the detector to 5 mm close to the edges. The
Figure 8: 2D error reconstruction according to the predicted position, corresponding to the precision of the reconstruction - The color scale has been thresholded at 6 mm for visualisation purposes
areas with very high reconstruction error, over 6 mm, are limited. The truncated Gaussian likelihood shows a precision similar to that of the conventional MSE, with the additional ability to reconstruct the edges, albeit with less precision. The application of the weighting improves the results, providing a precision between 1 and 2 mm at the center of the detector, between 3 and 5 mm in the intermediate area and over 6 mm in a thin band close to the edge. This band corresponds to the part of the detector without optical layer, where a degradation of the performance is expected.
Finally, figures 9 and 10 show the overall results, considering the whole test dataset. The histograms represent the global reconstruction errors, without selecting any position. We also compute the Root Mean Squared Error (RMSE) of the predictions, as well as weighted versions where each event is weighted per coordinate according to its predicted uncertainty:
\[w_{X,i} =\frac{(\sigma_{\theta,x}(\mathbf{s_{i}}))^{-2}}{\sum_{j}(\sigma_{ \theta,x}(\mathbf{s_{j}}))^{-2}} \tag{11}\] \[w_{Y,i} =\frac{(\sigma_{\theta,y}(\mathbf{s_{i}}))^{-2}}{\sum_{j}(\sigma_ {\theta,y}(\mathbf{s_{j}}))^{-2}} \tag{12}\]
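For example, the weighted RMSE of one coordinate under the weights of Eqs. (11)-(12) can be computed as follows (a sketch with our own naming):

```python
import numpy as np

def weighted_rmse(errors, sigma):
    """RMSE with inverse-variance weights normalized to sum to one."""
    w = sigma**-2.0
    w /= w.sum()
    return float(np.sqrt(np.sum(w * errors**2)))
```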
On average, the conventional MSE gives slightly better results than the truncated Gaussian likelihood in terms of unweighted RMSE. This behaviour is expected, because the MSE approach is specifically designed to minimize the MSE, and thus its square root, the RMSE.
Table 1 summarizes several performance metrics: the mean error (the bias) of the histogram, the RMSE and the Standard Deviation (spread of the distribution without the bias). A small bias, under 1 mm, is observed on the X coordinate for the reconstructions with the truncated Gaussian likelihood methods. On the contrary, the conventional MSE presents a bias on the Y coordinate. On the other metrics, the weighting applied to the truncated Gaussian likelihood outperforms the other methods: the application of this weighting helps
Figure 10: Global histogram of reconstruction errors on the Y position
improve the performance by giving less importance to uncertain reconstructions. In particular, the RMSE reaches 2.61 mm for the X coordinate and 1.91 mm for the Y coordinate.
### Calibration of the uncertainties
The evaluation of the performances shows the advantage of using the uncertainty prediction as a weighting of the uncertain reconstructions. However, these results do not provide accurate elements to assess the calibration of the uncertainties. In this section, we provide an evaluation of the quality of the uncertainty estimation by using a coverage plot.
To produce this plot, we consider a probability level \(\alpha\). For each event \(i\), our neural network provides a prediction of the parameters \(\mu_{\theta}^{(i)}\) and \(\sigma_{\theta}^{(i)}\) of a truncated Gaussian distribution with bounds \(a\) and \(b\). Under the hypothesis of this distribution, we compute for each event the Prediction Interval (PI) at level \(\alpha\), \(I^{(i)}(\alpha)\), such that:
\[p(y_{\text{true}}\in I^{(i)}(\alpha))=\alpha \tag{13}\]
where \(y_{\text{true}}\) is the expected value. There is an infinite number of possible intervals satisfying this condition; we choose the interval centered on the median of the predicted distribution. Then, we compute the number of events whose expected value indeed belongs to the Prediction Interval, which leads to the Prediction Interval Coverage Probability (PICP) at level \(\alpha\):
\[\text{PICP}(\alpha)=\frac{|\{y_{\text{true}}\in I^{(i)}(\alpha)\}_{i}|}{N} \tag{14}\]
| Quantity | Baseline method | Conventional MSE | Truncated Gaussian likelihood | Truncated Gaussian likelihood + weighting |
| --- | --- | --- | --- | --- |
| Mean error X (mm) | **-0.01** | **0.01** | -0.32 | -0.39 |
| RMSE X (mm) | 8.05 | 4.8 | 5.11 | **2.61** |
| Std dev X (mm) | 8.05 | 4.8 | 5.1 | **2.58** |
| Mean error Y (mm) | **0.00** | 0.19 | 0.04 | 0.02 |
| RMSE Y (mm) | 5.5 | 4.72 | 5.08 | **1.91** |
| Std dev Y (mm) | 5.5 | 4.72 | 5.08 | **1.91** |

Table 1: Table of performances. Best values are in bold.
where \(|E|\) is the cardinality of the set \(E\) and \(N\) is the total number of events. The PICP can be seen as the empirical frequency with which the true values indeed belong to the Prediction Interval. Finally, we compare PICP(\(\alpha\)) to the probability level \(\alpha\):
* if PICP(\(\alpha\)) = \(\alpha\), we have a perfect calibration;
* if PICP(\(\alpha\)) > \(\alpha\), the model is under-confident, or in other words, conservative;
* if PICP(\(\alpha\)) < \(\alpha\), the model is over-confident.
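A possible implementation of this coverage check, using scipy's truncated normal parametrization (arrays and names are ours):

```python
import numpy as np
from scipy.stats import truncnorm

def picp(y_true, mu, sigma, alpha, a=-30.0, b=30.0):
    """Fraction of events whose true value falls in the prediction interval
    of level alpha, centered (in probability) on the median."""
    lo_s, hi_s = (a - mu) / sigma, (b - mu) / sigma   # standardized bounds
    lo = truncnorm.ppf(0.5 - alpha / 2, lo_s, hi_s, loc=mu, scale=sigma)
    hi = truncnorm.ppf(0.5 + alpha / 2, lo_s, hi_s, loc=mu, scale=sigma)
    return float(np.mean((y_true >= lo) & (y_true <= hi)))
```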
Figure 11 shows the results for different probability levels \(\alpha\). For both coordinates, the PICP is close to \(\alpha\), which means the Prediction Intervals can be trusted on average. The model is slightly over-confident for probability levels under 75% and conservative for higher probability levels. For instance, the Prediction Intervals at 95% are conservative.
## 5 Discussion
### Benefits of the study for the detector reconstruction and the PET imaging
The use of Machine Learning approaches to reconstruct the gamma photon position of interaction in monolithic crystals is a hot topic nowadays. For PET applications, the detector is most of the time a LYSO scintillator of several tens of millimeters in size and about 1 cm thickness, coupled with pixelated silicon photomultipliers (SiPM). For example, in [5] the spatial resolution obtained with a feed-forward neural network is 0.74 \(\pm\) 0.01 mm in the XY plane and on average 1.01 \(\pm\) 0.01 mm for the Z coordinate. Similar results were obtained in [12] with one multilayer perceptron with five hidden layers and 100 nodes. In [8], sub-millimeter precision in the XY plane is obtained with an algorithm based on neural networks integrated into a second
Figure 11: Coverage plots - The models are slightly too confident for probability levels lower than 75 % and slightly too conservative for probability levels higher than 75 %.
neural network for simultaneous estimation of the event position and timestamp. Even a preclinical PET system with an annular monolithic LYSO scintillator was proposed by [16], with an inner diameter of 6 cm and a minimum transaxial scintillator thickness of about 1 cm covered by SiPMs. The authors of this study obtained a spatial resolution on the reconstructed interaction position of about half a millimeter with a ten-layer deep residual-convolutional neural network. Sub-millimeter precision on the photon interaction position in CeBr\({}_{3}\) and LaBr\({}_{3}\):Ce crystals [18] for a Compton camera imaging system was obtained with a newly designed convolutional neural network of five layers.
As shown in these recent studies and in section 4, Machine Learning approaches provide promising and more accurate reconstructions than the baseline methods, which are based only on the knowledge of the detector. In contrast to prior research with LYSO or other scintillation-only crystals, the current study aims at a precise reconstruction of the gamma photon interaction in a PbWO\({}_{4}\) crystal with faster (Cherenkov effect) but fewer (lower light yield) optical photons. Moreover, contrary to pixelated SiPMs with one readout channel per pixel, a MCP-PMT with fewer readout channels (\(2\times 32\) instead of \(32\times 32\)) is used. These differences introduce additional complexity in the precise position reconstruction. The coordinate along the transmission lines (X coordinate) turned out to be complex to reconstruct, but our machine learning models were able to learn to address this complexity.
For monolithic crystals, the so-called edge effect worsens the spatial localization performance toward the detector borders [12]. In our study, the use of a dedicated loss function, the truncated Gaussian likelihood, helps recover some dead areas at the edges that are not reconstructed by the baseline method or the conventional MSE. This effect can be explained by the fact that no interaction can be observed outside of the detector, so the MSE avoids providing predictions close to the area outside of the detector and, consequently, close to the edges. Because we include this prior knowledge in the truncated Gaussian likelihood, the model trained with this loss function can predict positions close to the edges.
Differently from previous research, an uncertainty on the gamma photon interaction reconstruction is provided in this study by the Density Neural Network. This additional information is included through a weighting in the performance evaluation, and we have shown that it significantly improves the performance. One can object that this improvement is "artificial" because we do not improve the prediction itself; we give less importance in the metrics to the uncertain predictions. However, it is very promising for the application to PET imaging, because the tomographic image reconstruction relies on the processing of a large number of events. If some events carry less information, they can degrade the SNR in the reconstructed image. As a consequence, accounting for the uncertainty information in the tomographic reconstruction shall increase the SNR in the image. The uncertainty can be included in a spatial resolution model adapted to each individual event, enabling an event-based resolution-modeling image reconstruction. To the best of our knowledge, the proposed approach would be innovative for PET imaging, and its impact on the reconstructed image quality must be assessed. The combination of the proposed gamma photon interaction reconstruction in the detector with the full PET image reconstruction is planned for future works.
Finally, the calibration of the uncertainties shows that their prediction is satisfying, although not perfect. The disagreement between the probability level and the empirical frequency can
come from two factors:
* the prediction of the neural networks can be improved by optimizing the model and the training phase;
* the truncated Gaussian hypothesis might not be the best representation of the uncertainty. Further work is planned to use other probability distribution functions, under the constraint that these probability distributions must not diverge and must be differentiable, so that they can be used as loss functions.
### Generalization of the methodology
Through this paper, we want to highlight the use of a specific methodology, which can be beneficial for many applications of Deep Neural Networks in regression tasks. This approach relies on the paradigm of uncertainty estimation, which is a growing topic in the scientific literature on Artificial Intelligence. Several approaches are usually proposed, such as Bayesian Neural Networks or Deep Ensembles [13]. The use of Density Neural Networks is an interesting approach when the uncertainties are dominated by fluctuations and random phenomena in the data, such as the sources of variability during the scintillation process in our detectors. This approach brings additional information thanks to the uncertainties, which can be used to gain insight into the predictions of neural networks and can be exploited in numerous applications. A dedicated methodology for their exploitation and validation has been presented in this paper.
More generally, we can describe the methodology as follows:
* The design of a dedicated loss function, based on the negative log-likelihood of an assumed distribution. This distribution is often assumed to be Gaussian, but it is not restricted to this type of distribution and can be adapted to the problem to address. In our case, we decided to add physical constraints through a truncation of a Gaussian distribution according to the detector edges, which requires a specific implementation of the loss function. We show that this approach brings improvements for reconstructing specific events, especially close to the detector edges.
* The estimated uncertainties can be exploited to complement the prediction of the neural network. The uncertainty can be evaluated event by event to assess the confidence in the prediction of the model. In our case, we show that we can statistically improve the reconstruction resolution, and this feature will be helpful in the exploitation of the detector. Uncertain events can consequently be considered as noisy and discriminated through this method. Applications relying on Neural Networks to analyse signals or sensor data would benefit from this approach.
* It must be kept in mind that the uncertainties are self-estimated by the Neural Network, thus they are prone to the same biases as the classical predictions. Consequently, the validation of the estimated uncertainties is required, and the use of coverage plots is a possible approach to empirically verify the relevance and reliability of these uncertainties.
## 6 Conclusion
Our objective of predicting the 2D position of the gamma interaction in the ClearMind detector is successfully achieved thanks to Machine Learning methods. Our neural network outperforms the baseline approach, which is based only on the physical knowledge also used to build the inputs of the model. The introduction of Density Neural Networks provides a reliable estimation of the uncertainty in the prediction of the neural network, which can be used to discriminate reconstructions that are likely less accurate. We aim to draw benefits from this method in further works on PET image reconstruction. Moreover, we exploit the flexibility of the Density Neural Network approach to design a truncated Gaussian loss function based on physical constraints. This property helps the reconstruction of events in areas far from the center of the detector, increasing the exploitable surface of the crystal. We want to highlight that this methodology is generic and can be adapted to other use cases, as long as such constraints can be introduced in the loss function.
The preprocessing step is also important to reduce the number of input variables, and thus to get a compact neural network that can be embedded as close as possible to the detector. The computation time on test data on a conventional CPU is 14 s for the 600 000 events, corresponding to 23 \(\mu\)s per event.
As an outlook, we want to conduct a sensitivity analysis on the input variables. The objective is to study the importance of the preprocessed observables for the prediction of the positions. We will verify whether we find the expected correlations between the prediction of the positions and the observables identified by physical expertise. Moreover, the analysis will be performed on the uncertainty estimation in order to understand which quantities have the highest influence on the reconstruction uncertainty. We will also compare the reconstruction performance with the use of the raw data, in order to assess the possible loss of information due to the preprocessing. In further works, we will apply this methodology to the Depth of Interaction, energy and time reconstructions in order to obtain complete information on the gamma-ray reconstruction. Finally, these results will be exploited for complete PET image reconstruction in dedicated studies, in order to improve the SNR of the image by weighting or discriminating uncertain reconstructions.
## Acknowledgments
This project has received funding from the European Union's Horizon 2020 research and innovation program under the Marie Sklodowska-Curie grant agreement No 800 945. We are grateful for the support and seed funding from the CEA, Programme Exploratoire Bottom-Up, under grant No. 17P103-CLEAR-MIND, the Cross-Disciplinary Program on Numerical Simulation of CEA for the financial support of project AAIMME, and the French National Research Agency under grant No. ANR-19-CE19-0009-01.
## 7 Appendix: Statistical Processing of raw Waveforms
The following paragraphs describe the preprocessed observables we developed for the ClearMind detector, in addition to the description in section 2.3. First, for each line that triggered the SAMPIC acquisition, let us define \(F_{l,\text{Left}}(t_{j})\) and \(F_{l,\text{Right}}(t_{j})\), the digitized pulse shapes (Left and Right ends of the transmission line) at time \(t_{j}\).
Then, before computing event observables, we calculate for each triggered line:
* The times of the first samples of the pulse shapes \(T_{0,\text{Left}}\), \(T_{0,\text{Right}}\)
* The times calculated on the rising edges of the pulses, extrapolated to the half height between the sampled values, \(T_{l,\text{Left}}\) and \(T_{l,\text{Right}}\).
* The time lapse when the pulses have exceeded the WaveCatcher trigger threshold.
* The time lapse when the pulses have saturated the WaveCatcher.
* The sum of the charges collected at both ends of the transmission lines \(C_{l}\).
The selected observables can be classified into several categories.
First, a general parameter stores the number of transmission lines that have acquired a signal.
Then we store the observables expected to be correlated to the _interaction time_ of the gamma photon in the crystal. These are :
* The time of the first SAMPIC sample recorded on the first transmission line that acquired a signal. This time \(T_{\text{First}}\) is then subtracted from all the time variables associated with the event.
* The time of the first photoelectron detected on the event. We store the lowest photoelectron time value (defined as \(0.5\times(T_{l,\text{Left}}\) + \(T_{l,\text{Right}})\)) of all the lines that triggered the acquisition.
We add four bias observables, intended to help the neural network to decorrelate possible stacking effects on the pulse shapes. These are
* the time lapse beyond 50% of the amplitude on the pulse shapes \(F_{l,\text{Left}}(t_{j})\) and \(F_{l,\text{Right}}(t_{j})\) (one observable for the Left pulse shape and one for the Right)
* the computed pulses' rise time (one observable per side)
Then we store the observables expected to be correlated to the _energy deposited_ by the interaction in the detector, a quantity that we plan to reconstruct in future works. These observables are also expected to bring information for estimating the uncertainties on the interaction position. We store:
* The sum of the charges collected over all lines \(C_{T}\).
* The sum of the time lapse when pulses have exceeded the triggering threshold of the WaveCatcher \(T_{\text{Thres}}\) over all lines.
* The sum over all lines of the time lapse when pulses have saturated the WaveCatcher (bias observable), \(T_{\text{Sat}}\).
Then we store observables correlated to the position along the transmission lines, \(x_{\text{pos}}\), and perpendicular to the transmission lines, \(y_{\text{pos}}\), as explained in section 2.3.
Finally, we compute quantities expected to be correlated to the _depth of interaction_ (DOI) of the gamma in the crystal, to be used in future works to achieve 3D reconstruction. We use the fact that the farther from the photoelectric layer the gamma interaction occurs, the larger the area over which the photoelectrons are produced. The depth of interaction is thus correlated to the dispersion of the produced photoelectrons. We can also expect this dispersion to be relevant for our neural network to estimate the uncertainties on \(x_{\text{pos}}\) and \(y_{\text{pos}}\). To quantify this dispersion, we first calculate the distribution of \(|l-\text{Med}_{L}|\), weighted by the charge \(C_{l}\) measured on line \(l\). The first two observables are then:
* the \(\text{median}(|l-\text{Med}_{L}|,C_{l})\) and
* the \(\text{mean}(|l-\text{Mean}_{L}|,C_{l})\)
To quantify the dispersion along the transmission lines, we use the same algorithms, but on the previously computed \(\Delta t_{l,p}\) and \(\text{Bipol}_{l,p}\) distributions (section 2.3), weighted by the charge collected in the pulses. The observables are then :
* the \(\text{median}(|\Delta t_{l,p}-\text{Med}_{\Delta t}|,C_{l,p})\),
* the \(\text{mean}(|\Delta t_{l,p}-\text{Mean}_{\Delta t}|,C_{l,p})\),
* the \(\text{median}(|\text{Bipol}_{l,p}-\text{Med}_{\text{Bipol}}|,C_{l,p})\)
* the \(\text{mean}(|\text{Bipol}_{l,p}-\text{Mean}_{\text{Bipol}}|,C_{l,p})\)
We have not yet devised any bias observables for the depth of interaction parameter.
### Appendix: Median algorithm on weighted distributions
The following is our algorithm for computing the median of a weighted distribution, written in C++. It uses the STL library to sort the input values. Instead of returning the central value of the bin where the mid-point of the distribution falls, we make use of the values of the neighbouring bins to refine the result.
#include <algorithm>
#include <cstdlib>
#include <iostream>
#include <vector>
using namespace std;

// Value/weight pair (definition assumed from usage)
struct PairValues { double Val; double weigh; };

bool CompPairValues(const PairValues& Pair1, const PairValues& Pair2) {
  return (Pair1.Val < Pair2.Val);
}

// Compute the weighted median of the values Vvalues with weights Vweigth
double Calc_Median_Weighted(vector<double> Vvalues, vector<double> Vweigth, bool PrintInside) {
  unsigned long NbPair = Vvalues.size();
  if (NbPair == 0) { cerr << "Error in Calc_Median_Weighted: Vvalues empty" << endl; exit(-1); }
  // We store the values in pairs
  vector<PairValues> VPair(NbPair);
  double Half_SomWeight = 0.;
  for (unsigned long ind = 0; ind < NbPair; ind++) {
    VPair[ind].Val = Vvalues[ind];
    VPair[ind].weigh = Vweigth[ind];
    Half_SomWeight += VPair[ind].weigh;
  }
  Half_SomWeight *= 0.5;
  // We sort the values (stable_sort or sort)
  std::stable_sort(VPair.begin(), VPair.end(), CompPairValues);
  // We find the mid-weight point
  double ValPlus = 0.; double ValMinus = 0.;
  double SomWeigPlus = 0.; double SomWeigMinus = 0.;
  unsigned long IndexPlus = 0;
  for (unsigned long Index = 0; Index < NbPair; Index++) {
    SomWeigPlus += VPair[Index].weigh;
    ValPlus = VPair[Index].Val;
    IndexPlus = Index;
    if (SomWeigPlus > Half_SomWeight) break;
    SomWeigMinus = SomWeigPlus;
    ValMinus = ValPlus;
  }
  // Compute the median, interpolating inside the mid-weight bin
  double ValueMedian, ValueMedian2, Median;
  double SomWeigMinDes;
  if ((IndexPlus == 0) || (IndexPlus == (NbPair - 1)))
    Median = ValPlus;
  else {
    double Un_WeightIndPlus = 1. / VPair[IndexPlus].weigh;
    // Rising option: interpolate from the previous value
    double DiffSom = Half_SomWeight - SomWeigMinus;
    double DeltaVal = ValPlus - ValMinus;
    ValueMedian = ValMinus + DeltaVal * DiffSom * Un_WeightIndPlus;
    // Descending option: interpolate from the next value
    double ValMinDes = VPair[IndexPlus + 1].Val;
    SomWeigMinDes = Half_SomWeight * 2. - SomWeigPlus;
    double DiffSom2 = Half_SomWeight - SomWeigMinDes;
    double DeltaVal2 = ValPlus - ValMinDes;
    ValueMedian2 = ValMinDes + DeltaVal2 * DiffSom2 * Un_WeightIndPlus;
    Median = 0.5 * (ValueMedian + ValueMedian2);
  }
  if (PrintInside) { /* print everything for debugging */ }
  return Median;
}
|
2304.12498 | A Tukia-type theorem for nilpotent Lie groups and quasi-isometric
rigidity of solvable groups | In this paper we study uniform quasiconformal groups of Carnot-by-Carnot
groups. We show that they can be conjugated into conformal groups provided the
induced action on the space of distinct pairs is cocompact. Following the
approach of Eskin-Fisher-Whyte these results have applications to
quasi-isometric rigidity of certain solvable groups. | Tullia Dymarz, David Fisher, Xiangdong Xie | 2023-04-24T23:49:07Z | http://arxiv.org/abs/2304.12498v2 | # A Tukia-Type Theorem for Nilpotent Lie Groups and Quasi-isometric Rigidity of Solvable Groups
###### Abstract.
In this paper we study uniform quasiconformal groups of Carnot-by-Carnot groups. We show that they can be conjugated into conformal groups provided the induced action on the space of distinct pairs is cocompact. Following the approach of [1] these results have applications to quasi-isometric rigidity of certain solvable groups.
Key words and phrases:uniform quasisimilarity groups, quasi-isometric rigidity, nilpotent Lie groups, solvable Lie groups
###### Contents
* 1 Introduction
* 2 Outline of main theorem
* 3 Preliminary
* 3.1 Quasi-actions and uniform quasisimilarity actions
* 3.2 Nilpotent Lie algebras and nilpotent Lie groups
* 3.3 Carnot algebras and Carnot groups
* 3.4 Derivations, semi-direct products, Heintze groups, and SOL-like groups
* 3.5 Homogeneous distances on nilpotent Lie groups
* 3.6 A fiber Tukia theorem for diagonal Heintze pairs
* 4 Quasi-isometric rigidity of solvable groups
* 4.1 Quasi-isometric rigidity of lattices in the isometry group of SOL-like groups
* 4.2 Quasi-isometric classification of a class of solvable Lie groups
* 4.3 Quasi-isometric rigidity of quasi-actions on certain Heintze groups
* 5 BiLipschitz maps of diagonal Heintze pairs
* 6 Characterization of biLipschitz shear maps
* 6.1 Horizontal tautological one form on Carnot groups
* 6.2 Structure of Lie algebra when \(\alpha\) is not an integer
* 6.3 Characterization of biLipschitz shear maps
* 7 Eliminating \(s_{j}\) for \(j<\alpha\)
* 7.1 The affine action on \(E_{j}\)
* 7.2 Existence of fixed point in \(\mathcal{H}_{j}\)
* 7.3 Eliminating \(s_{j}\)
* 8 Conformal structures in the \(V_{\alpha}\) direction
* 8.1 Differential of \(F_{p,\alpha}\)
* 8.2 Measurable conformal structure in the \(V_{\alpha}\) direction
* 8.3 Completing the proof of Theorem 1.2 when \(\dim(W)\geq 2\) and \(\dim(N/W)\geq 2\)
* 9 Case \(\dim(W)=1\)
* 10 Case \(\dim(N/W)=1\)
* A Examples of Carnot-by-Carnot groups
* B Lattices in SOL-like groups
* C Compatible expressions
## 1. Introduction
In this paper we prove a Tukia-type theorem for certain nilpotent Lie groups and then use it to establish quasi-isometric rigidity for related solvable groups.
**Tukia-type Theorems.** A Tukia-type theorem asserts that a group of self maps of a metric space \((X,d)\) that distort distances by a uniformly controlled amount must actually be, up to conjugation, much more rigid. In this paper, our uniformly controlled maps will be _quasi-similarities_, i.e. bijections \(f:(X,d)\to(X,d)\) such that
\[K_{f}^{-}d(x,y)\leq d(f(x),f(y))\leq K_{f}^{+}d(x,y)\quad\text{ for all }x,y\in X \tag{1}\]
where \(K_{f}^{+}/K_{f}^{-}\leq K\) and \(K\) is a uniform constant over all group elements. A more common term for a map satisfying Equation (1) is _biLipschitz_, but then \(K_{f}^{-}\) is usually defined to be the inverse of \(K_{f}^{+}\), so uniformity in our sense is harder to describe. The desired conclusion is that, up to conjugation by a biLipschitz map, the group must act by _similarities_, that is, maps where \(K_{f}^{-}=K_{f}^{+}\). Such a theorem is always true, for example, when \(X=\mathbb{R}\) [10], but in general additional hypotheses on the size of the group are often needed.
Tukia's original theorem [16] was proved for groups of uniform quasiconformal maps of the \(n\)-sphere for \(n\geq 2\), showing that up to conjugation these groups act by conformal maps. For \(n=2\) this was also proved by Sullivan [17]. When \(n\geq 3\) the additional hypothesis that was needed was that every point be a _radial limit point_ (see Definition 8.5) under the action. We will also need such a hypothesis in our case. Tukia's theorem was first generalized in [10] to quasiconformal groups of the real Heisenberg group equipped with a Carnot-Caratheodory metric. A similar generalization holds for all Carnot groups endowed with a Carnot-Caratheodory metric (see Theorem 3.6), but in the conclusion the conformal action may be with respect to a different Carnot-Caratheodory metric. Other generalizations of Tukia's theorem for uniform quasisimilarity groups may be found in [11], where \(N=\mathbb{R}^{n}\) is equipped with homogeneous distances that are not the
standard Euclidean distances. This paper contains the first Tukia-type theorem for non-abelian non-Carnot nilpotent Lie groups.
**Definition 1.1**.: _Let \(N\) be a simply connected nilpotent Lie group with Lie algebra \(\mathfrak{n}\). Let \(D\) be a diagonalizable derivation of \(\mathfrak{n}\) with positive eigenvalues. Let \(\mathfrak{w}\) be the Lie sub-algebra generated by the eigenspace of \(D\) associated with the smallest eigenvalue. We say that \((N,D)\) is a Carnot-by-Carnot group if the following conditions are satisfied:_
1. \(\mathfrak{w}\) _is a proper ideal of_ \(\mathfrak{n}\)_;_
2. \(\mathfrak{n}/\mathfrak{w}\) _is a Carnot algebra;_
3. \(D\) _induces a derivation_ \(\bar{D}\) _on_ \(\mathfrak{n}/\mathfrak{w}\) _that is a multiple of the standard Carnot derivation on_ \(\mathfrak{n}/\mathfrak{w}\)_._
We remark that \(\mathfrak{w}\) is itself a Carnot algebra, see Lemma 3.3. Carnot-by-Carnot groups are abundant. The obvious examples are given by a direct product of two Carnot groups (where the derivation is a product of distinct multiples of the standard Carnot derivation on the Lie algebras of the two factors). More interesting examples are provided by central products and semi-direct products. Using Lie algebra cohomology one can also obtain Carnot-by-Carnot groups that are not semi-direct products. See Appendix A for more details.
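For instance, here is a minimal illustrative example of the direct product construction (with \(\mathfrak{h}_{3}\) the first Heisenberg algebra, basis \(X,Y,Z\), \([X,Y]=Z\)). Take \(\mathfrak{n}=\mathbb{R}\oplus\mathfrak{h}_{3}\) with \(T\) spanning the \(\mathbb{R}\) factor, fix \(\alpha>1\), and define

\[DT=T,\qquad DX=\alpha X,\qquad DY=\alpha Y,\qquad DZ=2\alpha Z.\]

The smallest eigenvalue of \(D\) is \(1\), with eigenspace \(\mathbb{R}T\), so \(\mathfrak{w}=\mathbb{R}T\) is a proper ideal; \(\mathfrak{n}/\mathfrak{w}\cong\mathfrak{h}_{3}\) is Carnot, and the induced derivation \(\bar{D}\) is \(\alpha\) times the standard Carnot derivation. Hence \((N,D)\) is Carnot-by-Carnot.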
Borrowing terminology from [13], we say a distance \(d\) on \(N\) is _\(D\)-homogeneous_ if it is left invariant, induces the manifold topology on \(N\), and satisfies \(d(e^{tD}x,e^{tD}y)=e^{t}d(x,y)\) for all \(x,y\in N\) and \(t\in\mathbb{R}\), where \(\{e^{tD}|t\in\mathbb{R}\}\) is the one-parameter group of automorphisms of \(N\) generated by the derivation \(D\). There are many \(D\)-homogeneous distances \(d\) on \(N\), and it is easy to see that any two \(D\)-homogeneous distances are biLipschitz equivalent. When we talk about a biLipschitz map \(F\) of \(N\) we mean that \(F\) is biLipschitz with respect to a \(D\)-homogeneous distance on \(N\). Note, however, that for two \(D\)-homogeneous distances \(d_{1}\), \(d_{2}\) on \(N\), a map \(F:(N,d_{1})\to(N,d_{1})\) being a similarity does not imply that \(F:(N,d_{2})\to(N,d_{2})\) is a similarity. A \(D\)-homogeneous distance \(d_{0}\) on \(N\) is _maximally symmetric_ if for any \(D\)-homogeneous distance \(d\) on \(N\) there is a biLipschitz automorphism \(\phi\) such that \(\phi\text{Sim}(N,d)\phi^{-1}\subset\text{Sim}(N,d_{0})\), where \(\text{Sim}(N,d)\) denotes the group of similarities of \((N,d)\). We remark that \(N\) always admits a maximally symmetric \(D\)-homogeneous distance \(d_{0}\); see Lemma 2.3, [10].
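As a toy illustration of this definition (on the abelian group \(N=\mathbb{R}^{2}\), so all brackets vanish): take \(D=\mathrm{diag}(1,\alpha)\) with \(\alpha\geq 1\), so that \(e^{tD}(x,y)=(e^{t}x,e^{t\alpha}y)\). Then

\[d\big((x,y),(x^{\prime},y^{\prime})\big)=\max\{|x-x^{\prime}|,\ |y-y^{\prime}|^{1/\alpha}\}\]

is a \(D\)-homogeneous distance: it is translation invariant, the term \(|y-y^{\prime}|^{1/\alpha}\) is subadditive since \(1/\alpha\leq 1\), and \(d(e^{tD}p,e^{tD}q)=e^{t}d(p,q)\) for all \(p,q\in\mathbb{R}^{2}\).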
**Theorem 1.2**.: _Let \((N,D)\) be a Carnot-by-Carnot group and \(\Gamma\) a group with a uniform quasisimilarity action on \(N\) such that almost every point in \(N\) is a radial limit point. When \(\dim(W)=1\), where \(W\) denotes the connected subgroup of \(N\) with Lie algebra \(\mathfrak{w}\), we further assume \(\Gamma\) is locally compact amenable. Then there exists a biLipschitz map \(F_{0}\) of \(N\) such that \(F_{0}\Gamma F_{0}^{-1}\subset\text{Sim}(N,d_{0})\), where \(d_{0}\) is a fixed maximally symmetric \(D\)-homogeneous distance on \(N\)._
We remark that a version of Theorem 1.2 for uniform quasiconformal groups is not needed since by [16], when \(N\) is Carnot-by-Carnot, each quasiconformal map of \(N\cup\{\infty\}\) fixes \(\infty\), and a uniform quasiconformal group of \(N\cup\{\infty\}\) is the same thing as a uniform quasisimilarity group of \(N\). Theorem 1.2 also holds for Carnot groups, but in this case all that is required is running the proof of Theorem 3.6 with the additional assumption that all maps in question fix \(\infty\) and are biLipschitz. In a general nilpotent group there are two sources of metric distortion, one from the brackets and one from the differing rates of contraction of the derivation \(D\). The Carnot case is exactly the one where these two distortions agree. A major difficulty in going beyond the Carnot case [10] and the abelian case [11] comes from the difficulty of separating the two sources of distortion. The case of a general \((N,D)\) involves more phenomena of this kind, not already seen in the Carnot-by-Carnot case. For products of Carnot groups the situation is easier than in the general Carnot-by-Carnot case. In fact we can easily prove a similar theorem for uniform quasisimilarity groups of \((N,D)\)
with \(N=\prod_{i}N_{i}\) where each \(N_{i}\) is Carnot with Lie algebra \(\mathfrak{n}_{i}\) and \(D|_{\mathfrak{n}_{i}}\) is an increasing multiple of a Carnot derivation (see Theorem 10.1).
The amenability condition when \(\dim(W)=1\) is needed in the proof in order to apply Day's fixed point theorem which states that affine actions of amenable groups on compact convex subsets of locally convex topological vector spaces have fixed points. We actually also use Day's fixed point theorem in the case \(\dim(W)\geq 2\) but in that case we are able to deduce the amenability of the acting group. This extra amenability condition does not affect the application of our theorem to quasi-isometric rigidity since amenability of discrete groups is preserved by quasi-isometries. Nevertheless it would be interesting to know if this condition can be removed.
Theorem 1.2 should be viewed as a first step toward a Tukia-type theorem for all Heintze pairs. The Lie sub-algebra \(\mathfrak{w}\) of \(\mathfrak{n}\) generated by the eigenspace of the smallest eigenvalue of \(D\) is an ideal in the case when \((N,D)\) is Carnot-by-Carnot, while it is not an ideal for general Heintze pairs. The assumption that \(\mathfrak{w}\) is an ideal is used in an essential way in this paper, particularly in Section 5 for the structure of biLipschitz maps and in Section 6 for the characterization of biLipschitz shear maps.
The proof of Theorem 1.2 takes up the bulk of the paper and so we outline it in more detail in the next section.
### Quasi-isometric rigidity of solvable groups
The power of a Tukia-type theorem in geometric group theory is that it can often be used to prove quasi-isometric rigidity results. This is done through the notion of quasi-action. A quasi-action of a group \(\Gamma\) on a metric space \(X\) is an assignment \(\gamma\mapsto G_{\gamma}\) where \(G_{\gamma}\) is a self quasi-isometry of \(X\) such that
1. \(G_{\gamma}\) is an \((L,A)\) quasi-isometry where \(L\) and \(A\) are uniform over all \(\gamma\in\Gamma\).
2. \(G_{\gamma\eta}\) and \(G_{\gamma}\circ G_{\eta}\) are bounded distance apart in the sup norm, uniformly over all \(\gamma,\eta\in\Gamma\).
3. \(G_{Id}\) is bounded distance from the identity map on \(X\).
A quasi-action is _cobounded_ if there is a bounded set \(S\subset X\) such that for any \(x\in X\) there is \(\gamma\in\Gamma\) such that \(G_{\gamma}(x)\in S\).
The standard example of a cobounded quasi-action arises when \(\Gamma\) is a group with a left invariant metric (for example, a finitely generated group with a word metric or a Lie group with a left invariant Riemannian metric) and \(\phi:\Gamma\to X\) is a quasi-isometry with coarse inverse \(\bar{\phi}\). Then \(\gamma\mapsto\phi\circ L_{\gamma}\circ\bar{\phi}\) defines a cobounded quasi-action of \(\Gamma\) on \(X\), where \(L_{\gamma}\) is the left translation of \(\Gamma\) by \(\gamma\).
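As a quick check of condition (2) in this example, write \(G_{\gamma}=\phi\circ L_{\gamma}\circ\bar{\phi}\). Then
\[G_{\gamma}\circ G_{\eta}=\phi\circ L_{\gamma}\circ(\bar{\phi}\circ\phi)\circ L_{\eta}\circ\bar{\phi},\]
and since \(\bar{\phi}\circ\phi\) is within uniformly bounded distance of the identity, \(L_{\gamma}\) is an isometry of \(\Gamma\), and \(\phi\) is an \((L,A)\)-quasi-isometry, the map \(G_{\gamma}\circ G_{\eta}\) is within uniformly bounded distance of \(\phi\circ L_{\gamma\eta}\circ\bar{\phi}=G_{\gamma\eta}\).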
In the case that \(X\) is Gromov hyperbolic, quasi-isometries of \(X\) correspond to quasiconformal maps (quasisymmetric maps, to be more precise) of the ideal boundary \(\partial X\), and quasi-actions on \(X\) correspond to uniform quasiconformal actions on \(\partial X\). Furthermore, with suitable choices of metrics on \(X\) and \(\partial X\), isometries of \(X\) often correspond to conformal maps of \(\partial X\) or similarities of a one-point complement of \(\partial X\). As a result, a group \(\Gamma\) quasi-acting on a Gromov hyperbolic space \(X\) induces a uniform quasiconformal action on \(\partial X\); if a Tukia-type theorem is available, then this uniform quasiconformal action is conjugate by a quasiconformal map of \(\partial X\) to a conformal action of \(\Gamma\) on \(\partial X\), which implies the original quasi-action is conjugate by a quasi-isometry to an isometric action on \(X\).
We now apply the outline of the preceding paragraph in the case that \(X\) is a so-called _Heintze group_: Given a simply connected nilpotent Lie group \(N\) with Lie algebra \(\mathfrak{n}\) and a derivation \(D\) of \(\mathfrak{n}\), one can form the semi-direct product \(S=N\rtimes_{D}\mathbb{R}\), where the action of \(\mathbb{R}\) on \(N\) is by the automorphisms \(e^{tD}\) of \(N\) generated by the derivation \(D\). Such a group \(S\) is called a Heintze group if the eigenvalues of \(D\) all have positive real parts. For example when \((N,D)\) is Carnot-by-Carnot
the associated extension \(S\) is a Heintze group. By [10], Heintze groups are Gromov hyperbolic. By applying our Tukia-type theorem we get the following (a similar statement holds when \(N\) is a Carnot group, see Corollary 1.2 in [11]):
**Theorem 1.3**.: _Let \((N,D)\) be Carnot-by-Carnot and \(S=N\rtimes_{D}\mathbb{R}\) the associated Heintze group. Then there is a left invariant Riemannian metric \(g_{0}\) on \(S\) with the following property. If \(\Gamma\) is a group that quasi-acts coboundedly on \(S\) (where, when \(\dim(W)=1\), we further assume \(\Gamma\) is locally compact amenable), then this quasi-action is quasi-conjugate to an isometric action of \(\Gamma\) on \((S,g_{0})\)._
More generally, Tukia-type theorems have been used in the proofs of quasi-isometric rigidity of certain solvable Lie groups that are not themselves negatively curved but instead are foliated by negatively curved spaces. Most notably Eskin-Fisher-Whyte's quasi-isometric rigidity theorem for SOL [1] uses the fact that SOL is foliated by hyperbolic planes \(\mathbb{H}^{2}\) whose visual boundary can be identified with \(S^{1}\simeq\mathbb{R}\cup\{\infty\}\). Recall that \(SOL\simeq\mathbb{R}^{2}\rtimes\mathbb{R}\) where the action of \(\mathbb{R}\) on \(\mathbb{R}^{2}\) is given by \(e^{tA}\) where \(A\) is the diagonal matrix with \(1\) and \(-1\) on the diagonal and a left invariant metric can be given by \(ds^{2}=e^{-2t}dx^{2}+e^{2t}dy^{2}+dt^{2}\). There are two foliations by hyperbolic planes, given by fixing either the \(x\) or the \(y\) coordinate. As a consequence of Eskin-Fisher-Whyte's coarse differentiation theorem on the structure of quasi-isometries of SOL, a quasi-action on SOL induces two quasi-actions on the hyperbolic plane (and hence two quasisimilarity actions on \(\mathbb{R}\)), one for each foliation.
We call a group _SOL-like_ if it has the form \((N_{1}\times N_{2})\rtimes\mathbb{R}\), where \(N_{i}\) is a simply connected nilpotent Lie group with Lie algebra \(\mathfrak{n}_{i}\) and the action of \(\mathbb{R}\) on \(N_{1}\times N_{2}\) is given by \(e^{tD}\) with \(D\) the derivation of \(\mathfrak{n}_{1}\times\mathfrak{n}_{2}\) given by \(D:=(D_{1},-D_{2})\) and \(D_{i}\) a derivation of \(\mathfrak{n}_{i}\) whose eigenvalues have positive real part. The negatively curved spaces foliating \((N_{1}\times N_{2})\rtimes\mathbb{R}\) are simply \(N_{i}\rtimes\mathbb{R}\) where the action of \(\mathbb{R}\) is defined by the derivation \(D_{i}\). The visual boundary of \(N_{i}\rtimes\mathbb{R}\) is simply \(N_{i}\cup\{\infty\}\) but in the context of quasi-isometries of SOL-like groups the point \(\infty\) is always preserved. The structure of quasi-isometries of SOL-like groups follows from Eskin-Fisher-Whyte's coarse differentiation techniques [1, 1] (see also [14] for a detailed treatment of the "non-unimodular" case in a more general setting). A consequence of Eskin-Fisher-Whyte's argument is that any quasi-action of a group \(\Gamma\) on \((N_{1}\times N_{2})\rtimes\mathbb{R}\) induces a quasi-action of \(\Gamma\) on \(N_{i}\rtimes\mathbb{R}\) for \(i=1,2\). A Tukia-type theorem (if available) can then be applied to get isometric actions of \(\Gamma\) on \(N_{1}\rtimes\mathbb{R}\) and \(N_{2}\rtimes\mathbb{R}\). One then tries to show that the two isometric actions combine to give an isometric action of \(\Gamma\) on \((N_{1}\times N_{2})\rtimes\mathbb{R}\). Using this strategy and our Tukia-type theorem we get:
**Theorem 1.4**.: _Let \(S=(N_{1}\times N_{2})\rtimes\mathbb{R}\) be a SOL-like group, where each \((N_{i},D_{i})\) is either Carnot or Carnot-by-Carnot. In the case \((N_{i},D_{i})\) is Carnot-by-Carnot with \(\dim(\mathfrak{w}_{i})=1\) for at least one \(i\), where \(\mathfrak{w}_{i}\) is the Lie sub-algebra of \(\mathfrak{n}_{i}\) generated by the eigenspace of the smallest eigenvalue of \(D_{i}\), we further assume that \(\Gamma\) is locally compact amenable or that the isometry group \(\text{Isom}(S,g)\) admits a uniform lattice for some left invariant Riemannian metric \(g\) on \(S\). There is a left invariant Riemannian metric \(g_{0}\) on \(S\) with the following property. Suppose a finitely generated group \(\Gamma\) is quasi-isometric to \(S\). Then \(\Gamma\) is, up to finite index and finite kernel, a lattice in \(\text{Isom}(S,g_{0})\)._
Note that any two finitely generated groups that differ by finite index or finite kernel are automatically quasi-isometric so it is impossible to drop these conditions. The assumption that \(\text{Isom}(S,g)\) admits a uniform lattice implies that \(\Gamma\) is amenable and so we can apply Theorem 1.2: by Theorem 3 in [12], a uniform lattice \(\Gamma_{0}\) in \(\text{Isom}(S,g)\) is virtually solvable; on the other hand, \(\Gamma\) is quasi-isometric to \(S\) and so is also quasi-isometric to \(\Gamma_{0}\); since amenability is a quasi-isometry invariant among finitely generated groups it follows that \(\Gamma\) is amenable. In Appendix B we give examples of SOL-like groups \(S\) that admit lattices (in this case \(\text{Isom}(S,g)\) admit uniform lattices for any left invariant Riemannian metric \(g\)). The proof of Theorem 1.4 can be found in Section 4.
Theorem 1.4 is related to Conjecture 1.2.2(2) in [10]. Theorem 1.4 also has consequences for SOL-like groups whose isometry groups do not admit lattices.
**Conjecture 1.5**.: _Let \(S\) be a SOL-like group. Suppose \(\text{Isom}(S,g)\) does not admit a uniform lattice for any left invariant Riemannian metric \(g\) on \(S\). Then \(S\) is not quasi-isometric to any finitely generated group._
By Theorem 1.4 this conjecture is true when both \(N_{i}\) are either Carnot or Carnot-by-Carnot with \(\dim(W_{i})\geq 2\). In case \(\dim(W_{i})=1\) we can only conclude that \(S\) is not quasi-isometric to any amenable finitely generated group.
**Quasi-isometric classification of solvable Lie groups.** An open problem concerning the large scale geometry of Lie groups is the quasi-isometry classification of simply connected solvable Lie groups. A simply connected solvable Lie group \(S\) with Lie algebra \(\mathfrak{s}\) is of _real type_ (also called completely solvable, or split-solvable) if all eigenvalues of \(\text{ad}(X)\) are real for all \(X\in\mathfrak{s}\). By Theorem 4.21 in [11] for any simply connected solvable Lie group \(S\), there is a unique simply connected solvable Lie group \(S_{\mathbb{R}}\) of real type, so called _real shadow_ of \(S\), such that \(S\) and \(S_{\mathbb{R}}\) can be made isometric to each other. Recall that two Lie groups \(S_{1}\) and \(S_{2}\) can be made isometric to each other if there are left invariant Riemannian metrics \(g_{i}\) on \(S_{i}\) (\(i=1,2\)) such that \((S_{1},g_{1})\) and \((S_{2},g_{2})\) are isometric. Notice that this result also implies two simply connected solvable Lie groups \(S_{1}\) and \(S_{2}\) of real type can be made isometric if and only if they are isomorphic. It follows that to classify simply connected solvable Lie groups up to quasi-isometry, it suffices to restrict attention to the class of simply connected solvable Lie groups of real type. Here is a conjecture by Y. Cornulier.
**Conjecture 1.6**.: _([1], Conjecture 19.113) Two simply connected solvable Lie groups \(S_{1}\) and \(S_{2}\) of real type are quasi-isometric if and only if they are isomorphic._
The following two results confirm the above conjecture for two classes of solvable Lie groups.
**Theorem 1.7**.: _Let \(\mathcal{S}\) be the class of SOL-like groups \(S=(N_{1}\times N_{2})\rtimes\mathbb{R}\) where \((N_{i},D_{i})\) (\(i=1,2\)) is either Carnot or Carnot-by-Carnot. Then two members \(S_{1},S_{2}\in\mathcal{S}\) are quasi-isometric if and only if they are isomorphic._
Theorem 1.7 coincides with Theorem C in [12] in the case when \(S\) is non-unimodular and each \((N_{i},D_{i})\) (\(i=1,2\)) is Carnot. Similarly, Theorem 1.8 follows from Pansu's differentiability theorem in the case when \((N,D)\) is Carnot.
**Theorem 1.8**.: _Let \(\mathcal{S}\) be the class of simply connected solvable Lie groups of the form \(S=N\rtimes_{D}\mathbb{R}\), where \((N,D)\) is Carnot or Carnot-by-Carnot. Then two members \(S_{1},S_{2}\in\mathcal{S}\) are quasi-isometric if and only if they are isomorphic._
As indicated above, a quasi-isometry between \(S_{1},S_{2}\) induces a quasi-action of \(S_{1}\) on \(S_{2}\). Our Tukia-type theorem and Eskin-Fisher-Whyte's coarse differentiation method then imply this quasi-action can be conjugated into an isometric action of \(S_{1}\) on \(S_{2}\). From here it is not too difficult to show that \(S_{1}\) and \(S_{2}\) are isomorphic.
**Structure of the paper:** We collect the preliminaries in Section 3. The proofs of the quasi-isometric rigidity theorems are covered in Section 4. The proof of Theorem 1.2 spans from Section 5 to Section 10, see Section 2 for an outline of this proof. Appendix A provides examples of Carnot-by-Carnot groups. In Appendix B we review some examples of SOL-like groups that admit lattices. The content of Appendix C is the proof of Lemma 5.4.
**Acknowledgments.** T. Dymarz was supported by NSF career Grant 1552234. D. Fisher was supported by NSF grants DMS-1906107 and DMS-2246556. X. Xie was supported by Simons Foundation grant #315130. X. Xie would like to thank the department of mathematics, University of Wisconsin at Madison for financial support during his visit there in February 2020. We would also like to thank Dave Witte Morris for useful conversations and thank Tom Ferragut and Gabriel Pallier for thoughtful comments on an earlier version.
## 2. Outline of the main theorem
In this section we outline the proof of Theorem 1.2.
Let \(\Gamma\) be a group acting on a Carnot-by-Carnot group \((N,D)\) by uniform quasisimilarities with respect to a \(D\)-homogeneous distance that satisfies the conditions of Theorem 1.2. We use the notation \(W\), \(\mathfrak{w}\) and \(\bar{D}\) from Definition 1.1 for a Carnot-by-Carnot group. Without loss of generality we can assume that \(D\) scales by \(1\) on the first layer of \(\mathfrak{n}\) (equivalently the first layer of \(\mathfrak{w}\)) and by \(\alpha\) on the first layer of \(\mathfrak{n}/\mathfrak{w}\). For example, if \(N\simeq\mathbb{R}^{n}\) then \(D\) is a diagonal matrix with diagonal entries \(1\) and \(\alpha\) (see Section 3 for details).
A biLipschitz map \(f\) of \(N\) is a _fiber similarity map_ if there exist Carnot-Caratheodory metrics \(\bar{d}_{CC}\) and \(d_{CC}\) on \(N/W\) and \(W\) respectively such that
1. \(f\) induces a similarity of \((N/W,\bar{d}_{CC})\),
2. \(f\) acts by similarities along cosets of \(W\), i.e. for each \(g\in N\) the map \(L_{f(g)^{-1}}\circ f\circ L_{g}:(W,d_{CC})\to(W,d_{CC})\) is a similarity.
We say \(\Gamma\) is a _fiber similarity group_ if there exist Carnot-Caratheodory metrics \(\bar{d}_{CC}\) and \(d_{CC}\) on \(N/W\) and \(W\) respectively such that every element \(\gamma\in\Gamma\) is a fiber similarity map with respect to \(\bar{d}_{CC}\) and \(d_{CC}\). When \(\dim(N/W)\geq 2\) and \(\dim(W)\geq 2\), Theorem 3.7 implies that \(\Gamma\) can be conjugated into a fiber similarity group.
**Individual biLipschitz maps (Section 5).** In Section 5 we study the structure of individual fiber similarity maps of \(N\). In general fiber similarity maps are not similarities of \(N\) since the similarity induced on \(N/W\) may not be compatible with the similarities on the cosets of \(W\) and there may be a shear along the cosets of \(W\). When \(N\) is abelian, one can show that any fiber similarity map is the composition of a similarity of \(N\) and a shear map \((w,h)\mapsto(w+s(h),h)\) along cosets of \(W\) (these are called "almost similarities" in [4]). The same is true if \(N\) is a direct product of \(W\) and \(H\): up to left translation, any such map has the form \((w,h)\mapsto(\phi(w)s(h),B(h))\) where \(\phi:W\to W\) and \(B:H\to H\) are automorphisms, \((w,h)\mapsto(\phi(w),B(h))\) is an automorphism of \(N\) (and a similarity with respect to some \(D\)-homogeneous distance) and \(s:H\to W\) is the shear amount.
The general case is more involved. To understand the interaction of \(W\) and \(N/W\) we decompose the Lie algebras \(\mathfrak{n}\) of \(N\) and \(\mathfrak{w}\) of \(W\) into eigenspaces
\[\mathfrak{n}=V_{\lambda_{1}}\oplus\cdots\oplus V_{\lambda_{n}}\text{ and } \mathfrak{w}=W_{1}\oplus\cdots\oplus W_{m}.\]
Here \(V_{\lambda_{j}}\) is the eigenspace associated to the eigenvalue \(\lambda_{j}\) of \(D\) and \(W_{i}\) is the eigenspace of \(D|_{\mathfrak{w}}\) associated to eigenvalue \(i\). Recall that without loss of generality we have assumed that \(\lambda_{1}=1\) so that, since \(\mathfrak{w}\) is Carnot with respect to \(D|_{\mathfrak{w}}\), its eigenvalues are all positive integers. The eigenvalue \(\lambda_{j}\) will either be an integer or a multiple of \(\alpha\) (or both) where \(\alpha\) is the smallest eigenvalue of the induced derivation \(\bar{D}\) on the Lie algebra of \(N/W\).
If \(k\) is not an integral multiple of \(\alpha\) then \(V_{k}=W_{k}\); otherwise \(k=j\alpha\) and we write
\[V_{k}=W_{j\alpha}\oplus H_{j}\]
for some complement \(H_{j}\subset V_{k}\) of \(W_{j\alpha}\) (see Section 3 for details). Then we have a bijection from \(H=H_{1}\oplus\cdots\oplus H_{s}\) to \(\mathfrak{n}/\mathfrak{w}\) so that \(H\) is a transversal for \(\mathfrak{w}\) in \(\mathfrak{n}\). Note that a fiber similarity map does not necessarily preserve \(H\). In Lemma 5.4 we show that any such map must have a _compatible expression_. For a rigorous definition of compatible expression see Definition 5.3 but morally speaking it still involves an automorphism \(\phi:W\to W\) and a shear map on \(N\) determined by a map \(s:N/W\to W\) but now \(B:H\to\mathfrak{n}\) is only a linear map from the transversal \(H\) into \(\mathfrak{n}\) that satisfies the following compatibility condition with \(\phi\): for any \(w\in\mathfrak{w}\) and \(h\in H\), \([d\phi(w),Bh]=d\phi[w,h]\). In particular \(w+h\mapsto d\phi(w)+Bh\) does not necessarily define an automorphism of \(\mathfrak{n}\). In addition both \(B\) and the shear \(s\) depend on the choice of transversal \(H\).
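To fix ideas, here is a small worked example (the basis letters \(X,Y,Z,T\) are ad hoc). Let \(\mathfrak{n}\) be the direct product of the Heisenberg algebra \(\mathrm{span}(X,Y,Z)\), \([X,Y]=Z\), with \(\mathbb{R}T\), and let \(D\) act by \(1\) on \(X,Y\) and by \(2\) on \(Z\) and \(T\). Then \(\mathfrak{w}\) is the Heisenberg factor, \(\mathfrak{n}/\mathfrak{w}\simeq\mathbb{R}\) with \(\alpha=2\), and the decomposition above reads
\[V_{1}=W_{1}=\mathrm{span}(X,Y),\qquad V_{2}=W_{2}\oplus H_{1}=\mathrm{span}(Z)\oplus\mathrm{span}(T),\]
so \(H=H_{1}=\mathrm{span}(T)\) is a transversal for \(\mathfrak{w}\) in \(\mathfrak{n}\).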
**Case when \(\alpha\) is not an integer.** Note that when \(\alpha\) is not an integer then \(V_{\alpha}=H_{1}\) and \(\mathfrak{n}\) is a central product (see Section 6.2). This simplifies many of our arguments. In fact a substantial portion of our later analysis takes place on \(V_{\alpha}=W_{\alpha}\oplus H_{1}\) when \(W_{\alpha}\neq 0\).
**Shear maps (Section 6).** In Section 6 we characterize maps \(s:N/W\to W\) for which the corresponding shear maps \(g\mapsto g\cdot s(gW)\) are biLipschitz maps of \(N\). BiLipschitz shear maps are used to conjugate a uniform quasisimilarity group to eliminate the \(W_{j}\) shear components for \(j<\alpha\) in the compatible expression of group elements. As above we work with Lie algebra coordinates, we identify \(W\) with \(\mathfrak{w}\) and \(N/W\) with \(\mathfrak{n}/\mathfrak{w}\) via the exponential map and write \(s\) as \(s:\mathfrak{n}/\mathfrak{w}\to\mathfrak{w}\). First we show that all such maps \(s\) must have image in \(Z(\mathfrak{w})\), the center of \(\mathfrak{w}\). In the abelian case any \(\frac{1}{\alpha}\)-Holder map \(\mathfrak{n}/\mathfrak{w}\to Z(\mathfrak{w})\) induces a biLipschitz map of \(N\). When \(\alpha\) is not an integer there is also a simple description of such admissible maps \(s\). In general, the description is more complex. To describe \(s:\mathfrak{n}/\mathfrak{w}\to Z(\mathfrak{w})\) we first decompose \(s=(s_{i})\) where \(s_{i}:H\to Z_{i}:=W_{i}\cap Z(\mathfrak{w})\). Then we show that \(s\) is admissible as long as each \(s_{i}\) lies in a function space \(\mathcal{H}_{i}\) that is recursively defined by the vanishing of certain integrals along horizontal closed curves. (See Section 6.3 and especially Propositions 6.4 and 6.7 for details).
**Day's theorem (Section 7).** Now we are ready to look at the entire uniform group \(\Gamma\), acting by fiber similarity maps on \(N\). The goal in this step is to show that after conjugation the shear components \(s_{j}\) for \(j<\alpha\) of each group element must be zero. This is where Day's theorem comes into play. Recall that Day's theorem shows that any amenable group that has an affine action on a compact convex subset of a locally convex topological vector space has a fixed point. In previous cases, such as those covered in [4], Day's theorem applies directly to the action of \(\Gamma\) on the Banach space of \(\frac{1}{\alpha}\)-Holder functions. Using the fixed point obtained, one can use the corresponding biLipschitz shear map to conjugate the group action to an action where all shear components are trivial. In this paper we need to show that the fixed point lies in \(\mathcal{H}_{j}\), the subspace of Holder continuous maps defined above. We do this in order to construct a conjugating biLipschitz shear map, see the preceding paragraph.
**Conformal structure in the \(V_{\alpha}\) direction (Section 8).** At this point, the shear components \(s_{j}\) for \(j<\alpha\) of each group element are zero. In Section 8 we conjugate one more time so that \(s_{\alpha}\) is a homomorphism. This part requires substantially more work. It involves first applying a modified version of a foliated Tukia-type argument (as in [4] and [5]) that uses a measurable conformal structure defined solely in the \(V_{\alpha}\) direction. Some care is needed to define this structure since our maps do not necessarily preserve \(V_{\alpha}\). Once we know that \(s_{\alpha}\) is a homomorphism it is not hard to show that our maps are affine maps of \(N\) and hence similarities with respect to some \(D\)-homogeneous distance on \(N\). This section is only needed when \(\alpha\) is an integer.
**Dimension one (Sections 9 and 10).** Theorem 3.7 only applies when \(\dim(N/W)\) and \(\dim(W)\) are both at least two. In the case where \(\dim(W)=1\) we are not able to define a foliated conformal structure along \(W\) but instead we use the action of the group on the space of derivative maps in the \(W\) direction. Once again we require Day's theorem which we apply to this action. We are able to construct a conjugating map that conjugates our group action into one by fiber similarity maps. If instead \(\dim(N/W)=1\) then there are two cases. If \(N\) is a direct product we can use Farb-Mosher [10] to conjugate the action on \(N/W\). Otherwise we use the algebraic structure to show that the group action was already by fiber similarity maps.
## 3. Preliminaries
### Quasi-actions and uniform quasisimilarity actions
Let \(L\geq 1\), \(A\geq 0\). A (not necessarily continuous) map \(f:X\to Y\) between two metric spaces is an \((L,A)\)-_quasi-isometry_ if:
(1) \(d(x_{1},x_{2})/L-A\leq d(f(x_{1}),f(x_{2}))\leq L\,d(x_{1},x_{2})+A\) for all \(x_{1},x_{2}\in X\);
(2) for any \(y\in Y\), there is some \(x\in X\) with \(d(f(x),y)\leq A\).
An \((L,0)\)-quasi-isometry is also called an \(L\)-biLipschitz map.
One of the main motivations for studying quasi-isometries comes from geometric group theory. A finitely generated group \(\Gamma\) with a symmetric finite generating set \(S\) has an associated left invariant _word metric_ given by \(d_{S}(g,h)=\|g^{-1}h\|_{S}\) where \(\|a\|_{S}\) denotes the minimal \(k\in\mathbb{Z}_{+}\) such that \(a=s_{1}s_{2}\cdots s_{k}\) with \(s_{i}\in S\) for \(i=1,\ldots,k\). It is a simple exercise to show that for any two finite generating sets \(S,S^{\prime}\) we have that \((\Gamma,d_{S})\) and \((\Gamma,d_{S^{\prime}})\) are quasi-isometric.
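For instance (a toy case of this exercise), take \(\Gamma=\mathbb{Z}\) with \(S=\{\pm 1\}\) and \(S^{\prime}=\{\pm 1,\pm 2\}\). Then \(\|n\|_{S}=|n|\) while \(\|n\|_{S^{\prime}}=\lceil|n|/2\rceil\), so
\[d_{S^{\prime}}(m,n)\leq d_{S}(m,n)\leq 2\,d_{S^{\prime}}(m,n),\]
and the identity map is a \(2\)-biLipschitz map between \((\mathbb{Z},d_{S})\) and \((\mathbb{Z},d_{S^{\prime}})\).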
The definition of a quasi-action can be found in the Introduction.
Let \(\Lambda\geq 1\) and \(C>0\). A bijection \(F:X\to Y\) between two metric spaces is called a \((\Lambda,C)\)-_quasisimilarity_ if
\[\frac{C}{\Lambda}\,d(x,y)\leq d(F(x),F(y))\leq C\,\Lambda\,d(x,y)\]
for all \(x,y\in X\). When \(\Lambda=1\), we say \(F\) is a similarity. It is clear that a map is a quasisimilarity if and only if it is a biLipschitz map. The point of using the notion of quasisimilarity is that sometimes there is control on \(\Lambda\) but not on \(C\). An action \(\Phi:\Gamma\to\operatorname{Homeo}(X)\) of a group \(\Gamma\) on a metric space \(X\) is a _uniform quasisimilarity action_ if there is some \(\Lambda\geq 1\) with the following property: for every \(\gamma\in\Gamma\) there is a constant \(C_{\gamma}>0\) such that \(\Phi(\gamma)\) is a \((\Lambda,C_{\gamma})\)-quasisimilarity.
Let \(X\) be a Gromov hyperbolic space, and \(\Gamma\) a group quasi-acting on \(X\). If there is a point \(\infty\in\partial X\) such that for any \(\gamma\in\Gamma\), the homeomorphism of \(\partial X\) induced by \(G_{\gamma}\) fixes \(\infty\) and is biLipschitz with respect to a fixed visual metric on \(\partial X\), then the quasi-action of \(\Gamma\) on \(X\) induces a uniform quasisimilarity action of \(\Gamma\) on \((\partial X\backslash\{\infty\},d)\) for some parabolic visual metric \(d\) based at \(\infty\). Conversely, if \(X\) is a visual Gromov hyperbolic space and \(\Gamma\) admits a uniform quasisimilarity action on \((\partial X\backslash\{\infty\},d)\) for some parabolic visual metric \(d\) based at \(\infty\), then this action extends to a quasi-action of \(\Gamma\) on \(X\). More generally, there is a notion of uniform quasi-Mobius actions, and quasi-actions on visual Gromov hyperbolic spaces correspond to uniform quasi-Mobius actions on the visual boundary. Since we do not need the notion of uniform quasi-Mobius actions, we shall not recall the definition here.
### Nilpotent Lie algebras and nilpotent Lie groups
Let \(\mathfrak{n}\) be a Lie algebra. The lower central series is defined recursively as follows: \(\mathfrak{n}^{(1)}=\mathfrak{n}\), \(\mathfrak{n}^{(k)}=[\mathfrak{n},\mathfrak{n}^{(k-1)}]\). The Lie algebra \(\mathfrak{n}\) is called nilpotent if \(\mathfrak{n}^{(t+1)}=0\) for some \(t\geq 1\). The smallest such \(t\) is called the nilpotency of \(\mathfrak{n}\). A connected Lie group is nilpotent if and only if its Lie algebra is nilpotent.
Let \(N\) be a simply connected nilpotent Lie group with Lie algebra \(\mathfrak{n}\). Then the exponential map \(\exp:\mathfrak{n}\to N\) is a diffeomorphism. Under this identification the Lebesgue measure on \(\mathfrak{n}\) is a Haar measure on \(N\) ([10], page 19). One can pull back the group operation from \(N\) to get a group structure on \(\mathfrak{n}\). This group structure on \(\mathfrak{n}\) can be described by the Baker-Campbell-Hausdorff formula (BCH formula for short), which expresses the product \(X*Y\) (\(X,Y\in\mathfrak{n}\)) in terms of the iterated Lie brackets of \(X\) and \(Y\). The group operation in \(N\) will be denoted by \(\cdot\). The pull-back group operation \(*\) on \(\mathfrak{n}\) is defined as follows. For \(X,Y\in\mathfrak{n}\), define
\[X*Y=\exp^{-1}(\exp X\cdot\exp Y).\]
Then the BCH formula ([10], page 11) says
\[X*Y=X+Y+\frac{1}{2}[X,Y]+\frac{1}{12}[X,[X,Y]]-\frac{1}{12}[Y,[X,Y]]+\cdots.\]
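When \(\mathfrak{n}\) is \(2\)-step nilpotent all the higher brackets vanish and the series terminates at \(X*Y=X+Y+\frac{1}{2}[X,Y]\). For instance, in the Heisenberg algebra with basis \(X,Y,Z\) and \([X,Y]=Z\) (notation for this example only), the product in exponential coordinates is
\[(x_{1},y_{1},z_{1})*(x_{2},y_{2},z_{2})=\big(x_{1}+x_{2},\,y_{1}+y_{2},\,z_{1}+z_{2}+\tfrac{1}{2}(x_{1}y_{2}-x_{2}y_{1})\big).\]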
We also recall the following formula ([1], pages 127, 128):
\[Y*X*(-Y)=\sum_{k=0}^{\infty}\frac{1}{k!}(\operatorname{ad}Y)^{k}(X)=X+[Y,X]+ \cdots+\frac{1}{k!}(\operatorname{ad}Y)^{k}(X)+\cdots. \tag{2}\]
Recall that if \(\phi:G_{1}\to G_{2}\) is a Lie group homomorphism and \(d\phi:\mathfrak{g}_{1}\to\mathfrak{g}_{2}\) denotes the associated Lie algebra homomorphism, then \(\phi\circ\exp=\exp\circ d\phi\).
**Standing Assumption.** In this paper (particularly in Sections 6-8 and Appendix C) we shall often identify a simply connected nilpotent Lie group \(N\) with its Lie algebra \(\mathfrak{n}\) via the exponential map. With this identification, the inverse (with respect to \(*\)) of an element \(X\in\mathfrak{n}\) is \(-X\): \(X^{-1}=-X\). For this reason we denote by \(0\) the identity element of a Lie group.
The following statement is known (see for example [11], p.980). We include it here for completeness.
**Lemma 3.1**.: _Let \(\phi,\tilde{\phi}:N_{1}\to N_{2}\) be Lie group homomorphisms between two simply connected nilpotent Lie groups. Let \(N_{2}\) be equipped with a left invariant metric \(d\) such that \(d\) induces the manifold topology and closed balls are compact. Suppose \(C:=\sup_{x\in N_{1}}\{d(\phi(x),\tilde{\phi}(x))\}<\infty\). Then \(\phi=\tilde{\phi}\)._
Proof.: It suffices to show the corresponding Lie algebra homomorphisms agree. Suppose to the contrary that there is some \(X\in\mathfrak{n}_{1}\) such that \(d\phi(X)\neq d\tilde{\phi}(X)\). Let \(\{\mathfrak{n}_{2}^{(j)}\}_{j}\) be the lower central series of \(\mathfrak{n}_{2}\) and \(\pi_{j}:\mathfrak{n}_{2}\to\mathfrak{n}_{2}/\mathfrak{n}_{2}^{(j)}\) the quotient homomorphism. Let \(k\geq 2\) be the integer satisfying \(d\phi(X)-d\tilde{\phi}(X)\in\mathfrak{n}_{2}^{(k-1)}\backslash\mathfrak{n}_{2}^{(k)}\). Set \(Y=\pi_{k}(d\phi(X))\), \(\tilde{Y}=\pi_{k}(d\tilde{\phi}(X))\) and \(\mathfrak{g}=\mathfrak{n}_{2}/\mathfrak{n}_{2}^{(k)}\). Then \(Y=\tilde{Y}+Z\) for some \(Z\neq 0\) in the center of \(\mathfrak{g}\). The assumption implies \(\bar{d}(tY,t\tilde{Y})\leq C\) for all \(t\in\mathbb{R}\), where \(\bar{d}\) is the distance on \(\mathfrak{g}\) induced by \(d\): \(\bar{d}(\bar{a},\bar{b})=\inf\{d(a,b)|a\in\pi_{k}^{-1}(\bar{a}),b\in\pi_{k}^{-1}(\bar{b})\}\). Here we identified \(N_{2}\) with \(\mathfrak{n}_{2}\) and so \(d\) is a distance on \(\mathfrak{n}_{2}\). On the other hand, the BCH formula gives \((-t\tilde{Y})*(tY)=tZ\) and so \(C\geq\bar{d}(tY,t\tilde{Y})=\bar{d}(0,(-t\tilde{Y})*(tY))=\bar{d}(0,tZ)\). Pick \(z\in\pi_{k}^{-1}(Z)\). Then there is some \(a_{t}\in\mathfrak{n}_{2}^{(k)}\) satisfying \(d(a_{t},tz)\leq C\). We can write \(tz=a_{t}*b_{t}\) for some \(b_{t}\in B:=\{x\in\mathfrak{n}_{2}|d(0,x)\leq C\}\). Since \(\mathfrak{n}_{2}^{(k)}\) is an ideal of \(\mathfrak{n}_{2}\), by the BCH formula we can write \(a_{t}*b_{t}=c_{t}+b_{t}\) for some \(c_{t}\in\mathfrak{n}_{2}^{(k)}\). We have \(tz=c_{t}+b_{t}\) and so \(z=\frac{c_{t}}{t}+\frac{b_{t}}{t}\). Since the ball \(B\) is compact, we have \(\frac{b_{t}}{t}\to 0\) as \(t\to\infty\). This implies \(\frac{c_{t}}{t}\to z\) as \(t\to\infty\). As \(\frac{c_{t}}{t}\in\mathfrak{n}_{2}^{(k)}\) and \(\mathfrak{n}_{2}^{(k)}\) is closed in \(\mathfrak{n}_{2}\), we conclude that \(z\in\mathfrak{n}_{2}^{(k)}\), contradicting the fact that \(Z\neq 0\).
### Carnot algebras and Carnot groups
Carnot algebras form a particular class of nilpotent Lie algebras. A Carnot Lie algebra is a finite dimensional Lie algebra \(\mathfrak{n}\) together with a direct sum decomposition \(\mathfrak{n}=V_{1}\oplus V_{2}\oplus\cdots\oplus V_{r}\) of non-trivial vector subspaces such that \([V_{1},V_{i}]=V_{i+1}\) for all \(1\leq i\leq r\), where we set \(V_{r+1}=\{0\}\). For every Carnot algebra \(\mathfrak{n}=V_{1}\oplus V_{2}\oplus\cdots\oplus V_{r}\), the _Carnot derivation_\(D:\mathfrak{n}\to\mathfrak{n}\) is given by \(D(x)=ix\) for \(x\in V_{i}\). The automorphisms of \(\mathfrak{n}\) generated by the Carnot derivation are called _Carnot dilations_ and are given by \(\delta_{t}:\mathfrak{n}\to\mathfrak{n}\), \(t\in(0,+\infty)\), where \(\delta_{t}(x)=t^{i}x\) for \(x\in V_{i}\). Let \(\mathfrak{n}=V_{1}\oplus V_{2}\oplus\cdots\oplus V_{r}\) and \(\mathfrak{n}^{\prime}=V_{1}^{\prime}\oplus V_{2}^{\prime}\oplus\cdots\oplus V_ {s}^{\prime}\) be two Carnot algebras. A Lie algebra homomorphism \(\phi:\mathfrak{n}\to\mathfrak{n}^{\prime}\) is graded if \(\phi(V_{i})\subset V_{i}^{\prime}\) for all \(1\leq i\leq r\).
A simply connected nilpotent Lie group is a Carnot group if its Lie algebra is a Carnot algebra. Let \(N\) be a Carnot group with Lie algebra \(\mathfrak{n}=V_{1}\oplus\cdots\oplus V_{r}\). The subspace \(V_{1}\) defines a left invariant distribution \(HN\subset TN\) on \(N\). An absolutely continuous curve \(\gamma\) in \(N\) whose velocity vector \(\gamma^{\prime}(t)\) is contained in \(H_{\gamma(t)}N\) for a.e. \(t\) is called a horizontal curve. By Chow's theorem ([BR, Theorem 2.4]), any two points of \(N\) can be connected by horizontal curves. Fix a left invariant inner product on \(HN\). For \(p,q\in N\), the _Carnot-Caratheodory metric_ \(d_{CC}(p,q)\) between them is defined as the infimum of the lengths of horizontal curves that join \(p\) and \(q\). Since the inner product on \(HN\) is left invariant, the Carnot metric on \(N\) is also left invariant. Different choices of inner product on \(HN\) result in Carnot metrics that are biLipschitz equivalent. The Hausdorff dimension of \(N\) with respect to a Carnot metric is given by \(\sum_{i=1}^{r}i\cdot\dim(V_{i})\).
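The standard example is the Heisenberg algebra \(\mathfrak{n}=V_{1}\oplus V_{2}\) with \(V_{1}=\mathrm{span}(X,Y)\), \(V_{2}=\mathrm{span}(Z)\) and \([X,Y]=Z\). The Carnot derivation and dilations are
\[D(X)=X,\quad D(Y)=Y,\quad D(Z)=2Z,\qquad\delta_{t}(x,y,z)=(tx,ty,t^{2}z),\]
and the Hausdorff dimension of the Heisenberg group with respect to any Carnot-Caratheodory metric is \(1\cdot 2+2\cdot 1=4\).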
**Definition 3.2**.: _Let \(G\) and \(G^{\prime}\) be two Carnot groups endowed with Carnot-Caratheodory metrics, and \(U\subset G\), \(U^{\prime}\subset G^{\prime}\) open subsets. A map \(F:U\to U^{\prime}\) is Pansu differentiable at \(x\in U\) if there exists a graded homomorphism \(L:G\to G^{\prime}\) such that_
\[\lim_{y\to x}\frac{d(F(x)^{-1}*F(y),\,L(x^{-1}*y))}{d(x,y)}=0.\]
_In this case, the graded homomorphism \(L:G\to G^{\prime}\) is called the Pansu differential of \(F\) at \(x\), and is denoted by \(DF(x)\)._
Pansu [P89] showed that a Lipschitz map between Carnot groups is Pansu differentiable a.e. and if the map is biLipschitz then the Pansu differential is a graded isomorphism.
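As a quick check of the definition, every left translation \(L_{g}\) of \(G\) is Pansu differentiable everywhere with \(DL_{g}(x)=\mathrm{id}\): indeed \(L_{g}(x)^{-1}*L_{g}(y)=x^{-1}*y\), so the limit above vanishes identically with \(L=\mathrm{id}\). Similarly, a Carnot dilation \(\delta_{t}\) is a graded automorphism and is its own Pansu differential at every point.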
### Derivations, semi-direct products, Heintze groups, and SOL-like groups
Let \(\mathfrak{n}\) be a Lie algebra. A linear map \(D:\mathfrak{n}\to\mathfrak{n}\) is called a derivation if \(D[X,Y]=[DX,Y]+[X,DY]\) for all \(X,Y\in\mathfrak{n}\). Let \(\operatorname{der}(\mathfrak{n})\) be the set of derivations of \(\mathfrak{n}\). It is clearly a vector space. It becomes a Lie algebra with the usual bracket for endomorphisms: for \(D_{1},D_{2}\in\operatorname{der}(\mathfrak{n})\), define \([D_{1},D_{2}]=D_{1}\circ D_{2}-D_{2}\circ D_{1}\).
Let \(\mathfrak{w}\) and \(\mathfrak{h}\) be two Lie algebras, and \(\phi:\mathfrak{h}\to\operatorname{der}(\mathfrak{w})\) a Lie algebra homomorphism. The semi-direct product \(\mathfrak{w}\rtimes_{\phi}\mathfrak{h}\) is the vector space \(\mathfrak{w}\oplus\mathfrak{h}\) with the Lie bracket given by:
\[[(W_{1},H_{1}),(W_{2},H_{2})]=([W_{1},W_{2}]+\phi(H_{1})(W_{2})-\phi(H_{2})(W_ {1}),[H_{1},H_{2}]).\]
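For instance (notation for this example only), taking \(\mathfrak{w}=\mathbb{R}^{2}\), \(\mathfrak{h}=\mathbb{R}\) and \(\phi(t)=tA\) with \(A=\mathrm{diag}(1,-1)\) recovers the Lie algebra of SOL: since both factors are abelian, the bracket above reduces to
\[[(w_{1},t_{1}),(w_{2},t_{2})]=(t_{1}Aw_{2}-t_{2}Aw_{1},\,0).\]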
Let \(\mathfrak{n}\) be a Lie algebra and \(D\in\operatorname{der}(\mathfrak{n})\). Then we can define a homomorphism \(\phi:\mathbb{R}\to\operatorname{der}(\mathfrak{n})\) by \(\phi(t)=tD\). We denote \(\mathfrak{n}\rtimes_{D}\mathbb{R}:=\mathfrak{n}\rtimes_{\phi}\mathbb{R}\). When \(\mathfrak{n}\) is a nilpotent Lie algebra and the eigenvalues of \(D\in\operatorname{der}(\mathfrak{n})\) have positive real parts, we call \(\mathfrak{n}\rtimes_{D}\mathbb{R}\) a Heintze algebra. In this case, the simply connected solvable Lie group with Lie algebra \(\mathfrak{n}\rtimes_{D}\mathbb{R}\) is called a Heintze group, and the pair \((\mathfrak{n},D)\) (and also \((N,D)\)) is called a Heintze pair. A Heintze algebra (or Heintze group, or the pair \((\mathfrak{n},D)\)) is called purely real if all the eigenvalues of \(D\) are real numbers. By [1], every Heintze group is biLipschitz to a purely real Heintze group. Since we are only interested in quasi-isometries of Heintze groups in this paper, we only need to consider purely real Heintze groups.
A purely real Heintze pair \((\mathfrak{n},D)\) is called diagonal if \(D\) is diagonalizable.
**Lemma 3.3**.: _Suppose \((\mathfrak{n},D)\) is a diagonal Heintze pair and \(0<\lambda_{1}<\cdots<\lambda_{r}\) are the eigenvalues of \(D\). Denote by \(V_{\lambda_{i}}\) the eigenspace associated to \(\lambda_{i}\). Then (1) \(\mathfrak{n}=\oplus_{i=1}^{r}V_{\lambda_{i}}\) and \([V_{\lambda_{i}},V_{\lambda_{j}}]\subset V_{\lambda_{i}+\lambda_{j}}\); (2) Let \(\mathfrak{w}\) be the Lie sub-algebra of \(\mathfrak{n}\) generated by \(V_{\lambda_{1}}\). Then \(\mathfrak{w}\) is a Carnot algebra and \(D(\mathfrak{w})=\mathfrak{w}\). If \(\mathfrak{w}\) is an ideal of \(\mathfrak{n}\), then \(D\) projects to a derivation \(\bar{D}\) of \(\bar{\mathfrak{n}}:=\mathfrak{n}/\mathfrak{w}\), which is diagonalizable with positive eigenvalues. Furthermore, the smallest eigenvalue of \(\bar{D}\) is strictly larger than \(\lambda_{1}\)._
Proof.: (1) \(\mathfrak{n}=\oplus_{i=1}^{r}V_{\lambda_{i}}\) follows from linear algebra and \([V_{\lambda_{i}},V_{\lambda_{j}}]\subset V_{\lambda_{i}+\lambda_{j}}\) follows from the definition of derivation.
(2) Set \(W_{1}=V_{\lambda_{1}}\) and \(W_{j}=[W_{1},W_{j-1}]\) for \(j\geq 2\). By (1) we have \(W_{j}\subset V_{j\lambda_{1}}\). It follows that \(\mathfrak{w}=\oplus_{j}W_{j}\) is a Carnot grading and \(D(\mathfrak{w})=\mathfrak{w}\). Now assume \(\mathfrak{w}\) is an ideal. As \(D(\mathfrak{w})=\mathfrak{w}\), \(D\) induces a derivation \(\bar{D}:\bar{\mathfrak{n}}\to\bar{\mathfrak{n}}\).
Since \(\mathfrak{w}\) is graded, that is, \(\mathfrak{w}=\oplus_{i}(\mathfrak{w}\cap V_{\lambda_{i}})\), we have \(\bar{\mathfrak{n}}=\oplus_{i}(V_{\lambda_{i}}/(\mathfrak{w}\cap V_{\lambda_{i}}))\). As \(D|_{V_{\lambda_{i}}}\) is multiplication by \(\lambda_{i}\), \(\bar{D}\) is also multiplication by \(\lambda_{i}\) when restricted to \(V_{\lambda_{i}}/(\mathfrak{w}\cap V_{\lambda_{i}})\). It follows that \(\bar{D}\) is also diagonalizable and its set of eigenvalues is a subset of the set of eigenvalues of \(D\). Since \(W_{1}=V_{\lambda_{1}}\) we have \(V_{\lambda_{1}}/(\mathfrak{w}\cap V_{\lambda_{1}})=0\) and so \(\lambda_{1}\) is not an eigenvalue of \(\bar{D}\). It follows that the smallest eigenvalue of \(\bar{D}\) is \(>\lambda_{1}\).
A Heintze pair \((\mathfrak{n},D)\) (Heintze algebra, Heintze group) is of Carnot type if it is purely real with \(D\) diagonal such that \(\mathfrak{w}=\mathfrak{n}\), where \(\mathfrak{w}\) is as in Lemma 3.3. A Heintze pair is of non-Carnot type otherwise.
Let \((\mathfrak{n}_{i},D_{i})\) (\(i=1,2\)) be a Heintze pair. Let \(D\) be the derivation of \(\mathfrak{n}_{1}\times\mathfrak{n}_{2}\) given by \(D=(D_{1},-D_{2})\). We call \((\mathfrak{n}_{1}\times\mathfrak{n}_{2})\rtimes_{D}\mathbb{R}\) a SOL-like algebra and the simply connected solvable Lie group with Lie algebra \((\mathfrak{n}_{1}\times\mathfrak{n}_{2})\rtimes_{D}\mathbb{R}\) a SOL-like group.
For any Lie sub-algebra \(\mathfrak{h}\) of a Lie algebra \(\mathfrak{n}\), the normalizer of \(\mathfrak{h}\) in \(\mathfrak{n}\) is defined by:
\[N(\mathfrak{h})=\{x\in\mathfrak{n}:[x,\mathfrak{h}]\subset\mathfrak{h}\}.\]
**Lemma 3.4**.: _If a derivation \(D\) of a Lie algebra \(\mathfrak{n}\) satisfies \(D(\mathfrak{h})\subset\mathfrak{h}\), then \(D(N(\mathfrak{h}))\subset N(\mathfrak{h})\)._
Proof.: Let \(X\in N(\mathfrak{h})\) and \(Y\in\mathfrak{h}\) be arbitrary. Then
\[[D(X),Y]=D[X,Y]-[X,D(Y)]\in\mathfrak{h}\]
implying \(D(X)\in N(\mathfrak{h})\).
### Homogeneous distances on nilpotent Lie groups
Let \((N,D)\) be a diagonal Heintze pair. A distance \(d\) on \(N\) is called \(D\)-homogeneous if it is left invariant, induces the manifold topology on \(N\) and such that \(d(e^{tD}x,e^{tD}y)=e^{t}d(x,y)\) for all \(x,y\in N\) and \(t\in\mathbb{R}\), where \(\{e^{tD}|t\in\mathbb{R}\}\) denotes the automorphisms of \(N\) generated by the derivation \(D\). Let \(\mathfrak{n}=\oplus_{j}V_{\lambda_{j}}\) be the decomposition of \(\mathfrak{n}\) into the direct sum of eigenspaces of \(D\). An inner product \(\langle\cdot,\cdot\rangle\) on \(\mathfrak{n}\) is called a \(D\)-inner product if the eigenspaces corresponding to distinct eigenvalues are perpendicular with respect to \(\langle\cdot,\cdot\rangle\). By the construction in Theorem 2 of [10], given any \(D\)-inner product \(\langle\cdot,\cdot\rangle\) on \(\mathfrak{n}\), there is a \(D\)-homogeneous distance \(d\) on \(N\) such that \(d(0,x)=\langle x,x\rangle^{\frac{1}{2\lambda_{j}}}\) for \(x\in V_{\lambda_{j}}\). During the course of the proof of Theorem 1.2, we will modify the \(D\)-inner products several times and so also the corresponding \(D\)-homogeneous distances.
It is easy to see that any two \(D\)-homogeneous distances on \(N\) are biLipschitz equivalent. We will always equip \(N\) with a \(D\)-homogeneous distance. Hence it makes sense to speak of a biLipschitz map of \(N\) without specifying the \(D\)-homogeneous distance.
For computational purposes, we also define a function \(\rho\) that is biLipschitz equivalent to a \(D\)-homogeneous distance \(d\). For any \(D\)-inner product \(\langle\cdot,\cdot\rangle\) on \(\mathfrak{n}\) define a "norm" on \(\mathfrak{n}\) by
\[||v||=\sum_{i}|v_{i}|^{\frac{1}{\lambda_{i}}},\]
where \(v=\sum_{i}v_{i}\) with \(v_{i}\in V_{\lambda_{i}}\). Then define \(\rho\) by \(\rho(x,y)=||x^{-1}*y||\). We identify \(\mathfrak{n}\) and \(N\). Clearly \(\rho\) is left invariant, induces the manifold topology and satisfies \(\rho(e^{tD}x,e^{tD}y)=e^{t}\rho(x,y)\) for all \(x,y\in\mathfrak{n}\) and \(t\in\mathbb{R}\). It follows that for any \(D\)-homogeneous distance \(d\) on \(\mathfrak{n}\), there is a constant \(L\geq 1\) such that \(d(x,y)/L\leq\rho(x,y)\leq L\cdot d(x,y)\) for all \(x,y\in\mathfrak{n}\). The explicit formula for \(\rho\) will make the calculations much easier. We will frequently use \(\rho\) rather than \(d\) for estimates.
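For instance, on the abelian example \(\mathfrak{n}=\mathbb{R}^{2}\) with \(D=\mathrm{diag}(1,\alpha)\) from the introduction, this reads
\[\rho\big((x,y),(x^{\prime},y^{\prime})\big)=|x^{\prime}-x|+|y^{\prime}-y|^{1/\alpha},\]
which differs from the \(D\)-homogeneous distance \(\max\{|x^{\prime}-x|,|y^{\prime}-y|^{1/\alpha}\}\) considered earlier by a factor of at most \(2\).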
**Lemma 3.5**.: _Let \(\phi\) be an automorphism of \(N\). Then \(\phi\) is biLipschitz if and only if \(d\phi\) is "layer preserving"; that is, \(d\phi(V_{\lambda_{j}})=V_{\lambda_{j}}\) for each \(j\)._
Proof.: First suppose \(\phi\) is biLipschitz. Let \(0\neq v\in V_{\lambda_{j}}\) and write \(d\phi(v)=\sum_{i}x_{i}\) with \(x_{i}\in V_{\lambda_{i}}\). Then \(d\phi(tv)=\sum_{i}tx_{i}\). We have
\[\rho(0,tv)=|v|^{\frac{1}{\lambda_{j}}}|t|^{\frac{1}{\lambda_{j}}}\]
and
\[\rho(0,d\phi(tv))=\sum_{i}|x_{i}|^{\frac{1}{\lambda_{i}}}|t|^{\frac{1}{ \lambda_{i}}}.\]
The biLipschitz condition implies \(x_{i}=0\) when \(i\neq j\) by letting \(t\to\infty\) or \(t\to 0\).
Conversely assume \(d\phi\) is layer preserving. Then there is some constant \(C\geq 1\) such that
\[|v|/C\leq|d\phi(v)|\leq C|v| \tag{3}\]
for all \(v\in V_{\lambda_{j}}\), \(\forall j\). Now let \(v\in\mathfrak{n}\). Write \(v=\sum_{j}v_{j}\) with \(v_{j}\in V_{\lambda_{j}}\). Then \(d\phi(v)=\sum_{j}d\phi(v_{j})\). We have \(\rho(0,v)=\sum_{j}|v_{j}|^{\frac{1}{\lambda_{j}}}\) and \(\rho(0,d\phi(v))=\sum_{j}|d\phi(v_{j})|^{\frac{1}{\lambda_{j}}}\). Now the claim follows from (3).
An automorphism \(\phi\) of \(N\) is called graded if it satisfies the condition in Lemma 3.5.
Let \(G\) be a connected Lie group with a left invariant distance \(d\) that induces the manifold topology, and \(H\) a closed normal subgroup of \(G\). We define a distance on \(G/H\) by \(\bar{d}(xH,yH)=\inf\{d(xh_{1},yh_{2})|h_{1},h_{2}\in H\}\). Then \(\bar{d}\) is a left invariant distance on \(G/H\) that induces the manifold topology and the quotient map \((G,d)\to(G/H,\bar{d})\) is \(1\)-Lipschitz. Since \(H\) is normal, we have \(\bar{d}(xH,yH)=d(xh_{1},yH)=d(yh_{2},xH)=d_{H}(xH,yH)\) for any \(h_{1},h_{2}\in H\), where \(d_{H}\) denotes the Hausdorff distance. If \(F\) is a biLipschitz map of \((G,d)\) that permutes the cosets of \(H\), then \(F\) induces a biLipschitz map \(\bar{F}:(G/H,\bar{d})\to(G/H,\bar{d})\) with the same biLipschitz constant as \(F\).
Let \((\mathfrak{n},D)\) be a diagonal Heintze pair and \(d\) a \(D\)-homogeneous distance on \(N\). Assume \(\mathfrak{w}\) is an ideal of \(\mathfrak{n}\) such that \(D(\mathfrak{w})\subset\mathfrak{w}\). Then \(D\) induces a derivation \(\bar{D}\) of \(\mathfrak{n}/\mathfrak{w}\) and \((\mathfrak{n}/\mathfrak{w},\bar{D})\) is a diagonal Heintze pair. In this case, the distance \(\bar{d}\) on \(N/W\) induced by \(d\) is a \(\bar{D}\)-homogeneous distance, where \(W\) is the Lie subgroup of \(N\) with Lie algebra \(\mathfrak{w}\).
### A fiber Tukia theorem for diagonal Heintze pairs
Here we recall the Tukia-type theorem for Carnot groups and a fiber version of Tukia's theorem for diagonal Heintze pairs; see [1] for more details.
**Theorem 3.6**.: _(Theorem 1.1, [1]) Let \(N\) be a Carnot group and \(\hat{N}=N\cup\{\infty\}\) the one-point compactification of \(N\). There is a left invariant Carnot-Caratheodory metric \(d_{0}\) on \(N\) with the following property. Let \(G\) be a uniform quasiconformal group of \(\hat{N}\). If the action of \(G\) on the space of distinct triples of \(\hat{N}\) is co-compact, then there is some quasiconformal map \(f:\hat{N}\to\hat{N}\) such that \(fGf^{-1}\) consists of conformal maps with respect to \(d_{0}\)._
The metric \(d_{0}\) has the largest conformal group in the sense that the conformal group of any left invariant Carnot-Caratheodory metric is conjugated into the conformal group of \(d_{0}\). In general it is not possible to conjugate a uniform quasiconformal group into the conformal group of an arbitrary left invariant Carnot-Caratheodory metric, see Section 6, [1] for an example.
Next let \((\mathfrak{n},D)\) be a diagonal Heintze pair. Then there is a sequence of \(D\)-invariant Lie sub-algebras \(\{0\}=\mathfrak{n}_{0}\subset\mathfrak{n}_{1}\subset\cdots\subset\mathfrak{n} _{s}=\mathfrak{n}\) with the following properties: each \(\mathfrak{n}_{i-1}\) is an ideal of \(\mathfrak{n}_{i}\) with the quotient \(\mathfrak{n}_{i}/\mathfrak{n}_{i-1}\) a Carnot Lie algebra; \(D\) induces a derivation \(\bar{D}:\mathfrak{n}_{i}/\mathfrak{n}_{i-1}\to\mathfrak{n}_{i}/\mathfrak{n}_{ i-1}\) which is a multiple of the Carnot derivation of \(\mathfrak{n}_{i}/\mathfrak{n}_{i-1}\). Let \(N_{i}\) be the connected Lie subgroup of \(N\) with Lie algebra \(\mathfrak{n}_{i}\). Then \(N/N_{i}\) is a homogeneous manifold and the natural map \(\pi_{i}:N/N_{i-1}\to N/N_{i}\) is a fiber bundle with fiber the Carnot group \(N_{i}/N_{i-1}\). We call the sequence of subgroups \(0=N_{0}<N_{1}<\cdots<N_{s}=N\) the preserved subgroup sequence.
Let \(d\) be a \(D\)-homogeneous distance on \(N\). In general \(d\) does not induce any metric on the homogeneous space \(N/N_{i}\) when \(N_{i}\) is not normal in \(N\). Nonetheless, it induces a metric on the fibers \(N_{i}/N_{i-1}\) of \(\pi_{i}:N/N_{i-1}\to N/N_{i}\). Furthermore, every biLipschitz map \(F\) of \(N\) permutes the cosets of \(N_{i}\) for each \(i\). Hence \(F\) induces a map \(F_{i}:N/N_{i}\to N/N_{i}\) and a bundle map of \(\pi_{i}:N/N_{i-1}\to N/N_{i}\). The restrictions of \(F_{i}\) to the fibers of \(\pi_{i}\) are biLipschitz maps of the Carnot group \(N_{i}/N_{i-1}\) in the following sense. For each \(p\in N\), let \(F_{p}=L_{F(p)^{-1}}\circ F\circ L_{p}\), where \(L_{x}\) denotes the left translation of \(N\) by \(x\). Notice that the map \((F_{p})_{i-1}:N/N_{i-1}\to N/N_{i-1}\) satisfies \((F_{p})_{i-1}(N_{i}/N_{i-1})=N_{i}/N_{i-1}\). The statement above simply means \((F_{p})_{i-1}|_{N_{i}/N_{i-1}}:N_{i}/N_{i-1}\to N_{i}/N_{i-1}\) is biLipschitz with respect to any left invariant Carnot-Caratheodory metric on \(N_{i}/N_{i-1}\).
**Theorem 3.7**.: _(Theorem 1.3, [1]) Let \((N,D)\) be a diagonal Heintze pair and \(\Gamma\) be a uniform quasisimilarity group of \(N\) that acts cocompactly on the space of distinct pairs of \(N\) (or equivalently \(\Gamma\) a group that quasi-acts coboundedly on \(S=N\rtimes_{D}\mathbb{R}\)). Let \(I=\{i|1\leq i\leq s,\dim(N_{i}/N_{i-1})\geq 2\}\). Then there exists a biLipschitz map \(F_{0}:N\to N\) and a left invariant Carnot-Caratheodory metric \(d_{i}\) on \(N_{i}/N_{i-1}\) for each \(i\in I\) such that for each \(p\in N\) and each \(g\in F_{0}\Gamma F_{0}^{-1}\), the map \((g_{p})_{i-1}|_{N_{i}/N_{i-1}}:(N_{i}/N_{i-1},d_{i})\to(N_{i}/N_{i-1},d_{i})\) is a similarity._
## 4. Quasi-isometric rigidity of solvable groups
In this section we combine Tukia-type theorems with the coarse differentiation method of Eskin-Fisher-Whyte [1], [1] to establish quasi-isometric rigidity results for solvable groups.
### Quasi-isometric rigidity of lattices in the isometry group of SOL-like groups
One of the main applications of Tukia-type theorems for nilpotent groups is quasi-isometric rigidity of lattices in the isometry group of SOL-like groups. Recall a SOL-like group has the form \(S=(N_{1}\times N_{2})\rtimes\mathbb{R}\) and admits two foliations by Heintze groups \(N_{1}\rtimes_{D_{1}}\mathbb{R}\) and \(N_{2}\rtimes_{D_{2}}\mathbb{R}\).
There is a two step outline for proving the quasi-isometric rigidity for SOL-like groups:
1. Show that up to composition with an isometry any self quasi-isometry of a SOL-like group is bounded distance from a product map \[(x,y,t)\to(f(x),f(y),t)\text{ where }x\in N_{1},\ y\in N_{2},\ t\in\mathbb{R}.\]
2. Prove a Tukia-type theorem for groups acting by quasi-similarities on \(N_{i},\ i=1,2\).
This strategy comes from Eskin-Fisher-Whyte's seminal work on the quasi-isometric rigidity of SOL itself. In their work [1, 1, 2] they focus primarily on Step (1) and develop a new "coarse differentiation" technique for understanding quasi-isometries of SOL-like groups. For clarity their papers concentrate on the case of the three dimensional SOL and its discrete cousins the Diestel-Leader graphs. Despite this, their results follow similarly for all SOL-like groups as already pointed out in [1, Section 4.4]. Indeed, in her work [11, 12] on abelian-by-abelian higher rank SOL-like groups (i.e. groups of the form \(\mathbb{R}^{n}\rtimes\mathbb{R}^{m}\) with appropriate conditions on the action of \(\mathbb{R}^{m}\)) Peng already points out that Eskin-Fisher-Whyte's arguments work more generally in the case of SOL-like groups where \(N_{i}\) is abelian. Recently Ferragut in his thesis [13] has endeavored to find the most general framework for which Eskin-Fisher-Whyte's work applies. In doing so he has extended the work of [1] to the class of _horo-pointed metric measure spaces_ and is looking to do the same for [1].
We will briefly review in the SOL case how combining steps (1) and (2) gives quasi-isometric rigidity for lattices in SOL.
### Quasi-isometric rigidity outline for SOL
Let \(\Gamma\) be a finitely generated group quasi-isometric to SOL (hence to any lattice in SOL). Then there is an induced quasi-action of \(\Gamma\) on SOL by \(L\geq 1,\ C\geq 0\) quasi-isometries where \(L,C\) are uniform over all \(\gamma\in\Gamma\). Up to replacing \(\Gamma\) with a subgroup of index two we can apply Eskin-Fisher-Whyte (i.e. Step (1)) to get that all \(\gamma\in\Gamma\) act by maps of the form \(\gamma(x,y,t)=(f_{\gamma}(x),g_{\gamma}(y),t+c_{\gamma})\). This induces two uniform quasisimilarity actions on \(\mathbb{R}\) (via \(f_{\gamma}\) and \(g_{\gamma}\) respectively). By Step (2) (in our case by [10]) both of these actions can be conjugated to actions by similarities on \(\mathbb{R}\). That is, after conjugation \(f_{\gamma}\) scales distances on \(\mathbb{R}\) by some constant \(n_{\gamma}\) and \(g_{\gamma}\) by some constant \(m_{\gamma}\). Note that these two conjugations together define a quasi-conjugation of the original quasi-action on SOL. If \(n_{\gamma}=m_{\gamma}\) for all \(\gamma\in\Gamma\) then the quasi-action is bounded distance from an action by isometries. The other case cannot occur because if \(n_{\gamma_{0}}\neq m_{\gamma_{0}}\) for even a single \(\gamma_{0}\), then the quasi-isometry constants of the map induced by \(\gamma_{0}^{k}\) go to infinity as \(k\to\infty\), violating the uniformity of the quasi-action. Now we have that \(\Gamma\), up to finite index and finite kernel, acts by isometries on SOL. But SOL is an index \(8\) subgroup of its isometry group, so \(\Gamma\) must be virtually a lattice in SOL.
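To see the last dichotomy concretely (a back-of-the-envelope computation; how \(n_{\gamma}\) and \(m_{\gamma}\) are normalized depends on the choice of boundary metrics), note that a product map \((x,y,t)\mapsto(ax,by,t+c)\) with \(a,b>0\) preserves the metric \(ds^{2}=e^{-2t}dx^{2}+e^{2t}dy^{2}+dt^{2}\) if and only if
\[e^{-2c}a^{2}=1\quad\text{and}\quad e^{2c}b^{2}=1,\qquad\text{i.e.}\quad a=e^{c},\ b=e^{-c}.\]
Thus the two horizontal stretch factors of an isometry are tied to the common height shift, and an element whose stretch factors violate this relation has powers whose multiplicative distortion grows geometrically in \(k\), contradicting the uniformity of the quasi-action.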
For the proof of Theorem 1.4 we also follow this outline however there are several extra considerations that apply in our more general setting. First, our conjugation may only give us an isometric action with respect to a different metric than the one we started with. Second, the isometry group of a more general SOL-like group \(S\) may be much larger than \(S\) (they do not have the same dimension in general).
### Proof of Theorem 1.4
We now prove Theorem 1.4 in detail.
Let \(S=(N_{1}\times N_{2})\rtimes\mathbb{R}\) be a SOL-like group equipped with a left invariant Riemannian metric \(g\), where \((N_{i},D_{i})\) is Carnot or Carnot-by-Carnot. Let \(\Gamma\) be a finitely generated group quasi-isometric to \(S\). The quasi-isometry between \(\Gamma\) and \(S\) induces a quasi-action of \(\Gamma\) on \(S\). In particular, each \(\gamma\in\Gamma\) gives rise to an \((L,C)\)-quasi-isometry \(\phi(\gamma):S\to S\), where \(L\geq 1\), \(C\geq 0\) are constants independent of \(\gamma\). As explained above, the arguments of Eskin-Fisher-Whyte imply that \(\phi(\gamma)\) is at a bounded distance from a product map. After replacing \(C\) by a larger constant and replacing \(\phi(\gamma)\) by a product map we may assume that \(\phi(\gamma)\) is a product map. After taking an index two
subgroup if necessary we may further assume that each \(\phi(\gamma)\) preserves the foliation of \(S\) by cosets of \(S_{i}=N_{i}\rtimes_{D_{i}}\mathbb{R}\). Thus for each \(\gamma\in\Gamma\), there are maps \(\gamma_{1}:N_{1}\to N_{1}\), \(\gamma_{2}:N_{2}\to N_{2}\), \(h_{\gamma}:\mathbb{R}\to\mathbb{R}\), such that \(\phi(\gamma)\) is given by:
\[\phi(\gamma)(n_{1},n_{2},t)=(\gamma_{1}(n_{1}),\gamma_{2}(n_{2}),h_{\gamma}(t)).\]
Let \(\tilde{\gamma}_{i}:S_{i}\to S_{i}\) be given by \(\tilde{\gamma}_{i}(n_{i},t)=(\gamma_{i}(n_{i}),h_{\gamma}(t))\). Then \(\gamma\mapsto\tilde{\gamma}_{i}\) defines a quasi-action of \(\Gamma\) on \(S_{i}\). Since this quasi-action is by height-respecting quasi-isometries, \(\Gamma\) induces a uniform quasisimilarity action on \((N_{i},d_{i})\), where \(d_{i}\) is a \(D_{i}\)-homogeneous distance on \(N_{i}\). Due to the particular form of \(\tilde{\gamma}_{i}\) this induced action of \(\Gamma\) on \(N_{i}\) is given by \(\gamma\mapsto\gamma_{i}\).
Now we assume each \((N_{i},D_{i})\) (\(i=1,2\)) is either Carnot or Carnot-by-Carnot with \(\dim(\mathfrak{w}_{i})\geq 2\), where \(\mathfrak{w}_{i}\) is the Lie sub-algebra of \(\mathfrak{n}_{i}\) generated by the eigenspace of the smallest eigenvalue of \(D_{i}\). Then by Theorem 1.2 (Carnot-by-Carnot case) or Theorem 3.6 (Carnot case), there is a biLipschitz map \(f_{i}\) of \(N_{i}\) and a maximally symmetric \(D_{i}\)-homogeneous distance \(d_{i}\) on \(N_{i}\) such that \(f_{i}\circ\gamma_{i}\circ f_{i}^{-1}\) is a similarity of \((N_{i},d_{i})\) for each \(\gamma\in\Gamma\). Theorem 1.2 in [13] implies that \(f_{i}\circ\gamma_{i}\circ f_{i}^{-1}\) has the form \(f_{i}\circ\gamma_{i}\circ f_{i}^{-1}=L_{a_{i}}\circ e^{v_{i,\gamma}D_{i}}\circ\phi_{i}\) for some \(a_{i}\in N_{i}\), where \(\phi_{i}\) is an automorphism of \(N_{i}\) and is also an isometry of \((N_{i},d_{i})\).
**Lemma 4.1**.: _The equality \(v_{1,\gamma}+v_{2,\gamma}=0\) holds for all \(\gamma\in\Gamma\)._
Proof.: Note that the left translation \(L_{(0,-v_{1,\gamma})}:S_{1}\to S_{1}\) (which is an isometry of \(S_{1}\)) is given by \(L_{(0,-v_{1,\gamma})}(x,t)=(e^{-v_{1,\gamma}D_{1}}x,t-v_{1,\gamma})\). It follows that \(L_{(0,-v_{1,\gamma})}\circ\tilde{\gamma}_{1}\) is an \((L,C)\)-quasi-isometry and is given by \(L_{(0,-v_{1,\gamma})}\circ\tilde{\gamma}_{1}(x,t)=(e^{-v_{1,\gamma}D_{1}}\gamma_{1}(x),h_{\gamma}(t)-v_{1,\gamma})\). Since the boundary map \(e^{-v_{1,\gamma}D_{1}}\circ\gamma_{1}\) of \(L_{(0,-v_{1,\gamma})}\circ\tilde{\gamma}_{1}\) is an isometry, Lemma 5.1 in [10] implies \(|h_{\gamma}(t)-v_{1,\gamma}-t|\leq C_{1}\) for a constant \(C_{1}\) that depends only on \(L,C\) and \(N_{1}\). Similarly by considering \(L_{(0,v_{2,\gamma})}\circ\tilde{\gamma}_{2}\) we get \(|h_{\gamma}(t)+v_{2,\gamma}-t|\leq C_{2}\) for a constant \(C_{2}\) that depends only on \(L,C\) and \(N_{2}\). Combining these two inequalities we get \(|v_{1,\gamma}+v_{2,\gamma}|\leq C_{1}+C_{2}\) for all \(\gamma\in\Gamma\). Since \(v_{i,\gamma^{n}}=nv_{i,\gamma}\) for any \(n\geq 1\), the above inequality applied to \(\gamma^{n}\) implies \(|v_{1,\gamma}+v_{2,\gamma}|\leq(C_{1}+C_{2})/n\). Since this is true for all \(n\geq 1\), the lemma follows.
**Lemma 4.2**.: _Let \(f_{i}:(N_{i},d_{i})\to(N_{i},d_{i})\) (\(i=1,2\)) be a biLipschitz map. Let \(d\) be a left invariant Riemannian metric on \(S\) such that \(N_{1}\), \(N_{2}\) and \(\mathbb{R}\) are perpendicular to each other. Define \(F:(S,d)\to(S,d)\) by \(F(n_{1},n_{2},t)=(f_{1}(n_{1}),f_{2}(n_{2}),t)\). Then \(F\) is a \((1,\tilde{C})\)-quasi-isometry for some constant \(\tilde{C}\geq 0\)._
Proof.: After rescaling the metric we may assume that vertical lines \(c_{(n_{1},n_{2})}:\mathbb{R}\to S\), \(c_{(n_{1},n_{2})}(t)=(n_{1},n_{2},t)\), are unit speed geodesics. Let \(\pi_{1}:S\to S_{1}\) be given by \(\pi_{1}(n_{1},n_{2},t)=(n_{1},t)\) and \(\pi_{2}:S\to S_{2}\) be given by \(\pi_{2}(n_{1},n_{2},t)=(n_{2},t)\). Also let \(h:S\to\mathbb{R}\) be the height function \(h(n_{1},n_{2},t)=t\). By Corollary 4.13 of [14] or Theorem 4.1 in [1] there is a constant \(C_{1}\geq 0\) such that for any \(p,q\in S\) we have
\[|d(p,q)-d^{(1)}(\pi_{1}(p),\pi_{1}(q))-d^{(2)}(\pi_{2}(p),\pi_{2}(q))+|h(p)-h(q) ||\leq C_{1},\]
where \(d^{(1)}\) is the metric on \(S_{1}\sim(N_{1}\times\{0\})\rtimes\mathbb{R}\subset S\) induced by \(d\) and similarly for \(d^{(2)}\). Replacing \(p,q\) with \(F(p)\) and \(F(q)\) respectively we get
\[|d(F(p),F(q))-d^{(1)}(\pi_{1}(F(p)),\pi_{1}(F(q)))-d^{(2)}(\pi_{2}(F(p)),\pi_{2 }(F(q)))+|h(F(p))-h(F(q))||\leq C_{1}.\]
The lemma follows from the following claim since by the definition of \(F\) we have \(h(F(x))=h(x)\) for any \(x\in S\).
Claim: There is a constant \(\tilde{C}_{i}\) depending only on the Gromov hyperbolicity constant of \(S_{i}\) and the biLipschitz constant of \(f_{i}\) such that for all \(p_{1},p_{2}\in S\): \(|d^{(i)}(\pi_{i}(F(p_{1})),\pi_{i}(F(p_{2})))-d^{(i)}(\pi_{i}(p_{1}),\pi_{i}(p_{ 2}))|\leq\tilde{C}_{i}\).
Proof of the claim: we will only consider the case \(i=1\) as the case \(i=2\) is similar. Let \(p_{1}=(x_{1},y_{1},t_{1})\), \(p_{2}=(x_{2},y_{2},t_{2})\). Then the claim takes the form
\[|d^{(1)}((f_{1}(x_{1}),t_{1}),(f_{1}(x_{2}),t_{2}))-d^{(1)}((x_{1},t_{1}),(x_{2},t_{2}))|\leq\tilde{C}_{1}.\]
Let \(t_{x_{1},x_{2}}\) be the height at which the two vertical geodesics \(c_{x_{1}}\) and \(c_{x_{2}}\) in \(S_{1}\) diverge from each other (that is, the distance between \(c_{x_{1}}(t_{x_{1},x_{2}})\) and \(c_{x_{2}}(t_{x_{1},x_{2}})\) is 1), where for \(x\in N_{1}\), \(c_{x}:\mathbb{R}\to S_{1}\) is given by \(c_{x}(t)=(x,t)\). Then we have
\[|d^{(1)}((x_{1},t_{1}),(x_{2},t_{2}))-(t_{x_{1},x_{2}}-t_{1})-(t_{x_{1},x_{2}}- t_{2})|\leq C_{2}\]
if \(t_{x_{1},x_{2}}>\max\{t_{1},t_{2}\}\) and \(|d^{(1)}((x_{1},t_{1}),(x_{2},t_{2}))-|t_{1}-t_{2}||\leq C_{2}\) otherwise, for some constant \(C_{2}\) depending only on the Gromov hyperbolicity constant of \(S_{1}\). Similarly
\[|d^{(1)}((f_{1}(x_{1}),t_{1}),(f_{1}(x_{2}),t_{2}))-(t_{f_{1}(x_{1}),f_{1}(x_{ 2})}-t_{1})-(t_{f_{1}(x_{1}),f_{1}(x_{2})}-t_{2})|\leq C_{2}\]
if \(t_{f_{1}(x_{1}),f_{1}(x_{2})}>\max\{t_{1},t_{2}\}\) and \(|d^{(1)}((f_{1}(x_{1}),t_{1}),(f_{1}(x_{2}),t_{2}))-|t_{1}-t_{2}||\leq C_{2}\) otherwise. It now suffices to show that there is a constant \(C_{3}\) such that \(|t_{f_{1}(x_{1}),f_{1}(x_{2})}-t_{x_{1},x_{2}}|\leq C_{3}\) for all \(x_{1},x_{2}\in N_{1}\). This follows from the fact that \(d_{1}(x_{1},x_{2})\) is comparable with \(e^{t_{x_{1},x_{2}}}\) and that \(f_{1}\) is biLipschitz.
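To make the last step explicit (a small computation under the stated comparability): if \(\frac{1}{K}e^{t_{x_{1},x_{2}}}\leq d_{1}(x_{1},x_{2})\leq Ke^{t_{x_{1},x_{2}}}\) for all \(x_{1},x_{2}\in N_{1}\) and \(f_{1}\) is \(L^{\prime}\)-biLipschitz, then

\[e^{t_{f_{1}(x_{1}),f_{1}(x_{2})}}\leq K\,d_{1}(f_{1}(x_{1}),f_{1}(x_{2}))\leq KL^{\prime}\,d_{1}(x_{1},x_{2})\leq K^{2}L^{\prime}\,e^{t_{x_{1},x_{2}}},\]

and the symmetric estimate with the roles of \(x_{i}\) and \(f_{1}(x_{i})\) exchanged gives \(|t_{f_{1}(x_{1}),f_{1}(x_{2})}-t_{x_{1},x_{2}}|\leq\log(K^{2}L^{\prime})=:C_{3}\).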
**Completing the proof of Theorem 1.4**. Recall that the \(D_{i}\)-homogeneous distance \(d_{i}\) on \(N_{i}\) is associated to an inner product \(\langle,\rangle_{i}\) on \(\mathfrak{n}_{i}\). Since \(\phi_{i}\) is an automorphism of \(N_{i}\) and also an isometry of \((N_{i},d_{i})\), Lemma 3.5 implies that \(d\phi_{i}\) is layer-preserving and is an orthogonal transformation with respect to \(\langle,\rangle_{i}\). Let \(\langle,\rangle_{0}\) be the inner product on \(T_{e}S=(\mathfrak{n}_{1}\times\mathfrak{n}_{2})\rtimes\mathbb{R}\) that agrees with \(\langle,\rangle_{i}\) on \(\mathfrak{n}_{i}\), that satisfies \(\langle(0,0,1),(0,0,1)\rangle_{0}=1\), and such that \(\mathfrak{n}_{1}\), \(\mathfrak{n}_{2}\) and \(\mathbb{R}\) are perpendicular to each other. Let \(g_{0}\) be the left invariant Riemannian metric on \(S\) determined by \(\langle,\rangle_{0}\).
For each \(\gamma\in\Gamma\), define a map \(\Psi(\gamma):S\to S\) by
\[\Psi(\gamma)(n_{1},n_{2},t)=(f_{1}\circ\gamma_{1}\circ f_{1}^{-1}(n_{1}),f_{2} \circ\gamma_{2}\circ f_{2}^{-1}(n_{2}),t+v_{1,\gamma}).\]
Since \(f_{i}\circ\gamma_{i}\circ f_{i}^{-1}=L_{a_{i}}\circ e^{v_{i,\gamma}D_{i}}\circ \phi_{i}\), Lemma 4.1 implies \(\Psi(\gamma)(n_{1},n_{2},t)=L_{(a_{1},a_{2},v_{1,\gamma})}(\phi_{1}n_{1},\phi_ {2}n_{2},t)\). The properties of \(\phi_{i}\) imply that the map \(S\to S\), \((n_{1},n_{2},t)\mapsto(\phi_{1}n_{1},\phi_{2}n_{2},t)\) is an automorphism of \(S\) and is an isometry of \((S,g_{0})\). It follows that \(\Psi(\gamma)\) is an isometry of \((S,g_{0})\).
Define \(F:S\to S\) by \(F(n_{1},n_{2},t)=(f_{1}(n_{1}),f_{2}(n_{2}),t)\) as in Lemma 4.2. Then \(F\) is a quasi-isometry of \(S\). Notice that \(\Psi(\gamma)\) and \(F\circ\phi(\gamma)\circ F^{-1}\) induce the same boundary map of \(S_{i}\). It follows that \(\Psi(\gamma)\) and \(F\circ\phi(\gamma)\circ F^{-1}\) are at a bounded distance from each other. By replacing \(F\circ\phi(\gamma)\circ F^{-1}\) with \(\Psi(\gamma)\) we see that the original quasi-action of \(\Gamma\) on \(S\) is now quasi-conjugated to an isometric action of \(\Gamma\) on \((S,g_{0})\). Since the original quasi-action is cobounded, the isometric action is cocompact and so \(\Psi(\Gamma)\) is a uniform lattice in \(\operatorname{Isom}(S,g_{0})\). This finishes the proof when each \((N_{i},D_{i})\) (\(i=1,2\)) is either Carnot or Carnot-by-Carnot with \(\dim(\mathfrak{w}_{i})\geq 2\).
If \(\dim(\mathfrak{w}_{i})=1\) for at least one \(i\) and \(\Gamma\) is amenable, then Theorem 1.2 still applies and the above proof works. Now assume \(\dim(\mathfrak{w}_{i})=1\) and \(\operatorname{Isom}(S,g)\) admits a uniform lattice \(\Gamma_{0}\) for some left invariant Riemannian metric \(g\) on \(S\). By Theorem 3 in [W00], \(\Gamma_{0}\) is virtually a lattice in some simply connected solvable Lie group. In particular, \(\Gamma_{0}\) is a finitely generated amenable group. It follows that \(\Gamma\) is also amenable, being quasi-isometric to \(\Gamma_{0}\). Hence again Theorem 1.2 applies and the above proof works.
### Quasi-isometric classification of a class of solvable Lie groups
Here we provide the proofs of Theorem 1.7 and Theorem 1.8.
**Proof of Theorem 1.7**. One direction is clear. For the other direction, we assume \(S_{1},S_{2}\in\mathcal{S}\) are quasi-isometric. Then \(S_{1}\) quasi-acts on \(S_{2}\). Since \(S_{1}\) is amenable, the proof of Theorem 1.4 applies (for \(\Gamma:=S_{1}\), \(S:=S_{2}\)) and we conclude that there is a left invariant Riemannian metric \(g_{2}\) on \(S_{2}\) such that the quasi-action of \(S_{1}\) on \(S_{2}\) is quasi-conjugate to an isometric action of \(S_{1}\) on \((S_{2},g_{2})\). This induces a continuous homomorphism \(\phi:S_{1}\to\operatorname{Isom}(S_{2},g_{2})\). Since a continuous homomorphism between Lie groups is a Lie group homomorphism, we see that the isometric action of \(S_{1}\) on \((S_{2},g_{2})\) is a smooth action; that is, the corresponding map \(F:S_{1}\times S_{2}\to S_{2}\), \(F(s_{1},s_{2})=\phi(s_{1})(s_{2})\), is a smooth map. Fix some \(x\in S_{2}\). Then the map \(f:S_{1}\to S_{2}\) given by \(f(s_{1})=F(s_{1},x)=\phi(s_{1})(x)\) is smooth. It is easy to see that \(\phi(s_{1}^{\prime})\circ f=f\circ L_{s_{1}^{\prime}}\) for all \(s_{1}^{\prime}\in S_{1}\), where \(L_{s_{1}^{\prime}}\) is the left translation of \(S_{1}\) by \(s_{1}^{\prime}\). This implies that the differential of \(f\) has constant rank. If this rank is less than \(\dim(S_{1})\), then there exists \(h\in f^{-1}(x)\backslash\{e\}\). Then all the powers \(h^{n}\), \(n\in\mathbb{Z}\), would fix \(x\in S_{2}\). This is a contradiction: \(\{h^{n}|n\in\mathbb{Z}\}\) is unbounded in \(S_{1}\) while \(\{\phi(h^{n})(x)|n\in\mathbb{Z}\}=\{x\}\) is bounded in \(S_{2}\), and the action of \(S_{1}\) on \(S_{2}\) is quasi-conjugate to the action of \(S_{1}\) on itself by left translations. Hence the rank of \(f\) is at least \(\dim(S_{1})\), which implies \(\dim(S_{2})\geq\dim(S_{1})\). By switching the roles of \(S_{1}\) and \(S_{2}\) we get the reverse inequality and so \(\dim(S_{2})=\dim(S_{1})\). Hence \(f\) is a local diffeomorphism from \(S_{1}\) to \(S_{2}\). The above argument also shows \(f\) is injective and so is a diffeomorphism onto an open subset of \(S_{2}\). Since the image of \(f\) is an orbit of the \(S_{1}\) action on \(S_{2}\), \(f\) must be surjective, since otherwise \(S_{2}\) would be a disjoint union of at least two open subsets. Consequently \(f\) is a diffeomorphism from \(S_{1}\) to \(S_{2}\).
We next use an idea we found in [10], Proposition 2.1. Let \(g_{1}\) be the pullback Riemannian metric of \(g_{2}\) by \(f\). Then \(f:(S_{1},g_{1})\to(S_{2},g_{2})\) is an isometry. Finally the equality \(\phi(s_{1}^{\prime})\circ f=f\circ L_{s_{1}^{\prime}}\) implies \(L_{s_{1}^{\prime}}:(S_{1},g_{1})\to(S_{1},g_{1})\) is an isometry for any \(s_{1}^{\prime}\in S_{1}\). In other words, \(g_{1}\) is left invariant. Hence \(S_{1}\) and \(S_{2}\) can be made isometric. Since they are both of real type, Theorem 4.21 in [10] implies they are isomorphic.
By using Theorem 1.3 (proved next) and Corollary 1.2 of [11] instead of Theorem 1.4, the above argument yields Theorem 1.8.
### Quasi-isometric rigidity of quasi-actions on certain Heintze groups
Finally we supply the proof of Theorem 1.3.
**Proof of Theorem 1.3**. Let \((N,D)\) be Carnot-by-Carnot, and \(\Gamma\) a group that quasi-acts coboundedly on \(N\rtimes_{D}\mathbb{R}\). We further assume \(\Gamma\) is amenable when \(\dim(W)=1\). Then \(\Gamma\) induces a uniform quasisimilarity action on \(N\) such that every point in \(N\) is a radial limit point (since the induced action of \(\Gamma\) on the space of distinct pairs of \(N\) is cocompact). By Theorem 1.2, there is a biLipschitz map \(F_{0}\) of \(N\) such that \(F_{0}\Gamma F_{0}^{-1}\subset\operatorname{Sim}(N,d_{0})\), where \(d_{0}\) is a fixed maximally symmetric \(D\)-homogeneous distance on \(N\). Recall that the \(D\)-homogeneous distance \(d_{0}\) is associated with an inner product \(\langle,\rangle_{0}\) on \(\mathfrak{n}\). Let \(\langle,\rangle\) be the inner product on \(T_{e}(N\rtimes_{D}\mathbb{R})=\mathfrak{n}\times\mathbb{R}\) satisfying \(\langle,\rangle|_{\mathfrak{n}\times\mathfrak{n}}=\langle,\rangle_{0}\), \(\langle\mathfrak{n},\{0\}\times\mathbb{R}\rangle=0\) and \(\langle(0,1),(0,1)\rangle=1\). Let \(g_{0}\) be the left invariant Riemannian metric on \(N\rtimes_{D}\mathbb{R}\) determined by the inner product \(\langle,\rangle\). Let \(\gamma\in\Gamma\). Recall that \(F_{0}\gamma F_{0}^{-1}\) acts on \(N\) as an affine map and has the form \(F_{0}\gamma F_{0}^{-1}=L_{a_{\gamma}}\circ e^{t_{\gamma}D}\circ\phi_{\gamma}\) for some \(a_{\gamma}\in N\), \(t_{\gamma}\in\mathbb{R}\), where \(L_{a_{\gamma}}\) is the left translation of \(N\) by \(a_{\gamma}\) and \(\phi_{\gamma}\) is a graded automorphism such that \(d\phi_{\gamma}\) is a linear isometry of \((\mathfrak{n},\langle,\rangle_{0})\). Now let \(F_{0}\gamma F_{0}^{-1}\) act on \(N\rtimes_{D}\mathbb{R}\) by \(F_{0}\gamma F_{0}^{-1}(n,t)=L_{(a_{\gamma},t_{\gamma})}(\phi_{\gamma}(n),t)\), where \(L_{(a_{\gamma},t_{\gamma})}\) is the left translation of \(N\rtimes_{D}\mathbb{R}\) by \((a_{\gamma},t_{\gamma})\). It is easy to check that this defines an isometric action of \(\Gamma\) on \((N\rtimes_{D}\mathbb{R},g_{0})\) that induces the given action of \(F_{0}\Gamma F_{0}^{-1}\) on \(N=\partial S\backslash\{\infty\}\).
## 5. BiLipschitz maps of diagonal Heintze pairs
In this section we study individual biLipschitz maps of diagonal Heintze pairs (see Section 3.4). We show that if a biLipschitz map permutes the cosets of a connected graded normal subgroup, then it has an expression (which we call a compatible expression) with some nice properties; see Lemma 5.4.
For any biLipschitz map \(F:N\to N\) and \(g\in N\), we set \(F_{g}:=L_{F(g)^{-1}}\circ F\circ L_{g}\). Note that \(F_{g}(0)=0\).
**Lemma 5.1**.: _Let \((\mathfrak{n},D)\) be a diagonal Heintze pair and \(\mathfrak{w}\) an ideal of \(\mathfrak{n}\) such that \(D(\mathfrak{w})=\mathfrak{w}\). Let \(F:N\to N\) be a biLipschitz map that permutes the cosets of \(W\), where \(W\) is the connected Lie subgroup of \(N\) with Lie algebra \(\mathfrak{w}\). In addition assume for every \(g\in N\), the map \(F_{g}|_{W}:W\to W\) is an automorphism of \(W\). Then there is an automorphism \(\phi:W\to W\) such that \(d\phi:\mathfrak{w}\to\mathfrak{w}\) is layer-preserving, \(F(gw)=F(g)\phi(w)\) and \((\chi_{F_{0}(g)}|_{W})\circ\phi=\phi\circ(\chi_{g}|_{W})\) hold for any \(g\in N\), where \(\chi_{x}\) denotes the conjugation by \(x\)._
**Remark 5.2**.: _If we start with a uniform quasisimilarity group \(\Gamma\) of a Carnot-by-Carnot group \(N\) and \(W\) is the connected Lie subgroup of \(N\) with Lie algebra generated by the eigenspace of the smallest eigenvalue of \(D\), then when \(\dim(W)\geq 2\) we can apply the fiber Tukia theorem (Theorem 3.7) to arrange that every element in a biLipschitz conjugate of \(\Gamma\) satisfies the assumption in the lemma._
Proof.: By replacing \(F\) with \(F_{0}\) we may assume \(F(0)=0\). Since \(W\) is normal in \(N\), \(gW=Wg\) for any \(g\in N\). The fact that \(F\) permutes the cosets of \(W\) implies that \(F(gW)=F(g)W\) and \(F(Wg)=WF(g)\). Hence there are two functions \(\phi_{g},\psi_{g}:W\to W\) such that \(F(gw)=F(g)\phi_{g}(w)\) and \(F(wg)=\psi_{g}(w)F(g)\) for any \(w\in W\). Notice that \(\phi_{g}=F_{g}|_{W}\) and so by assumption is an automorphism of \(W\). By definition \(\psi_{g}(w)=F(wg)F(g)^{-1}=F(g\cdot g^{-1}wg)F(g)^{-1}=F(g)\phi_{g}(g^{-1}wg)F (g)^{-1}\). It follows that
\[\psi_{g}=(\chi_{F(g)}|_{W})\circ\phi_{g}\circ(\chi_{g^{-1}}|_{W}) \tag{4}\]
is a composition of automorphisms of \(W\) and so is an automorphism of \(W\). Here we used the assumption that \(W\) is normal in \(N\) and so \(\chi_{x}|_{W}\) is an automorphism of \(W\) for any \(x\in N\).
We claim that \(\psi:=\psi_{g}\) is independent of \(g\). Let \(g_{1},g_{2}\in N\). Then for any \(w\in W\), \(d(wg_{1},wg_{2})=d(g_{1},g_{2})\). Since \(F\) is \(L\)-biLipschitz for some \(L\geq 1\), we have \(d(F(wg_{1}),F(wg_{2}))\leq Ld(g_{1},g_{2})\). Since \(F(wg_{i})=\psi_{g_{i}}(w)F(g_{i})\), we get
\[d(\psi_{g_{1}}(w),\psi_{g_{2}}(w)) \leq d(\psi_{g_{1}}(w),F(wg_{1}))+d(F(wg_{1}),F(wg_{2}))+d(F(wg_{2} ),\psi_{g_{2}}(w))\] \[\leq d(0,F(g_{1}))+Ld(g_{1},g_{2})+d(F(g_{2}),0)\]
for all \(w\in W\). By Lemma 3.1 we have \(\psi_{g_{1}}=\psi_{g_{2}}\).
We now show that \(\phi:=\phi_{g}\) is also independent of \(g\). Since \(D(\mathfrak{w})=\mathfrak{w}\), the restriction of a \(D\)-homogeneous distance to \(W\) is a \(D|_{\mathfrak{w}}\)-homogeneous distance on \(W\). As the restriction of a biLipschitz map, the automorphism \(\phi_{g}\) is a biLipschitz map of \(W\). By Lemma 3.5, \(d\phi_{g}:\mathfrak{w}\to\mathfrak{w}\) is layer-preserving. Write \(\mathfrak{w}=W_{\mu_{1}}\oplus\cdots\oplus W_{\mu_{m}}\) as the direct sum of eigenspaces of \(D|_{\mathfrak{w}}\), where \(0<\mu_{1}<\cdots<\mu_{m}\) are the distinct eigenvalues of \(D|_{\mathfrak{w}}\). Then \(d\phi_{g}(W_{\mu_{j}})=W_{\mu_{j}}\). Let \(g,h\in N\). We shall show that for each \(j\) the equality \(d\phi_{g}|_{W_{\mu_{j}}}=d\phi_{h}|_{W_{\mu_{j}}}\) holds. To see this, notice that for any \(i\) and \(x\in N\), if we denote \(I_{i}=\oplus_{k\geq i}W_{\mu_{k}}\), then (2) implies \(d\chi_{x}(I_{i})\subset I_{i}\) and that \(d\chi_{x}\) induces the identity map on \(I_{j}/I_{j+1}\cong W_{\mu_{j}}\). Hence the map on \(I_{j}/I_{j+1}\) induced by \(d(\chi_{F(g)}|_{W})\circ d\phi_{g}\circ d(\chi_{g^{-1}}|_{W})\) equals \(d\phi_{g}|_{W_{\mu_{j}}}\). The same is true when \(g\) is replaced by \(h\). Now (4) and the fact that \(d\psi_{g}=d\psi_{h}\) imply \(d\phi_{g}|_{W_{\mu_{j}}}=d\phi_{h}|_{W_{\mu_{j}}}\).
Finally, by picking \(g=0\) in (4) we obtain \(\phi=\psi\) as \(F(0)=0\). The equality \((\chi_{F_{0}(g)}|_{W})\circ\phi=\phi\circ(\chi_{g}|_{W})\) for any \(g\in N\) also follows from (4).
Below, when we say \(F:\mathfrak{n}\to\mathfrak{n}\) is a biLipschitz map, we identify \(\mathfrak{n}\) with \(N\) and equip \(\mathfrak{n}\) with a \(D\)-homogeneous distance.
A self-homeomorphism \(F:G\to G\) of a Lie group is called an affine map if \(F=L_{g}\circ\phi\), where \(\phi\) is an automorphism of \(G\) and \(L_{g}\) is left translation by \(g\in G\). We say \(\phi\) is the automorphism part of the affine map \(F\).
Let \((\mathfrak{n},D)\) and \(\mathfrak{w}\) be as in Lemma 5.1. Denote by \(\bar{D}:\mathfrak{n}/\mathfrak{w}\to\mathfrak{n}/\mathfrak{w}\) the derivation of \(\mathfrak{n}/\mathfrak{w}\) induced by \(D\) and \(\sigma(\bar{D})\) the set of eigenvalues of \(\bar{D}\). Let \(H\subset\mathfrak{n}\) be a graded subspace of \(\mathfrak{n}\) complementary to \(\mathfrak{w}\); that is, for each \(\lambda\in\sigma(\bar{D})\), \(H_{\lambda}\subset V_{\lambda}\) is a complementary linear subspace of \(W_{\lambda}\) in \(V_{\lambda}\) (if \(W_{\lambda}=\{0\}\), then \(H_{\lambda}=V_{\lambda}\)), and \(H=\oplus_{\lambda\in\sigma(\bar{D})}H_{\lambda}\). Notice that for every \(g\in\mathfrak{n}\), there are unique \(h\in H\), \(w\in\mathfrak{w}\) such that \(g=h*w\).
For \(x\in\mathfrak{n}\), we use \(\bar{x}\) to denote \(\pi(x)\), where \(\pi:\mathfrak{n}\to\mathfrak{n}/\mathfrak{w}\) is the quotient map. For convenience, we introduce the following terminology.
**Definition 5.3**.: _Let \((\mathfrak{n},D)\) be a diagonal Heintze pair and \(\mathfrak{w}\) an ideal of \(\mathfrak{n}\) such that \(D(\mathfrak{w})=\mathfrak{w}\). Let \(F:N\to N\) be a biLipschitz map that permutes the cosets of \(W\), where \(W\) is the connected Lie subgroup of \(N\) with Lie algebra \(\mathfrak{w}\). In addition assume that \(F\) induces an affine map of \(N/W\) and that for every \(g\in N\), the map \(F_{g}|_{W}:W\to W\) is an automorphism of \(W\). Let \(\bar{B}\) be the automorphism part of the affine map of \(N/W\) induced by \(F\). We say an expression \(F(h*w)=F(0)*Bh*Aw*As(\bar{h})\) (\(h\in H\), \(w\in\mathfrak{w}\)), with \(A=d\phi\), where \(\phi\) is the automorphism of \(W\) from Lemma 5.1, is a compatible expression of \(F\) if the following hold: (1) \(B:H\to\mathfrak{n}\) is a linear map satisfying \(B(H_{\lambda})\subset V_{\lambda}\) and \(d\bar{B}\circ\pi|_{H}=\pi\circ B\), where \(\pi:\mathfrak{n}\to\bar{\mathfrak{n}}\) is the quotient map; (2) \([Bh,Aw]=A[h,w]\) for any \(h\in H\), \(w\in\mathfrak{w}\); (3) \(s\) is a map \(\mathfrak{n}/\mathfrak{w}\to Z(\mathfrak{w})\), where \(Z(\mathfrak{w})\) denotes the center of \(\mathfrak{w}\)._
We shall show (see Lemma 5.5) that conditions 1-2 above imply condition 3. But to prove this we need the existence of a compatible expression (Lemma 5.4).
In general a biLipschitz map \(F\) does not have a unique compatible expression. The reason is that there might be more than one linear map \(B:H\to\mathfrak{n}\) satisfying Conditions 1 and 2 above. As a result, the map \(s\) is also not unique. However, if \(\alpha\) denotes the smallest eigenvalue of \(\bar{D}\) and \(\pi_{\lambda}:\mathfrak{w}\to W_{\lambda}\) is the projection with respect to the decomposition \(\mathfrak{w}=\oplus_{\lambda}W_{\lambda}\), then \(\pi_{\lambda}\circ s\) for \(\lambda<\alpha\) is unique since a change in \(B\) only affects \(\pi_{\lambda}\circ s\) for \(\lambda\geq\alpha\). Another way to see this is to notice \((\pi_{\lambda}\circ s)(\bar{h})=A^{-1}(\pi_{\lambda}(F(0)^{-1}*F(h)))\) for \(\lambda<\alpha\).
**Lemma 5.4**.: _Let \((\mathfrak{n},D)\), \(\mathfrak{w}\) and \(F\) be as in Definition 5.3. Then \(F\) has a compatible expression._
The proof of Lemma 5.4 is tedious; to improve readability we have placed it in Appendix C. The main ingredient in the proof is the fact that \(\phi\circ\chi_{g}=\chi_{G(g)}\circ\phi\), where \(G=F_{0}\); see Lemma 5.1. We also implicitly (and repeatedly) use the fact that \([Z(\mathfrak{w}),\mathfrak{n}]\subset Z(\mathfrak{w})\). This follows from the Jacobi identity and the fact that \(\mathfrak{w}\) is an ideal of \(\mathfrak{n}\).
Notice that the order of appearance of \(Aw\) and \(As(\bar{h})\) in (1) and (2) below is different.
**Lemma 5.5**.: _Let \((\mathfrak{n},D)\), \(\mathfrak{w}\) and \(F\) be as in Definition 5.3. Using the notation \(p=h*w\in\mathfrak{n}\) where \(h\in H\), \(w\in\mathfrak{w}\):_
_(1) If an expression \(F(h*w)=F(0)*Bh*Aw*As(\bar{h})\) for \(F\) satisfies conditions 1-2 of Definition 5.3, then it also satisfies condition 3; (2) If an expression \(F(h*w)=F(0)*Bh*As(\bar{h})*Aw\) for \(F\) satisfies conditions 1-2 of Definition 5.3, then it also satisfies condition 3 (\(s(\bar{h})\in Z(\mathfrak{w})\)) and so \(F(h*w)=F(0)*Bh*Aw*As(\bar{h})\) is a compatible expression._
Proof.: (1) Let \(F(h*w)=F(0)*B_{1}h*Aw*As_{1}(\bar{h})\) be a compatible expression. Condition 1 implies \(B_{1}h-Bh\in\mathfrak{w}\) for all \(h\in H\). From condition 2 we have \([Bh,Aw]=A[h,w]=[B_{1}h,Aw]\) for all \(h\in H\) and all \(w\in\mathfrak{w}\). It follows that \([B_{1}h-Bh,Aw]=0\) and so \(s^{\prime}(h):=B_{1}h-Bh\in Z(\mathfrak{w})\). By using the BCH formula and the fact that \([\mathfrak{n},Z(\mathfrak{w})]\subset Z(\mathfrak{w})\) we get \((-Bh)*B_{1}h\in Z(\mathfrak{w})\). Finally from the two expressions for \(F(h)\) we get \(As(\bar{h})=(-Bh)*B_{1}h*As_{1}(\bar{h})\in Z(\mathfrak{w})\).
(2) The same proof as above also shows in this case \(s(\bar{h})\in Z(\mathfrak{w})\). This allows us to switch \(Aw\) and \(As(\bar{h})\) to get a compatible expression.
We observe that if \([Bh,Aw]=A[h,w]\) for all \(h\in H\), \(w\in\mathfrak{w}\), then
\[Bh*Aw*(Bh)^{-1}=\sum_{i=0}^{\infty}\frac{1}{i!}(\operatorname{ad}{(Bh)})^{i}( Aw)=A(\sum_{i=0}^{\infty}\frac{1}{i!}(\operatorname{ad}{(h)})^{i}(w))=A(h*w*h^{-1}). \tag{5}\]
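As a concrete illustration of (5) (an example of ours, not from the text): take \(\mathfrak{n}\) to be the first Heisenberg algebra with basis \(e_{1},e_{2},e_{3}\), \([e_{1},e_{2}]=e_{3}\), and let \(\mathfrak{w}=\operatorname{span}(e_{2},e_{3})\) (an abelian ideal), \(H=\operatorname{span}(e_{1})\). For \(h=xe_{1}\) and \(w=ye_{2}+ze_{3}\) the adjoint series terminates after one term, since the Heisenberg algebra is \(2\)-step:

\[h*w*h^{-1}=w+[h,w]=ye_{2}+(z+xy)e_{3},\]

and if \([Bh,Aw]=A[h,w]\) then

\[Bh*Aw*(Bh)^{-1}=Aw+[Bh,Aw]=A\bigl(w+[h,w]\bigr)=A(h*w*h^{-1}),\]

which is (5) with the series reduced to its first two terms.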
For later use (in Lemma 8.8), we record the following lemma. The main point of the lemma is that the same linear map \(B\) works for \(F_{p}\).
**Lemma 5.6**.: _Let \(F:\mathfrak{n}\to\mathfrak{n}\) be a biLipschitz map with a compatible expression \(F(h*w)=F(0)*Bh*Aw*As(\bar{h})\). Then for any \(p\in N\), the map \(F_{p}\) admits a compatible expression of the form \(F_{p}(h*w)=Bh*Aw*A\tilde{s}(\bar{h})\) for some map \(\tilde{s}:\mathfrak{n}/\mathfrak{w}\to Z(\mathfrak{w})\)._
Proof.: Note that \((F_{p})_{q}=F_{pq}\) for any \(p,q\in N\) so the same \(A\) works for \(F_{p}\). By Lemma 5.1 we can write \(F_{p}(h*w)=F_{p}(h)*Aw\). Since \(F_{p}\) induces the automorphism \(\bar{B}\) on \(N/W\), we can write \(F_{p}(h)=Bh*A\tilde{s}(\bar{h})\) for some map \(\tilde{s}:\bar{\mathfrak{n}}\to\mathfrak{w}\). So we have \(F_{p}(h*w)=Bh*A\tilde{s}(\bar{h})*Aw\) with Conditions 1 and 2 of Definition 5.3 satisfied. By Lemma 5.5 (2) \(F_{p}(h*w)=Bh*Aw*A\tilde{s}(\bar{h})\) is a compatible expression for \(F_{p}\).
## 6. Characterization of biLipschitz shear maps
**In Sections 6-8 and Appendix C we shall often identify a simply connected nilpotent Lie group with its Lie algebra via the exponential map.** (See Section 3.2.)
From now on, unless stated otherwise, \((N,D)\) will be Carnot-by-Carnot and \(W\) will be the Lie subgroup of \(N\) with Lie algebra \(\mathfrak{w}\) generated by the eigenspace of the smallest eigenvalue of \(D\). In this section we will characterize continuous maps \(s:N/W\to W\) such that the shear maps \(F:(N,d)\to(N,d)\) given by \(F(g)=g\,s(gW)\) are biLipschitz. BiLipschitz shear maps will be used to conjugate a uniform quasisimilarity group into a similarity group. We are only interested in the case \(s(0)=0\). In this case, \(F(0)=0\). We first introduce a useful differential one form on Carnot groups, which will be used in the characterization of biLipschitz shear maps.
### Horizontal tautological one form on Carnot groups
We first recall the tautological one form on Lie groups and then define the horizontal tautological one form on Carnot groups. Finally we give an expression for the horizontal tautological one form in exponential coordinates.
For a Lie group \(G\) with Lie algebra \(\mathfrak{g}=T_{e}G\), the tautological one form \(\theta\) is a \(\mathfrak{g}\)-valued left invariant one form on \(G\) defined as follows. For each \(x\in G\), \(\theta_{x}:T_{x}G\to\mathfrak{g}\) is the linear map given by \(\theta_{x}(X)=\tilde{X}_{e}\), where \(\tilde{X}\) is the left invariant vector field on \(G\) satisfying \(\tilde{X}_{x}=X\) and \(\tilde{X}_{g}\) denotes the value of \(\tilde{X}\) at \(g\in G\). The name "tautological" comes from the fact that at each \(x\in G\), \(\theta_{x}\) is the identity map if one identifies the tangent vectors with the corresponding left invariant vector fields.
The horizontal tautological one form is a counterpart of the tautological one form in the setting of Carnot groups. Let \(G\) be a Carnot group with Lie algebra grading \(\mathfrak{g}=\oplus_{j=1}^{n}V_{j}\). Then the horizontal tautological one form \(\theta_{H}\) is a \(V_{1}\)-valued left invariant one form on \(G\). Let \(\pi_{1}:\mathfrak{g}\to V_{1}\) be the projection with respect to the decomposition \(\mathfrak{g}=\oplus_{j=1}^{n}V_{j}\). Define \(\theta_{H}=\pi_{1}\circ\theta\); that is, for each \(x\in G\), \((\theta_{H})_{x}:T_{x}G\to V_{1}\) is defined to be the composition \(\pi_{1}\circ\theta_{x}\).
Let \(e_{1},\cdots,e_{k_{n}}\) be a basis of \(\mathfrak{g}=T_{e}G\) such that \(\{e_{1},\cdots,e_{k_{1}}\}\) is a basis of \(V_{1}\). For each \(i\), let \(X_{i}\) be the left invariant vector field on \(G\) that equals \(e_{i}\) at \(e\). Let \(\{\theta_{1},\cdots,\theta_{k_{n}}\}\) be the basis of the space of left invariant one forms on \(G\) dual to \(\{e_{1},\cdots,e_{k_{n}}\}\). So we have \(\theta_{i}(X_{j})=\delta_{ij}\). Now it is easy to see that \(\theta_{H}=\sum_{i=1}^{k_{1}}\theta_{i}e_{i}\) as \((\sum_{i=1}^{k_{1}}\theta_{i}e_{i})(\sum_{j=1}^{k_{n}}a_{j}X_{j})=\sum_{i=1}^{ k_{1}}a_{i}e_{i}\). If we use the exponential coordinates \(\mathbb{R}^{\dim\mathfrak{g}}\to G\), \((x_{1},\cdots,x_{k_{n}})\mapsto\exp(\sum x_{j}e_{j})\), then \(\theta_{i}\) is given by \(\theta_{i}=dx_{i}\) for \(1\leq i\leq k_{1}\). Hence \(\theta_{H}\) has the expression \(\theta_{H}=\sum_{i=1}^{k_{1}}dx_{i}e_{i}\).
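For instance (an illustration of ours): on the first Heisenberg group, with \(V_{1}=\operatorname{span}(e_{1},e_{2})\), \(V_{2}=\operatorname{span}(e_{3})\) and \([e_{1},e_{2}]=e_{3}\), the above gives \(\theta_{H}=dx_{1}\,e_{1}+dx_{2}\,e_{2}\) in exponential coordinates \((x_{1},x_{2},x_{3})\), so for any horizontal curve \(\gamma:[a,b]\to G\),

\[\int_{\gamma}\theta_{H}=\bigl(x_{1}(\gamma(b))-x_{1}(\gamma(a))\bigr)e_{1}+\bigl(x_{2}(\gamma(b))-x_{2}(\gamma(a))\bigr)e_{2};\]

in particular \(\int_{\gamma}\theta_{H}\) depends only on the endpoints and vanishes on closed curves.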
### Structure of Lie algebra when \(\alpha\) is not an integer
We return to the case that \((N,D)\) is Carnot-by-Carnot. We rescale so that the smallest eigenvalue of \(D\) is \(1\). Let \(\alpha>1\) be the smallest eigenvalue of the induced derivation \(\bar{D}:\mathfrak{n}/\mathfrak{w}\to\mathfrak{n}/\mathfrak{w}\). Under the assumption that \(\alpha\) is not an integer we clarify the algebraic structure of \(\mathfrak{n}\) and show that \(\mathfrak{n}\) is a central product.
Recall \(\bar{\mathfrak{n}}=\mathfrak{n}/\mathfrak{w}=\bar{V}_{1}\oplus\cdots\oplus \bar{V}_{m}\) is assumed to be a Carnot algebra and \(\pi:\mathfrak{n}\to\bar{\mathfrak{n}}\) is the quotient map.
**Lemma 6.1**.: _(1) If \(\alpha>1\) is irrational, then \(\mathfrak{n}\) has an ideal \(\mathfrak{h}\) that is mapped by \(\pi\) isomorphically onto \(\bar{\mathfrak{n}}\). In particular, \(\mathfrak{n}\) is the direct sum of two ideals \(\mathfrak{w}\) and \(\mathfrak{h}\). (2) Suppose \(\alpha>1\) is rational but not an integer. Let \(k_{0}=\min\{k\in\mathbb{N}|\,k\alpha\,\,\,\text{is an integer}\}\). Then there exist a graded central ideal \(I\) of \(\mathfrak{w}\) contained in \(\oplus_{l}W_{lk_{0}\alpha}\), a Carnot algebra \(\mathfrak{h}=\oplus_{j}H_{j}\) with a graded central ideal \(J\) contained in \(\oplus_{l}H_{lk_{0}}\), and a linear isomorphism \(\phi:I\to J\) satisfying \(\phi(I\cap W_{lk_{0}\alpha})=J\cap H_{lk_{0}}\), such that \(\mathfrak{n}\) is isomorphic to the central product \(\mathfrak{w}\times_{\phi}\mathfrak{h}=(\mathfrak{w}\oplus\mathfrak{h})/K\), where \(K=\{(x,\phi(x))|x\in I\}\)._
Proof.: Let \(H_{1}=V_{\alpha}\) and \(\mathfrak{h}\) be the Lie subalgebra of \(\mathfrak{n}\) generated by \(H_{1}\). Then \(\pi(H_{1})=\bar{V}_{1}\) and \(\pi(\mathfrak{h})=\bar{\mathfrak{n}}\). Furthermore, the property \([V_{a},V_{b}]\subset V_{a+b}\) implies \(\mathfrak{h}\) is a Carnot algebra.
(1) Assume \(\alpha\) is irrational. Since \(\mathfrak{h}\subset\oplus_{l}V_{l\alpha}\) and \(\mathfrak{w}\subset\oplus_{j\geq 1}V_{j}\), we have \(\mathfrak{w}\cap\mathfrak{h}=\{0\}\). As \(\mathfrak{w}\) is an ideal of \(\mathfrak{n}\), the property \([V_{a},V_{b}]\subset V_{a+b}\) implies \([\mathfrak{w},\mathfrak{h}]=0\). It follows that \(\mathfrak{h}\) is an ideal of \(\mathfrak{n}\) and \(\mathfrak{n}=\mathfrak{w}\oplus\mathfrak{h}\) is a direct sum of two ideals.
(2) Suppose \(\alpha>1\) is rational but not an integer. The fact \([H_{1},W_{1}]\subset V_{1+\alpha}\cap\mathfrak{w}=\{0\}\) implies \([\mathfrak{w},\mathfrak{h}]=0\) and so \(\mathfrak{h}\) is also an ideal of \(\mathfrak{n}\). However in this case \(\mathfrak{h}\) and \(\mathfrak{w}\) may have nontrivial intersection. The fact \([\mathfrak{w},\mathfrak{h}]=0\) implies \(\mathfrak{w}\cap\mathfrak{h}\) is central in \(\mathfrak{n}\) and so is central in both \(\mathfrak{w}\) and \(\mathfrak{h}\). Define \(f:\mathfrak{w}\oplus\mathfrak{h}\to\mathfrak{n}\) by \(f(w,h)=w+h\). Then \(f\) is a surjective Lie algebra homomorphism with kernel \(f^{-1}(0)=\{(w,h)|w\in\mathfrak{w}\cap\mathfrak{h},h=-w\}\). Set \(I=J=\mathfrak{w}\cap\mathfrak{h}\) and define \(\phi:I\to J\) by \(\phi(w)=-w\). Then \(\mathfrak{n}\cong\mathfrak{w}\times_{\phi}\mathfrak{h}\). Finally, as the intersection of two graded ideals \(\mathfrak{w}\), \(\mathfrak{h}\) of \(\mathfrak{n}\), \(\mathfrak{w}\cap\mathfrak{h}\) (\(=I=J\)) is a graded ideal in both \(\mathfrak{w}\) and \(\mathfrak{h}\).
Since \([\mathfrak{w},\mathfrak{h}]=0\) and \(\mathfrak{n}=\mathfrak{w}+\mathfrak{h}\), we see that \(Z(\mathfrak{w})\) lies in the center of \(\mathfrak{n}\).
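A degenerate but instructive example for case (1) of Lemma 6.1 (ours, purely to illustrate the algebra): take \(\mathfrak{n}=\mathbb{R}^{2}\) abelian with \(D=\operatorname{diag}(1,\alpha)\) for an irrational \(\alpha>1\). Then

\[\mathfrak{w}=V_{1}=\mathbb{R}e_{1},\qquad\mathfrak{h}=V_{\alpha}=\mathbb{R}e_{2},\qquad\mathfrak{n}=\mathfrak{w}\oplus\mathfrak{h},\]

both summands are ideals (everything commutes), \(\bar{\mathfrak{n}}=\mathfrak{n}/\mathfrak{w}\cong\mathbb{R}\) is Carnot, and \(\pi\) maps \(\mathfrak{h}\) isomorphically onto \(\bar{\mathfrak{n}}\), as in the lemma.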
### Characterization of biLipschitz shear maps
Let \(s:N/W\to W\) be a map satisfying \(s(0)=0\) and \(F:(N,d)\to(N,d)\) be given by \(F(g)=gs(gW)\).
**Lemma 6.2**.: _Assume \(F\) as above is biLipschitz. Then \(s\) takes values in the center \(Z(W)\) of \(W\)._
Proof.: As \(s(gW)\in W\), \(F\) maps each coset of \(W\) to itself. Let \(g\in N\) and \(A_{g}:W\to W\) be given by \(A_{g}=F_{g}|_{W}\). Observe that \(A_{g}\) is an inner automorphism of \(W\): \(A_{g}(w)=s(gW)^{-1}ws(gW)\). As \(F\) is biLipschitz, so is \(A_{g}\). A biLipschitz automorphism of a Carnot group is necessarily graded (see Lemma 3.5) and so is completely determined by its action on the first layer. On the other hand, an inner automorphism induces the trivial map on \(\mathfrak{w}/[\mathfrak{w},\mathfrak{w}]\). It follows that \(A_{g}\) is the trivial automorphism and therefore \(s(gW)\) must lie in \(Z(W)\).
We need to introduce some function spaces before we can give the statement of the characterization.
**The spaces \(\mathcal{H}_{j}\).** Let \(\pi:\mathfrak{n}\to\mathfrak{n}/\mathfrak{w}\) be the natural projection. For any \(\bar{X}\in\mathfrak{n}/\mathfrak{w}\) and \(z\in Z(\mathfrak{w})\), we define \([\bar{X},z]:=[X,z]\), where \(X\in\mathfrak{n}\) is such that \(\pi(X)=\bar{X}\). This is well-defined since \(z\in Z(\mathfrak{w})\) and different choices of \(X\) differ by an element in \(\mathfrak{w}\). Notice \([\bar{X},z]\in Z(\mathfrak{w})\).
Let \(\theta_{H}\) be the horizontal tautological one form on \(\mathfrak{n}/\mathfrak{w}\). For any \(j\geq 1\), denote by \(P_{j}\) the space of all continuous maps \(c:\mathfrak{n}/\mathfrak{w}\to Z_{j}(\mathfrak{w})\) (recall \(Z_{j}(\mathfrak{w})=Z(\mathfrak{w})\cap W_{j}\)) satisfying \(c(0)=0\) and \(\int_{\gamma}[c(x),\theta_{H}(x)]=0\) for all closed horizontal curves \(\gamma\) in \(\mathfrak{n}/\mathfrak{w}\). Here we are using our definition of bracket \([z,\bar{X}]\) for \(z\in Z(\mathfrak{w})\), \(\bar{X}\in\mathfrak{n}/\mathfrak{w}\). Notice \([c(x),\theta_{H}(x)]\in Z_{j+\alpha}(\mathfrak{w})\) as \(c(x)\in Z_{j}(\mathfrak{w})\) and \(\theta_{H}(x)\in\bar{V}_{1}\). For any \(c\in P_{j}\), define a map \(c^{(1)}:\mathfrak{n}/\mathfrak{w}\to Z_{j+\alpha}(\mathfrak{w})\) by \(c^{(1)}(p)=\int_{\gamma}[c(x),\theta_{H}(x)]\), where \(\gamma\) is any horizontal path from \(0\) to \(p\). This is well-defined by the definition of \(P_{j}\). If \(c^{(1)}\in P_{j+\alpha}\), then we define \(c^{(2)}=(c^{(1)})^{(1)}\). Similarly, we can define \(c^{(k)}:\mathfrak{n}/\mathfrak{w}\to Z_{j+k\alpha}(\mathfrak{w})\) if \(c^{(k-1)}\in P_{j+(k-1)\alpha}\).
For each integer \(1\leq j\leq\alpha\), let \(E_{j}\) be the space of \(\frac{j}{\alpha}\)-Hölder continuous maps \(c:\mathfrak{n}/\mathfrak{w}\to Z_{j}(\mathfrak{w})\) satisfying \(c(0)=0\), and \(\mathcal{H}_{j}\) be the set of elements \(c\in E_{j}\) such that \(c^{(k)}\) is defined for all \(k\geq 1\) (i.e., \(c^{(k-1)}\in P_{j+(k-1)\alpha}\)). Here the metric on \(\mathfrak{n}/\mathfrak{w}\) is a Carnot metric and the metric on \(Z_{j}(\mathfrak{w})\) is a Euclidean metric. Note that \(c^{(k)}\equiv 0\) for large enough \(k\) since \(\mathfrak{n}\) is nilpotent.
**Special case.** We notice that \(\mathcal{H}_{j}=E_{j}\) for all \(1\leq j<\alpha\) when \(\alpha\) is not an integer. This is because in this case \([Z(\mathfrak{w}),\mathfrak{n}]=0\) and so \(c^{(i)}\equiv 0\) for all \(c\in E_{j}\) and all \(i\geq 1\).
Given a continuous map \(s:\mathfrak{n}/\mathfrak{w}\to Z(\mathfrak{w})\) satisfying \(s(0)=0\), define \(K:\mathfrak{n}/\mathfrak{w}\times\mathfrak{n}/\mathfrak{w}\to Z(\mathfrak{w})\) by
\[K(\bar{g}_{1},\bar{g}_{2})=s(\bar{g}_{2})*(g_{1}^{-1}*g_{2})^{-1}*(-s(\bar{g}_{ 1}))*(g_{1}^{-1}*g_{2}).\]
Notice that \(K\) is well-defined as \(s\) takes values in \(Z(\mathfrak{w})\). Since both \(s(\bar{g}_{2})\) and \((g_{1}^{-1}*g_{2})^{-1}*(-s(\bar{g}_{1}))*(g_{1}^{-1}*g_{2})\) lie in \(Z(\mathfrak{w})\), the BCH formula implies \(K(\bar{g}_{1},\bar{g}_{2})=s(\bar{g}_{2})+(g_{1}^{-1}*g_{2})^{-1}*(-s(\bar{g}_{ 1}))*(g_{1}^{-1}*g_{2})\).
**Lemma 6.3**.: _Write \(s=\sum_{j}s_{j}\), where \(s_{j}:\mathfrak{n}/\mathfrak{w}\to Z_{j}(\mathfrak{w})\) is the \(W_{j}\) component of \(s\)._
_(1) Assume_ \(\alpha\) _is not an integer. If there is a constant_ \(C>0\) _such that_ \(|\pi_{j}(K(\bar{g}_{1},\bar{g}_{2}))|^{\frac{1}{j}}\leq Cd_{CC}^{\frac{1}{ \alpha}}(\bar{g}_{1},\bar{g}_{2})\) _for all_ \(j\geq 1\) _and all_ \(\bar{g}_{1},\bar{g}_{2}\in\mathfrak{n}/\mathfrak{w}\)_, then_ \(s_{j}\in E_{j}\) _for_ \(1\leq j<\alpha\) _and_ \(s_{j}\equiv 0\) _for_ \(j>\alpha\)_;_
_(2) Assume_ \(\alpha\) _is an integer and_ \(j_{0}\) _is an integer satisfying_ \(1\leq j_{0}\leq\alpha\)_. If there is a constant_ \(C>0\)
_such that \(|\pi_{k\alpha+j_{0}}(K(\bar{g}_{1},\bar{g}_{2}))|^{\frac{1}{k\alpha+j_{0}}}\leq Cd _{CC}^{\frac{1}{\alpha}}(\bar{g}_{1},\bar{g}_{2})\) for all \(k\geq 0\) and all \(\bar{g}_{1},\bar{g}_{2}\in\mathfrak{n}/\mathfrak{w}\), then \(s_{j_{0}}\in\mathcal{H}_{j_{0}}\) and \(s_{k\alpha+j_{0}}=s_{j_{0}}^{(k)}\) for all \(k\geq 1\)._
Proof.: Notice
\[K(\bar{g}_{1},\bar{g}_{2})=s(\bar{g}_{2})-s(\bar{g}_{1})+[-(g_{1}^{-1}*g_{2}),- s(\bar{g}_{1})]+\sum_{k\geq 2}\frac{1}{k!}(ad(-(g_{1}^{-1}*g_{2})))^{k}(-s( \bar{g}_{1})).\]
Since \(s\) takes values in \(Z(\mathfrak{w})\), using our definition of bracket \([\bar{X},z]\) with \(\bar{X}\in\mathfrak{n}/\mathfrak{w}\), \(z\in Z(\mathfrak{w})\), we can write
\[K(\bar{g}_{1},\bar{g}_{2})=s(\bar{g}_{2})-s(\bar{g}_{1})+[-(\bar{g}_{1}^{-1}* \bar{g}_{2}),-s(\bar{g}_{1})]+\sum_{k\geq 2}\frac{1}{k!}(ad(-(\bar{g}_{1}^{-1}* \bar{g}_{2})))^{k}(-s(\bar{g}_{1})).\]
(1) Assume \(\alpha\) is not an integer. Since \([Z(\mathfrak{w}),\mathfrak{n}]=0\) in this case we have \(K(\bar{g}_{1},\bar{g}_{2})=s(\bar{g}_{2})-s(\bar{g}_{1})\). The assumption implies \(|s_{j}(\bar{g}_{2})-s_{j}(\bar{g}_{1})|^{\frac{1}{j}}\leq Cd_{CC}^{\frac{1}{\alpha}}(\bar{g}_{1},\bar{g}_{2})\) for all \(j\geq 1\) and all \(\bar{g}_{1},\bar{g}_{2}\in\mathfrak{n}/\mathfrak{w}\). When \(1\leq j<\alpha\), this implies \(s_{j}\in E_{j}\). When \(j>\alpha\), this implies \(s_{j}\) is a constant function and so \(s_{j}\equiv 0\) as \(s_{j}(0)=0\).
(2) Assume \(\alpha\) is an integer. As \([-(\bar{g}_{1}^{-1}*\bar{g}_{2}),-s(\bar{g}_{1})]\in\oplus_{i\geq\alpha+1}W_{i}\), we have \(\pi_{j}K(\bar{g}_{1},\bar{g}_{2})=s_{j}(\bar{g}_{2})-s_{j}(\bar{g}_{1})\) for \(1\leq j\leq\alpha\). The assumption implies \(s_{j_{0}}\in E_{j_{0}}\). Let \(\bar{\gamma}:[a,b]\to\mathfrak{n}/\mathfrak{w}\) be a rectifiable horizontal curve parametrized by arc length. Then \(d_{CC}(\bar{\gamma}(t_{1}),\bar{\gamma}(t_{2}))\leq|t_{2}-t_{1}|\) for any \(a\leq t_{1},t_{2}\leq b\). Write \(\bar{\gamma}(t_{1})^{-1}*\bar{\gamma}(t_{2})=\sum_{j}\bar{p}_{j}\) with \(\bar{p}_{j}=\bar{p}_{j}(t_{1},t_{2})\in\bar{V}_{j}\). Since \(\bar{\gamma}\) is a horizontal curve, we have \(\bar{p}_{j}=o(t_{2}-t_{1})\) for \(j\geq 2\) (as \(t_{2}\to t_{1}\)). If \(\bar{\gamma}\) has a tangent at \(t_{1}\), then \(\frac{\bar{p}_{1}}{t_{2}-t_{1}}\to\bar{\gamma}^{\prime}(t_{1})\) as \(t_{2}\to t_{1}\).
For \(k\geq 1\), we have
\[\pi_{k\alpha+j_{0}}K(\bar{\gamma}(t_{1}),\bar{\gamma}(t_{2}))=s_{k\alpha+j_{0 }}(\bar{\gamma}(t_{2}))-s_{k\alpha+j_{0}}(\bar{\gamma}(t_{1}))+[\bar{p}_{1},s _{(k-1)\alpha+j_{0}}(\bar{\gamma}(t_{1}))]+o(t_{2}-t_{1}).\]
Since \(d_{CC}(\bar{\gamma}(t_{1}),\bar{\gamma}(t_{2}))\leq|t_{2}-t_{1}|\), the assumption implies that
\[\frac{\pi_{k\alpha+j_{0}}K(\bar{\gamma}(t_{1}),\bar{\gamma}(t_{2}))}{t_{2}-t_ {1}}\to 0\]
as \(t_{2}\to t_{1}\). If \(\bar{\gamma}\) has a tangent at \(t_{1}\), then
\[\frac{d}{dt}s_{k\alpha+j_{0}}(\bar{\gamma}(t))|_{t=t_{1}}=\lim_{t_{2}\to t_{1 }}\frac{s_{k\alpha+j_{0}}(\bar{\gamma}(t_{2}))-s_{k\alpha+j_{0}}(\bar{\gamma}( t_{1}))}{t_{2}-t_{1}}=[s_{(k-1)\alpha+j_{0}}(\bar{\gamma}(t_{1})),\bar{\gamma}^{ \prime}(t_{1})].\]
By the fundamental theorem of calculus, we have
\[s_{k\alpha+j_{0}}(\bar{\gamma}(b))-s_{k\alpha+j_{0}}(\bar{\gamma}(a))=\int_{a }^{b}[s_{(k-1)\alpha+j_{0}}(\bar{\gamma}(t)),\bar{\gamma}^{\prime}(t)]dt=\int_ {\bar{\gamma}}[s_{(k-1)\alpha+j_{0}}(x),\theta_{H}(x)]. \tag{6}\]
In particular, \(\int_{\bar{\gamma}}[s_{(k-1)\alpha+j_{0}}(x),\theta_{H}(x)]=0\) when \(\bar{\gamma}\) is a closed horizontal curve in \(\mathfrak{n}/\mathfrak{w}\) and so \(s_{(k-1)\alpha+j_{0}}\in P_{(k-1)\alpha+j_{0}}\). Furthermore, (6) also shows \(s_{k\alpha+j_{0}}=s_{(k-1)\alpha+j_{0}}^{(1)}\). By induction this shows that \(s_{j_{0}}\in\mathcal{H}_{j_{0}}\) and \(s_{k\alpha+j_{0}}=s_{j_{0}}^{(k)}\) for all \(k\geq 1\).
**Proposition 6.4**.: _Let \(s:\mathfrak{n}/\mathfrak{w}\to Z(\mathfrak{w})\) be a continuous map satisfying \(s(0)=0\) and \(F:\mathfrak{n}\to\mathfrak{n}\), \(F(g)=g*s(\bar{g})\) be the associated shear map. Write \(s=\sum_{j}s_{j}\), where \(s_{j}:\mathfrak{n}/\mathfrak{w}\to Z_{j}(\mathfrak{w})\) is the \(W_{j}\) component of \(s\). Assume \(F\) is biLipschitz. (1) If \(\alpha\) is not an integer, then \(s_{j}\in E_{j}\) for \(1\leq j<\alpha\) and \(s_{j}\equiv 0\) for \(j>\alpha\); (2) If \(\alpha\) is an integer, then \(s_{j}\in\mathcal{H}_{j}\) for each \(1\leq j\leq\alpha\) and \(s_{k\alpha+j}=s_{j}^{(k)}\) for all \(k\geq 1\) and \(1\leq j\leq\alpha\)._
Proof.: There is some \(L>0\) such that \(d(F(g_{1}),F(g_{2}))\leq L\cdot d(g_{1},g_{2})\) for any \(g_{1},g_{2}\in\mathfrak{n}\). Let \(g_{1},g_{2}\in\mathfrak{n}\). Then \(F(g_{1})=g_{1}*s(\bar{g}_{1})\) and \(F(g_{2})=g_{2}*s(\bar{g}_{2})\). Since
\[(F(g_{1}))^{-1}*F(g_{2})=(s(\bar{g}_{1}))^{-1}*g_{1}^{-1}*g_{2}*s(\bar{g}_{2}),\]
we have \(K(\bar{g}_{1},\bar{g}_{2})=(g_{1}^{-1}*g_{2})^{-1}*(F(g_{1}))^{-1}*F(g_{2})\). It follows that
\[d(0,K(\bar{g}_{1},\bar{g}_{2})) =d(g_{1}^{-1}*g_{2},(F(g_{1}))^{-1}*F(g_{2}))\] \[\leq d(g_{1}^{-1}*g_{2},0)+d(0,(F(g_{1})^{-1}*F(g_{2})))\] \[=d(g_{1},g_{2})+d(F(g_{1}),F(g_{2}))\leq(L+1)d(g_{1},g_{2}).\]
Hence for any \(i\geq 1\) we have
\[|\pi_{i}(K(\bar{g}_{1},\bar{g}_{2}))|^{\frac{1}{i}}\leq d(0,K(\bar{g}_{1},\bar{g}_{2})) \leq(L+1)\inf_{w_{1},w_{2}\in\mathfrak{w}}d(g_{1}*w_{1},g_{2}*w_{2})\] \[=(L+1)d(g_{1}*\mathfrak{w},g_{2}*\mathfrak{w})\] \[\leq(L+1)Cd_{CC}(\bar{g}_{1},\bar{g}_{2})^{\frac{1}{\alpha}},\]
for some \(C>0\) as the distance between cosets \(d(g_{1}*\mathfrak{w},g_{2}*\mathfrak{w})\) is comparable with \(d_{CC}(\bar{g}_{1},\bar{g}_{2})^{\frac{1}{\alpha}}\). The proposition now follows from Lemma 6.3.
**Lemma 6.5**.: _Given any Carnot group \(G\), there exists an integer \(n_{0}\) and a constant \(C>0\) with the following property: for any \(g\in G\), there is a horizontal curve \(c\) from \(0\) to \(g\) that is a concatenation of at most \(n_{0}\) horizontal line segments and such that the length of \(c\) is at most \(C\cdot d(0,g)\). Here a horizontal line segment in \(G\) is a path \(c:[0,a]\to G\) of the form \(c(t)=g\,\text{exp}(tX)\), where \(X\) lies in the first layer of the Lie algebra of \(G\)._
Proof.: This follows from the proof of Proposition 2.26 in [BLD], see also Chapter 8 of [1] (in particular Theorem 8.1 and Proposition 8.5).
Recall that \(H_{1}\subset V_{\alpha}\) is a linear subspace complementary to \(W_{\alpha}\).
**Lemma 6.6**.: _There is a constant \(C>0\) depending only on \(N\) with the following property. For any \(b_{1},b_{2}>0\), and any \(w\in Z(\mathfrak{n})\), \(h\in H_{1}\) satisfying \(d(0,w)\leq b_{1}^{\frac{1}{\alpha}}\) and \(|h|<b_{2}\), the inequality \(d(0,(-h)*w*h)\leq C\cdot(\max\{b_{1},b_{2}\})^{\frac{1}{\alpha}}\) holds._
Proof.: Write \(w=w_{1}+\cdots+w_{m}\) with \(w_{j}\in W_{j}\). The assumption \(d(0,w)\leq b_{1}^{\frac{1}{\alpha}}\) implies \(|w_{j}|\leq b_{1}^{\frac{j}{\alpha}}\) for each \(j\). We calculate
\[(-h)*w*h=w+\sum_{k\geq 1}\frac{(-1)^{k}}{k!}(ad(h))^{k}w.\]
As \(N\) is nilpotent, the above is a finite sum. We have
\[\pi_{j}((-h)*w*h)=w_{j}+\sum_{k\geq 1,j-k\alpha\geq 1}\frac{(-1)^{k}}{k!}(ad(h ))^{k}w_{j-k\alpha}.\]
By using \(|[X,Y]|\leq C_{0}\cdot|X|\cdot|Y|\), we get (with \(b=\max\{b_{1},b_{2}\}\))
\[|\frac{(-1)^{k}}{k!}(ad(h))^{k}w_{j-k\alpha}|\leq\frac{C_{0}^{k}}{k!}|h|^{k}| w_{j-k\alpha}|\leq\frac{C_{0}^{k}}{k!}b_{2}^{k}b_{1}^{\frac{j-k\alpha}{ \alpha}}\leq\frac{C_{0}^{k}}{k!}b^{\frac{j}{\alpha}}.\]
From this it is clear that \(|\pi_{j}((-h)*w*h)|\leq Cb^{\frac{j}{\alpha}}\) for some \(C\) depending only on \(N\) and the lemma follows.
**Proposition 6.7**.: _Let \(s_{j}\in\mathcal{H}_{j}\) be given for each integer \(1\leq j\leq\alpha\). (1) If \(\alpha\) is not an integer, set \(s_{j}\equiv 0\) for \(j>\alpha\) and \(s=\sum_{j}s_{j}\); (2) If \(\alpha\) is an integer, set \(s_{k\alpha+j}=s_{j}^{(k)}\) for all \(k\geq 1\) and \(s=\sum_{j}s_{j}\). Then the shear map associated to \(s\) is biLipschitz._
Proof.: We shall prove that the shear map \(F\) associated to \(s\) is Lipschitz. The same argument shows \(F^{-1}\) is also Lipschitz since \(F^{-1}\) is the shear map associated to \(-s\).
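Indeed, since \(-s(\bar{g})\in\mathfrak{w}\) we have \(\overline{g*(-s(\bar{g}))}=\bar{g}\), so a direct check shows the shear map associated to \(-s\) is a two-sided inverse of \(F\):

\[F\bigl(g*(-s(\bar{g}))\bigr)=g*(-s(\bar{g}))*s(\bar{g})=g,\qquad g*s(\bar{g})*\bigl(-s(\overline{g*s(\bar{g})})\bigr)=g*s(\bar{g})*(-s(\bar{g}))=g.\]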
By the triangle inequality,
\[d(F(g_{1}),F(g_{2})) =d(0,F(g_{1})^{-1}*F(g_{2}))\] \[=d((g_{1}^{-1}*g_{2})^{-1},(g_{1}^{-1}*g_{2})^{-1}*F(g_{1})^{-1}* F(g_{2}))\] \[\leq d((g_{1}^{-1}*g_{2})^{-1},0)+d(0,(g_{1}^{-1}*g_{2})^{-1}*F(g_ {1})^{-1}*F(g_{2}))\] \[=d(g_{1},g_{2})+d(0,K(\bar{g}_{1},\bar{g}_{2})).\]
Hence it suffices to show there is some \(C>0\) such that
\[d(0,K(\bar{g}_{1},\bar{g}_{2}))\leq C\cdot d(g_{1},g_{2})\ \ \forall g_{1},g_{2}\in\mathfrak{n}. \tag{7}\]
(1) Assume \(\alpha\) is not an integer. In this case we have \(K(\bar{g}_{1},\bar{g}_{2})=s(\bar{g}_{2})-s(\bar{g}_{1})\) and (7) follows from the assumption.
(2) Assume \(\alpha\) is an integer. We first show (7) in the case when \(\bar{g}_{1}^{-1}*\bar{g}_{2}\) lies in \(\bar{V}_{1}\). So assume \(\bar{g}_{1}^{-1}*\bar{g}_{2}=t\bar{h}\in\bar{V}_{1}\) with \(|\bar{h}|=1\) and \(t>0\). We have
\[K(\bar{g}_{1},\bar{g}_{2}) =s(\bar{g}_{2})-s(\bar{g}_{1})+[-t\bar{h},-s(\bar{g}_{1})]+\sum_{ k\geq 2}\frac{1}{k!}(ad(-t\bar{h}))^{k}(-s(\bar{g}_{1}))\] \[=s(\bar{g}_{2})-s(\bar{g}_{1})+t[\bar{h},s(\bar{g}_{1})]+\sum_{k \geq 2}\frac{(-1)^{k+1}t^{k}}{k!}(ad(\bar{h}))^{k}s(\bar{g}_{1}).\]
Since in our case \(d(g_{1},g_{2})\geq d_{CC}(\bar{g}_{1},\bar{g}_{2})^{\frac{1}{\alpha}}=t^{\frac{1}{\alpha}}\), it suffices to show that there is some constant \(C>0\) such that for each \(j\geq 1\), the following inequality holds:
\[|\pi_{j}K(\bar{g}_{1},\bar{g}_{2})|^{\frac{1}{j}}\leq C\cdot t^{\frac{1}{ \alpha}}. \tag{8}\]
When \(1\leq j\leq\alpha\), \(\pi_{j}K(\bar{g}_{1},\bar{g}_{2})=s_{j}(\bar{g}_{2})-s_{j}(\bar{g}_{1})\) and so (8) holds in this case since by assumption \(s_{j}\in E_{j}\). Now let \(k_{0}\geq 1\) and suppose there is some constant \(C>0\) such that \(|\pi_{k\alpha+j}K(\bar{g}_{1},\bar{g}_{2})|^{\frac{1}{k\alpha+j}}\leq C\cdot t ^{\frac{1}{\alpha}}\) holds for all \(k<k_{0}\) and all \(1\leq j\leq\alpha\). We shall show a similar inequality holds for \(k=k_{0}\) (with a different constant). We have
\[\pi_{k_{0}\alpha+j}K(\bar{g}_{1},\bar{g}_{2})\] \[=s_{k_{0}\alpha+j}(\bar{g}_{2})-s_{k_{0}\alpha+j}(\bar{g}_{1})+t [\bar{h},s_{(k_{0}-1)\alpha+j}(\bar{g}_{1})]+\sum_{k=2}^{k_{0}}\frac{(-1)^{k+1 }t^{k}}{k!}(ad(\bar{h}))^{k}(s_{(k_{0}-k)\alpha+j}(\bar{g}_{1})).\]
Let \(\bar{\gamma}:[0,t]\to\mathfrak{n}/\mathfrak{w}\) be the horizontal curve given by \(\bar{\gamma}(t_{1})=\bar{g}_{1}*(t_{1}\bar{h})\). We have \(\bar{\gamma}^{\prime}(t_{1})=\bar{h}\) for all \(t_{1}\). By the definition of \(s_{k\alpha+j}\), we have
\[s_{k_{0}\alpha+j}(\bar{g}_{2})-s_{k_{0}\alpha+j}(\bar{g}_{1})+t[ \bar{h},s_{(k_{0}-1)\alpha+j}(\bar{g}_{1})]\] \[=\int_{0}^{t}[s_{(k_{0}-1)\alpha+j}(\bar{g}_{1}*(t_{1}\bar{h})), \bar{h}]dt_{1}-\int_{0}^{t}[s_{(k_{0}-1)\alpha+j}(\bar{g}_{1}),\bar{h}]dt_{1}\] \[=\int_{0}^{t}[s_{(k_{0}-1)\alpha+j}(\bar{g}_{1}*(t_{1}\bar{h}))-s_ {(k_{0}-1)\alpha+j}(\bar{g}_{1}),\bar{h}]dt_{1}\] \[=\int_{0}^{t}[\int_{0}^{t_{1}}[s_{(k_{0}-2)\alpha+j}(\bar{g}_{1}* (t_{2}\bar{h})),\bar{h}]dt_{2},\bar{h}]dt_{1}\] \[=\int_{0}^{t}\int_{0}^{t_{1}}[[s_{(k_{0}-2)\alpha+j}(\bar{g}_{1}* (t_{2}\bar{h})),\bar{h}],\bar{h}]dt_{2}dt_{1}\] \[=(-1)^{2}\int_{0}^{t}\int_{0}^{t_{1}}(ad(\bar{h}))^{2}(s_{(k_{0}- 2)\alpha+j}(\bar{g}_{1}*(t_{2}\bar{h}))dt_{2}dt_{1}.\]
On the other hand, notice
\[\frac{(-1)^{k+1}t^{k}}{k!}(ad(\bar{h}))^{k}(s_{(k_{0}-k)\alpha+j}(\bar{g}_{1}) )=-(-1)^{k}\int_{0}^{t}\int_{0}^{t_{1}}\cdots\int_{0}^{t_{k-1}}(ad(\bar{h}))^{k }(s_{(k_{0}-k)\alpha+j}(\bar{g}_{1}))dt_{k}\cdots dt_{1}.\]
By induction we get (writing \(K_{k_{0}\alpha+j}:=\pi_{k_{0}\alpha+j}K\))
\[K_{k_{0}\alpha+j}(\bar{g}_{1},\bar{g}_{2})\] \[=(-1)^{k_{0}}\int_{0}^{t}\cdots\int_{0}^{t_{k_{0}-1}}(ad(\bar{h}) )^{k_{0}}(s_{j}(\bar{g}_{1}*t_{k_{0}}\bar{h})-s_{j}(\bar{g}_{1}))dt_{k_{0}} \cdots dt_{1}.\]
There is a constant \(C_{1}>0\) depending only on \(\mathfrak{n}\) such that \(|[X,Y]|\leq C_{1}|X|\cdot|Y|\) for any \(X,Y\in\mathfrak{n}\). Since \(s_{j}\) is \(\frac{j}{\alpha}\)-Hölder for \(1\leq j\leq\alpha\), we have
\[|K_{k_{0}\alpha+j}(\bar{g}_{1},\bar{g}_{2})| \leq\int_{0}^{t}\cdots\int_{0}^{t_{k_{0}-1}}|(ad(\bar{h}))^{k_{0}} (s_{j}(\bar{g}_{1}*t_{k_{0}}\bar{h})-s_{j}(\bar{g}_{1}))|dt_{k_{0}}\cdots dt_{1}\] \[\leq\int_{0}^{t}\cdots\int_{0}^{t_{k_{0}-1}}C_{1}^{k_{0}}|\bar{h }|^{k_{0}}|(s_{j}(\bar{g}_{1}*t_{k_{0}}\bar{h})-s_{j}(\bar{g}_{1}))|dt_{k_{0}} \cdots dt_{1}\] \[\leq C_{1}^{k_{0}}\int_{0}^{t}\cdots\int_{0}^{t_{k_{0}-1}}Cd_{CC} (\bar{g}_{1}*(t_{k_{0}}\bar{h}),\bar{g}_{1})^{\frac{j}{\alpha}}dt_{k_{0}} \cdots dt_{1}\] \[=C_{1}^{k_{0}}C\int_{0}^{t}\cdots\int_{0}^{t_{k_{0}-1}}t_{k_{0}}^ {\frac{j}{\alpha}}dt_{k_{0}}\cdots dt_{1}\] \[=C_{2}t^{\frac{k_{0}\alpha+j}{\alpha}}.\]
Hence (8), and in turn (7), hold when \(\bar{g}_{1}^{-1}*\bar{g}_{2}\in\bar{V}_{1}\).
Now consider the general case. Let \(g,g^{\prime}\in\mathfrak{n}\). As \(\mathfrak{n}/\mathfrak{w}\) is Carnot, by Lemma 6.5, there exist \(\bar{g}=\bar{g}_{0},\bar{g}_{1},\cdots,\bar{g}_{k}=\bar{g}^{\prime}\) such that \(k\leq n_{0}\), \(\bar{g}_{i-1}^{-1}*\bar{g}_{i}\in\bar{V}_{1}\) and \(\sum_{i}d_{CC}(\bar{g}_{i-1},\bar{g}_{i})\leq Cd_{CC}(\bar{g},\bar{g}^{\prime})\), where \(C>0\) and \(n_{0}\) depend only on \(\mathfrak{n}/\mathfrak{w}\). In particular we have \(d_{CC}(\bar{g}_{i-1},\bar{g}_{i})\leq Cd_{CC}(\bar{g},\bar{g}^{\prime})\). Let \(g_{0}=g\). We inductively define \(g_{i}\) for \(1\leq i\leq k\) such that \(d(g_{i},g_{i-1})=d(g_{i}*\mathfrak{w},g_{i-1}*\mathfrak{w})\). There is a constant
\(C_{4}>0\) such that \(\frac{1}{C_{4}}d_{CC}(\bar{x},\bar{y})^{\frac{1}{\alpha}}\leq d(x*\mathfrak{w},y* \mathfrak{w})\leq C_{4}d_{CC}(\bar{x},\bar{y})^{\frac{1}{\alpha}}\) for all \(x,y\in\mathfrak{n}\). We have
\[d(g,g_{k}) \leq\sum_{i=1}^{k}d(g_{i-1},g_{i})=\sum_{i=1}^{k}d(g_{i-1}* \mathfrak{w},g_{i}*\mathfrak{w})\] \[\leq\sum_{i=1}^{k}C_{4}d_{CC}(\bar{g}_{i-1},\bar{g}_{i})^{\frac{1 }{\alpha}}\leq C_{4}n_{0}C^{\frac{1}{\alpha}}d_{CC}(\bar{g},\bar{g}^{\prime})^ {\frac{1}{\alpha}}\] \[\leq C_{4}^{2}n_{0}C^{\frac{1}{\alpha}}d(g*\mathfrak{w},g^{\prime }*\mathfrak{w})\leq C_{4}^{2}n_{0}C^{\frac{1}{\alpha}}d(g,g^{\prime}).\]
By the triangle inequality we have \(d(g_{k},g^{\prime})\leq(1+C_{4}^{2}n_{0}C^{\frac{1}{\alpha}})d(g,g^{\prime})\). Now by the special case, we have
\[d(F(g),F(g_{k}))\leq\sum_{i}d(F(g_{i-1}),F(g_{i}))\leq\sum_{i}C_{3}d(g_{i-1},g _{i})\leq C_{3}C_{4}^{2}n_{0}C^{\frac{1}{\alpha}}d(g,g^{\prime}).\]
On the other hand it is easy to see that the restrictions of \(F\) to the cosets of \(W\) are isometries, and so \(d(F(g_{k}),F(g^{\prime}))=d(g_{k},g^{\prime})\leq(1+C_{4}^{2}n_{0}C^{\frac{1}{\alpha}})d(g,g^{\prime})\). Finally by the triangle inequality \(d(F(g),F(g^{\prime}))\leq d(F(g),F(g_{k}))+d(F(g_{k}),F(g^{\prime}))\leq(1+(1+C_{3})C_{4}^{2}n_{0}C^{\frac{1}{\alpha}})d(g,g^{\prime})\).
## 7. Eliminating \(s_{j}\) for \(j<\alpha\)
The goal of this section is to prove the following result.
**Proposition 7.1**.: _Let \((N,D)\) be Carnot-by-Carnot satisfying \(\dim(W)\geq 2\), \(\dim(N/W)\geq 2\), and \(\Gamma\) a uniform quasisimilarity group of \(N\). Then there exists a biLipschitz map \(F_{0}\) such that every element in the conjugate \(F_{0}\Gamma F_{0}^{-1}\) has a compatible expression \(a*Bh*Aw*As(\bar{h})\) where the \(W_{j}\) component \(s_{j}\) of \(s\) vanishes for every \(1\leq j<\alpha\)._
We shall use the approach in [4] to prove Proposition 7.1. The cases \(\dim(W)=1\) and \(\dim(N/W)=1\) will be considered in Sections 9 and 10 respectively.
Let \(N\) and \(\Gamma\) be as in Proposition 7.1. After replacing \(\Gamma\) with a biLipschitz conjugate we may assume \(\Gamma\) is a fiber similarity group (see Section 2 for the definition of fiber similarity map and fiber similarity group). To be more precise, after applying Theorem 3.7 (and Lemma 5.1) we may assume there are Carnot metrics \(d_{CC}\) and \(\bar{d}_{CC}\) on \(W\) and \(N/W\) respectively such that every \(\gamma\in\Gamma\) induces a similarity \(\bar{\gamma}\) of \((N/W,\bar{d}_{CC})\), and there is a graded automorphism \(A_{\gamma}\) of \(W\) that is also a similarity of \((W,d_{CC})\) such that \(A_{\gamma}=\gamma_{p}|_{W}\) for any \(p\in N\).
Let \(\operatorname{Aut}_{I}(W,d_{CC})\) be the group of isometric graded automorphisms of \((W,d_{CC})\), and \(\operatorname{Aut}_{c}(W,d_{CC})\) be the group of graded automorphisms of \(W\) that are compositions of a Carnot dilation and an isometric graded automorphism. Then \(\operatorname{Aut}_{I}(W,d_{CC})\) is compact and \(\operatorname{Aut}_{c}(W,d_{CC})\cong\operatorname{Aut}_{I}(W,d_{CC})\times\mathbb{R}\) is amenable. Similarly \(\operatorname{Aut}_{c}(\mathfrak{n}/\mathfrak{w},\bar{d}_{CC})\) is amenable and so is the group of similarities \(\operatorname{Sim}(\mathfrak{n}/\mathfrak{w},\bar{d}_{CC})=(\mathfrak{n}/\mathfrak{w})\rtimes\operatorname{Aut}_{c}(\mathfrak{n}/\mathfrak{w},\bar{d}_{CC})\) of \((\mathfrak{n}/\mathfrak{w},\bar{d}_{CC})\). Denote \(G_{0}=\operatorname{Aut}_{c}(W,d_{CC})\times\operatorname{Sim}(\mathfrak{n}/\mathfrak{w},\bar{d}_{CC})\) and define a map \(\Psi:\Gamma\to G_{0}\) by \(\Psi(\gamma)=(A_{\gamma},\overline{\gamma})\). It is easy to see that \(\Psi\) is a homomorphism. Let \(G\) be the closure of \(\Psi(\Gamma)\) in \(G_{0}\). We notice that \(G\) is amenable, being a closed subgroup of the amenable group \(G_{0}\).
Let \(1\leq j<\alpha\) and let \(E_{j}\) be the space of \(\frac{j}{\alpha}\)-Hölder maps \(s:(\mathfrak{n}/\mathfrak{w},\bar{d}_{CC})\to Z_{j}(\mathfrak{w})\) with \(s(0)=0\). We shall define an affine action of \(G\) on \(E_{j}\), then show that this affine action has a fixed point in \(\mathcal{H}_{j}\subset E_{j}\), and finally use this fixed point to construct a biLipschitz shear map of \(N\) to conjugate \(\Gamma\) into a group with the desired property.
### The affine action on \(E_{j}\)
We first define a norm on \(E_{j}\), then a linear isometric action on \(E_{j}\), and finally the translational part of the affine action.
Since the compact group \(\operatorname{Aut}_{I}(W,d_{CC})\) leaves \(W_{j}\) invariant, there exists an \(\operatorname{Aut}_{I}(W,d_{CC})\)-invariant inner product \(\langle,\rangle_{j}\) on \(W_{j}\). Let \(|\cdot|_{j}\) be the associated norm on \(W_{j}\). Notice that \(E_{j}\) is a Banach space with respect to the norm
\[||s||=\sup_{p\neq q}\frac{|s(p)-s(q)|_{j}}{(\bar{d}_{CC}(p,q))^{\frac{j}{\alpha }}}.\]
For a similarity \(f:X\to X\) of a metric space \(X\), we denote by \(\lambda_{f}\) the similarity constant of \(f\): \(d(f(x_{1}),f(x_{2}))=\lambda_{f}d(x_{1},x_{2})\). Let \(G_{1}:=\{(A,B)\in G_{0}|\lambda_{B}=\lambda_{A}^{\alpha}\}\). We first define an action of \(G_{1}\) on \(E_{j}\). Let \(c\in E_{j}\) and \((A,B)\in G_{1}\). We define \(\pi_{(A,B)}c:\mathfrak{n}/\mathfrak{w}\to Z_{j}(\mathfrak{w})\) to be the function given by
\[(\pi_{(A,B)}c)(\bar{h})=A^{-1}c(B(\bar{h}))-A^{-1}c(B(0)).\]
Clearly \((\pi_{(A,B)}c)(0)=0\) and \(\pi_{(A,B)}c\) is linear in \(c\).
**Lemma 7.2**.: \(\pi_{(A,B)}\) _defines an action of the opposite group \(G_{1}^{*}\) of \(G_{1}\) on \(E_{j}\) by linear isometries._
Proof.: Write \(A=\delta_{\lambda_{A}}\circ A^{\prime}\), where \(\delta_{\lambda_{A}}\) is a Carnot dilation and \(A^{\prime}\in\operatorname{Aut}_{I}(W,d_{CC})\). For \(w\in W_{j}\) we have \(A(w)=\lambda_{A}^{j}A^{\prime}(w)\) and so \(|A(w_{1})-A(w_{2})|_{j}=\lambda_{A}^{j}|w_{1}-w_{2}|_{j}\) for \(w_{1},w_{2}\in W_{j}\). Now let \(\bar{h}_{1},\bar{h}_{2}\in\mathfrak{n}/\mathfrak{w}\) and \(c\in E_{j}\). Then
\[\frac{|(\pi_{(A,B)}c)(\bar{h}_{1})-(\pi_{(A,B)}c)(\bar{h}_{2})|_{ j}}{\bar{d}_{CC}(\bar{h}_{1},\bar{h}_{2})^{\frac{j}{\alpha}}} =\frac{|A^{-1}c(B(\bar{h}_{1}))-A^{-1}c(B(\bar{h}_{2}))|_{j}}{ \bar{d}_{CC}(\bar{h}_{1},\bar{h}_{2})^{\frac{j}{\alpha}}}\] \[=\frac{\lambda_{A}^{-j}|c(B(\bar{h}_{1}))-c(B(\bar{h}_{2}))|_{j}} {\bar{d}_{CC}(\bar{h}_{1},\bar{h}_{2})^{\frac{j}{\alpha}}}\] \[=\frac{|c(B(\bar{h}_{1}))-c(B(\bar{h}_{2}))|_{j}}{(\lambda_{B} \bar{d}_{CC}(\bar{h}_{1},\bar{h}_{2}))^{\frac{j}{\alpha}}}\] \[=\frac{|c(B(\bar{h}_{1}))-c(B(\bar{h}_{2}))|_{j}}{(\bar{d}_{CC}(B (\bar{h}_{1}),B(\bar{h}_{2})))^{\frac{j}{\alpha}}}.\]
This shows that \(\pi_{(A,B)}c\in E_{j}\) and \(||\pi_{(A,B)}c||=||c||\). We already observed that \(\pi_{(A,B)}c\) is linear in \(c\). Therefore the map \(E_{j}\to E_{j}\), \(c\mapsto\pi_{(A,B)}c\), is a linear isometry.
It is easy to check that \(\pi_{(A_{2},B_{2})(A_{1},B_{1})}=\pi_{(A_{1},B_{1})}\circ\pi_{(A_{2},B_{2})}\) holds for any \((A_{1},B_{1}),(A_{2},B_{2})\in G_{1}\) and so \(\pi_{(A,B)}\) defines an action of the opposite group \(G_{1}^{*}\) of \(G_{1}\) on \(E_{j}\).
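For the reader's convenience we spell out this check; the point is that the basepoint correction terms cancel. For \(c\in E_{j}\) and \(\bar{h}\in\mathfrak{n}/\mathfrak{w}\),
\[(\pi_{(A_{1},B_{1})}(\pi_{(A_{2},B_{2})}c))(\bar{h})=A_{1}^{-1}(\pi_{(A_{2},B_{2})}c)(B_{1}(\bar{h}))-A_{1}^{-1}(\pi_{(A_{2},B_{2})}c)(B_{1}(0))=(A_{2}A_{1})^{-1}c(B_{2}B_{1}(\bar{h}))-(A_{2}A_{1})^{-1}c(B_{2}B_{1}(0)),\]
since the two copies of \(A_{1}^{-1}A_{2}^{-1}c(B_{2}(0))\) produced by the inner \(\pi_{(A_{2},B_{2})}\) cancel; the right hand side is exactly \((\pi_{(A_{2},B_{2})(A_{1},B_{1})}c)(\bar{h})\).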
To obtain a linear isometric action of \(G^{*}\) on \(E_{j}\), we will show that \(G\subset G_{1}\).
**Lemma 7.3**.: _The formula \(\lambda_{\bar{\gamma}}=\lambda_{A_{\gamma}}^{\alpha}\) holds for all \(\gamma\in\Gamma\)._
Proof.: Since both \(d_{CC}\) and \(d|_{W}\) (\(d\) is a fixed \(D\)-homogeneous distance on \(N\)) are homogeneous distances on \(W\), there is a constant \(L\geq 1\) such that \(d_{CC}(w_{1},w_{2})/L\leq d(w_{1},w_{2})\leq Ld_{CC}(w_{1},w_{2})\) for all \(w_{1},w_{2}\in W\). Similarly, as both \(\bar{d}\) (recall \(\bar{d}\) is the distance on \(N/W\) induced by \(d\), see end of Section 3.5) and \(\bar{d}_{CC}^{\frac{1}{\alpha}}\) are \(\bar{D}\)-homogeneous distances on \(N/W\), there is a constant \(\bar{L}\geq 1\) such that \(\bar{d}_{CC}^{\frac{1}{\alpha}}(x,y)/\bar{L}\leq\bar{d}(x,y)\leq\bar{L}\bar{d}_{CC}^{\frac{1}{\alpha}}(x,y)\) for any \(x,y\in N/W\). After possibly replacing both \(L\) and \(\bar{L}\) with \(\max(L,\bar{L})\) we may assume \(L=\bar{L}\).
Let \(\gamma\in\Gamma\). Let \(w_{1}\neq w_{2}\in W\). As \(\gamma\) is a \((\Lambda,C_{\gamma})\)-quasi-similarity with respect to \(d\) for some \(C_{\gamma}>0\), we have
\[\lambda_{A_{\gamma}}d_{CC}(w_{1},w_{2}) =d_{CC}(\gamma_{p}(w_{1}),\gamma_{p}(w_{2}))\leq Ld(\gamma_{p}(w_{ 1}),\gamma_{p}(w_{2}))\] \[=Ld(\gamma(pw_{1}),\gamma(pw_{2}))\leq L\Lambda C_{\gamma}d(pw_{1 },pw_{2})\] \[\leq L^{2}\Lambda C_{\gamma}d_{CC}(w_{1},w_{2}),\]
and so \(\lambda_{A_{\gamma}}\leq L^{2}\Lambda C_{\gamma}\). Similarly we get \(\lambda_{A_{\gamma}}\geq\frac{C_{\gamma}}{L^{2}\Lambda}\) from
\[\lambda_{A_{\gamma}}d_{CC}(w_{1},w_{2}) =d_{CC}(\gamma_{p}(w_{1}),\gamma_{p}(w_{2}))\geq\frac{1}{L}\cdot d (\gamma_{p}(w_{1}),\gamma_{p}(w_{2}))\] \[=\frac{1}{L}\cdot d(\gamma(pw_{1}),\gamma(pw_{2}))\geq\frac{C_{ \gamma}}{L\Lambda}d(pw_{1},pw_{2})\] \[\geq\frac{C_{\gamma}}{L^{2}\Lambda}d_{CC}(w_{1},w_{2}).\]
Pick \(p,q\in N\) so that \(pW\neq qW\) and such that \(p,q\) realize the distance between the cosets \(pW\) and \(qW\). Now
\[(\lambda_{\bar{\gamma}}\bar{d}_{CC}(pW,qW))^{\frac{1}{\alpha}} =\bar{d}^{\frac{1}{\alpha}}_{CC}(\bar{\gamma}(pW),\bar{\gamma}(qW))\leq L\bar{d}(\bar{\gamma}(pW),\bar{\gamma}(qW))\] \[=Ld(\gamma(p)W,\gamma(q)W)\leq Ld(\gamma(p),\gamma(q))\] \[\leq L\Lambda C_{\gamma}d(p,q)=L\Lambda C_{\gamma}\bar{d}(pW,qW)\] \[\leq L^{2}\Lambda C_{\gamma}\bar{d}^{\frac{1}{\alpha}}_{CC}(pW,qW),\]
yielding \(\lambda^{\frac{1}{\alpha}}_{\bar{\gamma}}\leq L^{2}\Lambda C_{\gamma}\). Similarly by picking \(p,q\in N\) so that \(pW\neq qW\) and \(d(\gamma(p),\gamma(q))=d(\gamma(p)W,\gamma(q)W)\) we get \(\lambda^{\frac{1}{\alpha}}_{\bar{\gamma}}\geq\frac{C_{\gamma}}{L^{2}\Lambda}\). Combining the above four inequalities we get \(\frac{1}{L^{4}\Lambda^{2}}\leq\frac{\lambda^{\frac{1}{\alpha}}_{\bar{\gamma}} }{\lambda_{A_{\gamma}}}\leq L^{4}\Lambda^{2}\) for all \(\gamma\in\Gamma\). Notice that for any integer \(n\geq 1\), we have \(\lambda_{A_{\gamma}n}=\lambda^{n}_{A_{\gamma}}\) and \(\lambda_{\bar{\gamma}^{n}}=\lambda^{n}_{\bar{\gamma}}\). The above inequality applied to \(\gamma^{n}\) yields
\[\frac{1}{L^{4}\Lambda^{2}}\leq\left(\frac{\lambda^{\frac{1}{\alpha}}_{\bar{ \gamma}}}{\lambda_{A_{\gamma}}}\right)^{n}\leq L^{4}\Lambda^{2}\]
for all \(n\geq 1\). Since the only \(x>0\) whose powers \(x^{n}\) stay bounded away from \(0\) and \(\infty\) is \(x=1\), we must have \(\lambda_{\bar{\gamma}}=\lambda^{\alpha}_{A_{\gamma}}\).
Lemma 7.3 implies that \(\Psi(\Gamma)\subset G_{1}\). Since \(G_{1}\) is a closed subgroup of \(G_{0}\), we have \(G\subset G_{1}\). By restricting to \(G^{*}\) the linear isometric action of \(G_{1}^{*}\) on \(E_{j}\) we get a linear isometric action of \(G^{*}\) on \(E_{j}\). To get an affine action on \(E_{j}\) we next define the translational part.
Define a map \(b_{j}:\Gamma\to E_{j}\) by \(b_{j}(\gamma)=s_{\gamma,j}\), where \(s_{\gamma,j}=\pi_{j}\circ s_{\gamma}\) and \(s_{\gamma}\) is as in a compatible expression \(\gamma(h*w)=\gamma(0)*B_{\gamma}h*A_{\gamma}w*A_{\gamma}s_{\gamma}(\bar{h})\) of \(\gamma\). Recall that \(\pi_{j}\circ s_{\gamma}\) is unique for \(j<\alpha\) even if \(\gamma\) may have more than one compatible expression. In Lemma 7.8 we shall prove that \(s_{\gamma,j}\in\mathcal{H}_{j}\); in particular, \(s_{\gamma,j}\in E_{j}\).
**Lemma 7.4**.: _The equality \(b_{j}(\gamma_{2}\gamma_{1})=b_{j}(\gamma_{1})+\pi_{\Psi(\gamma_{1})}b_{j}( \gamma_{2})\) holds for any \(\gamma_{1},\gamma_{2}\in\Gamma\)._
Proof.: The proof may be tedious, but the idea is simple: it follows from two different ways of computing \(\gamma_{2}\gamma_{1}(h)\). We first notice that if \(X,Y\in\mathfrak{n}\) and \(X\in\oplus_{\lambda_{j}\geq\alpha}V_{\lambda_{j}}\), then \(\pi_{j}(X*Y)=\pi_{j}(Y)\)
for \(1\leq j<\alpha\). We shall repeatedly use this fact implicitly. Also recall that \(s_{\gamma,j}(\bar{h})=A_{\gamma}^{-1}\pi_{j}(\gamma(0)^{-1}*\gamma(h))\).
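The fact \(\pi_{j}(X*Y)=\pi_{j}(Y)\) is immediate from the BCH formula: \(X*Y=X+Y+\frac{1}{2}[X,Y]+\cdots\), where every term other than \(Y\) contains \(X\) at least once; since \([V_{\lambda_{k}},V_{\lambda_{l}}]\subset V_{\lambda_{k}+\lambda_{l}}\) and \(X\in\oplus_{\lambda_{k}\geq\alpha}V_{\lambda_{k}}\), every such term lies in \(\oplus_{\lambda_{k}\geq\alpha}V_{\lambda_{k}}\) and is therefore killed by \(\pi_{j}\) for \(1\leq j<\alpha\).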
Let \(\gamma_{1},\gamma_{2}\in\Gamma\) and \(h\in H\). Write \(\gamma_{1}(0)=h_{1}*w_{1}\) and \(\gamma_{1}(0)*B_{\gamma_{1}}h=h_{2}*w_{2}\) with \(h_{1},h_{2}\in H\) and \(w_{1},w_{2}\in\mathfrak{w}\). Then
\[w_{1}^{-1}*w_{2}=[w_{1}^{-1}*(h_{2}^{-1}*h_{1})*w_{1}]*B_{\gamma_{1}}h\]
and it follows that \(w_{1}^{-1}*w_{2}\in\oplus_{j\geq\alpha}W_{j}\). We have \((\gamma_{2}\circ\gamma_{1})(0)=\gamma_{2}(0)*B_{\gamma_{2}}h_{1}*A_{\gamma_{2 }}w_{1}*A_{\gamma_{2}}s_{\gamma_{2}}(\bar{h}_{1})\) and
\[\gamma_{2}(\gamma_{1}(h)) =\gamma_{2}(\gamma_{1}(0)*B_{\gamma_{1}}h*A_{\gamma_{1}}s_{ \gamma_{1}}(\bar{h}))\] \[=\gamma_{2}(h_{2}*w_{2}*A_{\gamma_{1}}s_{\gamma_{1}}(\bar{h}))\] \[=\gamma_{2}(0)*B_{\gamma_{2}}h_{2}*A_{\gamma_{2}}w_{2}*A_{\gamma _{2}}A_{\gamma_{1}}s_{\gamma_{1}}(\bar{h})*A_{\gamma_{2}}s_{\gamma_{2}}(\bar{ h}_{2}).\]
Now we have
\[s_{\gamma_{2}\gamma_{1},j}(\bar{h})\] \[=A_{\gamma_{2}\gamma_{1}}^{-1}\pi_{j}(\gamma_{2}\gamma_{1}(0)^{- 1}*(\gamma_{2}\gamma_{1})(h))\] \[=A_{\gamma_{2}\gamma_{1}}^{-1}\pi_{j}\{A_{\gamma_{2}}(s_{\gamma_ {2}}(\bar{h}_{1}))^{-1}*A_{\gamma_{2}}w_{1}^{-1}*(B_{\gamma_{2}}h_{1})^{-1}*B_ {\gamma_{2}}h_{2}*A_{\gamma_{2}}w_{2}*A_{\gamma_{2}}A_{\gamma_{1}}s_{\gamma_{ 1}}(\bar{h})*A_{\gamma_{2}}s_{\gamma_{2}}(\bar{h}_{2})\}\] \[=A_{\gamma_{2}\gamma_{1}}^{-1}\pi_{j}\{A_{\gamma_{2}}(s_{\gamma_ {2}}(\bar{h}_{1}))^{-1}*A_{\gamma_{2}}w_{1}^{-1}*A_{\gamma_{2}}w_{2}*A_{\gamma _{2}}A_{\gamma_{1}}s_{\gamma_{1}}(\bar{h})*A_{\gamma_{2}}s_{\gamma_{2}}(\bar{ h}_{2})\}\] \[=A_{\gamma_{2}\gamma_{1}}^{-1}\pi_{j}\{A_{\gamma_{2}}(s_{\gamma_ {2}}(\bar{h}_{1}))^{-1}*A_{\gamma_{2}}A_{\gamma_{1}}s_{\gamma_{1}}(\bar{h})*A_ {\gamma_{2}}s_{\gamma_{2}}(\bar{h}_{2})\}\] \[=\pi_{j}\{A_{\gamma_{1}}^{-1}(s_{\gamma_{2}}(\bar{h}_{1}))^{-1}* s_{\gamma_{1}}(\bar{h})*A_{\gamma_{1}}^{-1}s_{\gamma_{2}}(\bar{h}_{2})\}.\]
Since \(s_{\gamma}\) takes values in \(Z(\mathfrak{w})\), by the BCH formula and noting \(\bar{h}_{2}=\bar{\gamma}_{1}(\bar{h})\) and \(\bar{h}_{1}=\bar{\gamma}_{1}(0)\), we obtain
\[s_{\gamma_{2}\gamma_{1},j}(\bar{h})=s_{\gamma_{1},j}(\bar{h})+A_{\gamma_{1}}^ {-1}s_{\gamma_{2},j}(\bar{h}_{2})-A_{\gamma_{1}}^{-1}s_{\gamma_{2},j}(\bar{h} _{1})=s_{\gamma_{1},j}(\bar{h})+(\pi_{\Psi(\gamma_{1})}s_{\gamma_{2},j})(\bar{ h}).\]
Recall we have two maps \(\Psi:\Gamma\to G\) and \(b_{j}:\Gamma\to E_{j}\).
**Lemma 7.5**.: _Let \(\{g_{i}\}\) and \(\{\tilde{g}_{i}\}\) be two sequences in \(\Gamma\) and \((A,B)\in G\) such that \(\Psi(g_{i})\to(A,B)\) and \(\Psi(\tilde{g}_{i})\to(A,B)\). If \(s,\tilde{s}\in E_{j}\) are such that \(b_{j}(g_{i})\to s\) and \(b_{j}(\tilde{g}_{i})\to\tilde{s}\) pointwise as \(i\to\infty\), then \(s=\tilde{s}\)._
Proof.: By setting \(\gamma_{1}=\gamma^{-1}\) and \(\gamma_{2}=\gamma\) in Lemma 7.4, we get \(b_{j}(\gamma^{-1})=-\pi_{\Psi(\gamma^{-1})}b_{j}(\gamma)\). Similarly if we set \(\gamma_{1}=g_{i}\) and \(\gamma_{2}=\tilde{g}_{i}^{-1}\) then we get
\[b_{j}(\tilde{g}_{i}^{-1}g_{i})=b_{j}(g_{i})+\pi_{\Psi(g_{i})}b_{j}(\tilde{g}_{i }^{-1})=b_{j}(g_{i})+\pi_{\Psi(g_{i})}(-\pi_{\Psi(\tilde{g}_{i}^{-1})}b_{j}( \tilde{g}_{i}))=b_{j}(g_{i})-\pi_{\Psi(\tilde{g}_{i}^{-1}g_{i})}b_{j}(\tilde{g }_{i}).\]
The assumption implies \(\Psi(\tilde{g}_{i}^{-1}g_{i})\to(\operatorname{Id},\operatorname{Id})\). On the other hand, by Corollary 7.10 there is a constant \(C>0\) such that \(||b_{j}(\gamma)||\leq C\) for all \(\gamma\in\Gamma\). It follows that \(b_{j}(\tilde{g}_{i}^{-1}g_{i})\to s-\tilde{s}\) pointwise as \(i\to\infty\). Now it suffices to show that if \(\{\gamma_{i}\}\) is a sequence in \(\Gamma\) such that \(\Psi(\gamma_{i})\to(\operatorname{Id},\operatorname{Id})\) and \(b_{j}(\gamma_{i})\to s\) pointwise for some \(s\in E_{j}\), then \(s\equiv 0\).
We suppose \(s\not\equiv 0\) and shall get a contradiction. There is some \(\bar{h}\in\mathfrak{n}/\mathfrak{w}\) such that \(s(\bar{h})\neq 0\). Fix some \(m\geq 1\) such that \(m\cdot\frac{|s(\bar{h})|_{j}}{2}>C\cdot\bar{d}_{CC}(\bar{h},0)^{\frac{j}{\alpha}}\). Note \(\Psi(\gamma_{i}^{k})\to(\operatorname{Id},\operatorname{Id})\) for \(1\leq k\leq m\). On the other hand, an easy induction using Lemma 7.4 implies
\[b_{j}(\gamma_{i}^{m})=b_{j}(\gamma_{i})+\pi_{\Psi(\gamma_{i})}b_{j}(\gamma_{i})+ \cdots+\pi_{\Psi(\gamma_{i}^{m-1})}b_{j}(\gamma_{i}).\]
Since \(\Psi(\gamma_{i}^{k})\to(\mathrm{Id},\mathrm{Id})\) and \(b_{j}(\gamma_{i})\to s\) pointwise, we have \(|\pi_{\Psi(\gamma_{i}^{k})}b_{j}(\gamma_{i})(\bar{h})-s(\bar{h})|_{j}<\frac{|s( \bar{h})|_{j}}{2}\) for all \(0\leq k\leq m-1\) and all sufficiently large \(i\). It follows from the triangle inequality that \(|b_{j}(\gamma_{i}^{m})(\bar{h})-ms(\bar{h})|_{j}<m\cdot\frac{|s(\bar{h})|_{j}} {2}\) and hence \(|b_{j}(\gamma_{i}^{m})(\bar{h})|_{j}>m\cdot\frac{|s(\bar{h})|_{j}}{2}>C\cdot \bar{d}_{CC}(\bar{h},0)^{\frac{j}{\alpha}}\), contradicting the fact that \(||b_{j}(\gamma)||\leq C\) for all \(\gamma\in\Gamma\).
Lemma 7.5 in particular implies that \(b_{j}(\gamma)=b_{j}(\tilde{\gamma})\) holds whenever \(\Psi(\gamma)=\Psi(\tilde{\gamma})\) (by taking \(g_{i}=\gamma\) and \(\tilde{g}_{i}=\tilde{\gamma}\)). Since \(b_{j}(\gamma)\) lies in the closed ball \(\bar{B}(0,C)\subset E_{j}\) for all \(\gamma\in\Gamma\) and \(\bar{B}(0,C)\) is compact in the topology of pointwise convergence, Lemma 7.5 implies that for any \((A,B)\in G\) and any sequence \(\{\gamma_{i}\}\) satisfying \(\Psi(\gamma_{i})\to(A,B)\), the sequence \(b_{j}(\gamma_{i})\) converges to some \(s\in\bar{B}(0,C)\) pointwise and \(s\) is independent of the choice of the sequence \(\{\gamma_{i}\}\). We denote this \(s\) by \(\tilde{b}_{j}(A,B)\). It follows that the map \(\tilde{b}_{j}:G\to E_{j}\) is well-defined and continuous, where \(E_{j}\) is equipped with the topology of pointwise convergence. This map \(\tilde{b}_{j}\) is the translational part of the affine action.
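Let us also record why the closed ball \(\bar{B}(0,C)\) is compact in the topology of pointwise convergence (a standard Tychonoff argument). The ball consists of the maps \(s\) with \(s(0)=0\) and \(|s(p)-s(q)|_{j}\leq C\bar{d}_{CC}(p,q)^{\frac{j}{\alpha}}\) for all \(p,q\), so it embeds into the product \(\prod_{p\in\mathfrak{n}/\mathfrak{w}}\bar{B}_{Z_{j}(\mathfrak{w})}\big(0,C\bar{d}_{CC}(p,0)^{\frac{j}{\alpha}}\big)\) of closed balls, which is compact by Tychonoff's theorem; the Hölder condition and the normalization \(s(0)=0\) pass to pointwise limits, so \(\bar{B}(0,C)\) is a closed subset of this product and hence compact.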
To see that the map \(\tilde{b}_{j}\) is actually the translational part of an affine action, we need to show that \(\tilde{b}_{j}\) is a \(1\)-cocycle associated to the \(G^{*}\)-module \(E_{j}\); that is,
\[\tilde{b}_{j}((A_{2},B_{2})(A_{1},B_{1}))=\tilde{b}_{j}(A_{1},B_{1})+\pi_{(A_{ 1},B_{1})}\tilde{b}_{j}(A_{2},B_{2})\]
holds for all \((A_{1},B_{1}),(A_{2},B_{2})\in G\). This follows from Lemma 7.4 and the definition of \(\tilde{b}_{j}\). The associated affine action of \(G^{*}\) on \(E_{j}\) is given by
\[(A,B)\cdot c=\pi_{(A,B)}c+\tilde{b}_{j}(A,B),\]
where \(c\in E_{j}\) and \((A,B)\in G\). Since \(\tilde{b}_{j}\) is continuous, it is now easy to see that the affine action \(G^{*}\times E_{j}\to E_{j}\) is separately continuous, where \(E_{j}\) is equipped with the topology of pointwise convergence.
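We also sketch the limit argument behind the cocycle identity displayed above, as it will be used without further comment. Choose sequences \(\{\gamma_{i}\},\{\eta_{i}\}\) in \(\Gamma\) with \(\Psi(\gamma_{i})\to(A_{1},B_{1})\) and \(\Psi(\eta_{i})\to(A_{2},B_{2})\). Lemma 7.4 gives
\[b_{j}(\eta_{i}\gamma_{i})=b_{j}(\gamma_{i})+\pi_{\Psi(\gamma_{i})}b_{j}(\eta_{i}).\]
Since \(\Psi(\eta_{i}\gamma_{i})=\Psi(\eta_{i})\Psi(\gamma_{i})\to(A_{2},B_{2})(A_{1},B_{1})\), the left hand side converges pointwise to \(\tilde{b}_{j}((A_{2},B_{2})(A_{1},B_{1}))\) and the first term on the right to \(\tilde{b}_{j}(A_{1},B_{1})\); for the second term, the uniform bound \(||b_{j}(\eta_{i})||\leq C\) of Corollary 7.10 upgrades the pointwise convergence \(b_{j}(\eta_{i})\to\tilde{b}_{j}(A_{2},B_{2})\) to locally uniform convergence, which together with \(\Psi(\gamma_{i})\to(A_{1},B_{1})\) yields \(\pi_{\Psi(\gamma_{i})}b_{j}(\eta_{i})\to\pi_{(A_{1},B_{1})}\tilde{b}_{j}(A_{2},B_{2})\) pointwise.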
### Existence of fixed point in \(\mathcal{H}_{j}\)
Fix some \(c\in\mathcal{H}_{j}\subset E_{j}\) and let \(G^{*}\cdot c\) be the orbit of \(c\) under the affine action of \(G^{*}\). Let \(K\) be the closure of the convex hull of \(G^{*}\cdot c\) in \(E_{j}\) with respect to the topology of pointwise convergence. In order to use Day's fixed point theorem to obtain a fixed point in \(\mathcal{H}_{j}\), we need to show that \(K\) is compact and lies in \(\mathcal{H}_{j}\). The following lemma will be useful for this purpose.
**Lemma 7.6**.: _Let \(Y\subset\mathcal{H}_{j}\) be bounded with respect to the norm \(||\cdot||\). Then \(\bar{Y}\subset\mathcal{H}_{j}\), where \(\bar{Y}\) is the closure of \(Y\) in \(E_{j}\) in the topology of pointwise convergence._
Proof.: Since \(Y\) is bounded in \((E_{j},||\cdot||)\), there is a constant \(C_{0}>0\) such that \(||s||\leq C_{0}\) for any \(s\in Y\). It follows that for any \(i\geq 1\) and any compact subset \(M\subset\mathfrak{n}/\mathfrak{w}\), there is a constant \(C(C_{0},M,i)>0\) such that \(|s^{(i)}(x)|\leq C(C_{0},M,i)\) for all \(x\in M\) and all \(s\in Y\). Let \(s_{k}\in Y\), \(k=1,2,\cdots\), and \(s\in E_{j}\) be such that \(s_{k}(x)\to s(x)\) for any \(x\in\mathfrak{n}/\mathfrak{w}\). The dominated convergence theorem then implies \(\int_{\gamma}[s_{k},\theta_{H}(x)]\to\int_{\gamma}[s,\theta_{H}(x)]\) for any horizontal curve \(\gamma\) in \(\mathfrak{n}/\mathfrak{w}\). This implies \(s^{(1)}\) is defined and \(s^{(1)}_{k}\to s^{(1)}\) pointwise. Now an induction shows \(s^{(i)}\) is defined for all \(i\) and so \(s\in\mathcal{H}_{j}\).
If \(\alpha\) is not an integer, then \(\mathcal{H}_{j}=E_{j}\) so \(K\subset\mathcal{H}_{j}\) holds automatically. To show \(K\subset\mathcal{H}_{j}\), we may assume \(\alpha\geq 2\) is an integer. We first prove \(G^{*}\cdot c\subset\mathcal{H}_{j}\). By the definition of the affine action we need to show that for any \(1\leq j\leq\alpha-1\):
(1) \(\pi_{(A,B)}c\in\mathcal{H}_{j}\) for any \((A,B)\in G\) and any \(c\in\mathcal{H}_{j}\); see Lemma 7.7.
(2) \(\tilde{b}_{j}(A,B)\in\mathcal{H}_{j}\) for any \((A,B)\in G\); see the paragraph before Lemma 7.8.
**Lemma 7.7**.: \(\pi_{(A,B)}c\in\mathcal{H}_{j}\) _for any integer \(1\leq j<\alpha\), \((A,B)\in G\) and any \(c\in\mathcal{H}_{j}\)._
Proof.: Let \((A,B)\in G\) and \(c\in\mathcal{H}_{j}\). Then there is a sequence \(\{\gamma_{i}\}\) in \(\Gamma\) such that \(\Psi(\gamma_{i})\to(A,B)\). Notice that \(\pi_{\Psi(\gamma_{i})}c\to\pi_{(A,B)}c\) pointwise. Since \(||\pi_{\Psi(\gamma_{i})}c||=||c||\), by Lemma 7.6 to show \(\pi_{(A,B)}c\in\mathcal{H}_{j}\) it suffices to show \(\pi_{\Psi(\gamma)}c\in\mathcal{H}_{j}\) for all \(\gamma\in\Gamma\). For this we shall show that there is a biLipschitz shear map \(f_{2}(g)=g*\tilde{s}(\bar{g})\) with \(\tilde{s}_{j}=\pi_{\Psi(\gamma)}c\). The lemma then follows from Proposition 6.4.
Pick \(c\in\mathcal{H}_{j}\) and \(\gamma\in\Gamma\). Let \(s:\mathfrak{n}/\mathfrak{w}\to Z(\mathfrak{w})\) be given by \(s=\sum_{i}s_{i}\), where \(s_{j}=c\), \(s_{i}=0\) for \(1\leq i\leq\alpha\), \(i\neq j\) and \(s_{k\alpha+i}=s_{i}^{(k)}\) for \(k\geq 1\) and \(1\leq i\leq\alpha\). By Proposition 6.7, the shear map \(f(g)=g*s(\bar{g})\) is biLipschitz. We shall show \(f_{2}:=L_{f_{1}(0)^{-1}}\circ f_{1}\) with \(f_{1}:=\gamma^{-1}f\gamma\) is the desired biLipschitz shear map.
Notice that for any two maps \(F,G:N\to N\) and any \(p\in N\), we have \((F\circ G)_{p}=F_{G(p)}\circ G_{p}\). It is clear that \(f_{1}\) is a biLipschitz map of \(N\) that permutes the cosets of \(W\). Since \(f_{p}|_{W}=\operatorname{Id}\) and \(\gamma_{p}|_{W}=A_{\gamma}\), we see that \((f_{1})_{p}|_{W}=\operatorname{Id}\). On the other hand, as \(f\) induces the identity map on \(N/W\), so does \(f_{1}\). Hence the assumptions of Lemma 5.4 are satisfied and \(f_{1}\) has a compatible expression \(f_{1}(h*w)=f_{1}(0)*h*w*\tilde{s}(\bar{h})\) for some map \(\tilde{s}:\mathfrak{n}/\mathfrak{w}\to Z(\mathfrak{w})\). It follows that \(f_{2}(h*w)=h*w*\tilde{s}(\bar{h})\) is a biLipschitz shear map. It remains to show \(\tilde{s}_{j}=\pi_{\Psi(\gamma)}c\). Note \(\tilde{s}_{j}(\bar{h})=\pi_{j}(f_{1}(0)^{-1}*f_{1}(h))\).
Fix a compatible expression \(\gamma(h*w)=\gamma(0)*B_{\gamma}h*A_{\gamma}w*A_{\gamma}s_{\gamma}(\bar{h})\) of \(\gamma\). We calculate \(f_{1}(h)\):
\[f_{1}(h)=\gamma^{-1}f\gamma(h) =\gamma^{-1}f(\gamma(0)*B_{\gamma}h*A_{\gamma}s_{\gamma}(\bar{h}))\] \[=\gamma^{-1}(\gamma(0)*B_{\gamma}h*A_{\gamma}s_{\gamma}(\bar{h}) *s(\overline{\gamma(0)}*\bar{B}_{\gamma}\bar{h}))\] \[=h*A_{\gamma}^{-1}s(\overline{\gamma(0)}*\bar{B}_{\gamma}\bar{h}),\]
where the last equality follows from the fact that \(\gamma\) is a bijection and
\[\gamma(h*\{A_{\gamma}^{-1}s(\overline{\gamma(0)}*\bar{B}_{\gamma}\bar{h})\})= \gamma(0)*B_{\gamma}h*s(\overline{\gamma(0)}*\bar{B}_{\gamma}\bar{h})*A_{ \gamma}s_{\gamma}(\bar{h}).\]
In particular, \(f_{1}(0)=A_{\gamma}^{-1}s(\overline{\gamma(0)})\). Now \(f_{2}(h)=A_{\gamma}^{-1}s(\overline{\gamma(0)})^{-1}*h*A_{\gamma}^{-1}s( \overline{\gamma(0)}*\bar{B}_{\gamma}\bar{h})\) and from this we see
\[s_{f_{2},j}(\bar{h})=A_{\gamma}^{-1}s_{j}(\overline{\gamma(0)}*\bar{B}_{\gamma }\bar{h})-A_{\gamma}^{-1}s_{j}(\overline{\gamma(0)})=A_{\gamma}^{-1}c(\overline {\gamma(0)}*\bar{B}_{\gamma}\bar{h})-A_{\gamma}^{-1}c(\overline{\gamma(0)})=( \pi_{\Psi(\gamma)}c)(\bar{h}).\]
Let \((A,B)\in G\). By the definition of \(\tilde{b}_{j}\), there is a sequence \(\{\gamma_{i}\}\) in \(\Gamma\) such that \(b_{j}(\gamma_{i})\to\tilde{b}_{j}(A,B)\) pointwise. Since \(b_{j}(\gamma_{i})\) lies in the closed ball \(\bar{B}(0,C)\) (see Corollary 7.10), Lemma 7.6 implies \(\tilde{b}_{j}(A,B)\in\mathcal{H}_{j}\) provided \(b_{j}(\gamma)\in\mathcal{H}_{j}\) for all \(\gamma\in\Gamma\). This is true by (2) of the following lemma.
**Lemma 7.8**.: _Let \(F:\mathfrak{n}\to\mathfrak{n}\) be a fiber similarity map and \(F(h*w)=F(0)*Bh*Aw*As(\bar{h})\) a compatible expression of \(F\). Then (1) for \(1\leq j<\alpha\), \(s_{j}\in E_{j}\) with \(||s_{j}||\) bounded above by a constant depending only on \(H\) and the biLipschitz constant of \(F\); (2) if \(\alpha\) is an integer, then \(s_{j}\in\mathcal{H}_{j}\) for any \(1\leq j<\alpha\); (3) if \(\alpha\) is an integer, then \(s_{\alpha}\in E_{\alpha}\) with \(||s_{\alpha}||\) bounded above by a constant depending only on \(H\), the biLipschitz constant of \(F\) and the map \(B\)._
**Remark 7.9**.: _The map \(F\) may admit many different compatible expressions: both \(B\) and \(s_{\alpha}\) may change and as a result their Lipschitz constants may be arbitrarily large while \(F\) is fixed._
Proof.: We will use the fact that there is a constant \(L_{1}\geq 1\) depending only on \(H\) such that \((1/L_{1})\cdot d_{CC}^{\frac{1}{\alpha}}(0,\bar{h})\leq d(0,h)\leq L_{1}\cdot d_{ CC}^{\frac{1}{\alpha}}(0,\bar{h})\) for any \(h\in H\), where \(d_{CC}\) is a Carnot metric on \(\mathfrak{n}/\mathfrak{w}\). Since \(F\) is \(L_{2}\)-biLipschitz for some \(L_{2}\geq 1\), for any \(h_{0},h\in H\) we have
\[d(0,F(h_{0})^{-1}*F(h_{0}*h))=d(F(h_{0}),F(h_{0}*h))\leq L_{2}\cdot d(h_{0},h_{ 0}*h)=L_{2}\cdot d(0,h)\leq L_{2}L_{1}d_{CC}^{\frac{1}{\alpha}}(0,\bar{h}).\]
There is a constant \(L>0\) depending on \(B\) such that for any \(h\in H\) we have
\[d(0,Bh)\leq L\cdot d_{CC}^{\frac{1}{\alpha}}(0,\bar{B}\bar{h})=L\lambda_{\bar{ B}}^{\frac{1}{\alpha}}\cdot d_{CC}^{\frac{1}{\alpha}}(0,\bar{h}),\]
where \(\lambda_{\bar{B}}\) denotes the similarity constant of \(\bar{B}\). We also have
\[d(0,(Bh)^{-1}*F(h_{0})^{-1}*F(h_{0}*h))=d(Bh,F(h_{0})^{-1}*F(h_{0 }*h))\] \[\leq d(Bh,0)+d(0,F(h_{0})^{-1}*F(h_{0}*h))\leq(L\lambda_{\bar{B}}^ {\frac{1}{\alpha}}+L_{2}L_{1})d_{CC}^{\frac{1}{\alpha}}(0,\bar{h}).\]
Let \(F_{0}:\mathfrak{n}\to\mathfrak{n}\) be the shear map given by \(F_{0}(g)=g*s(\bar{g})\).
**Claim:** (a) \(\pi_{j}(F(h_{0})^{-1}*F(h_{0}*h))=As_{j}(\overline{h_{0}}*\bar{h})-As_{j}( \overline{h_{0}})\) holds for \(1\leq j<\alpha\);
(b) \(\pi_{i}((Bh)^{-1}*F(h_{0})^{-1}*F(h_{0}*h))=A\pi_{i}(h^{-1}*F_{0}(h_{0})^{-1}* F_{0}(h_{0}*h))\) for any integer \(i\) that is not an integral multiple of \(\alpha\), where \(\pi_{i}:\mathfrak{n}\to V_{i}=W_{i}\) is the projection with respect to the decomposition \(\mathfrak{n}=\oplus V_{\lambda_{k}}\).
(c) \(\pi_{\alpha}((Bh)^{-1}*F(h_{0})^{-1}*F(h_{0}*h))=As_{\alpha}(\overline{h_{0}} *\bar{h})-As_{\alpha}(\overline{h_{0}})\).
We first assume the Claim and finish the proof of the lemma. Part (a) of the claim implies for \(1\leq j<\alpha\),
\[|s_{j}(\overline{h_{0}}*\bar{h})-s_{j}(\overline{h_{0}})|^{\frac {1}{j}} \leq L_{2}\cdot|\pi_{j}(F(h_{0})^{-1}*F(h_{0}*h))|^{\frac{1}{j}}\] \[\leq L_{2}\cdot L_{2}L_{1}d_{CC}^{\frac{1}{\alpha}}(0,\bar{h})\] \[=L_{2}^{2}L_{1}d_{CC}^{\frac{1}{\alpha}}(\overline{h_{0}}, \overline{h_{0}}*\bar{h}),\]
hence (1) holds. Now notice \(h^{-1}*F_{0}(h_{0})^{-1}*F_{0}(h_{0}*h)=K(\overline{h_{0}},\overline{h_{0}*h})\), where \(K(\bar{g}_{1},\bar{g}_{2})\) is defined for \(g_{1},g_{2}\in\mathfrak{n}\) before Lemma 6.3. So when \(\alpha\) is an integer, Part (b) of the claim implies
\[|\pi_{k\alpha+j}(K(\overline{h_{0}},\overline{h_{0}*h}))|^{\frac{1}{k\alpha+j }}\leq L_{2}(L\lambda_{\bar{B}}^{\frac{1}{\alpha}}+L_{2}L_{1})\cdot d_{CC}^{ \frac{1}{\alpha}}(0,\bar{h}) \tag{9}\]
for all \(k\geq 0\) and all \(1\leq j<\alpha\). Then (2) follows since by (9) the assumption of Lemma 6.3 is satisfied. Finally (3) follows from Part (c):
\[|s_{\alpha}(\overline{h_{0}}*\bar{h})-s_{\alpha}(\overline{h_{0}})|^{\frac{1} {\alpha}}\leq L_{2}|\pi_{\alpha}((Bh)^{-1}*F(h_{0})^{-1}*F(h_{0}*h))|^{\frac{1} {\alpha}}\leq L_{2}(L\lambda_{\bar{B}}^{\frac{1}{\alpha}}+L_{2}L_{1})d_{CC}^{ \frac{1}{\alpha}}(0,\bar{h}).\]
Next we prove the claim. Write \(h_{0}*h=h_{1}*w_{1}\) for some \(h_{1}\in H\), \(w_{1}\in\oplus_{i}W_{i\alpha}\). Applying \(\pi_{\alpha}\) to both sides we get \(\pi_{\alpha}(h_{0})+\pi_{\alpha}(h)=\pi_{\alpha}(h_{1})+\pi_{\alpha}(w_{1})\). As \(V_{\alpha}=W_{\alpha}\oplus H_{1}\) is a direct sum, we obtain \(\pi_{\alpha}(h_{0})+\pi_{\alpha}(h)=\pi_{\alpha}(h_{1})\) and \(\pi_{\alpha}(w_{1})=0\); hence \(\pi_{\alpha}(Bh_{0})+\pi_{\alpha}(Bh)=\pi_{\alpha}(Bh_{1})\) and \(w_{1}\in\oplus_{i\geq 2}W_{i\alpha}\). Note \(\bar{h}_{1}=\overline{h_{0}}*\bar{h}\). We have \(F(h_{0})=F(0)*Bh_{0}*As(\overline{h_{0}})\) and \(F(h_{0}*h)=F(h_{1}*w_{1})=F(0)*Bh_{1}*Aw_{1}*As(\overline{h_{0}}*\bar{h})\). Using \(Bh*Aw*(Bh)^{-1}=A(h*w*h^{-1})\) twice we get:
\[(Bh)^{-1}*(F(h_{0}))^{-1}*F(h_{0}*h)\] \[=(Bh)^{-1}*A(-s(\overline{h_{0}}))*(Bh_{0})^{-1}*Bh_{1}*Aw_{1}*As( \overline{h_{0}}*\bar{h})\] \[=(Bh)^{-1}*(Bh_{0})^{-1}*\{Bh_{0}*A(-s(\overline{h_{0}}))*(Bh_{0} )^{-1}\}*Bh_{1}*Aw_{1}*As(\overline{h_{0}}*\bar{h})\] \[=(Bh)^{-1}*(Bh_{0})^{-1}*A(h_{0}*(-s(\overline{h_{0}}))*h_{0}^{-1 })*Bh_{1}*Aw_{1}*As(\overline{h_{0}}*\bar{h})\] \[=(Bh)^{-1}*(Bh_{0})^{-1}*Bh_{1}*\{(Bh_{1})^{-1}*A(h_{0}*(-s( \overline{h_{0}}))*h_{0}^{-1})*Bh_{1}\}*Aw_{1}*As(\overline{h_{0}}*\bar{h})\] \[=(Bh)^{-1}*(Bh_{0})^{-1}*Bh_{1}*A(h_{1}^{-1}*h_{0}*(-s(\overline{ h_{0}}))*h_{0}^{-1}*h_{1})*Aw_{1}*As(\overline{h_{0}}*\bar{h})\] \[=(Bh)^{-1}*(Bh_{0})^{-1}*Bh_{1}*A(w_{1}*h^{-1}*(-s(\overline{h_{0 }}))*h*w_{1}^{-1})*Aw_{1}*As(\overline{h_{0}}*\bar{h})\] \[=(Bh)^{-1}*(Bh_{0})^{-1}*Bh_{1}*Aw_{1}*A(h^{-1}*(-s(\overline{h_{ 0}}))*h)*As(\overline{h_{0}}*\bar{h}),\]
where for the sixth equality we used \(h_{1}*w_{1}=h_{0}*h\). We first prove Part (c): as \(\pi_{\alpha}(Bh_{0})+\pi_{\alpha}(Bh)=\pi_{\alpha}(Bh_{1})\) and \(w_{1}\in\oplus_{i\geq 2}W_{i\alpha}\), the above calculation yields
\[\pi_{\alpha}((Bh)^{-1}*F(h_{0})^{-1}*F(h_{0}*h)) =\pi_{\alpha}(A(h^{-1}*(-s(\overline{h_{0}}))*h)*As( \overline{h_{0}}*\bar{h}))\] \[=As_{\alpha}(\overline{h_{0}}*\bar{h})-As_{\alpha}(\overline{h_{ 0}}).\]
The above calculation also gives \((Bh)^{-1}*(F(h_{0}))^{-1}*F(h_{0}*h)=P*Az\), where \(P=(Bh)^{-1}*(Bh_{0})^{-1}*Bh_{1}*Aw_{1}\) and \(z=s(\overline{h_{0}}*\bar{h})+h^{-1}*(-s(\overline{h_{0}}))*h\). Since \(z\) lies in \(Z(\mathfrak{w})\), the BCH formula gives \((Bh)^{-1}*(F(h_{0}))^{-1}*F(h_{0}*h)=P+Az+Q\), where \(Q\) is a sum of iterated brackets of \(P\) and \(Az\), and \(Az\) appears exactly once in each of these iterated brackets. Notice that the BCH formula also implies \(P\) is a sum of iterated brackets of the terms \(Bh\), \(Bh_{0}\), \(Bh_{1}\), \(Aw_{1}\). Now \([Bh,Aw]=A[h,w]\) and the Jacobi identity imply that \(Q=A\tilde{Q}\), where \(\tilde{Q}\) is obtained from the sum of iterated brackets that gives rise to \(Q\) by replacing \(Bh\), \(Bh_{0}\), \(Bh_{1}\), \(Aw_{1}\), \(Az\) by \(h\), \(h_{0}\), \(h_{1}\), \(w_{1}\), \(z\) respectively. So \((Bh)^{-1}*(F(h_{0}))^{-1}*F(h_{0}*h)=P+Az+A\tilde{Q}\).
Now comparing the formulas \(F(h*w)=F(0)*Bh*Aw*As(\bar{h})\) and \(F_{0}(h*w)=h*w*s(\bar{h})\), and repeating the above calculation for \(h^{-1}*(F_{0}(h_{0}))^{-1}*F_{0}(h_{0}*h)\) (we only need to replace \(A\) with \(\operatorname{Id}_{W}\) and \(B\) with \(\operatorname{Id}_{H}\)), we get
\[h^{-1}*(F_{0}(h_{0}))^{-1}*F_{0}(h_{0}*h)=\tilde{P}+z+\tilde{Q},\]
where \(\tilde{P}\) is obtained from the sum of iterated brackets that gives rise to \(P\) by replacing \(Bh\), \(Bh_{0}\), \(Bh_{1}\), \(Aw_{1}\) by \(h\), \(h_{0}\), \(h_{1}\), \(w_{1}\) respectively. Now Part (b) of the claim follows since \(P,\tilde{P}\in\oplus_{i}V_{i\alpha}\).
Next we prove Part (a) of the Claim. As \(h,Bh\in\oplus_{\lambda_{i}\geq\alpha}V_{\lambda_{i}}\), for \(1\leq j<\alpha\), by Claim (b) we have
\[\pi_{j}(F(h_{0})^{-1}*F(h_{0}*h)) =\pi_{j}((Bh)^{-1}*F(h_{0})^{-1}*F(h_{0}*h))=A\pi_{j}(h^{-1}*F_{0 }(h_{0})^{-1}*F_{0}(h_{0}*h))\] \[=A\pi_{j}(F_{0}(h_{0})^{-1}*F_{0}(h_{0}*h))=A\pi_{j}((-s(\bar{h}_ {0})*h_{0}^{-1}*h_{0}*h*s(\bar{h}_{0}*\bar{h}))\] \[=A\pi_{j}(s(\bar{h}_{0}*\bar{h})-s(\bar{h}_{0}))=A(s_{j}(\bar{h}_ {0}*\bar{h})-s_{j}(\bar{h}_{0})).\]
**Corollary 7.10**.: _Let \(\Gamma\) be a fiber similarity group of \(N\). Then there is a constant \(C>0\) such that \(||s_{\gamma,j}||<C\) for all \(\gamma\in\Gamma\) and all \(1\leq j<\alpha\)._
Proof.: The inequality \(\frac{C_{\gamma}}{L^{2}\Lambda}\leq\lambda_{A_{\gamma}}\leq L^{2}\Lambda C_{\gamma}\) we obtained in the proof of Lemma 7.3 implies that there is a constant \(K_{0}>0\) such that \(e^{-\log(\lambda_{A_{\gamma}})D}\circ\gamma\) is \(K_{0}\)-biLipschitz for every \(\gamma\in\Gamma\). An element \(\gamma\in\Gamma\) with compatible expression \(\gamma(h*w)=\gamma(0)*B_{\gamma}h*A_{\gamma}w*A_{\gamma}s_{\gamma}(\bar{h})\) can be written
as \(\gamma(h*w)=\gamma(0)*e^{\log(\lambda_{A_{\gamma}})D}(B^{\prime}_{\gamma}h*A^{ \prime}_{\gamma}w*A^{\prime}_{\gamma}s_{\gamma}(\bar{h}))\), where \(s_{\gamma}\) is as before and \(A^{\prime}_{\gamma}\) is an isometric graded automorphism of \((W,d_{CC})\). It follows that \(e^{-\log(\lambda_{A_{\gamma}})D}\circ\gamma\) has a compatible expression \(e^{-\log(\lambda_{A_{\gamma}})D}\circ\gamma(h*w)=a^{\prime}_{\gamma}*B^{ \prime}_{\gamma}h*A^{\prime}_{\gamma}w*A^{\prime}_{\gamma}s_{\gamma}(\bar{h})\). The corollary now follows by applying Lemma 7.8 to \(e^{-\log(\lambda_{A_{\gamma}})D}\circ\gamma\).
We have shown that \(G^{*}\cdot c\subset\mathcal{H}_{j}\). Corollary 7.10 implies \(G^{*}\cdot c\) is bounded (each orbit element \(\pi_{(A,B)}c+\tilde{b}_{j}(A,B)\) has norm at most \(||c||+C\)). By Lemma 7.6, \(K\subset\mathcal{H}_{j}\). We next show that the affine action has a fixed point in \(K\). For this purpose we use
**Theorem (Day's fixed point theorem)** [1]_Let \(K\) be a compact convex subset of a locally convex topological vector space \(E\) and let \(\Gamma\) be a locally compact group that acts on \(K\) by affine transformations. If \(\Gamma\) is amenable and the action \(\Gamma\times K\to K,(\gamma,x)\mapsto\gamma\cdot x,\) is separately continuous, then the action of \(\Gamma\) has a global fixed point in \(K\)._
**Lemma 7.11**.: _The affine action associated to the \(1\)-cocycle \(\tilde{b}_{j}\) has a fixed point \(c\in\mathcal{H}_{j}\)._
Proof.: We equip \(E_{j}\) with the topology of pointwise convergence. Then \(E_{j}\) is a locally convex topological vector space and \(K\) is a compact convex subset (being convex, bounded and closed in the topology of pointwise convergence; see the discussion following Lemma 7.5). We have observed that the affine action \(G^{*}\times K\to K\) is separately continuous. Since \(G^{*}\) is amenable, by Day's fixed point theorem, the affine action has a fixed point in \(K\).
### Eliminating \(s_{j}\)
**Lemma 7.12**.: _Let \(1\leq j<\alpha\). Suppose the affine action of \(G^{*}\) on \(E_{j}\) has a fixed point \(c\in\mathcal{H}_{j}\). Let \(s:\mathfrak{n}/\mathfrak{w}\to Z(\mathfrak{w})\) be the map provided by Proposition 6.7 satisfying \(s_{j}=c\) and \(s_{i}=0\) for \(1\leq i\leq\alpha\), \(i\neq j\) such that the corresponding shear map \(F_{0}(g)=g*s(\bar{g})\) is biLipschitz. Then every element \(\tilde{\gamma}\in F_{0}\Gamma F_{0}^{-1}\) has a compatible expression \(\tilde{\gamma}(h*w)=\tilde{\gamma}(0)*B_{\gamma}h*A_{\gamma}w*A_{\gamma}\tilde {s}_{\gamma}(\bar{h})\) with \(\tilde{s}_{\gamma,j}=0\)._
Proof.: For \(\gamma\in\Gamma\), denote \(\tilde{\gamma}=F_{0}\circ\gamma\circ F_{0}^{-1}\). Consider a compatible expression \(\gamma(h*w)=\gamma(0)*B_{\gamma}h*A_{\gamma}w*A_{\gamma}s_{\gamma}(\bar{h})\) of \(\gamma\). Since \(c\) is a fixed point of the affine action, we have \(\pi_{\Psi(\gamma)}c+s_{\gamma,j}=c\) for all \(\gamma\in\Gamma\). Note \(F_{0}\) and \(F_{0}^{-1}\) have the expressions: \(F_{0}(h*w)=h*w*s(\bar{h})\), \(F_{0}^{-1}(h*w)=h*w*(-s(\bar{h}))\). We now calculate
\[\tilde{\gamma}(h)=F_{0}\circ\gamma(h*(-s(\bar{h}))) =F_{0}(\gamma(0)*B_{\gamma}h*A_{\gamma}[s_{\gamma}(\bar{h})*(-s( \bar{h}))])\] \[=\gamma(0)*B_{\gamma}h*A_{\gamma}[s_{\gamma}(\bar{h})*(-s(\bar{h }))]*s(\overline{\gamma(0)}*\bar{B}_{\gamma}\bar{h}).\]
In particular, \(\tilde{\gamma}(0)=\gamma(0)*s(\overline{\gamma(0)})\). Now
\[s_{\tilde{\gamma},j}(\bar{h}) =A_{\gamma}^{-1}\pi_{j}(\tilde{\gamma}(0)^{-1}*\tilde{\gamma}(h))\] \[=A_{\gamma}^{-1}\pi_{j}\{(-s(\overline{\gamma(0)}))*B_{\gamma}h*A_ {\gamma}[s_{\gamma}(\bar{h})*(-s(\bar{h}))]*s(\overline{\gamma(0)}*\bar{B}_{ \gamma}\bar{h})\}\] \[=A_{\gamma}^{-1}\pi_{j}\{(-s(\overline{\gamma(0)}))*A_{\gamma}[s_ {\gamma}(\bar{h})*(-s(\bar{h}))]*s(\overline{\gamma(0)}*\bar{B}_{\gamma}\bar{h })\}\] \[=A_{\gamma}^{-1}\{-c(\overline{\gamma(0)})+A_{\gamma}s_{\gamma,j}( \bar{h})+A_{\gamma}(-c(\bar{h}))+c(\overline{\gamma(0)}*\bar{B}_{\gamma}\bar{h })\}\] \[=s_{\gamma,j}(\bar{h})-c(\bar{h})+A_{\gamma}^{-1}c(\overline{ \gamma(0)}*\bar{B}_{\gamma}\bar{h})-A_{\gamma}^{-1}c(\overline{\gamma(0)})\] \[=s_{\gamma,j}(\bar{h})-c(\bar{h})+(\pi_{\Psi(\gamma)}c)(\bar{h})=0.\]
The proof of Proposition 7.1 is now complete.
## 8. Conformal structures in the \(V_{\alpha}\) direction
We continue the proof of Theorem 1.2. In Section 7 we showed that we can get rid of \(s_{\gamma,j}\) for \(j<\alpha\) after a conjugation. In this section we continue to assume \(\dim(W)\geq 2\), \(\dim(N/W)\geq 2\). We will first show that, if \(\alpha\) is an integer, then after a conjugation, \(s_{\gamma,\alpha}:\mathfrak{n}/\mathfrak{w}\to Z_{\alpha}(\mathfrak{w})\) is a Lie group homomorphism for every \(\gamma\in\Gamma\), and then we complete the proof of Theorem 1.2 under the assumptions \(\dim(W)\geq 2\), \(\dim(N/W)\geq 2\). Here we are using the identification between simply connected nilpotent Lie groups and their Lie algebras via the exponential map.
We will imitate the proof of Tukia's theorem, with a different last step. As biLipschitz maps of Carnot-by-Carnot groups in general are not differentiable, we cannot directly work with the differentials. Instead we look for differentiability in the \(V_{\alpha}\) direction. For each biLipschitz map \(F:\mathfrak{n}\to\mathfrak{n}\) and each \(p\in\mathfrak{n}\), we consider the differential of the map
\[F_{p,\alpha}:=(\pi_{\alpha}\circ F_{p})|_{V_{\alpha}}:V_{\alpha}\to V_{\alpha},\]
with \(F_{p}=L_{F(p)^{-1}}\circ F\circ L_{p}\), where \(L_{g}\) is left translation by the element \(g\) and \(\pi_{\alpha}:\mathfrak{n}\to V_{\alpha}\) is the projection with respect to the decomposition \(\mathfrak{n}=\oplus_{j}V_{\lambda_{j}}\).
In the next two subsections we assume \(\alpha\) is an integer.
### Differential of \(F_{p,\alpha}\)
In this subsection we first find a formula for the differential of \(F_{p,\alpha}\) at \(0\) and then show that this differential satisfies the chain rule (Lemma 8.2).
Let \((N,D)\) be Carnot-by-Carnot and \(F:\mathfrak{n}\to\mathfrak{n}\) be a biLipschitz map satisfying the assumptions of Lemma 5.4. Then \(F\) has a compatible expression \(F(h*w)=F(0)*Bh*Aw*As(\bar{h})\). Write \(s=\sum_{j}s_{j}\) with \(s_{j}:\mathfrak{n}/\mathfrak{w}\to Z_{j}(\mathfrak{w})\). By Lemma 7.8, \(s_{j}\) is \(\frac{j}{\alpha}\)-Hölder for each \(1\leq j\leq\alpha\).
To simplify the calculations we will work in a suitable quotient of \(\mathfrak{n}\). Notice that \(\oplus_{\lambda_{j}>\alpha}V_{\lambda_{j}}\) is an ideal of \(\mathfrak{n}\). Denote \(\bar{\mathfrak{n}}_{\alpha}=\mathfrak{n}/(\oplus_{\lambda_{j}>\alpha}V_{\lambda_{j}})\) and let \(P_{\alpha}:\mathfrak{n}\to\bar{\mathfrak{n}}_{\alpha}\) be the quotient homomorphism. Observe that if \(x\in\oplus_{\lambda_{j}\geq\alpha}V_{\lambda_{j}}\), then \(P_{\alpha}(x)\) lies in the center of \(\bar{\mathfrak{n}}_{\alpha}\). Let \(h\in H\), \(w\in\mathfrak{w}\). There is some \(w^{\prime}\in\mathfrak{w}\) such that \(h+w=h*w^{\prime}\). By applying \(P_{\alpha}\) to both sides of \(h+w=h*w^{\prime}\) and noting that \(P_{\alpha}(h)\) lies in the center of \(\bar{\mathfrak{n}}_{\alpha}\), we get \(P_{\alpha}(w)=P_{\alpha}(w^{\prime})\). Now write \(p=h_{0}*w_{0}\) with \(h_{0}\in H\) and \(w_{0}\in\mathfrak{w}\), and \(h_{0}*h=\tilde{h}+\tilde{w}=\tilde{h}*\tilde{\tilde{w}}\) for some \(\tilde{h}\in H\) and \(\tilde{w},\tilde{\tilde{w}}\in\mathfrak{w}\). Using the BCH formula we see that \(\tilde{w},\tilde{\tilde{w}}\in\oplus_{j>\alpha}W_{j}\). By applying \(P_{\alpha}\) to both sides of \(h_{0}*h=\tilde{h}+\tilde{w}\) we get \(P_{\alpha}(\tilde{h})=P_{\alpha}(h_{0})+P_{\alpha}(h)\). It follows that \(P_{\alpha}(B\tilde{h})=P_{\alpha}(Bh_{0})+P_{\alpha}(Bh)\) and so \(P_{\alpha}((Bh_{0})^{-1}*B\tilde{h})=P_{\alpha}(Bh)\).
We are ready to find a formula for \(F_{p,\alpha}\). Recall \(p=h_{0}*w_{0}\). Write
\[p*(h+w)=h_{0}*w_{0}*h*w^{\prime}=h_{0}*h*(h^{-1}*w_{0}*h)*w^{\prime}=\tilde{h}* \tilde{\tilde{w}}*(h^{-1}*w_{0}*h)*w^{\prime}.\]
So
\[F(p)=F(0)*Bh_{0}*Aw_{0}*As(\overline{h_{0}})\]
and
\[F(p*(h+w))=F(0)*B\tilde{h}*A\tilde{\tilde{w}}*A(h^{-1}*w_{0}*h)* Aw^{\prime}*As(\overline{h_{0}}*\bar{h}).\]
Therefore (see explanation after the display formula)
\[P_{\alpha}(L_{F(p)^{-1}}\circ F\circ L_{p}(h+w))\] \[= P_{\alpha}(L_{F(p)^{-1}}(F(0)*B\tilde{h}*A\tilde{\tilde{w}}*A(h^{-1 }*w_{0}*h)*Aw^{\prime}*As(\overline{h_{0}}*\bar{h})))\] \[= P_{\alpha}(As(\overline{h_{0}})^{-1}*Aw_{0}^{-1}*(Bh_{0})^{-1}* B\tilde{h}*A\tilde{\tilde{w}}*Aw_{0}*Aw*As(\overline{h_{0}}*\bar{h}))\] \[= P_{\alpha}(As(\overline{h_{0}})^{-1}*Aw_{0}^{-1}*(Bh_{0})^{-1}* B\tilde{h}*Aw_{0}*Aw*As(\overline{h_{0}}*\bar{h}))\] \[= P_{\alpha}(As(\overline{h_{0}})^{-1}*(Bh_{0})^{-1}*B\tilde{h}*Aw *As(\overline{h_{0}}*\bar{h}))\] \[= P_{\alpha}(As(\overline{h_{0}})^{-1}*Bh*Aw*As(\overline{h_{0}}* \bar{h}))\] \[= P_{\alpha}(Bh+Aw+As(\overline{h_{0}}*\bar{h})-As(\overline{h_{0} })).\]
For the second equality we used \(P_{\alpha}(h^{-1}*w_{0}*h)=P_{\alpha}(w_{0})\) (as \(P_{\alpha}(h)\) lies in the center of \(\bar{\mathfrak{n}}_{\alpha}\)) and \(P_{\alpha}(w^{\prime})=P_{\alpha}(w)\). For the third equality we used \(P_{\alpha}(\tilde{\tilde{w}})=0\) (as \(\tilde{\tilde{w}}\in\oplus_{j>\alpha}W_{j}\)). For the fourth equality we used \(P_{\alpha}(Aw_{0}^{-1}*Bh_{0}^{-1}*B\tilde{h}*Aw_{0})=P_{\alpha}(Bh_{0}^{-1}* B\tilde{h})\) (as \(P_{\alpha}(Bh_{0}^{-1}*B\tilde{h})\) lies in the center of \(\bar{\mathfrak{n}}_{\alpha}\)). For the fifth equality we used \(P_{\alpha}((Bh_{0})^{-1}*B\tilde{h})=P_{\alpha}(Bh)\). For the last equality we used the fact that \(P_{\alpha}(Bh)\) lies in the center of \(\bar{\mathfrak{n}}_{\alpha}\) and that \(Aw\), \(As(\overline{h_{0}}*\bar{h})\) and \(As(\overline{h_{0}})\) commute with each other (as \(s\) takes values in \(Z(\mathfrak{w})\)).
After applying \(\pi_{\alpha}\) to both sides of the above display formula we get
\[\pi_{\alpha}\circ F_{p}(h+w)=B\pi_{\alpha}(h)+A\pi_{\alpha}(w)+As_{\alpha}( \overline{h_{0}}*\bar{h})-As_{\alpha}(\overline{h_{0}}). \tag{10}\]
This calculation will be used in the proofs of Lemmas 8.1 and 8.2. When \(h\in H_{1}\) and \(w\in W_{\alpha}\), we have \(\pi_{\alpha}(h)=h\) and \(\pi_{\alpha}(w)=w\) and so \(F_{p,\alpha}:V_{\alpha}\to V_{\alpha}\) is given by the formula
\[F_{p,\alpha}(h+w)=Bh+Aw+As_{\alpha}(\overline{h_{0}}*\bar{h})-As_{\alpha}( \overline{h_{0}}).\]
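As a sanity check on this formula: for a shear map \(F_{0}(g)=g*s(\bar{g})\) the compatible expression has \(F_{0}(0)=0\), \(B=\operatorname{Id}_{H}\) and \(A=\operatorname{Id}_{W}\), so with \(p=h_{0}*w_{0}\) it specializes to
\[(F_{0})_{p,\alpha}(h+w)=h+w+s_{\alpha}(\overline{h_{0}}*\bar{h})-s_{\alpha}(\overline{h_{0}}),\]
so that the nonlinearity of \(F_{p,\alpha}\) is carried entirely by the increment of \(s_{\alpha}\).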
**Lemma 8.1**.: _Let \(\psi(F):\mathfrak{n}/\mathfrak{w}\to V_{\alpha}\) be the map given by \(\psi(F)(\bar{h})=Bh_{1}+As_{\alpha}(\bar{h})\), where \(h_{1}\) is the \(H_{1}\) component of \(h\in H\). Then \(\psi(F)\) is Lipschitz with the Lipschitz constant bounded above by a constant depending only on \(H\) and the biLipschitz constant of \(F\)._
Proof.: By setting \(w=0\) in formula (10) we get
\[\pi_{\alpha}(F(p)^{-1}*F(p*h))=Bh_{1}+As_{\alpha}(\overline{h_{0}}*\bar{h})-As _{\alpha}(\overline{h_{0}})=\psi(F)(\bar{h}_{0}*\bar{h})-\psi(F)(\bar{h}_{0}).\]
Since \(F\) is \(L\)-biLipschitz for some \(L\geq 1\) we have
\[|\psi(F)(\bar{h}_{0}*\bar{h})-\psi(F)(\bar{h}_{0})|^{\frac{1}{\alpha}}\leq d(0, F(p)^{-1}*F(p*h))=d(F(p),F(p*h))\leq L\,d(p,p*h)=L\,d(0,h).\]
On the other hand, since \(\pi|_{H}:H\to\bar{\mathfrak{n}}\) is a linear bijection, \(H\) is stable under the automorphisms \(e^{tD}\) (\(D\) acts by Euclidean dilation on each \(V_{\lambda_{j}}\)), and \((\pi|_{H})\circ(e^{tD}|_{H})=e^{t\bar{D}}\circ(\pi|_{H})\), there is a constant \(L_{0}\geq 1\) that depends on \(H\) but is independent of the biLipschitz map \(F\) such that \((1/L_{0})d^{\frac{1}{\alpha}}_{CC}(0,\bar{h})\leq d(0,h)\leq L_{0}d^{\frac{1}{\alpha}}_{CC}(0,\bar{h})\) for any \(h\in H\). It follows that \(|\psi(F)(\bar{h}_{0}*\bar{h})-\psi(F)(\bar{h}_{0})|^{\frac{1}{\alpha}}\leq LL_{0}d^{\frac{1}{\alpha}}_{CC}(0,\bar{h})\).
Now we can try to compute the differential \(dF_{p,\alpha}(0)\) of \(F_{p,\alpha}\) at the origin \(0\):
\[dF_{p,\alpha}(0)(h+w) =\lim_{t\to-\infty}\frac{F_{p,\alpha}(e^{\alpha t}(h+w))}{e^{ \alpha t}}\] \[=\lim_{t\to-\infty}\frac{Be^{\alpha t}h+Ae^{\alpha t}w+As_{\alpha} (\overline{h_{0}}*e^{\alpha t}\overline{h})-As_{\alpha}(\overline{h_{0}})}{e^{ \alpha t}}\] \[=Bh+Aw+A\lim_{t\to-\infty}\frac{s_{\alpha}(\overline{h_{0}}*e^{t \bar{D}}\overline{h})-s_{\alpha}(\overline{h_{0}})}{e^{\alpha t}}.\]
In the above we used the fact that \(\bar{D}|_{\bar{V}_{1}}=\alpha\cdot\mathrm{Id}_{\bar{V}_{1}}\). Notice that at a point \(\overline{h_{0}}\) where the Lipschitz map \(s_{\alpha}:\mathfrak{n}/\mathfrak{w}\to Z_{\alpha}(\mathfrak{w})\) is Pansu differentiable, we have that \(\lim_{t\to-\infty}\frac{s_{\alpha}(\overline{h_{0}}*e^{t\bar{D}}\overline{h} )-s_{\alpha}(\overline{h_{0}})}{e^{\alpha t}}=Ds_{\alpha}(\overline{h_{0}})( \overline{h})\), where \(Ds_{\alpha}(\overline{h_{0}}):\mathfrak{n}/\mathfrak{w}\to Z_{\alpha}( \mathfrak{w})\) is the Pansu differential of \(s_{\alpha}\) at \(\overline{h_{0}}\). Recall that a Lipschitz map between Carnot groups is Pansu differentiable a.e., see [11]. Since \(s_{\alpha}\) is Pansu differentiable at a.e. \(\overline{h_{0}}\in\mathfrak{n}/\mathfrak{w}\), we see that at a.e. \(p=h_{0}*w_{0}\in\mathfrak{n}\), the map \(F_{p,\alpha}:V_{\alpha}\to V_{\alpha}\) is differentiable at the origin with differential \(dF_{p,\alpha}(0)\) given by:
\[dF_{p,\alpha}(0)(h+w)=Bh+Aw+ADs_{\alpha}(\overline{h_{0}})(\overline{h}). \tag{11}\]
We shall denote \(D_{\alpha}F(p)=dF_{p,\alpha}(0)\).
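For example, for \(F=e^{tD}\) the compatible expression has \(B=e^{tD}|_{H}\), \(A=e^{tD}|_{W}\) and \(s=0\), so (11) gives, for \(h\in H_{1}\) and \(w\in W_{\alpha}\),
\[D_{\alpha}e^{tD}(p)(h+w)=e^{tD}h+e^{tD}w=e^{\alpha t}(h+w)\]
at every \(p\in\mathfrak{n}\), since \(D\) acts on \(V_{\alpha}\) by \(\alpha\cdot\operatorname{Id}\); this computation is behind the observation in Section 8.2 that \(D_{\alpha}e^{tD}\) is conformal.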
The differential \(D_{\alpha}F(p)\) should be thought of as a counterpart of the restriction of the Pansu differential to the first layer, although our groups are not Carnot. It satisfies the chain rule.
**Lemma 8.2**.: _Let \(F,\tilde{F}:\mathfrak{n}\to\mathfrak{n}\) be biLipschitz maps satisfying the assumptions of Lemma 5.4. Then \(D_{\alpha}(F\circ\tilde{F})(p)=D_{\alpha}F(\tilde{F}(p))\circ D_{\alpha}\tilde {F}(p)\) for a.e. \(p\in\mathfrak{n}\)._
Proof.: Let \(p\in\mathfrak{n}\) be a point such that \(D_{\alpha}(F\circ\tilde{F})(p)\), \(D_{\alpha}F(\tilde{F}(p))\) and \(D_{\alpha}\tilde{F}(p)\) all exist. We need to show
\[D_{\alpha}(F\circ\tilde{F})(p)(X)=D_{\alpha}F(\tilde{F}(p))\circ D_{\alpha} \tilde{F}(p)(X),\ \ \forall X\in V_{\alpha}. \tag{12}\]
Since \(V_{\alpha}=W_{\alpha}\oplus H_{1}\), it suffices to establish (12) for \(X\in W_{\alpha}\) and \(X\in H_{1}\).
The maps \(F\) and \(\tilde{F}\) have compatible expressions \(F(h*w)=F(0)*Bh*Aw*As(\bar{h})\), \(\tilde{F}(h*w)=\tilde{F}(0)*\tilde{B}h*\tilde{A}w*A\tilde{s}(\bar{h})\). Let \(X=w\in W_{\alpha}\). By (11), \(D_{\alpha}F(p)(w)=Aw\). It follows that \(D_{\alpha}F(\tilde{F}(p))\circ D_{\alpha}\tilde{F}(p)(w)=D_{\alpha}F(\tilde{ F}(p))(\tilde{A}w)=A(\tilde{A}w)\). On the other hand, \(D_{\alpha}(F\circ\tilde{F})(p)(w)=A\tilde{A}(w)\) as on cosets of \(W\) the map \(F\circ\tilde{F}\) acts by \(A\tilde{A}\). Hence (12) holds for \(X\in W_{\alpha}\). We next consider the case \(X=h\in H_{1}\).
Consider the paths \(\tilde{c}(t)=L_{\tilde{F}(p)^{-1}}\circ\tilde{F}(p*(th))\) and \(c(t)=L_{(F\circ\tilde{F}(p))^{-1}}\circ F\circ L_{\tilde{F}(p)}(\tilde{c}(t))\), and set \(\tilde{c}_{\alpha}(t)=\pi_{\alpha}(\tilde{c}(t))\), \(c_{\alpha}(t)=\pi_{\alpha}(c(t))\). Since \(D_{\alpha}\tilde{F}(p)\) and \(D_{\alpha}(F\circ\tilde{F})(p)\) exist, we have \(\tilde{c}^{\prime}_{\alpha}(0)=D_{\alpha}\tilde{F}(p)(h)\) and \(c^{\prime}_{\alpha}(0)=D_{\alpha}(F\circ\tilde{F})(p)(h)\).
Write \(\tilde{c}(t)=h(t)+w(t)\) with \(h(t)\in H\) and \(w(t)\in\mathfrak{w}\). Then
\[D_{\alpha}\tilde{F}(p)(h)=\tilde{c}^{\prime}_{\alpha}(0)=\lim_{t\to 0}\frac{\pi_{ \alpha}h(t)}{t}+\lim_{t\to 0}\frac{\pi_{\alpha}w(t)}{t}.\]
Set \(h_{1}=\lim_{t\to 0}\frac{\pi_{\alpha}h(t)}{t}\in H_{1}\) and \(w_{1}=\lim_{t\to 0}\frac{\pi_{\alpha}w(t)}{t}\in W_{\alpha}\). Then \(D_{\alpha}\tilde{F}(p)(h)=h_{1}+w_{1}\).
We also write \(\tilde{F}(p)=\tilde{h}_{0}*\tilde{w}_{0}\) with \(\tilde{h}_{0}\in H\) and \(\tilde{w}_{0}\in\mathfrak{w}\). We have
\[D_{\alpha}F(\tilde{F}(p))(D_{\alpha}\tilde{F}(p)(h))=D_{\alpha}F(\tilde{F}(p))(h _{1}+w_{1})=Bh_{1}+Aw_{1}+ADs_{\alpha}(\overline{\tilde{h}_{0}})(\overline{h_ {1}}).\]
By (10),
\[c_{\alpha}(t)=\pi_{\alpha}\circ L_{(F\circ\tilde{F}(p))^{-1}}\circ F\circ L_{ \tilde{F}(p)}(\tilde{c}(t))=B\pi_{\alpha}h(t)+A\pi_{\alpha}w(t)+As_{\alpha}( \overline{\tilde{h}_{0}}*\overline{h(t)})-As_{\alpha}(\overline{\tilde{h}_{0}}).\]
Hence
\[D_{\alpha}(F\circ\tilde{F})(p)(h)=c^{\prime}_{\alpha}(0) =B\lim_{t\to 0}\frac{\pi_{\alpha}h(t)}{t}+A\lim_{t\to 0}\frac{\pi_{\alpha}w(t)}{t}+A\lim_{t\to 0}\frac{s_{\alpha}(\overline{\tilde{h}_{0}}*\overline{h(t)})-s_{\alpha}(\overline{\tilde{h}_{0}})}{t}\] \[=Bh_{1}+Aw_{1}+A\lim_{t\to 0}\frac{s_{\alpha}(\overline{\tilde{h}_{0}}*\overline{h(t)})-s_{\alpha}(\overline{\tilde{h}_{0}})}{t}.\]
It now suffices to show
\[\lim_{t\to 0}\frac{s_{\alpha}(\overline{\tilde{h}_{0}}*\overline{h(t)})-s_{ \alpha}(\overline{\tilde{h}_{0}})}{t}=Ds_{\alpha}(\overline{\tilde{h}_{0}})( \overline{h_{1}}).\]
Since \(\pi\circ\tilde{F}(x)=\overline{\tilde{F}(0)}*\bar{\tilde{B}}(\bar{x})\), we get
\[\overline{h(t)}=\pi(\tilde{c}(t))=((\pi\circ\tilde{F})(p))^{-1}*(\pi\circ \tilde{F})(p*th)=\bar{\tilde{B}}(\bar{p})^{-1}*\bar{\tilde{B}}(\bar{p}*t\bar{ h})=\bar{\tilde{B}}(t\bar{h})=t\bar{\tilde{B}}(\bar{h}).\]
Hence
\[\lim_{t\to 0}\frac{s_{\alpha}(\overline{\tilde{h}_{0}}*\overline{h(t)})-s_{ \alpha}(\overline{\tilde{h}_{0}})}{t}=\lim_{t\to 0}\frac{s_{\alpha}( \overline{\tilde{h}_{0}}*t\bar{\tilde{B}}(\bar{h}))-s_{\alpha}(\overline{ \tilde{h}_{0}})}{t}=Ds_{\alpha}(\overline{\tilde{h}_{0}})(\bar{\tilde{B}}( \bar{h})).\]
Finally we notice that \(D_{\alpha}\tilde{F}(p)(h+w)=\tilde{B}h+\tilde{A}w+\tilde{A}D\tilde{s}_{\alpha} (\bar{p})(\bar{h})\) implies \(\bar{h}_{1}=\pi(D_{\alpha}\tilde{F}(p)(h))=\bar{\tilde{B}}\bar{h}\).
Now we are ready to show that if \(\Gamma\) is a fiber similarity group of \(N\), then the differentials \(\{D_{\alpha}\gamma(p)|p\in N,\gamma\in\Gamma\}\) are "uniformly quasiconformal". This result is needed in order to run Tukia's argument for the existence of invariant conformal structure.
**Lemma 8.3**.: _Let \(\Gamma\) be a fiber similarity group of \(N\). Then there is a constant \(C\geq 1\) such that for every \(\gamma\in\Gamma\), the differential \(D_{\alpha}\gamma(p)\) is \(C\)-quasiconformal for a.e. \(p\in N\)._
Proof.: By the discussion in Section 7, there is a constant \(K_{0}\geq 1\) with the following property: for each \(\gamma\in\Gamma\), there is some \(t_{\gamma}\in\mathbb{R}\) such that \(\gamma^{\prime}:=e^{-t_{\gamma}D}\circ\gamma\) is \(K_{0}\)-biLipschitz, and \(\gamma^{\prime}\) acts on cosets of \(W\) by isometric graded automorphism and induces an isometry of \(N/W\). If \(\gamma(h*w)=\gamma(0)*B_{\gamma}h*A_{\gamma}w*A_{\gamma}s_{\gamma}(\bar{h})\) is a compatible expression for \(\gamma\), then \(\gamma^{\prime}\) has a compatible expression given by \(\gamma^{\prime}(h*w)=\gamma^{\prime}(0)*B^{\prime}_{\gamma}h*A^{\prime}_{\gamma}w*A^{\prime}_{\gamma}s_{\gamma}(\bar{h})\) where \(A^{\prime}_{\gamma}=e^{-t_{\gamma}D}\circ A_{\gamma}\) is an isometry of \((\mathfrak{w},d_{CC})\) and \(B^{\prime}_{\gamma}=e^{-t_{\gamma}D}\circ B_{\gamma}\) is such that \(\bar{B}^{\prime}_{\gamma}\) is an isometry of \((\mathfrak{n}/\mathfrak{w},\bar{d}_{CC})\). By the formula for \(D_{\alpha}F(p)\) we have that \(D_{\alpha}(\gamma^{\prime})(p)\) is the composition of \(D_{\alpha}(\gamma)(p)\) with a standard Euclidean dilation: \(D_{\alpha}(\gamma^{\prime})(p)(h+w)=e^{-t_{\gamma}\alpha}D_{\alpha}\gamma(p)(h+w)\) for \(h\in H_{1},w\in W_{\alpha}\). Hence it suffices to show that there is a constant \(C\geq 1\) such that for every \(\gamma\in\Gamma\), the differential \(D_{\alpha}\gamma^{\prime}(p)\) is \(C\)-quasiconformal for a.e. \(p\in N\).
We fix an inner product on \(V_{\alpha}\) so that \(H_{1}\) and \(W_{\alpha}\) are perpendicular to each other. Notice that the equality \(D_{\alpha}\gamma^{\prime}(p)(h+w)=A^{\prime}_{\gamma}w+D\psi(\gamma^{\prime})(\bar{p})(\bar{h})\) holds, where \(\psi(F)\) was defined in Lemma 8.1. As \(A^{\prime}_{\gamma}\) is an isometry and \(\psi(\gamma^{\prime})\) is Lipschitz with Lipschitz constant bounded above by a constant depending only on \(H\) and the biLipschitz constant of \(\gamma^{\prime}\), we see that there is a constant \(C_{1}>0\) such that for each \(\gamma\in\Gamma\), the norm of the linear map \(D_{\alpha}\gamma^{\prime}(p):V_{\alpha}\to V_{\alpha}\) is bounded above by \(C_{1}\) for a.e. \(p\in N\). In particular the same bound holds with \(\gamma\) replaced by \(\gamma^{-1}\). By applying Lemma 8.2 to the composition \(\gamma\circ\gamma^{-1}=\mathrm{Id}\) we conclude that \(D_{\alpha}\gamma^{\prime}(p)\) is \(C_{1}\)-biLipschitz and hence \(C_{1}^{2}\)-quasiconformal.
### Measurable conformal structure in the \(V_{\alpha}\) direction
In this subsection we shall show that, after a conjugation, \(s_{\gamma,\alpha}:\mathfrak{n}/\mathfrak{w}\to Z_{\alpha}(\mathfrak{w})\) is a Lie group homomorphism for every \(\gamma\in\Gamma\); see Proposition 8.6. We shall modify the proof of Tukia's theorem [13]; see also [14] for the foliated version.
Fix an inner product on \(V_{\alpha}\) and an orthonormal basis of \(V_{\alpha}\) with respect to this inner product. Denote \(n_{\alpha}=\dim(V_{\alpha})\). Then we can identify a linear transformation of \(V_{\alpha}\) with an \(n_{\alpha}\times n_{\alpha}\) matrix. Denote by \(SL(V_{\alpha})\) (the special linear group) the group of linear transformations of \(V_{\alpha}\) whose matrices have determinant equal to \(1\), and \(SO(V_{\alpha})\subset SL(V_{\alpha})\) the subgroup consisting of linear transformations that preserve the inner product. Let \(X=SL(V_{\alpha})/SO(V_{\alpha})\). Recall that \(X\) is a symmetric space of non-compact type (see table V on page 518 of [10]) and so has nonpositive sectional curvature. We denote by \(\rho\) the distance on \(X\).
A measurable conformal structure on \(N\) in the \(V_{\alpha}\) direction is an essentially bounded measurable map
\[\mu:U\to X\]
defined on a full measure subset \(U\subset N\). This is just a measurable way of assigning inner products (up to a scalar multiple) in the direction of \(V_{\alpha}\). To simplify language, we will drop "in the \(V_{\alpha}\) direction" and will just say "measurable conformal structure".
Let \(\mu\) be a measurable conformal structure on \(N\) and \(F:N\to N\) a fiber similarity map. The pull-back measurable conformal structure \(F^{*}\mu\) is defined by:
\[(F^{*}\mu)(p)=(D_{\alpha}F(p))[\mu(F(p))]:=(\det D_{\alpha}F(p))^{-\frac{2}{ \dim V_{\alpha}}}(D_{\alpha}F(p))^{T}\mu(F(p))D_{\alpha}F(p),\ \ \mbox{for a.e.\ }p\in N.\]
This is analogous to the pull-back of a Riemannian metric under a diffeomorphism. Here we are using the fact that \(D_{\alpha}F(p)\) exists a.e., see Section 8.1.
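Two routine remarks on this definition. First, the determinant normalization ensures that the pull-back again takes values in \(X\): writing \(T=D_{\alpha}F(p)\) and \(n_{\alpha}=\dim V_{\alpha}\),
\[\det\left((\det T)^{-\frac{2}{n_{\alpha}}}T^{T}\mu(F(p))T\right)=(\det T)^{-2}(\det T)^{2}\det\mu(F(p))=1,\]
and \(T^{T}\mu(F(p))T\) is again symmetric and positive definite for a.e. \(p\) (where \(T\) is invertible). Second, for the constant structure \(\mu\equiv\operatorname{Id}\) given by the background inner product, the equation \(F^{*}\mu=\mu\) says precisely that \(T^{T}T=(\det T)^{\frac{2}{n_{\alpha}}}\operatorname{Id}\) for a.e. \(p\), i.e. that \(D_{\alpha}F(p)\) is a.e. a linear similarity of \(V_{\alpha}\).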
**Corollary 8.4**.: \((\gamma_{2}\gamma_{1})^{*}\mu=\gamma_{1}^{*}(\gamma_{2}^{*}\mu)\) _holds for all \(\gamma_{1},\gamma_{2}\in\Gamma\)._
Proof.: It follows immediately from the chain rule \(D_{\alpha}(\gamma_{2}\gamma_{1})(p)=D_{\alpha}\gamma_{2}(\gamma_{1}(p))\circ D _{\alpha}\gamma_{1}(p)\).
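Explicitly, abbreviate \(T[\nu]:=(\det T)^{-\frac{2}{\dim V_{\alpha}}}T^{T}\nu T\) for a linear map \(T\) of \(V_{\alpha}\) and \(\nu\in X\); then \((ST)[\nu]=T[S[\nu]]\), and therefore
\[((\gamma_{2}\gamma_{1})^{*}\mu)(p)=\big(D_{\alpha}\gamma_{2}(\gamma_{1}(p))\circ D_{\alpha}\gamma_{1}(p)\big)[\mu(\gamma_{2}\gamma_{1}(p))]=D_{\alpha}\gamma_{1}(p)\big[(\gamma_{2}^{*}\mu)(\gamma_{1}(p))\big]=(\gamma_{1}^{*}(\gamma_{2}^{*}\mu))(p).\]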
A fiber similarity map \(F\) is called conformal with respect to the measurable conformal structure \(\mu\) if \(F^{*}\mu=\mu\). Tukia's argument together with Corollary 8.4 and Lemma 8.3 then yields that \(\Gamma\) has an invariant measurable conformal structure; that is, there is a measurable conformal structure \(\mu\) on \(N\) such that every \(\gamma\in\Gamma\) is conformal with respect to \(\mu\). We may assume \(\Gamma\) is countable: let \(\Gamma_{0}\) be a countable subgroup of \(\Gamma\) that is dense in \(\Gamma\) in the topology of uniform convergence on compact subsets of \(N\); if \(\Gamma_{0}\) can be conjugated into the similarity group of \((N,d)\) for some \(D\)-homogeneous distance \(d\), then the same map conjugates \(\Gamma\) into the group of similarities of \((N,d)\), as limits of similarities are similarities.
We next recall the notion of radial limit points. Let \(S=N\rtimes_{D}\mathbb{R}\) be the Heintze group associated with \((N,D)\) and \(\mathcal{H}:S\to\mathbb{R}\) the height function given by \(\mathcal{H}(x,t)=t\). Let \(\mathcal{P}(N)=\{(\xi_{1},\xi_{2})\in N\times N|\xi_{1}\neq\xi_{2}\}\), where we view \(N=\partial S\backslash\{\infty\}\). Let \(\chi:\mathcal{P}(N)\to S\) be the map that assigns to each pair \((\xi_{1},\xi_{2})\in\mathcal{P}(N)\) the highest point on the geodesic \(\xi_{1}\xi_{2}\); that is, \(\mathcal{H}(\chi(\xi_{1},\xi_{2}))=\max\{\mathcal{H}(p)|p\in\xi_{1}\xi_{2}\}\). In a sense, \(\chi(P)\) is a center of the triple \((\infty,\xi_{1},\xi_{2})\): it is where the two geodesics that join \(\infty\) to \(\xi_{1}\) and \(\xi_{2}\) respectively diverge from each other. We observe that for any compact \(C\subset S\), the set \(\chi^{-1}(C)\) is compact in \(\mathcal{P}(N)\).
The group \(\Gamma\) acts diagonally on \(\mathcal{P}(N)\): \(g(\xi_{1},\xi_{2})=(g(\xi_{1}),g(\xi_{2}))\).
**Definition 8.5**.: _A point \(\xi\in N\) is said to be a radial limit point of \(\Gamma\) if there exists a sequence of elements \(\{h_{i}\}_{i=1}^{\infty}\) of \(\Gamma\) with the following property: for any pair \(P=(\xi_{1},\xi_{2})\in\mathcal{P}(N)\) and
any complete geodesic \(\sigma\) asymptotic to \(\xi\), there exists a constant \(C>0\) with \(\chi(h_{i}(P))\to\xi\) and \(d(\chi(h_{i}(P)),\sigma)\leq C\)._
**Proposition 8.6**.: _There exists a biLipschitz map \(F\) of \(N\) such that each element of \(F\Gamma F^{-1}\) has a compatible expression \(h*w\mapsto a*Bh*Aw*As(\bar{h})\) such that \(s_{\alpha}:\mathfrak{n}/\mathfrak{w}\to Z_{\alpha}(\mathfrak{w})\) is a Lie group homomorphism._
Proof.: We equip \(S=N\rtimes_{D}\mathbb{R}\) with a left invariant Riemannian metric such that \(N\) and \(\mathbb{R}\) are perpendicular to each other. The left translations \(L_{(0,t)}\) are isometries of \(S\) and translate the vertical geodesic \(\sigma(s)=(0,s)\) above \(0\in N\), and the boundary homeomorphisms they induce are the automorphisms \(e^{tD}\) of \(N\) generated by the derivation \(D\).
Let \(\mu\) be a \(\Gamma\)-invariant measurable conformal structure on \(N\). As \(\mu\) is measurable, it is approximately continuous a.e. in \(N\), see Theorem 2.9.13 in [10]. Let \(p\in N\) be a radial limit point of \(\Gamma\) and also a point at which \(\mu\) is approximately continuous. After applying a left translation we may assume \(p=0\) is the origin of \(N\). Fix a pair \(P\in\mathcal{P}(N)\) and let \(\sigma\) be the vertical geodesic (in \(S\)) above \(0\). Since \(0\) is a radial limit point of \(\Gamma\), there exists a sequence of elements \(\{\gamma_{i}\}_{i=1}^{\infty}\) of \(\Gamma\) and a constant \(C>0\) with \(\chi(\gamma_{i}(P))\to 0\) and \(d(\chi(\gamma_{i}(P)),\sigma)\leq C\). Fix a point \(x_{0}\in\sigma\). For each \(i\) there is some \(t^{\prime}_{i}\in\mathbb{R}\) with \(t^{\prime}_{i}\to+\infty\) as \(i\to\infty\) such that \(d(L_{(0,t^{\prime}_{i})}(\chi(\gamma_{i}(P))),x_{0})\leq C\). Since \(L_{(0,t^{\prime}_{i})}\) is an isometry of \(S\), we have \(L_{(0,t^{\prime}_{i})}\circ\chi=\chi\circ e^{t^{\prime}_{i}D}\). Hence \(d(\chi\circ e^{t^{\prime}_{i}D}\circ\gamma_{i}(P),x_{0})\leq C\) and so the set \(\{e^{t^{\prime}_{i}D}\circ\gamma_{i}(P)\}_{i=1}^{\infty}\) lies in the compact subset \(\chi^{-1}\bar{B}(x_{0},C)\). It follows that \(\{e^{t^{\prime}_{i}D}\circ\gamma_{i}\}_{i=1}^{\infty}\) is a compact family of biLipschitz maps of \(N\). Recall that there is a Carnot metric \(d_{CC}\) on \(W\) with the property that for each \(\gamma\in\Gamma\) there is some \(t_{\gamma}\in\mathbb{R}\) such that \(\gamma^{\prime}:=e^{-t_{\gamma}D}\circ\gamma\) has a compatible expression \(\gamma^{\prime}(h*w)=\gamma^{\prime}(0)*B_{\gamma}h*A_{\gamma}w*A_{\gamma}s_{ \gamma}(\bar{h})\), where \(A_{\gamma}\) is a graded isomorphism of \(W\) that is also an isometry of \((W,d_{CC})\). The compactness of the family \(\{e^{t^{\prime}_{i}D}\circ\gamma_{i}\}_{i=1}^{\infty}\) implies that \(\{t^{\prime}_{i}+t_{\gamma_{i}}\}\) is a bounded sequence and so the family \(\{e^{-t_{\gamma_{i}}D}\circ\gamma_{i}\}_{i=1}^{\infty}\) is also compact. Set \(f_{i}=e^{-t_{\gamma_{i}}D}\circ\gamma_{i}\). By passing to a subsequence, we may assume \(f_{i}\) converges uniformly on compact subsets to a biLipschitz map \(f:N\to N\). Since each \(f_{i}\) is a fiber similarity, so is \(f\).
Let \(\gamma\in\Gamma\) and denote \(\tilde{\gamma}=f\gamma f^{-1}\), \(\tilde{\gamma}_{i}=f_{i}\gamma f_{i}^{-1}\). Let \(\mu_{i}=(f_{i}^{-1})^{*}\mu\). Since \(\mu\) is \(\Gamma\)-invariant, \(\tilde{\gamma}_{i}\) is conformal with respect to \(\mu_{i}\):
\[\tilde{\gamma}_{i}^{*}\mu_{i}=(f_{i}^{-1})^{*}\gamma^{*}f_{i}^{*}(f_{i}^{-1})^ {*}\mu=(f_{i}^{-1})^{*}\gamma^{*}\mu=(f_{i}^{-1})^{*}\mu=\mu_{i}.\]
Note \(\mu_{i}=(f_{i}^{-1})^{*}\mu=(e^{t_{\gamma_{i}}D})^{*}(\gamma_{i}^{-1})^{*}\mu= (e^{t_{\gamma_{i}}D})^{*}\mu\). So for \(q\in N\),
\[\mu_{i}(q)=(e^{t_{\gamma_{i}}D})^{*}\mu(q)=D_{\alpha}e^{t_{\gamma_{i}}D}(q)[\mu(e^{t_{\gamma_{i}}D}(q))]=\mu(e^{t_{\gamma_{i}}D}(q))\]
since \(D_{\alpha}e^{t_{\gamma_{i}}D}:V_{\alpha}\to V_{\alpha}\) is the standard dilation by \(e^{\alpha t_{\gamma_{i}}}\) and so is conformal.
Let \(U\subset N\) be a bounded open subset containing \(0\). There is a bounded open subset \(U_{0}\) such that \(U\bigcup\cup_{i}\tilde{\gamma}_{i}(U)\subset U_{0}\). Since \(\mu\) is approximately continuous at \(0\) and \(t_{\gamma_{i}}\to-\infty\), the equality \(\mu_{i}(q)=\mu(e^{t_{\gamma_{i}}D}(q))\) implies that for any \(\epsilon>0\) there are subsets \(C_{i}\subset U_{0}\) with \(|C_{i}|\to 0\) as \(i\to\infty\) and \(\rho(\mu_{i}(x),\mu(0))\leq\epsilon\) for \(x\in U_{0}\backslash C_{i}\). Here \(|E|\) denotes the measure of a subset \(E\subset N\) and \(\rho\) is the distance on the symmetric space \(X\).
The maps \(\tilde{\gamma}_{i}^{-1}\) and \(\tilde{\gamma}^{-1}\) form a compact family of biLipschitz maps. There is some \(L\geq 1\) such that these maps are all \(L\)-biLipschitz. Hence \(|\tilde{\gamma}_{i}^{-1}(C_{i})|\to 0\) as \(i\to\infty\). Set \(D_{i}=C_{i}\cup\tilde{\gamma}_{i}^{-1}(C_{i})\). Now we have \(|D_{i}|\to 0\) as \(i\to\infty\) and \(\rho(\mu_{i}(x),\mu(0))\leq\epsilon\) and \(\rho(\mu_{i}(\tilde{\gamma}_{i}(x)),\mu(0))\leq\epsilon\) for \(x\in U\backslash D_{i}\). Since \(\tilde{\gamma}_{i}\) is \(\mu_{i}\)-conformal, we have \(\mu_{i}(x)=D_{\alpha}\tilde{\gamma}_{i}(x)[\mu_{i}(\tilde{\gamma}_{i}(x))]\) for a.e. \(x\). Now
\[\rho(\mu_{i}(x),D_{\alpha}\tilde{\gamma}_{i}(x)[\mu(0)])=\rho(\mu_{i}(\tilde{ \gamma}_{i}(x)),\mu(0))\leq\epsilon\]
for a.e. \(x\in U\backslash D_{i}\). Combining this with \(\rho(\mu_{i}(x),\mu(0))\leq\epsilon\), we get
\[\rho(\mu(0),D_{\alpha}\tilde{\gamma}_{i}(x)[\mu(0)])\leq 2\epsilon \tag{13}\]
for a.e. \(x\in U\backslash D_{i}\).
We consider compatible expressions \(\tilde{\gamma}(h*w)=\tilde{\gamma}(0)*Bh*Aw*As(\bar{h})\) and \(\tilde{\gamma}_{i}(h*w)=\tilde{\gamma}_{i}(0)*B_{i}h*A_{i}w*A_{i}s_{i}(\bar{h})\). We know that \(\tilde{\gamma}_{i}\) converges to \(\tilde{\gamma}\) uniformly on compact subsets. It follows that \(\tilde{\gamma}_{i}(0)\to\tilde{\gamma}(0)\) and \(A_{i}\to A\). In general we can not conclude that \(s_{i}\to s\) and \(B_{i}\to B\) due to the fact that compatible expressions are not unique. Define functions \(g_{i},g:\mathfrak{n}/\mathfrak{w}\to V_{\alpha}\) by \(g_{i}(\bar{h})=B_{i}h_{1}+A_{i}s_{i,\alpha}(\bar{h})\) and \(g(\bar{h})=Bh_{1}+As_{\alpha}(\bar{h})\), where \(h_{1}\) is the \(H_{1}\) component of \(h\in H\). By considering the \(V_{\alpha}\) component of \(\tilde{\gamma}_{i}(h)\) and \(\tilde{\gamma}(h)\) we see that \(g_{i}\to g\) uniformly on compact subsets.
For the rest of the proof, the notions "perpendicular" and "orthonormal", the length \(|X|\) of a vector \(X\in V_{\alpha}\), and the inner product \(\left\langle\cdot,\cdot\right\rangle\) are all understood with respect to \(\mu(0)\). Let \(X_{1},\cdots,X_{k}\) (\(k=\dim(W_{\alpha})\)) be an orthonormal basis of \(W_{\alpha}\). For \(1\leq l\leq k\), let \(g_{l}(x)=\left\langle g(x),X_{l}\right\rangle\), \(g_{i,l}(x)=\left\langle g_{i}(x),X_{l}\right\rangle\). Since \(\tilde{\gamma}\) and \(\tilde{\gamma}_{i}\) are \(L\)-biLipschitz, Lemma 8.1 implies that there is some constant \(L_{1}>0\) such that \(g\) and \(g_{i}\) are \(L_{1}\)-Lipschitz for all \(i\geq 1\). It follows that \(g_{l}\) and \(g_{i,l}\) are \(L_{1}\)-Lipschitz functions on \(N/W\) for all \(i\geq 1\), \(1\leq l\leq k\).
Let \(\phi:H_{1}\to W_{\alpha}\) be the linear map such that \(\{h+\phi(h)|h\in H_{1}\}\) is perpendicular to \(W_{\alpha}\). Denote by \(\psi:\bar{\mathfrak{n}}\to H_{1}\) the linear map given by \(\psi(\bar{h})=(\pi|_{H_{1}})^{-1}(\bar{\pi}_{1}(\bar{h}))\), where \(\bar{\pi}_{1}:\bar{\mathfrak{n}}\to\bar{V}_{1}\) is the projection with respect to the decomposition \(\bar{\mathfrak{n}}=\oplus_{j}\bar{V}_{j}\) and \(\pi:\mathfrak{n}\to\bar{\mathfrak{n}}\) is the quotient map. For \(1\leq l\leq k\), let \(L_{l}:\bar{\mathfrak{n}}\to\mathbb{R}\) be the linear map defined by
\[L_{l}(\bar{h})=-\left\langle X_{l},A\phi(\psi(\bar{h}))\right\rangle.\]
**Claim**: For each \(1\leq l\leq k\), \(Dg_{i,l}\) converges to \(L_{l}\) in \(L^{1}_{\rm loc}(N/W)\) as \(i\to\infty\).
We first assume the claim and finish the proof of the Proposition. The claim implies that for any \(h\in H_{1}\) and any smooth function with compact support \(\varphi\) defined on \(N/W\),
\[\int_{N/W}\varphi(x)L_{l}(\bar{h})dx \longleftarrow\int_{N/W}\varphi(x)Dg_{i,l}(x)(\bar{h})dx\] \[= -\int_{N/W}g_{i,l}(x)D_{\bar{h}}\varphi(x)dx\longrightarrow-\int_ {N/W}g_{l}(x)D_{\bar{h}}\varphi(x)dx=\int_{N/W}\varphi(x)Dg_{l}(x)(\bar{h})dx,\]
which yields \(\int_{N/W}\varphi(x)(Dg_{l}(x)(\bar{h})-L_{l}(\bar{h}))dx=0\). Here the first convergence follows from the claim; the second one follows from the fact that \(g_{i,l}\) converges to \(g_{l}\) uniformly on compact subsets; and the equalities follow from integration by parts. It follows that \(Dg_{l}(x)=L_{l}\) for a.e. \(x\in N/W\). This implies \(\left\langle X_{l},ADs_{\alpha}(x)(\bar{h})+A\phi(\psi(\bar{h}))\right\rangle= -\langle X_{l},Bh\rangle\) for a.e. \(x\in N/W\) and all \(1\leq l\leq k\). Hence \(ADs_{\alpha}(x)(\bar{h})+A\phi(\psi(\bar{h}))\) defines a constant Lie group homomorphism (independent of \(x\)) from \(N/W\) to \(W_{\alpha}\). As \(A\phi(\psi(\bar{h}))\) is also a group homomorphism, we see that the Pansu differential of \(s_{\alpha}\) is a constant group homomorphism. It now follows from \(s_{\alpha}(0)=0\) that \(s_{\alpha}\) is a Lie group homomorphism.
We now prove the claim. Recall \(\rho(D_{\alpha}\tilde{\gamma}_{i}(x)[\mu(0)],\mu(0))\leq 2\epsilon\) for \(x\in U\backslash D_{i}\). This means for \(x\in U\backslash D_{i}\) the matrix representation for the linear map \(D_{\alpha}\tilde{\gamma}_{i}(x):V_{\alpha}\to V_{\alpha}\) with respect to an orthonormal basis of \((V_{\alpha},\mu(0))\) is a constant multiple of a matrix that is very close to an orthogonal matrix. Since \(D_{\alpha}\tilde{\gamma}_{i}(x)(W_{\alpha})=W_{\alpha}\), we see that if \(x\in U\backslash D_{i}\) then \(D_{\alpha}\tilde{\gamma}_{i}(x):V_{\alpha}\to V_{\alpha}\) sends vectors in \(V_{\alpha}\) perpendicular to \(W_{\alpha}\) to vectors almost perpendicular to \(W_{\alpha}\). On the other hand, as \(D_{\alpha}\tilde{\gamma}_{i}(x)(w)=A_{i}w\) for \(w\in W_{\alpha}\) and \(\{A_{i}|i\geq 1\}\) has compact closure (as \(A_{i}\to A\)), there is some \(M>0\) such that the operator norm of \(D_{\alpha}\tilde{\gamma}_{i}(x)\) is bounded above by \(M\) for all \(i\) and all \(x\in U\backslash D_{i}\). Fix any \(h\in H_{1}\) with \(|h|=1\). By the definition of \(\phi:H_{1}\to W_{\alpha}\), we have \(\left\langle W_{\alpha},h+\phi(h)\right\rangle=0\). As \(D_{\alpha}\tilde{\gamma}_{i}(x)(h+\phi(h))=B_{i}h+A_{i}\phi(h)+A_{i}Ds_{i,\alpha}(\bar{x})(\bar{h})\), there is a constant \(\delta>0\) depending only on \(\epsilon\) with \(\delta\to 0\) as \(\epsilon\to 0\) such that
\[|\left\langle X_{l},B_{i}h+A_{i}\phi(h)+A_{i}Ds_{i,\alpha}(\bar{x})(\bar{h})\right\rangle|<\delta\]
holds for all \(x\in U\backslash D_{i}\) and all \(1\leq l\leq k\). Note \(\psi(\bar{h})=h\) (as \(h\in H_{1}\)). Since \(A_{i}\to A\) and \(Dg_{i,l}(\bar{x})(\bar{h})=\left<B_{i}h+A_{i}Ds_{i,\alpha}(\bar{x})(\bar{h}),X_{l}\right>\), we see that \(|Dg_{i,l}(\bar{x})(\bar{h})-L_{l}(\bar{h})|\leq 2\delta\) for all \(x\in U\backslash D_{i}\) and all sufficiently large \(i\). This implies \(\int_{U\backslash D_{i}}|Dg_{i,l}(\bar{x})(\bar{h})-L_{l}(\bar{h})|dx\leq 2 \delta|U|\) for sufficiently large \(i\). On the other hand, as \(g_{i,l}\), \(i\geq 1\), \(1\leq l\leq k\), are \(L_{1}\)-Lipschitz, there is a constant \(C>0\) such that \(|Dg_{i,l}(\bar{x})(\bar{h})-L_{l}(\bar{h})|\leq C\) for all \(x\in N\). From this we get \(\int_{D_{i}}|Dg_{i,l}(\bar{x})(\bar{h})-L_{l}(\bar{h})|dx\leq C|D_{i}|\). As \(|D_{i}|\to 0\), we conclude that
\[\overline{\lim}_{i\to\infty}\int_{U}|Dg_{i,l}(\bar{x})(\bar{h})-L_{l}(\bar{h}) |dx\leq 2\delta|U|.\]
As this holds for all \(\epsilon>0\) we have \(\lim_{i\to\infty}\int_{U}|Dg_{i,l}(\bar{x})(\bar{h})-L_{l}(\bar{h})|dx=0\). Since this holds for all bounded open subsets \(U\subset N\) and all \(h\in H_{1}\), we have \(Dg_{i,l}\to L_{l}\) in \(L^{1}_{\rm loc}(N/W)\).
### Completing the proof of Theorem 1.2 when \(\dim(W)\geq 2\) and \(\dim(N/W)\geq 2\)
We assume the hypotheses of Theorem 1.2 and, in addition, that \(\dim(W)\geq 2\) and \(\dim(N/W)\geq 2\). We first use Section 7 to get rid of \(s_{\gamma,j}\) for \(j<\alpha\), and if \(\alpha\) is an integer we then apply Proposition 8.6 to conclude that \(s_{\gamma,\alpha}\) is a Lie group homomorphism for all \(\gamma\in\Gamma\), after a possible further biLipschitz conjugation. We observe that the property \(s_{\gamma,j}\equiv 0\) for \(j<\alpha\) is preserved when we apply Proposition 8.6: this is because the conjugating map \(f\) in the proof of Proposition 8.6 is the limit of a sequence \(\{f_{i}\}\) and each \(f_{i}\) is the composition of a group element \(\gamma_{i}\in\Gamma\) with \(e^{t_{i}D}\) for some \(t_{i}\); since \(\gamma_{i}\) has the above property, so do \(f_{i}\) and the limit \(f\); finally, a calculation shows that if two biLipschitz maps \(f_{1},f_{2}\) of \(N\) have this property then so does the composition \(f_{1}\circ f_{2}\).
At this point every element in \(\Gamma\) has a compatible expression \(F(h*w)=F(0)*Bh*Aw*As(\bar{h})\), where \(s:\mathfrak{n}/\mathfrak{w}\to Z(\mathfrak{w})\) has the properties that \(s_{j}=0\) for \(1\leq j<\alpha\) and if \(\alpha\) is an integer then \(s_{\alpha}:\mathfrak{n}/\mathfrak{w}\to Z_{\alpha}(\mathfrak{w})\) is a Lie group homomorphism. We next show that such a map \(F\) is an affine map; that is, \(L_{F(0)^{-1}}\circ F\) is a Lie group automorphism.
**Lemma 8.7**.: _Let \(F,\tilde{F}:\mathfrak{n}\to\mathfrak{n}\) be biLipschitz maps with compatible expressions \(F(h*w)=Bh*Aw*As(\bar{h})\) and \(\tilde{F}(h*w)=Bh*Aw*A\tilde{s}(\bar{h})\). If \(s\), \(\tilde{s}\) satisfy \(s_{j}=\tilde{s}_{j}\) for \(1\leq j\leq\alpha\), then \(F=\tilde{F}\)._
Proof.: We observe that \(F(h*w*(s(\bar{h}))^{-1}*\tilde{s}(\bar{h}))=\tilde{F}(h*w)\). It follows that \((F^{-1}\circ\tilde{F})(h*w)=h*w*(s(\bar{h})^{-1}*\tilde{s}(\bar{h}))\) is a biLipschitz shear map with shear function \(\tilde{\tilde{s}}\) given by \(\tilde{\tilde{s}}=\tilde{s}-s\). The assumption implies \(\tilde{\tilde{s}}_{j}\equiv 0\) for all \(1\leq j\leq\alpha\). On the other hand, by Proposition 6.4, if \(\alpha\) is not an integer, then \(\tilde{\tilde{s}}_{j}\equiv 0\) for all \(j>\alpha\), and if \(\alpha\) is an integer, then \(\tilde{\tilde{s}}_{k\alpha+j}=\tilde{\tilde{s}}_{j}^{(k)}\equiv 0\) for each \(k\geq 1\) and \(1\leq j\leq\alpha\). It follows that \(\tilde{\tilde{s}}=0\) and \(F=\tilde{F}\).
**Lemma 8.8**.: _Let \(F:\mathfrak{n}\to\mathfrak{n}\) be a biLipschitz map with compatible expression \(F(h*w)=Bh*Aw*As(\bar{h})\). Suppose \(s_{j}=0\) for \(1\leq j<\alpha\) and, if \(\alpha\) is an integer, that \(s_{\alpha}:\mathfrak{n}/\mathfrak{w}\to Z_{\alpha}(\mathfrak{w})\) is a Lie group homomorphism. Then \(F\) is a Lie group automorphism._
Proof.: Notice that it suffices to show \(F_{p}=F\) for any \(p\in N\). Let \(p\in N\) and set \(\tilde{F}=F_{p}\). By Lemma 5.6, \(\tilde{F}\) has a compatible expression given by \(\tilde{F}(h*w)=Bh*Aw*A\tilde{s}(\bar{h})\) for some map \(\tilde{s}:\mathfrak{n}/\mathfrak{w}\to Z(\mathfrak{w})\). By Lemma 8.7 it now suffices to show \(\tilde{s}_{j}=s_{j}\) for all \(1\leq j\leq\alpha\).
Let \(h\in H\). As \(\tilde{F}(h)=Bh*A\tilde{s}(\bar{h})\), we have \(\tilde{s}(\bar{h})=A^{-1}((Bh)^{-1}*\tilde{F}(h))\). Write \(p=h_{0}*w_{0}\) and \(h_{0}*h=h_{1}*w_{1}\) with \(h_{0},h_{1}\in H\), \(w_{0},w_{1}\in\mathfrak{w}\). Notice \(w_{1}\in\oplus_{j\geq 2\alpha}W_{j}\). We have \(p*h=h_{0}*w_{0}*h=h_{0}*h*(h^{-1}*w_{0}*h)=h_{1}*w_{1}*(h^{-1}*w_{0}*h)\). In the following calculations we use the quotient homomorphism \(P_{\alpha}:\mathfrak{n}\to\bar{\mathfrak{n}}_{\alpha}=\mathfrak{n}/(\oplus_{\lambda_{j}>\alpha}V_{\lambda_{j}})\):
\[P_{\alpha}((Bh)^{-1}*\tilde{F}(h))\] \[=P_{\alpha}((Bh)^{-1}*(F(h_{0}*w_{0}))^{-1}*F(h_{1}*w_{1}*(h^{-1}* w_{0}*h)))\] \[=P_{\alpha}((Bh)^{-1}*As(\overline{h_{0}})^{-1}*Aw_{0}^{-1}*(Bh_{ 0})^{-1}*Bh_{1}*Aw_{1}*A(h^{-1}*w_{0}*h)*As(\overline{h_{0}}*\bar{h}))\] \[=P_{\alpha}((Bh)^{-1}*As(\overline{h_{0}})^{-1}*Aw_{0}^{-1}*(Bh_{ 0})^{-1}*Bh_{1}*Aw_{1}*(Bh)^{-1}*Aw_{0}*(Bh)*As(\overline{h_{0}}*\bar{h}))\] \[=P_{\alpha}((Bh)^{-1}*As(\overline{h_{0}})^{-1}*Aw_{0}^{-1}*(Bh_{ 0})^{-1}*Bh_{1}*Aw_{0}*As(\overline{h_{0}}*\bar{h}))\] \[=P_{\alpha}((Bh)^{-1}*(Bh_{0})^{-1}*Bh_{1}*As(\overline{h_{0}})^{ -1}*As(\overline{h_{0}}*\bar{h}))\] \[=P_{\alpha}(A(s(\overline{h_{0}}*\bar{h})-s(\overline{h_{0}}))).\]
For the third equality we used (5). For the fourth equality we used \(P_{\alpha}(Aw_{1})=0\) and \(P_{\alpha}(Bh)\in Z(\bar{\mathfrak{n}}_{\alpha})\). For the fifth equality we used \(P_{\alpha}(Bh_{0}),P_{\alpha}(Bh_{1})\in Z(\bar{\mathfrak{n}}_{\alpha})\). For the last equality we used \(P_{\alpha}((Bh_{0})^{-1}*Bh_{1})=P_{\alpha}(Bh)\). It follows that \(\tilde{s}_{j}(\bar{h})=s_{j}(\overline{h_{0}}*\bar{h})-s_{j}(\overline{h_{0}})\) for \(1\leq j\leq\alpha\). By the assumption on \(s\), we have \(\tilde{s}_{j}=0\) for \(1\leq j<\alpha\) and if \(\alpha\) is an integer then \(\tilde{s}_{\alpha}(\bar{h})=s_{\alpha}(\overline{h_{0}}*\bar{h})-s_{\alpha}(\overline{h_{0}})=s_{\alpha}(\bar{h})\) as \(s_{\alpha}:\mathfrak{n}/\mathfrak{w}\to Z_{\alpha}(\mathfrak{w})\) is a homomorphism. Now we have \(\tilde{s}_{j}=s_{j}\) for \(1\leq j\leq\alpha\). By Lemma 8.7, \(\tilde{F}=F\) and so \(F\) is an automorphism.
In this last paragraph we switch back to Lie group notation. At this point \(\Gamma\) acts on \(N\) by affine maps and is a uniform quasisimilarity group of \(N\). We write \(\gamma=L_{\gamma(0)}\circ\phi_{\gamma}\), where \(\phi_{\gamma}\) is the automorphism \(L_{\gamma(0)^{-1}}\circ\gamma\). By Lemma 3.5 we know that \(\phi_{\gamma}\) is layer preserving, that is, \(d\phi_{\gamma}(V_{\lambda_{j}})=V_{\lambda_{j}}\) for all \(j\). Each \(\gamma\in\Gamma\) acts on the cosets of \(W\) by an automorphism \(A_{\gamma}\) of \(W\) and \(A_{\gamma}\) is the composition of a Carnot dilation and an isometric graded isomorphism of \(W\). Hence for each \(\gamma\), there is a unique \(t_{\gamma}\in\mathbb{R}\) such that \(e^{-t_{\gamma}D}\circ\gamma\) acts on \(W\) by an isometric graded isomorphism. Since \(\Gamma\) is a uniform group of quasisimilarities of \(N\), there is a constant \(L>0\) such that \(e^{-t_{\gamma}D}\circ\gamma\) is \(L\)-biLipschitz for all \(\gamma\in\Gamma\). It follows that for each \(j\geq 1\) the linear isomorphism \(d(e^{-t_{\gamma}D}\circ\phi_{\gamma})|_{V_{\lambda_{j}}}:V_{\lambda_{j}}\to V _{\lambda_{j}}\) is \(L\)-biLipschitz. Now we see that the map \(\Gamma\to GL(V_{\lambda_{j}})\) given by \(\gamma\mapsto d(e^{-t_{\gamma}D}\circ\phi_{\gamma})|_{V_{\lambda_{j}}}\) is a homomorphism whose image has compact closure in \(GL(V_{\lambda_{j}})\). It follows that there is an inner product \(\langle\cdot,\cdot\rangle_{j}\) on \(V_{\lambda_{j}}\) such that each \(d(e^{-t_{\gamma}D}\circ\phi_{\gamma})|_{V_{\lambda_{j}}}\) is an isometry with respect to this inner product. Let \(\langle\cdot,\cdot\rangle\) be the inner product on \(\mathfrak{n}\) that agrees with \(\langle\cdot,\cdot\rangle_{j}\) on \(V_{\lambda_{j}}\) such that \(V_{\lambda_{i}}\) and \(V_{\lambda_{j}}\) are perpendicular to each other for \(i\neq j\). Let \(d_{1}\) be a \(D\)-homogeneous distance on \(N\) associated to this inner product. Although \(d(e^{-t_{\gamma}D}\circ\phi_{\gamma})\) is a linear isometry of \((\mathfrak{n},\langle,\rangle)\), it is not clear that \(e^{-t_{\gamma}D}\circ\phi_{\gamma}\) is an isometry of \((N,d_{1})\). However, \(\{e^{-t_{\gamma}D}\circ\phi_{\gamma}|\gamma\in\Gamma\}\) is a subgroup of the group \(\operatorname{Auto}_{g}(N)\) of graded automorphisms with compact closure (we denote the closure by \(K\)). Let \(m\) be a normalized Haar measure on \(K\). Define a new distance \(d_{2}\) on \(N\) by \(d_{2}(x,y)=\int_{K}d_{1}(k(x),k(y))dm(k)\). Now it is easy to check that \(d_{2}\) is a \(K\)-invariant \(D\)-homogeneous distance on \(N\) associated to \(\langle,\rangle\). It follows that \(\Gamma\) acts on \((N,d_{2})\) by similarities. Finally we conjugate \(\Gamma\) into \(\operatorname{Sim}(N,d_{0})\) where \(d_{0}\) is a fixed maximally symmetric \(D\)-homogeneous distance on \(N\).
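We sketch, for the reader's convenience, why \(d_{2}\) is again \(D\)-homogeneous: every \(k\in K\) is a graded automorphism and hence commutes with each dilation \(e^{tD}\), so

\[d_{2}(e^{tD}x,e^{tD}y)=\int_{K}d_{1}(k(e^{tD}x),k(e^{tD}y))dm(k)=\int_{K}d_{1}(e^{tD}k(x),e^{tD}k(y))dm(k)=e^{t}d_{2}(x,y),\]

while the \(K\)-invariance of \(d_{2}\) follows from the bi-invariance of the Haar measure \(m\) on the compact group \(K\).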
We have finished the proof of Theorem 1.2 in the case when \(\dim(W)\geq 2\), \(\dim(N/W)\geq 2\).
## 9. Case \(\dim(W)=1\)
In this section we prove Theorem 1.2 in the case when \(\dim(W)=1\), \(\dim(N/W)\geq 2\) and \(\Gamma\) is amenable. In this case Tukia's arguments can not be used to prove a fiber Tukia theorem. Nonetheless Day's fixed point theorem once again can be used to "straighten" the action along the
cosets of \(W\). We point out that the argument in this section is valid for uniform quasisimilarity groups \(\Gamma\) of product metric spaces of the form \(\mathbb{R}\times Y\), see Theorem 9.7. The only properties used in the proof below are that \(H\) is a proper metric space, the action of \(\Gamma\) on \(\mathbb{R}\times H\) permutes the subsets \(\{\mathbb{R}\times\{h\}:h\in H\}\) and the induced action on \(H\) is by similarities.
Throughout this section we assume \(\dim(W)=1\) and \(\dim(N/W)\geq 2\). Let \(\mathfrak{h}=\oplus_{\lambda>1}V_{\lambda}\). Since \(W=V_{1}\) is an ideal of \(\mathfrak{n}\), it follows from the property \([V_{a},V_{b}]\subset V_{a+b}\) that \(\mathfrak{n}=W\oplus\mathfrak{h}\) is a direct sum of ideals, where \(W\simeq\mathbb{R}\). So in this case our group \(N\) is the direct product of a copy of \(\mathbb{R}\) and a Carnot group, which we again denote by \(W\) and \(H\) respectively, where \(H\) is the simply connected Lie group with Lie algebra \(\mathfrak{h}\). Because of this we can write \(w*h\in N\) as \((w,h)\in W\oplus H\) and our group \(\Gamma\) acts on \(N\) by maps of the form
\[\gamma(w,h)=(\gamma^{\mathbb{R}}(w,h),\gamma^{H}(h))\]
where \(\gamma^{H}:H\to H\) is biLipschitz and for each \(h\in H\), the map \(\gamma^{\mathbb{R}}(\cdot,h):\mathbb{R}\to\mathbb{R}\) is also biLipschitz. The proof we give here follows the proof from Section 3.3 in [4] but we construct the conjugating map in a slightly different manner in order to fix an oversight in the original paper; namely it is unclear that the conjugating map in [4] is actually biLipschitz. To construct our map we need to use that \(\Gamma\) is amenable.
Since \(\dim(N/W)\geq 2\), we can first apply the Tukia-type theorem for Carnot groups (Theorem 3.6) to the induced action of \(\Gamma\) on \(N/W\simeq H\). So there is a biLipschitz map \(f_{0}\) of \(H\) such that after conjugation by \(f_{0}\), the induced action of \(\Gamma\) on \(H\) is by similarities. Set \(F_{0}=(\operatorname{Id},f_{0}):W\oplus H\to W\oplus H\). Then we can conjugate the action of \(\Gamma\) on \(N\) by \(F_{0}\) to get an action where \(\gamma^{\mathbb{R}}(\cdot,h)\) is still biLipschitz and \(\gamma^{H}\) is a similarity of \((H,\bar{d}_{CC})\). Let \(t_{\gamma}\in\mathbb{R}\) be such that \(e^{\alpha t_{\gamma}}\) is the similarity constant of \(\gamma^{H}:(H,\bar{d}_{CC})\to(H,\bar{d}_{CC})\) and let \(\tilde{\gamma}:W\times H\to W\) be the map given by \(\tilde{\gamma}(w,h)=e^{-t_{\gamma}}\gamma^{\mathbb{R}}(w,h)\). Then \(\gamma^{\mathbb{R}}(w,h)=e^{t_{\gamma}}\tilde{\gamma}(w,h)\). Since \(\Gamma\) is a uniform quasisimilarity group, there is a constant \(\Lambda\geq 1\) such that \(\tilde{\gamma}(\cdot,h)\) is \(\Lambda\)-biLipschitz for all \(\gamma\in\Gamma\) and all \(h\in H\). After taking an index two subgroup if necessary, we may assume \(\tilde{\gamma}(\cdot,h)\) is orientation-preserving and so has derivatives in the interval \([1/\Lambda,\Lambda]\). For each \(\gamma\in\Gamma\) we define a function \(u_{\gamma}:N\to\mathbb{R}\) by \(u_{\gamma}(w,h)=\frac{\partial\tilde{\gamma}}{\partial w}(w,h)\) when \(\frac{\partial\tilde{\gamma}}{\partial w}(w,h)\) exists and \(u_{\gamma}(w,h)=1\) otherwise. Here \(\frac{\partial\tilde{\gamma}}{\partial w}(w,h)\) denotes the derivative of the function \(\tilde{\gamma}(\cdot,h):\mathbb{R}\to\mathbb{R}\) at the point \(w\) if it exists. Then \(u_{\gamma}\in L^{\infty}(N)\) with values in \([1/\Lambda,\Lambda]\). In this section we use the Hausdorff measure on \(N\), in particular in the definition of the \(L^{p}\) spaces. Of course, the Hausdorff measure on \(N\) is a Haar measure. The point is that the argument in this section still works when \(N=\mathbb{R}\times H\) is replaced with a product metric space \(\mathbb{R}\times Y\).
Let \(L^{\infty}(N)=(L^{1}(N))^{*}\) be equipped with weak\({}^{*}\) topology. Then \(L^{\infty}(N)\) is a locally convex topological vector space (see Section 3.14, [10]). We also consider the \(L^{\infty}\) norm \(||\cdot||\) on \(L^{\infty}(N)\). We stress that the topology induced by the \(L^{\infty}\) norm is different from the weak\({}^{*}\) topology. In the following, when we say a subset \(X\subset L^{\infty}(N)\) is closed (compact) we mean it is closed (compact) in the weak\({}^{*}\) topology; similarly for closure of subsets and continuity of maps; when we say \(X\) is bounded we mean it is bounded with respect to the \(L^{\infty}\) norm. By the Banach-Alaoglu theorem, bounded closed subsets of \(L^{\infty}(N)\) are compact.
Next we define an action of the opposite group \(\Gamma^{*}\) of \(\Gamma\) on \(L^{\infty}(N)\). For \(\gamma\in\Gamma\) and \(\phi\in L^{\infty}(N)\), define \(\gamma\cdot\phi\in L^{\infty}(N)\) by \(\gamma\cdot\phi=u_{\gamma}(\phi\circ\gamma)\), that is,
\[(\gamma\cdot\phi)(w,h)=u_{\gamma}(w,h)\phi(\gamma(w,h))=u_{\gamma}(w,h)\phi( \gamma^{\mathbb{R}}(w,h),\gamma^{H}(h)).\]
One checks that this defines a linear action (in particular an affine action) of \(\Gamma^{*}\) on \(L^{\infty}(N)\).
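A sketch of this check: since similarity constants multiply, \(t_{\gamma_{1}\gamma_{2}}=t_{\gamma_{1}}+t_{\gamma_{2}}\), and the chain rule gives \(u_{\gamma_{1}\gamma_{2}}=(u_{\gamma_{1}}\circ\gamma_{2})u_{\gamma_{2}}\) a.e.; hence

\[(\gamma_{1}\gamma_{2})\cdot\phi=u_{\gamma_{2}}(u_{\gamma_{1}}\circ\gamma_{2})(\phi\circ\gamma_{1}\circ\gamma_{2})=u_{\gamma_{2}}\big((u_{\gamma_{1}}(\phi\circ\gamma_{1}))\circ\gamma_{2}\big)=\gamma_{2}\cdot(\gamma_{1}\cdot\phi),\]

which is exactly the composition rule for an action of the opposite group.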
**Lemma 9.1**.: _The map \(\Gamma\times L^{\infty}(N)\to L^{\infty}(N),(\gamma,\phi)\mapsto\gamma\cdot\phi\), is separately continuous._
Proof.: It is easy to see that for fixed \(\gamma\in\Gamma\), the map \(L^{\infty}(N)\to L^{\infty}(N),\phi\mapsto\gamma\cdot\phi\), is continuous. We next show that for fixed \(\phi\in L^{\infty}(N)\), the map \(\Gamma\to L^{\infty}(N),\gamma\mapsto\gamma\cdot\phi\), is also continuous. Let
\(\phi\in L^{\infty}(N)\) be fixed and \(\gamma_{j},\gamma\in\Gamma\) be such that \(\gamma_{j}\to\gamma\). We need to show \(\gamma_{j}\cdot\phi\stackrel{{ w}}{{\longrightarrow}}\gamma\cdot\phi\). Let \(C_{c}(N)\) be the space of compactly supported continuous functions on \(N\). Since \(C_{c}(N)\) is a dense subspace of \(L^{1}(N)\), it suffices to show that for any fixed \(f\in C_{c}(N)\), \(\int_{N}f(\gamma_{j}\cdot\phi)dm\to\int_{N}f(\gamma\cdot\phi)dm\) as \(j\to\infty\), where \(m\) denotes the Hausdorff measure on \(N\). Notice that the Jacobian of \(\gamma\in\Gamma\) at a point \((w,h)\) is given by \(J\gamma(w,h)=e^{t_{\gamma}d_{0}}u_{\gamma}(w,h)\), where \(d_{0}\) is the Hausdorff dimension of \((N,d)\). The area formula applied to the map \(\gamma\) and the function \(f(\phi\circ\gamma)\) yields
\[\int_{N}f(\phi\circ\gamma)J\gamma dm=\int_{N}(f\circ\gamma^{-1})\phi dm.\]
It follows that
\[\int_{N}f(\gamma\cdot\phi)dm=\int_{N}fu_{\gamma}(\phi\circ\gamma)dm=\frac{1}{e ^{t_{\gamma}d_{0}}}\int_{N}f(\phi\circ\gamma)J\gamma dm=\frac{1}{e^{t_{\gamma} d_{0}}}\int_{N}(f\circ\gamma^{-1})\phi dm.\]
Similarly we have
\[\int_{N}f(\gamma_{j}\cdot\phi)dm=\frac{1}{e^{t_{\gamma_{j}}d_{0}}}\int_{N}(f \circ\gamma_{j}^{-1})\phi dm.\]
Now
\[\int_{N}f(\gamma_{j}\cdot\phi)dm-\int_{N}f(\gamma\cdot\phi)dm\] \[=\frac{1}{e^{t_{\gamma_{j}}d_{0}}}\int_{N}(f\circ\gamma_{j}^{-1}-f\circ\gamma^{-1})\phi dm+\big(\frac{1}{e^{t_{\gamma_{j}}d_{0}}}-\frac{1}{e^{t_{\gamma}d_{0}}}\big)\int_{N}(f\circ\gamma^{-1})\phi dm.\]
As \(t_{\gamma_{j}}\to t_{\gamma}\), the second term above clearly goes to \(0\) as \(j\to\infty\). Since \(f\) is continuous with compact support and \(\gamma_{j}\) converges to \(\gamma\) uniformly on compact subsets of \(N\), there is a compact subset \(F\subset N\) and a quantity \(\epsilon_{j}\to 0\) such that \(\sup\{|f\circ\gamma_{j}^{-1}(n)-f\circ\gamma^{-1}(n)|:n\in F\}<\epsilon_{j}\) and \(f\circ\gamma_{j}^{-1}(n)-f\circ\gamma^{-1}(n)=0\) for all \(n\in N\backslash F\) and all sufficiently large \(j\). We have
\[|\int_{N}(f\circ\gamma_{j}^{-1}-f\circ\gamma^{-1})\phi dm|\leq\int_{F}|f\circ \gamma_{j}^{-1}-f\circ\gamma^{-1}||\phi|dm\leq\epsilon_{j}\int_{F}|\phi|dm\to 0\]
and hence \(\int_{N}f(\gamma_{j}\cdot\phi)dm\to\int_{N}f(\gamma\cdot\phi)dm\).
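For completeness, here is a sketch of the Jacobian formula \(J\gamma(w,h)=e^{t_{\gamma}d_{0}}u_{\gamma}(w,h)\) used in the proof. Since \(m\) and the product of the Lebesgue measure on \(W=\mathbb{R}\) with the Hausdorff measure of the base are both Haar measures, they agree up to a multiplicative constant, which cancels in the Jacobian. The fiber map \(w\mapsto\gamma^{\mathbb{R}}(w,h)\) has derivative \(e^{t_{\gamma}}u_{\gamma}(w,h)\), while \(\gamma^{H}\) is a similarity with constant \(e^{t_{\gamma}}\) for \(\bar{d}_{CC}^{1/\alpha}\) and the base has Hausdorff dimension \(d_{0}-1\), so

\[J\gamma(w,h)=e^{t_{\gamma}}u_{\gamma}(w,h)\cdot e^{t_{\gamma}(d_{0}-1)}=e^{t_{\gamma}d_{0}}u_{\gamma}(w,h).\]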
Since \(u_{\gamma}\) takes values in the interval \([1/\Lambda,\Lambda]\), we see that \(||\phi||/\Lambda\leq||\gamma\cdot\phi||\leq\Lambda||\phi||\) for any \(\phi\in L^{\infty}(N)\) and any \(\gamma\in\Gamma\). It follows that every \(\Gamma\) orbit is a bounded subset of \(L^{\infty}(N)\). Let \(\phi_{0}\) be the constant function \(1\) on \(N\) and \(K\) be the closure of the convex hull of \(\Gamma^{*}\cdot\phi_{0}\). Then \(K\) is a compact, convex subset in \(L^{\infty}(N)\). Since \(\Gamma\) is amenable, Day's fixed point theorem implies that \(\Gamma\) has a fixed point \(u\) in \(K\).
Notice that \(\gamma\cdot\phi_{0}=u_{\gamma}\) and so is a measurable function with values in \([1/\Lambda,\Lambda]\). It follows that every element in \(K\), in particular \(u\), takes values in \([1/\Lambda,\Lambda]\) a.e. By Fubini's theorem, for a.e. \(h\in H\), the map \(\mathbb{R}\to\mathbb{R},w\mapsto u(w,h)\), is measurable and takes values in \([1/\Lambda,\Lambda]\) for a.e. \(w\in W\).
Since \(u\) is a fixed point of \(\Gamma\), for each \(\gamma\in\Gamma\), we have \(u(w,h)=u_{\gamma}(w,h)u(\gamma^{\mathbb{R}}(w,h),\gamma^{H}(h))\) for a.e. \((w,h)\in W\times H\). By Fubini's theorem, for a.e. \(h\in H\), \(u(w,h)=u_{\gamma}(w,h)u(\gamma^{\mathbb{R}}(w,h),\gamma^{H}(h))\) for a.e. \(w\in W\).
Let \(\Gamma_{0}\subset\Gamma\) be a countable dense subgroup. There is a \(\Gamma_{0}\)-invariant full measure subset \(U\subset H\) with the following properties:
(1) for each \(h\in U\) and each \(\gamma\in\Gamma_{0}\), the equality \(u(w,h)=u_{\gamma}(w,h)u(\gamma^{\mathbb{R}}(w,h),\gamma^{H}(h))\) holds for a.e. \(w\in W\).
(2) for each \(h\in U\), the map \(\mathbb{R}\to\mathbb{R},w\mapsto u(w,h)\), is measurable and takes values in \([1/\Lambda,\Lambda]\) for a.e. \(w\in W\).
For each \(h\in U\) we define a function \(v_{h}:\mathbb{R}\to\mathbb{R}\) by \(v_{h}(w)=\int_{0}^{w}u(s,h)ds\). Clearly \(v_{h}\) is \(\Lambda\)-biLipschitz. We also define \(G_{0}:W\times U\to W\times U\) by \(G_{0}(w,h)=(v_{h}(w),h)\). We shall show that
(1) \(G_{0}\) is biLipschitz (Lemma 9.5) and so admits a biLipschitz extension \(\bar{G}_{0}:N\to N\);
(2) \(\bar{G}_{0}\Gamma\bar{G}_{0}^{-1}\) acts by similarities along the cosets of \(W\) (Lemma 9.6).
**Lemma 9.2**.: _The function \(v_{h}\) satisfies the following equation for all \(\gamma\in\Gamma_{0}\) and all \(w_{1},w_{2}\in W\):_
\[v_{\gamma^{H}(h)}(\gamma^{\mathbb{R}}(w_{1},h))-v_{\gamma^{H}(h)}(\gamma^{ \mathbb{R}}(w_{2},h))=e^{t_{\gamma}}(v_{h}(w_{1})-v_{h}(w_{2})).\]
Proof.: For each \(h\in U\), \(u_{\gamma}(w,h)=\frac{\partial\tilde{\gamma}}{\partial w}(w,h)\) for a.e. \(w\in W\). On the other hand, by the choice of \(U\), for each \(h\in U\) and each \(\gamma\in\Gamma_{0}\), the equality \(u(w,h)=u_{\gamma}(w,h)u(\gamma^{\mathbb{R}}(w,h),\gamma^{H}(h))\) holds for a.e. \(w\in W\). Now we have
\[v_{\gamma^{H}(h)}(\gamma^{\mathbb{R}}(w_{1},h))-v_{\gamma^{H}(h) }(\gamma^{\mathbb{R}}(w_{2},h)) =\int_{\gamma^{\mathbb{R}}(w_{2},h)}^{\gamma^{\mathbb{R}}(w_{1},h )}u(s,\gamma^{H}(h))ds\] \[=e^{t_{\gamma}}\int_{w_{2}}^{w_{1}}u(\gamma^{\mathbb{R}}(t,h), \gamma^{H}(h))\frac{\partial\tilde{\gamma}}{\partial t}(t,h)dt\] \[=e^{t_{\gamma}}\int_{w_{2}}^{w_{1}}u(t,h)dt=e^{t_{\gamma}}(v_{h}( w_{1})-v_{h}(w_{2})).\]
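In the second equality above we substituted \(s=\gamma^{\mathbb{R}}(t,h)=e^{t_{\gamma}}\tilde{\gamma}(t,h)\), so that for a.e. \(t\)

\[ds=e^{t_{\gamma}}\frac{\partial\tilde{\gamma}}{\partial t}(t,h)dt,\]

while the third equality uses the fixed point identity \(u(t,h)=u_{\gamma}(t,h)u(\gamma^{\mathbb{R}}(t,h),\gamma^{H}(h))\), valid for a.e. \(t\in W\) by the choice of \(U\).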
Let \(C_{b}(\Gamma)\) be the space of bounded continuous functions on \(\Gamma\). Let \(M_{\Gamma}\) be the set of means on \(C_{b}(\Gamma)\) of the form
\[\sum_{j=1}^{n}t_{j}\delta_{\gamma_{j}}\ \ \ \ (n\in\mathbb{N},\gamma_{1},\cdots, \gamma_{n}\in\Gamma,\;t_{1},\cdots,t_{n}\geq 0,\;t_{1}+\cdots+t_{n}=1),\]
where for \(\gamma\in\Gamma\), \(\delta_{\gamma}:C_{b}(\Gamma)\to\mathbb{C}\) is the linear functional given by \(\delta_{\gamma}(f)=f(\gamma)\). Then \(M_{\Gamma}\) is \(w^{*}\)-dense in the set of all means on \(C_{b}(\Gamma)\), see [10], page 29. Let \(\mathbb{Q}M_{\Gamma_{0}}\) be the set of means on \(C_{b}(\Gamma)\) of the form
\[\sum_{j=1}^{n}t_{j}\delta_{\gamma_{j}}\ \ \ \ (n\in\mathbb{N},\gamma_{1},\cdots, \gamma_{n}\in\Gamma_{0},\;t_{1},\cdots,t_{n}\geq 0,\;t_{j}\in\mathbb{Q},\;t_{1}+ \cdots+t_{n}=1).\]
Since \(\Gamma_{0}\) is dense in \(\Gamma\) and \(\mathbb{Q}\) is dense in \(\mathbb{R}\), we see that \(\mathbb{Q}M_{\Gamma_{0}}\) is also \(w^{*}\)-dense in the set of all means on \(C_{b}(\Gamma)\). Let \(K_{0}\) be the set of points in \(K\) of the form
\[\sum_{j=1}^{n}t_{j}u_{\gamma_{j}}\ \ \ \ (n\in\mathbb{N},\gamma_{1},\cdots, \gamma_{n}\in\Gamma_{0},\;t_{1},\cdots,t_{n}\geq 0,\;t_{j}\in\mathbb{Q},\;t_{1}+ \cdots+t_{n}=1).\]
**Lemma 9.3**.: _There is a sequence \(\{u_{i}\}\) in \(K_{0}\) that converges to \(u\) in the weak\({}^{*}\) topology._
Proof.: This follows from the proof of Day's fixed point theorem, see page 29 of [10]. The reader should have a copy of that proof in front of him/her while reading this proof. In that proof, we pick \(x_{0}=\phi_{0}\). Let \(A(K)\) be the set of all continuous affine functions on \(K\). For \(\psi\in A(K)\), define
\[\phi_{\psi}:\Gamma\to\mathbb{C},\ \ \gamma\mapsto\psi(\gamma\cdot\phi_{0})= \psi(u_{\gamma}).\]
Let \(m\) be a left invariant mean on \(C_{b}(\Gamma)\). Since \(\mathbb{Q}M_{\Gamma_{0}}\) is \(w^{*}\)-dense in the set of all means on \(C_{b}(\Gamma)\), there is a net \(\{m_{\beta}\}\) in \(\mathbb{Q}M_{\Gamma_{0}}\) that converges to \(m\) in the weak\({}^{*}\) topology. For each \(m_{\beta}=\sum_{j=1}^{n}t_{j}\delta_{\gamma_{j}}\), define \(u_{\beta}\in K_{0}\) by \(u_{\beta}=\sum_{j=1}^{n}t_{j}u_{\gamma_{j}}\). One checks that \(\langle\phi_{\psi},m_{\beta}\rangle=\psi(u_{\beta})\) for all \(\psi\in A(K)\). Since \(K\) is compact, \(u_{\beta}\) sub-converges to some \(u\in K\). It is proved on page 30 of [10] that this \(u\) is a fixed point of \(\Gamma\). Finally, since \(L^{1}(N)\) is separable, by the sequential Banach-Alaoglu theorem,
closed balls in \(L^{\infty}(N)=(L^{1}(N))^{*}\) are metrizable. So \(K\) is compact and metrizable. Therefore we can pick a sequence \(\{u_{i}\}\) from the net \(\{u_{\beta}\}\) that converges to \(u\) in the weak\({}^{*}\) topology.
**Lemma 9.4**.: _There is a full measure subset \(U^{\prime}\subset U\) and a constant \(C>0\) such that \(|v_{h_{1}}(w)-v_{h_{2}}(w)|\leq C\cdot\bar{d}_{CC}(h_{1},h_{2})^{\frac{1}{ \alpha}}\) for all \(h_{1},h_{2}\in U^{\prime}\) and all \(w\in W\)._
Proof.: We consider the following \(D\)-homogeneous distance \(d\) on \(N\):
\[d((w_{1},h_{1}),(w_{2},h_{2}))=|w_{1}-w_{2}|+\bar{d}_{CC}(h_{1},h_{2})^{\frac{1} {\alpha}}.\]
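With the convention above that \(e^{tD}\) scales \(\bar{d}_{CC}\) by \(e^{\alpha t}\) (and the \(W\)-coordinate by \(e^{t}\), as \(W=V_{1}\)), this \(d\) is indeed \(D\)-homogeneous:

\[d(e^{tD}(w_{1},h_{1}),e^{tD}(w_{2},h_{2}))=e^{t}|w_{1}-w_{2}|+\big(e^{\alpha t}\bar{d}_{CC}(h_{1},h_{2})\big)^{\frac{1}{\alpha}}=e^{t}d((w_{1},h_{1}),(w_{2},h_{2})).\]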
Recall that each \(\gamma\in\Gamma\) has the expression \(\gamma(w,h)=(e^{t_{\gamma}}\tilde{\gamma}(w,h),\gamma^{H}(h))\) and \(\gamma^{H}:(H,\bar{d}_{CC})\to(H,\bar{d}_{CC})\) is a similarity with similarity constant \(e^{\alpha t_{\gamma}}\). Since \(\Gamma\) is a uniform quasisimilarity group, by increasing \(\Lambda\) if necessary we may assume each \(\gamma\) is a \((\Lambda,e^{t_{\gamma}})\) quasisimilarity of \((N,d)\). It follows that \(|\tilde{\gamma}(w,h_{1})-\tilde{\gamma}(w,h_{2})|\leq\Lambda\bar{d}_{CC}(h_{1},h_{2})^{\frac{1}{\alpha}}\) for all \(h_{1},h_{2}\in H\), all \(w\in W\) and all \(\gamma\in\Gamma\).
For each integer \(n\geq 1\), let \(B_{n}\subset H\) be the ball with radius \(n\) and center the origin (the identity element of \(H\)). Set \(D_{n}=[-n,n]\times B_{n}\subset W\times H\). Notice that \(u_{i}|_{D_{n}},u|_{D_{n}}\in L^{\infty}(D_{n})\) and that \(u_{i}|_{D_{n}}\to u|_{D_{n}}\) in the weak\({}^{*}\) topology. Since \(D_{n}\) has finite measure we see that \(u_{i}|_{D_{n}}\to u|_{D_{n}}\) weakly in \(L^{p}(D_{n})\) for any \(1\leq p<\infty\). By Mazur's lemma, there is a sequence \(\tilde{u}_{j}\) of convex combinations of the \(u_{i}\)'s such that \(\tilde{u}_{j}|_{D_{n}}\) converges to \(u|_{D_{n}}\) in the \(L^{p}\) norm. Then after taking a subsequence if necessary we may assume that \(\tilde{u}_{j}|_{D_{n}}\) converges to \(u|_{D_{n}}\) a.e. By Fubini, there is a null set \(F_{n}\subset H\) such that for any \(h\in B_{n}\backslash F_{n}\), \(\tilde{u}_{j}(w,h)\to u(w,h)\) for a.e. \(w\in[-n,n]\). On the other hand, as each \(u_{i}\) is a convex combination of the \(u_{\gamma}\)'s, each \(\tilde{u}_{j}\) is also a convex combination of the \(u_{\gamma}\)'s. Write \(\tilde{u}_{j}=\sum_{i=1}^{k_{j}}t_{j,i}u_{\gamma_{j,i}}\) for some \(t_{j,i}\geq 0\) with \(\sum_{i}t_{j,i}=1\) and \(\gamma_{j,i}\in\Gamma\). Since each \(u_{\gamma}\) is bounded above by \(\Lambda\), so is \(\tilde{u}_{j}\). It follows from the dominated convergence theorem that \(\int_{0}^{w}\tilde{u}_{j}(s,h)ds\to\int_{0}^{w}u(s,h)ds=v_{h}(w)\) for any \(w\in[-n,n]\) and any \(h\in B_{n}\backslash F_{n}\).
Set \(U^{\prime}=U\backslash\cup_{n}F_{n}\). Then \(U^{\prime}\) has full measure in \(H\). Let \(h_{1},h_{2}\in U^{\prime}\) and \(w\in W\). Pick a sufficiently large \(n\) such that \((w,h_{1}),(w,h_{2})\in D_{n}\). Now
\[|v_{h_{2}}(w)-v_{h_{1}}(w)| = \lim_{j\to\infty}\left|\int_{0}^{w}\left(\tilde{u}_{j}(s,h_{2})-\tilde{u}_{j}(s,h_{1})\right)ds\right|\] \[= \lim_{j\to\infty}\left|\sum_{i}t_{j,i}\int_{0}^{w}(u_{\gamma_{j,i}}(s,h_{2})-u_{\gamma_{j,i}}(s,h_{1}))ds\right|\] \[= \lim_{j\to\infty}\left|\sum_{i}t_{j,i}(\tilde{\gamma}_{j,i}(w,h_{2})-\tilde{\gamma}_{j,i}(0,h_{2})-\tilde{\gamma}_{j,i}(w,h_{1})+\tilde{\gamma}_{j,i}(0,h_{1}))\right|\] \[\leq 2\Lambda\bar{d}_{CC}^{\frac{1}{\alpha}}(h_{1},h_{2}).\]
**Lemma 9.5**.: _The map \(G_{0}|_{W\times U^{\prime}}:(W\times U^{\prime},d)\to(W\times U^{\prime},d)\) is biLipschitz._
Proof.: We first show \(G_{0}|_{W\times U^{\prime}}\) is Lipschitz. Let \((w_{1},h_{1}),(w_{2},h_{2})\in W\times U^{\prime}\). Then
\[d(G_{0}(w_{1},h_{1}),G_{0}(w_{2},h_{2})) =d((v_{h_{1}}(w_{1}),h_{1}),(v_{h_{2}}(w_{2}),h_{2}))\] \[=|v_{h_{1}}(w_{1})-v_{h_{2}}(w_{2})|+\bar{d}^{\frac{1}{\alpha}}_{CC}(h_{1},h_{2})\] \[\leq|v_{h_{1}}(w_{1})-v_{h_{2}}(w_{1})|+|v_{h_{2}}(w_{2})-v_{h_{2}}(w_{1})|+\bar{d}^{\frac{1}{\alpha}}_{CC}(h_{1},h_{2})\] \[\leq(2\Lambda+1)\bar{d}^{\frac{1}{\alpha}}_{CC}(h_{1},h_{2})+\Lambda|w_{1}-w_{2}|\] \[\leq(2\Lambda+1)d((w_{1},h_{1}),(w_{2},h_{2})).\]
Next we show \(G_{0}^{-1}|_{W\times U^{\prime}}\) is also Lipschitz. First assume \(|w_{1}-w_{2}|\leq 4\Lambda^{2}\bar{d}^{\frac{1}{\alpha}}_{CC}(h_{1},h_{2})\). Then
\[d(G_{0}(w_{1},h_{1}),G_{0}(w_{2},h_{2}))\geq\bar{d}^{\frac{1}{ \alpha}}_{CC}(h_{1},h_{2})\geq\frac{1}{4\Lambda^{2}+1}d((w_{1},h_{1}),(w_{2},h _{2})).\]
Now we assume \(|w_{1}-w_{2}|\geq 4\Lambda^{2}\bar{d}^{\frac{1}{\alpha}}_{CC}(h_{1},h_{2})\). Then
\[d(G_{0}(w_{1},h_{1}),G_{0}(w_{2},h_{2})) =|v_{h_{1}}(w_{1})-v_{h_{2}}(w_{2})|+\bar{d}^{\frac{1}{\alpha}}_{CC}(h_{1},h_{2})\] \[\geq|v_{h_{1}}(w_{1})-v_{h_{1}}(w_{2})|-|v_{h_{1}}(w_{2})-v_{h_{2}}(w_{2})|+\bar{d}^{\frac{1}{\alpha}}_{CC}(h_{1},h_{2})\] \[\geq\frac{1}{\Lambda}|w_{1}-w_{2}|-2\Lambda\bar{d}^{\frac{1}{\alpha}}_{CC}(h_{1},h_{2})+\bar{d}^{\frac{1}{\alpha}}_{CC}(h_{1},h_{2})\] \[\geq\frac{1}{2\Lambda}|w_{1}-w_{2}|+\bar{d}^{\frac{1}{\alpha}}_{CC}(h_{1},h_{2})\] \[\geq\frac{1}{2\Lambda}d((w_{1},h_{1}),(w_{2},h_{2})).\]
Since \(W\times U^{\prime}\) is dense in \(N\), Lemma 9.5 implies \(G_{0}|_{W\times U^{\prime}}\) extends to a biLipschitz map \(\bar{G}_{0}:N\to N\).
**Lemma 9.6**.: _For each \(\gamma\in\Gamma\), \(\bar{G}_{0}\circ\gamma\circ\bar{G}_{0}^{-1}\) acts by similarities with similarity constant \(e^{t_{\gamma}}\) along the cosets of \(W\)._
Proof.: Notice that \(G_{0}^{-1}\) has the expression \(G_{0}^{-1}(w,h)=(v_{h}^{-1}(w),h)\). As above write \(\gamma\) as \(\gamma(w,h)=(\gamma^{\mathbb{R}}(w,h),\gamma^{H}(h))\). It follows that \(G_{0}\circ\gamma\circ G_{0}^{-1}(w,h)=(v_{\gamma^{H}(h)}(\gamma^{\mathbb{R}}(v_{h}^{-1}(w),h)),\gamma^{H}(h)).\) Let \(h\in U^{\prime}\cap(\gamma^{H})^{-1}(U^{\prime})\), \(w_{1},w_{2}\in W\). First assume \(\gamma\in\Gamma_{0}\). By Lemma 9.2 we have
\[|v_{\gamma^{H}(h)}(\gamma^{\mathbb{R}}(v_{h}^{-1}(w_{1}),h))-v_{\gamma^{H}(h)} (\gamma^{\mathbb{R}}(v_{h}^{-1}(w_{2}),h))|=e^{t_{\gamma}}|v_{h}(v_{h}^{-1}(w_ {1}))-v_{h}(v_{h}^{-1}(w_{2}))|=e^{t_{\gamma}}|w_{1}-w_{2}|.\]
Since \(U^{\prime}\cap(\gamma^{H})^{-1}(U^{\prime})\) has full measure in \(H\), we see that the map \(\bar{G}_{0}\circ\gamma\circ\bar{G}_{0}^{-1}\) restricted to almost every coset of \(W\) is a similarity with similarity constant \(e^{t_{\gamma}}\). Since \(\bar{G}_{0}\circ\gamma\circ\bar{G}_{0}^{-1}\) is biLipschitz, we see that \(\bar{G}_{0}\circ\gamma\circ\bar{G}_{0}^{-1}\) restricted to every coset of \(W\) is a similarity with similarity constant \(e^{t_{\gamma}}\).
Now consider a general \(\gamma\in\Gamma\). There is a sequence \(\gamma_{j}\in\Gamma_{0}\) that converges to \(\gamma\) uniformly on compact subsets of \(N\). Since each \(\bar{G}_{0}\circ\gamma_{j}\circ\bar{G}_{0}^{-1}\) restricted to every coset of \(W\) is a similarity with similarity constant \(e^{t_{\gamma_{j}}}\), the same is true for \(\bar{G}_{0}\circ\gamma\circ\bar{G}_{0}^{-1}\), with constant \(e^{t_{\gamma}}\).
By Lemma 9.6, after a conjugation if necessary, we may assume \(\tilde{\gamma}(\cdot,h)\) is a translation of \(\mathbb{R}\) for each \(\gamma\in\Gamma\) and \(h\in H\). Now we can write \(\gamma\) as
\[\gamma(w,h)=\gamma(0,0)\cdot e^{t_{\gamma}D}(w+s_{\gamma}(h),Bh),\]
where \(s_{\gamma}:H\to\mathbb{R}\) is a map satisfying \(s_{\gamma}(0)=0\), and \(B:H\to H\) is a graded automorphism of \(H\) that is also an isometry with respect to \(\bar{d}_{CC}\). Since \(\{e^{-t_{\gamma}D}\circ\gamma|\gamma\in\Gamma\}\) is a family of uniformly biLipschitz maps, we see that \(\{s_{\gamma}|\gamma\in\Gamma\}\) is a bounded subset of
\[E_{1}=\{s:H\to\mathbb{R}\text{ is }\tfrac{1}{\alpha}\text{-Hölder},\;s(0)=0\}.\]
Now one can apply Day's theorem to a compact convex subset \(K\) of \(E_{1}\) to eliminate \(s_{\gamma}\) as in Section 7 (much easier now). After this, each \(\gamma\) has the form \(\gamma(w,h)=\gamma(0,0)\cdot e^{t_{\gamma}D}(w,Bh)\), which is a similarity of \((N,d)\).
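Concretely, that each such \(\gamma\) is a similarity with constant \(e^{t_{\gamma}}\) is a routine check, using the left-invariance and \(D\)-homogeneity of \(d\) together with the fact that \(B\) is an isometry of \((H,\bar{d}_{CC})\):

\[d(\gamma(w_{1},h_{1}),\gamma(w_{2},h_{2}))=e^{t_{\gamma}}d((w_{1},Bh_{1}),(w_{2},Bh_{2}))=e^{t_{\gamma}}d((w_{1},h_{1}),(w_{2},h_{2})).\]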
We have finished the proof of Theorem 1.2 in the case when \(\dim(W)=1\), \(\dim(N/W)\geq 2\).
As already indicated above, the arguments in this section are valid for product metric spaces \(\mathbb{R}\times Y\):
**Theorem 9.7**.: _Let \(Y\) be a proper metric space with a distance \(d\). Let \(\alpha\geq 1\) and \(\tilde{d}\) be the distance on \(\mathbb{R}\times Y\) defined by \(\tilde{d}((w_{1},y_{1}),(w_{2},y_{2}))=|w_{1}-w_{2}|+d(y_{1},y_{2})^{\frac{1}{ \alpha}}\). Let \(\Gamma\) be an amenable uniform quasisimilarity group of \((\mathbb{R}\times Y,\tilde{d})\). Suppose the action of \(\Gamma\) on \(\mathbb{R}\times Y\) permutes the subsets \(\{\mathbb{R}\times\{y\}:y\in Y\}\) and induces a similarity action on \((Y,d)\). Then \(\Gamma\) can be biLipschitzly conjugated into the similarity group of \((\mathbb{R}\times Y,\tilde{d})\)._
Proof.: The argument in this section shows that, after a conjugation, \(\Gamma\) acts on the "fibers" \(\mathbb{R}\times\{y\}\) by similarities. At this point the elements of \(\Gamma\) may still contain shear components \(s\). One can get rid of the shear components by modifying the above arguments or apply Theorem 3.3 from [4].
## 10. Case \(\dim(N/W)=1\)
In this section we consider the case when \(\dim(N/W)=1\).
As before we shall make a biLipschitz conjugation so that \(\Gamma\) is a fiber similarity group. We first need to show that (by replacing \(\Gamma\) with a biLipschitz conjugate if necessary) \(\Gamma\) induces an action by affine automorphisms on the quotient. When \(\dim(N/W)\geq 2\), we can apply a Tukia type theorem to achieve this. We need to consider the case \(\dim(N/W)=1\) separately since we cannot apply a Tukia type theorem in this case. The theorem of Farb-Mosher [10] says that a uniform quasisimilarity group of the real line is biLipschitz conjugate to a similarity group. Although the action of \(\Gamma\) on \(N\) induces a uniform quasisimilarity action of \(\Gamma\) on the quotient \(N/W=\mathbb{R}\), it is not clear how to lift the biLipschitz conjugating map of \(\mathbb{R}\) to a biLipschitz map of \(N\).
We consider two cases:
Case I. There is a one dimensional ideal \(H\) of \(\mathfrak{n}\) such that \(H\subset V_{\alpha}\) and \(H\cap W_{\alpha}=\{0\}\). In this case \(\mathfrak{n}=\mathfrak{w}\oplus H\) is a direct sum of two ideals and \(N=W\times H\) is a direct product.
Case II. \(\mathfrak{n}\) does not admit a direct sum decomposition as in Case I.
Case I is easy to deal with. Since \(\Gamma\) permutes the cosets of \(W\), it induces an action on the quotient \(N/W=H=\mathbb{R}\) and yields a subgroup \(\overline{\Gamma}\subset\operatorname{Homeo}(\mathbb{R})\) which is a uniform quasisimilarity group. The result of Farb-Mosher [10] says that there is a biLipschitz map \(f_{0}:\mathbb{R}\to\mathbb{R}\) such that \(f_{0}\overline{\Gamma}f_{0}^{-1}\) consists of similarities. Now let \(F_{0}=\operatorname{Id}\times f_{0}:N=W\times H\to W\times H=N\). Then \(F_{0}\) is a biLipschitz map of \(N\) and the conjugate \(F_{0}\Gamma F_{0}^{-1}\) has the property that its induced action on the quotient \(N/W=H\) is by similarities. If \(\dim(W)\geq 2\), then we can apply Theorem 3.7 to conjugate \(\Gamma\) into a fiber similarity group. If \(\dim(W)=1\), then Section 9 can now be used to finish the proof. This finishes Case I.
We next consider Case II. In this case we shall prove that the induced action of \(\Gamma\) on the quotient \(N/W=\mathbb{R}\) is already by similarities and so there is no need to conjugate. So we assume \(\mathfrak{n}\) does
not admit a direct sum decomposition as in Case I. We notice that in this case \(\dim(W)\geq 2\). After applying Theorem 3.7 we may assume that the action of \(\Gamma\) on the cosets of \(W\) is by similarities. By the discussion in Section 5, for each \(\gamma\in\Gamma\), there is a graded automorphism \(\phi_{\gamma}\) of \(W\) such that \((L_{\gamma(g)^{-1}}\circ\gamma\circ L_{g})|_{W}=\phi_{\gamma}\) for any \(g\in N\). The element \(\gamma\) induces a biLipschitz map \(\bar{\gamma}:N/W=\mathbb{R}\to N/W\).
**Claim**\(\bar{\gamma}\) has constant derivative a.e. and so is a similarity.
Suppose the contrary holds. Then there are two points \(\bar{g}_{1}\neq\bar{g}_{2}\in N/W\) such that \(\bar{\gamma}\) is differentiable at both \(\bar{g}_{1}\) and \(\bar{g}_{2}\) and \(\bar{b}_{1}\neq\bar{b}_{2}\), where \(\bar{b}_{1}\), \(\bar{b}_{2}\) are the derivatives of \(\bar{\gamma}\) at \(\bar{g}_{1}\) and \(\bar{g}_{2}\) respectively. Let \(\phi_{i}:N\to N\) be a blow-up of \(\gamma\) at \(g_{i}\); that is, there is a sequence \(t_{j}\to\infty\) such that the sequence of maps \(e^{t_{j}D}\circ L_{(\gamma(g_{i}))^{-1}}\circ\gamma\circ L_{g_{i}}\circ e^{-t_{j}D}:N\to N\) converges uniformly on compact subsets to \(\phi_{i}\). Such a sequence \(\{t_{j}\}\) exists by Arzela-Ascoli as \(\gamma\) is biLipschitz. Different choices of the sequences \(\{t_{j}\}\) may yield different limits \(\phi_{i}\). But all we need is one such limit. Each \(\phi_{i}\) still acts on the cosets of \(W\) by \(\phi_{\gamma}\), but now has the additional property that its induced action on \(N/W=\mathbb{R}\) is multiplication by \(\bar{b}_{i}\) (as \(\bar{\gamma}\) is differentiable at \(\bar{g}_{i}\) with derivative \(\bar{b}_{i}\)). Let \(X\in V_{\alpha}\backslash W_{\alpha}\). The Case II assumption implies that \([X,W_{1}]\neq 0\) and so \(\alpha\) must be an integer as \([X,W_{1}]\subset W\cap V_{1+\alpha}\). By Lemma 5.4, there is an injective linear map \(B_{i}:\mathbb{R}X\to V_{\alpha}\) (\(B_{i}\) also depends on \(\gamma\)) such that \(B_{i}\) induces the linear map \(N/W=\mathbb{R}\to N/W,t\mapsto\bar{b}_{i}t\) and \(d\phi_{\gamma}[X,w]=[B_{i}X,(d\phi_{\gamma})w]\) for all \(w\in\mathfrak{w}\). It follows that \([(B_{1}-B_{2})X,(d\phi_{\gamma})w]=0\) for all \(w\in\mathfrak{w}\). As \(d\phi_{\gamma}\) is an automorphism of \(\mathfrak{w}\), we have \([(B_{1}-B_{2})X,\mathfrak{w}]=0\). Notice that \((B_{1}-B_{2})X=(\bar{b}_{1}-\bar{b}_{2})X+Y\) for some \(Y\in W_{\alpha}\). Since \(\bar{b}_{1}-\bar{b}_{2}\neq 0\), \(X\notin\mathfrak{w}\) and \(\mathfrak{w}\) has codimension one in \(\mathfrak{n}\), we see that \(\mathfrak{n}\) is spanned by \(\mathfrak{w}\) and \((B_{1}-B_{2})X\). Let \(H^{\prime}\) be the subspace of \(\mathfrak{n}\) spanned by \((B_{1}-B_{2})X\). Then we have that \(\mathfrak{n}=\mathfrak{w}\oplus H^{\prime}\) is a direct sum of two ideals, contradicting our assumption. This completes Case II.
We have shown that \(\Gamma\) is a fiber similarity group, after possibly replacing \(\Gamma\) with a biLipschitz conjugate. Now the arguments in Sections 5-8 can be applied to show that \(\Gamma\) can be conjugated into \(\operatorname{Sim}(N,d_{0})\) where \(d_{0}\) is a fixed maximally symmetric \(D\)-homogeneous distance on \(N\).
Now we have finished the proof of Theorem 1.2 in the case when \(\dim(N/W)=1\). Combining this with Section 8 and Section 9 we have completed the proof of Theorem 1.2.
One can use semi-direct product (see Appendix A) to construct many examples of Case II. For instance, let \(\mathfrak{n}=\mathfrak{e}\rtimes\mathbb{R}\), where \(\mathfrak{e}\) is the Engel algebra (with basis \(e_{0},e_{1},e_{2},e_{3}\) and only non-trivial brackets \([e_{0},e_{j}]=e_{j+1}\), \(j=1,2\)) and \(\mathbb{R}=\mathbb{R}X\) acts on \(\mathfrak{e}\) by \([X,e_{1}]=e_{3}\) (all other brackets are \(0\)).
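One checks directly that the prescription \([X,e_{1}]=e_{3}\) extends to a derivation of \(\mathfrak{e}\): on the only nontrivial brackets,

\[[X,[e_{0},e_{1}]]=[X,e_{2}]=0=[[X,e_{0}],e_{1}]+[e_{0},[X,e_{1}]]=[e_{0},e_{3}],\qquad[X,[e_{0},e_{2}]]=[X,e_{3}]=0=[[X,e_{0}],e_{2}]+[e_{0},[X,e_{2}]].\]

Moreover \([X,W_{1}]=\mathbb{R}e_{3}\neq 0\) (here \(e_{0},e_{1}\) span the first layer of \(\mathfrak{e}\)), which is precisely the condition exploited in Case II, with \(\alpha=2\).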
Finally we prove a Tukia-type theorem for the product of finitely many Carnot groups (with possibly different scalings on the factors).
**Theorem 10.1**.: _For \(1\leq i\leq m\), let \(\alpha_{i}\geq 1\) be a constant, \(N_{i}\) a Carnot group and \(d_{i}\) a maximally symmetric Carnot metric on \(N_{i}\). Set \(N=\prod_{i=1}^{m}N_{i}\) and let \(d\) be the distance on \(N\) given by \(d((x_{i}),(y_{i}))=\sum_{i}d_{i}^{\frac{1}{\alpha_{i}}}(x_{i},y_{i})\). Let \(\Gamma\) be a uniform quasisimilarity group of \((N,d)\). If the action of \(\Gamma\) on the space of distinct pairs of \(N\) is cocompact, then \(\Gamma\) can be conjugated by a biLipschitz map into the similarity group of \((N,d)\). If \(\dim(N_{i})=1\) for some \(i\) we further assume \(\Gamma\) is amenable._
Proof.: By combining those \(N_{i}\) where the corresponding \(\alpha_{i}\) are equal we may assume the \(\alpha_{i}\)'s are all distinct. After reordering we may further assume \(\alpha_{1}<\alpha_{2}<\cdots<\alpha_{m}\). Set \(M_{j}=\prod_{i=1}^{j}N_{i}\). Then \(\{M_{j}\}_{j}\) form the preserved subgroup sequence of \(N\). By Theorem 3.7 we may assume \(\Gamma\) acts on the fibers \(N_{j}=M_{j}/M_{j-1}\) by similarities for all \(j\) satisfying \(\dim(N_{j})\geq 2\). Denote \(H_{j}=\prod_{i=m-j}^{m}N_{i}\) and let \(\tilde{d}_{j}\) be the distance on \(H_{j}\) given by \(\tilde{d}_{j}((x_{i}),(y_{i}))=\sum_{i=m-j}^{m}d_{i}^{\frac{1}{\alpha_{i}}}(x_{i},y_{i})\). We next prove by induction on \(j\), starting from \(j=0\), that after a biLipschitz conjugation of \(\Gamma\) the induced action of \(\Gamma\) on \((H_{j},\tilde{d}_{j})\) is by similarities. For the base case we let \(j=0\). If \(\dim(N_{m})\geq 2\), then the induced
action on \((H_{0},\tilde{d}_{0})=(N_{m},d_{m}^{\frac{1}{\alpha_{m}}})\) is already by similarities. If \(\dim(N_{m})=1\), then the argument in Case I of this section applies.
For the induction step, assume \(j\geq 1\) and that the induced action on \((H_{j-1},\tilde{d}_{j-1})\) is by similarities. We need to show that, after a further biLipschitz conjugation, the induced action on \((H_{j},\tilde{d}_{j})\) is by similarities. If \(\dim(N_{m-j})=1\), then Theorem 9.7 implies there is a biLipschitz map \(f_{0}\) of \((H_{j},\tilde{d}_{j})\) which conjugates the induced action on \((H_{j},\tilde{d}_{j})\) to an action by similarities. Let \(F_{0}\) be the biLipschitz map of \(N\) which equals \(f_{0}\) on \(H_{j}\) and is the identity map on \(N_{i}\), \(1\leq i<m-j\). Then \(F_{0}\Gamma F_{0}^{-1}\) induces a similarity action on \((H_{j},\tilde{d}_{j})\).
Next we assume \(\dim(N_{m-j})\geq 2\). By the induction hypothesis the induced action on \((H_{j-1},\tilde{d}_{j-1})\) is by similarities. By the first paragraph, the action on the fibers \(N_{m-j}=M_{m-j}/M_{m-j-1}\) is by similarities. It follows that the induced action of each \(\gamma\) on \(H_{j}\) is given by: \(\gamma(x_{m-j},y)=\gamma(0)(A_{\gamma}(x_{m-j}s_{\gamma}(y)),B_{\gamma}y)\), where \(x_{m-j}\in N_{m-j}\), \(y\in H_{j-1}\), \(A_{\gamma}\) is an automorphism of \(N_{m-j}\) and is also a similarity with respect to \(d_{m-j}\), \(B_{\gamma}\) is a similarity of \((H_{j-1},\tilde{d}_{j-1})\) and \(s_{\gamma}:(H_{j-1},\tilde{d}_{j-1})\to(Z(N_{m-j}),d_{m-j}^{\frac{1}{\alpha_{m-j}}})\) is Lipschitz and satisfies \(s_{\gamma}(0)=0\). We identify the Lie group with its Lie algebra via the exponential map and write \(s_{\gamma}=\sum_{i}s_{\gamma,i}\), where \(s_{\gamma,i}:H_{j-1}\to Z_{i}(\mathfrak{n}_{m-j})=Z(\mathfrak{n}_{m-j})\cap\mathfrak{n}_{m-j,i}\) is the \(i\)-layer component of \(s_{\gamma}\) and \(\mathfrak{n}_{m-j}=\oplus_{i}\mathfrak{n}_{m-j,i}\) is the Carnot grading of \(\mathfrak{n}_{m-j}\). For \(i\geq 1\), let
\[E_{i}=\{c:(H_{j-1},\tilde{d}_{j-1})\to(Z_{i}(\mathfrak{n}_{m-j}),|\cdot|^{\frac{1}{\alpha_{m-j}}})\text{ is Lipschitz and }c(0)=0\},\]
where \(|\cdot|\) is a fixed Euclidean metric on \(Z_{i}(\mathfrak{n}_{m-j})\). It is easy to see that \(s_{\gamma,i}\in E_{i}\) for all \(\gamma\in\Gamma\). Furthermore, if \(s=\sum_{i}s_{i}\) with \(s_{i}:H_{j-1}\to Z_{i}(\mathfrak{n}_{m-j})\), then the shear map \(F_{s}:H_{j}\to H_{j}\), \(F_{s}(x_{m-j},y)=(x_{m-j}s(y),y)\) is biLipschitz if and only if \(s_{i}\in E_{i}\) for all \(i\). Now applying the arguments from Section 7 we can find a biLipschitz shear map \(f_{0}\) of \((H_{j},\tilde{d}_{j})\) that conjugates the induced action of \(\Gamma\) on \((H_{j},\tilde{d}_{j})\) to an action by similarities. Letting \(F_{0}\) be the biLipschitz map of \(N\) that agrees with \(f_{0}\) on \(H_{j}\) and is the identity on \(N_{i}\) with \(1\leq i<m-j\), we see that \(F_{0}\Gamma F_{0}^{-1}\) induces a similarity action on \((H_{j},\tilde{d}_{j})\).
## Appendix A Examples of Carnot-by-Carnot groups
In this appendix we shall give examples of Carnot-by-Carnot groups. The upshot is that such groups are abundant. These groups are extensions of a Carnot group by a Carnot group, and they correspond to Lie algebra extensions of the type Carnot algebra by Carnot algebra. The classical connection between group extension and group cohomology has a counterpart for Lie algebras: there is a connection between Lie algebra extension and Lie algebra cohomology, see [10], [11]. We briefly recall this connection here. Any extension \(0\to\mathfrak{w}\to\mathfrak{n}\to\mathfrak{h}\to 0\) induces a Lie algebra homomorphism \(\bar{\alpha}:\mathfrak{h}\to\operatorname{out}(\mathfrak{w}):=\operatorname{der}(\mathfrak{w})/\operatorname{ad}(\mathfrak{w})\). Conversely, any Lie algebra homomorphism \(\bar{\alpha}:\mathfrak{h}\to\operatorname{out}(\mathfrak{w})\) induces a Lie algebra homomorphism \(\beta:\mathfrak{h}\to\operatorname{der}(\mathcal{Z}(\mathfrak{w}))\), and there exists an extension \(0\to\mathfrak{w}\to\mathfrak{n}\to\mathfrak{h}\to 0\) inducing the given \(\bar{\alpha}\) if and only if a particular cohomology class in \(H^{3}(\mathfrak{h},\mathcal{Z}(\mathfrak{w}))\) vanishes, where the \(\mathfrak{h}\)-module structure on \(\mathcal{Z}(\mathfrak{w})\) is given by \(\beta\). When an extension as above does exist, the set of equivalence classes of extensions of \(\mathfrak{w}\) by \(\mathfrak{h}\) inducing the given \(\bar{\alpha}\) is in one-to-one correspondence with the elements of \(H^{2}(\mathfrak{h},\mathcal{Z}(\mathfrak{w}))\). It follows that for every example of an extension \(0\to\mathfrak{w}\to\mathfrak{n}\to\mathfrak{h}\to 0\), we get a collection of extensions parametrized by \(H^{2}(\mathfrak{h},\mathcal{Z}(\mathfrak{w}))\). If one starts with a split extension (which corresponds to a semi-direct product) and the second cohomology is non-trivial, then one gets extensions that are no longer split. This applies to the semi-direct product examples below.
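Concretely, in the split case this parametrization can be realized by the standard construction, which we sketch for the reader's convenience: given a homomorphism \(\beta:\mathfrak{h}\to\operatorname{der}(\mathfrak{w})\) and a \(2\)-cochain \(\omega:\Lambda^{2}\mathfrak{h}\to\mathcal{Z}(\mathfrak{w})\), equip the vector space \(\mathfrak{w}\oplus\mathfrak{h}\) with the bracket

\[[(w_{1},h_{1}),(w_{2},h_{2})]=\big([w_{1},w_{2}]+\beta(h_{1})w_{2}-\beta(h_{2})w_{1}+\omega(h_{1},h_{2}),\,[h_{1},h_{2}]\big).\]

The Jacobi identity holds precisely when \(\omega\) is a \(2\)-cocycle (for the \(\mathfrak{h}\)-module structure on \(\mathcal{Z}(\mathfrak{w})\) given by \(\beta\)), cohomologous cocycles give equivalent extensions, and a cocycle with nonzero class yields a non-split extension.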
We next describe some examples. Let \(\mathfrak{w}=W_{1}\oplus\cdots\oplus W_{m}\) and \(\mathfrak{h}=H_{1}\oplus\cdots\oplus H_{n}\) be two Carnot algebras.
**Trivial extension.** This is the direct sum \(\mathfrak{w}\oplus\mathfrak{h}\).
**Semi-direct product.** Recall that each Lie algebra homomorphism \(\mathfrak{h}\to\text{der}(\mathfrak{w})\) determines a semi-direct product \(\mathfrak{w}\rtimes\mathfrak{h}\). The trivial homomorphism yields the direct sum \(\mathfrak{w}\oplus\mathfrak{h}\). We shall construct nontrivial Lie algebra homomorphisms \(\mathfrak{h}\to\text{der}(\mathfrak{w})\). We first recall a result about free nilpotent Lie algebras, see [10].
**Proposition A.1**.: _Let \(F=F_{1}\oplus\cdots\oplus F_{t}\) be a \(t\)-step free nilpotent Lie algebra, and \(L:F_{1}\to F\) be any linear map. Then (1) \(L\) extends uniquely to a derivation \(d:F\to F\); (2) \(L\) extends uniquely to a Lie algebra homomorphism \(\phi:F\to F\)._
Assume \(\mathfrak{h}=F/I\) and \(\mathfrak{w}=\tilde{F}/\tilde{I}\), where \(F=F_{1}\oplus\cdots\oplus F_{t}\) and \(\tilde{F}=\tilde{F}_{1}\oplus\cdots\oplus\tilde{F}_{\tilde{t}}\) are free nilpotent Lie algebras, and \(I\subset F\), \(\tilde{I}\subset\tilde{F}\) are graded ideals satisfying \(I\subset F_{k+1}\oplus\cdots\oplus F_{t}\), \(\tilde{I}\subset\tilde{F}_{\tilde{k}+1}\oplus\cdots\oplus\tilde{F}_{\tilde{t}}\) for some positive integers \(k,\tilde{k}\). For each integer \(s\geq 0\), let \(\text{der}_{s}(\mathfrak{w})\) be the linear subspace of \(\text{der}(\mathfrak{w})\) defined by:
\[\text{der}_{s}(\mathfrak{w})=\{\rho:\mathfrak{w}\to\mathfrak{w}\ \text{ is a derivation satisfying }\ \rho(W_{i})\subset W_{i+s},\forall i\}.\]
**Lemma A.2**.: _Let \(\alpha\geq 2\) be an integer and \(L:H_{1}\to\text{der}_{\alpha}(\mathfrak{w})\) be a linear map. If \(m\leq(k+1)\alpha\), then \(L\) extends uniquely to a Lie algebra homomorphism \(\mathfrak{h}\to\text{der}(\mathfrak{w})\)._
Proof.: Let \(L_{\alpha}\) be the Lie subalgebra of \(\text{der}(\mathfrak{w})\) generated by \(\text{der}_{\alpha}(\mathfrak{w})\). Since \(\mathfrak{w}\) is \(m\)-step, the assumption \(m\leq(k+1)\alpha\) implies that \(L_{\alpha}\) is a nilpotent Lie algebra with step at most \(k\). As \(\mathfrak{h}=F/I\) with \(I\subset F^{(k+1)}\), we may identify \(H_{1}\) with the first layer \(F_{1}\) of \(F\). Now \(F\) is free nilpotent with step at least \(k\) and \(L_{\alpha}\) has step at most \(k\). By the universal property of free nilpotent Lie algebras the linear map \(L:H_{1}=F_{1}\to L_{\alpha}\) extends uniquely to a Lie algebra homomorphism \(\rho:F\to L_{\alpha}\). Since \(L_{\alpha}\) has step at most \(k\), we have \(\rho(F^{(k+1)})=0\). As \(\mathfrak{h}=F/I\) with \(I\subset F^{(k+1)}\), \(\rho\) induces a Lie algebra homomorphism \(\mathfrak{h}\to L_{\alpha}\subset\text{der}(\mathfrak{w})\) that extends \(L\).
**Lemma A.3**.: _Let \(L(W_{1},W_{\alpha+1})\) be the vector space of linear maps from \(W_{1}\) to \(W_{\alpha+1}\), and \(\Phi:\text{der}_{\alpha}(\mathfrak{w})\to L(W_{1},W_{\alpha+1})\) be the restriction map, \(\Phi(\rho)=\rho|_{W_{1}}\). If \(m\leq\alpha+\tilde{k}\), then \(\Phi\) is a linear isomorphism._
Proof.: Since \(\mathfrak{w}\) is generated by \(W_{1}\), we see that \(\Phi\) is injective. We shall establish surjectivity of \(\Phi\) by showing that each map in \(L(W_{1},W_{\alpha+1})\) extends to a derivation, which necessarily lies in \(\text{der}_{\alpha}(\mathfrak{w})\). Let \(L\in L(W_{1},W_{\alpha+1})\). As \(\mathfrak{w}=\tilde{F}/\tilde{I}\) with \(\tilde{I}\subset\tilde{F}^{(\tilde{k}+1)}\), we may identify \(W_{1}\) with \(\tilde{F}_{1}\) and view \(L\) as a linear map from \(\tilde{F}_{1}\) to \(W_{\alpha+1}\). Since \(W_{\alpha+1}=\tilde{F}_{\alpha+1}/(\tilde{F}_{\alpha+1}\cap\tilde{I})\), we can lift \(L\) to a map into \(\tilde{F}_{\alpha+1}\), that is, there is a linear map \(\tilde{L}:\tilde{F}_{1}\to\tilde{F}_{\alpha+1}\) such that \(\pi\circ\tilde{L}=L\), where \(\pi:\tilde{F}\to\mathfrak{w}\) is the projection. By Proposition A.1 the linear map \(\tilde{L}\) extends to a derivation \(\tilde{d}:\tilde{F}\to\tilde{F}\). Notice that \(\tilde{d}(\tilde{F}^{(\tilde{k}+1)})\subset\tilde{F}^{(\tilde{k}+1+\alpha)}\). As \(\mathfrak{w}\) is \(m\)-step, the assumption \(m\leq\alpha+\tilde{k}\) implies that \((\pi\circ\tilde{d})(\tilde{F}^{(\tilde{k}+1)})=0\). As \(\mathfrak{w}=\tilde{F}/\tilde{I}\) with \(\tilde{I}\subset\tilde{F}^{(\tilde{k}+1)}\), it is easy to see that \(\tilde{d}\) induces a derivation \(d:\mathfrak{w}\to\mathfrak{w}\) that extends \(L\).
The following Corollary provides many examples of nontrivial semi-direct products of Carnot algebras and so nontrivial semi-direct products of Carnot groups.
**Corollary A.4**.: _Suppose \(m\leq\min\{\tilde{k}+\alpha,(k+1)\alpha\}\). Then for any linear map \(f:H_{1}\to L(W_{1},W_{\alpha+1})\), the map \(\Phi^{-1}\circ f\) extends uniquely to a Lie algebra homomorphism \(\mathfrak{h}\to\text{der}(\mathfrak{w})\). In particular, for any nonzero linear map \(f:H_{1}\to L(W_{1},W_{\alpha+1})\), there is a nontrivial Lie algebra homomorphism \(\mathfrak{h}\to\text{der}(\mathfrak{w})\) and so a nontrivial semidirect product \(\mathfrak{w}\rtimes\mathfrak{h}\)._
We mention a few special cases of Corollary A.4.
(1) \(\alpha=m-1\). In this case, the condition \(m\leq\min\{\tilde{k}+\alpha,(k+1)\alpha\}\) is automatically satisfied. So every nonzero linear map \(f:H_{1}\to L(W_{1},W_{m})\) will yield a nontrivial semidirect product \(\mathfrak{w}\rtimes\mathfrak{h}\).
(2) \(\alpha=m-2\), \(m\geq 4\) and \(\tilde{k}\geq 2\). In this case, the condition \(m\leq\min\{\tilde{k}+\alpha,(k+1)\alpha\}\) is again satisfied, since \(\tilde{k}+\alpha\geq 2+(m-2)=m\) and \((k+1)\alpha\geq 2(m-2)\geq m\) for \(m\geq 4\). So every nonzero linear map \(f:H_{1}\to L(W_{1},W_{m-1})\) will yield a nontrivial semidirect product \(\mathfrak{w}\rtimes\mathfrak{h}\).
**Central product.** Let \(\alpha\geq 2\) be an integer and \(\mathfrak{h}=H_{1}\oplus\cdots\oplus H_{n}\), \(\mathfrak{w}=W_{1}\oplus\cdots\oplus W_{\alpha n}\) be two Carnot algebras. Let \(W^{\prime}\subset W_{\alpha n}\) and \(H^{\prime}\subset H_{n}\) be linear subspaces and \(\phi:W^{\prime}\to H^{\prime}\) a linear isomorphism. The corresponding central product \(\mathfrak{w}\times_{\phi}\mathfrak{h}\) is the quotient of the direct sum \(\mathfrak{w}\oplus\mathfrak{h}\) by the central ideal \(\{(w,-\phi(w))|w\in W^{\prime}\}\). Clearly there is a short exact sequence \(0\to\mathfrak{w}\to\mathfrak{w}\times_{\phi}\mathfrak{h}\to\mathfrak{h}/H^{ \prime}\to 0\) and so \(\mathfrak{w}\times_{\phi}\mathfrak{h}\) is an extension of \(\mathfrak{w}\) by \(\mathfrak{h}/H^{\prime}\). Notice that \(\mathfrak{h}/H^{\prime}=H_{1}\oplus\cdots\oplus(H_{n}/H^{\prime})\) is a Carnot algebra.
## Appendix B Lattices in SOL-like groups
In this appendix we give examples of SOL-like groups that admit lattices.
There are necessary and sufficient conditions for the existence of lattices in solvable Lie groups, see Theorem 6.2 in [10] and Chapter III, Section 6 of [1]. But those conditions are not easy to check. Here we cite a result by Sawai-Yamada [2] which gives a simple sufficient condition for certain solvable Lie groups to admit lattices. Let \(\mathfrak{n}\) be a rational nilpotent Lie algebra. Here "rational" means \(\mathfrak{n}\) has a basis with rational structure constants. Then it follows that \(\mathfrak{n}\) has a basis \(\{X_{1},\cdots,X_{m}\}\) with integer structure constants. Let \(\mathfrak{n}^{i}\) (\(i=1,2\)) be a copy of \(\mathfrak{n}\) with corresponding basis \(\{X_{1}^{i},\cdots,X_{m}^{i}\}\). So \(X_{j}\mapsto X_{j}^{i}\) extends to an isomorphism from \(\mathfrak{n}\) to \(\mathfrak{n}^{i}\). Suppose \(k_{j}\), \(1\leq j\leq m\) are integers and the map \(X_{j}^{1}\mapsto k_{j}X_{j}^{1}\), \(X_{j}^{2}\mapsto-k_{j}X_{j}^{2}\), extends to a derivation \(D\) on \(\mathfrak{n}^{1}\times\mathfrak{n}^{2}\). Let \(S\) be the semidirect product \((N^{1}\times N^{2})\rtimes\mathbb{R}\), where \(N^{i}\) is the simply connected Lie group with Lie algebra \(\mathfrak{n}^{i}\) and the action of \(\mathbb{R}\) on \(N^{1}\times N^{2}\) is generated by the derivation \(D\).
**Theorem B.1**.: _([2], Theorem 2) The solvable Lie group \(S\) above admits a lattice._
We recall that lattices in solvable Lie groups are always uniform [10].
We give two explicit examples. The first is the so-called Benson-Gordon group [11], which was also discussed in [2]. In this example, \(\mathfrak{n}^{i}\) (\(i=1,2\)) is a copy of the Heisenberg algebra with basis \(X^{i},Y^{i},Z^{i}\) and the only nontrivial bracket among basis elements is \([X^{i},Y^{i}]=Z^{i}\). Let \(k_{1},k_{2}\) be integers. Let \(D_{i}:\mathfrak{n}^{i}\to\mathfrak{n}^{i}\) be the derivation given by \(D_{i}(X^{i})=k_{1}X^{i}\), \(D_{i}(Y^{i})=k_{2}Y^{i}\), \(D_{i}(Z^{i})=(k_{1}+k_{2})Z^{i}\). Let \(D=(D_{1},-D_{2})\) be the derivation of \(\mathfrak{n}^{1}\times\mathfrak{n}^{2}\). The semidirect product \(S_{k_{1},k_{2}}=(N^{1}\times N^{2})\rtimes_{D}\mathbb{R}\) is a Benson-Gordon group. When \(k_{1},k_{2}\) are positive, \(S_{k_{1},k_{2}}\) is a SOL-like group of the type we are interested in. When \(k_{1}=k_{2}\) is positive, \((N^{i},D_{i})\) is of Carnot type. Next we give an example where \(N\) is Carnot-by-Carnot.
In the second example \(\mathfrak{n}\) is a semi-direct product \(\mathfrak{n}=\mathfrak{e}\rtimes\mathfrak{h}\), where \(\mathfrak{e}\) is the Engel algebra (with basis \(e_{0},e_{1},e_{2},e_{3}\) and only non-trivial brackets \([e_{0},e_{i}]=e_{i+1}\), \(i=1,2\)) and \(\mathfrak{h}\) is the Heisenberg algebra (with basis \(X,Y,Z\) and only non-trivial bracket \([X,Y]=Z\)), and the action of \(\mathfrak{h}\) on \(\mathfrak{e}\) is given by \([X,e_{0}]=e_{3}\), \([Y,e_{1}]=e_{3}\) (all other brackets are \(0\)). This semi-direct product is of Case (1) discussed after Corollary A.4. Let \(D_{0}\) be the derivation of \(\mathfrak{n}\) given by \(D_{0}(e_{0})=e_{0}\), \(D_{0}(e_{j})=je_{j}\) (\(j=1,2,3\)), \(D_{0}(X)=2X\), \(D_{0}(Y)=2Y\), \(D_{0}(Z)=4Z\). Set \(D_{1}=D_{2}=D_{0}\). Then \(D=(D_{1},-D_{2})\) is a derivation of \(\mathfrak{n}^{1}\times\mathfrak{n}^{2}\). Finally let \(S=(N^{1}\times N^{2})\rtimes_{D}\mathbb{R}\). By Theorem B.1, \(S\) admits a lattice. In this example, \((N_{i},D_{i})\) is Carnot-by-Carnot.
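In both examples, one can verify directly that the stated maps are derivations by checking the Leibniz rule on the defining brackets, where the assigned weights must add up. For the Benson-Gordon group,

\[D_{i}[X^{i},Y^{i}]=[D_{i}X^{i},Y^{i}]+[X^{i},D_{i}Y^{i}]=(k_{1}+k_{2})[X^{i},Y^{i}]=(k_{1}+k_{2})Z^{i}=D_{i}(Z^{i}),\]

and in the second example the weights assigned by \(D_{0}\) are additive on every nontrivial bracket: \([e_{0},e_{i}]=e_{i+1}\) (weights \(1+i\)), \([X,Y]=Z\) (weights \(2+2=4\)), and \([X,e_{0}]=[Y,e_{1}]=e_{3}\) (weights \(2+1=3\)).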
## Appendix C Compatible expressions
In this appendix we prove Lemma 5.4. Recall the assumption: \((\mathfrak{n},D)\) is a diagonal Heintze pair, \(\mathfrak{w}\) is an ideal of \(\mathfrak{n}\) such that \(D(\mathfrak{w})=\mathfrak{w}\); \(F:N\to N\) is a biLipschitz map that permutes the cosets of \(W\), where \(W\) is the connected Lie subgroup of \(N\) with Lie algebra \(\mathfrak{w}\); for each \(g\in N\), the map \(F_{g}|_{W}\) is an automorphism \(\phi\) of \(W\), and \(F\) induces an affine map \(\bar{F}\) of \(N/W\). Let \(\bar{B}\) be the automorphism part of \(\bar{F}\).
The main ingredient in the proof is the fact that \(\phi\circ(\chi_{g}|_{W})=(\chi_{G(g)}|_{W})\circ\phi\), where \(G=F_{0}\). See Lemma 5.1. In the following proof we will implicitly (and repeatedly) use the fact that \([Z(\mathfrak{w}),\mathfrak{n}]\subset Z(\mathfrak{w})\). This follows from the Jacobi identity and the fact that \(W\) is an ideal of \(\mathfrak{n}\).
For a linear transformation \(T\) of a finite dimensional vector space, we denote by \(\sigma(T)\) the set of eigenvalues of \(T\). Recall that \(\sigma(\bar{D})\subset\sigma(D)\), where \(\bar{D}:\mathfrak{n}/\mathfrak{w}\to\mathfrak{n}/\mathfrak{w}\) is the derivation induced by \(D\). Set \(I=\sigma(D)-\sigma(\bar{D})\).
Proof.: Set \(G=F_{0}=L_{F(0)^{-1}}\circ F\). Then \(G(0)=0\). By Lemma 5.1, there is an automorphism \(\phi\) of \(W\) such that if we set \(A=d\phi\), then \(A\) is layer-preserving, \(A\circ d(\chi_{g}|_{W})\circ A^{-1}=d(\chi_{G(g)}|_{W})\) and \(G(h*w)=G(h)*Aw\) for any \(g=h*w\in\mathfrak{n}\), where \(h\in H,w\in\mathfrak{w}\). Let \(H^{\prime}\subset\mathfrak{n}\) be a graded subspace of \(\mathfrak{n}\) complementary to \(\mathfrak{w}\); that is, for each \(\lambda\in\sigma(\bar{D})\), \(H^{\prime}_{\lambda}\subset V_{\lambda}\) is a complementary linear subspace of \(W_{\lambda}\) in \(V_{\lambda}\) (if \(W_{\lambda}=\{0\}\), then \(H^{\prime}_{\lambda}=V_{\lambda}\)), and \(H^{\prime}=\oplus_{\lambda\in\sigma(\bar{D})}H^{\prime}_{\lambda}\). Denote by \(B_{0}:H\to H^{\prime}\) the linear isomorphism \((\pi|_{H^{\prime}})^{-1}\circ d\bar{B}\circ(\pi|_{H})\). Since \(\pi(G(h))=d\overline{B}(\bar{h})=\pi(B_{0}h)\), there is a map \(S:\mathfrak{n}/\mathfrak{w}\to\mathfrak{w}\) such that \(G(h)=B_{0}h*S(\bar{h})\). It follows that the map \(G\) has the form
\[G(h*w)=B_{0}h*S(\bar{h})*Aw.\]
In general, \(B_{0}\) and \(S\) do not satisfy conditions (2) and (3) in the definition of compatible expression. We need to modify the map \(B_{0}\) (and so also \(S\)).
For each \(\lambda\in\sigma(D|_{\mathfrak{w}})\), denote \(Z_{\lambda}(\mathfrak{w}):=Z(\mathfrak{w})\cap W_{\lambda}\) and let \(Z_{\lambda}^{\perp}(\mathfrak{w})\subset W_{\lambda}\) be a subspace such that \(W_{\lambda}=Z_{\lambda}(\mathfrak{w})\oplus Z_{\lambda}^{\perp}(\mathfrak{w})\). The map \(S\) can be written as \(S(\bar{h})=\sum_{\lambda\in\sigma(D|_{\mathfrak{w}})}S_{\lambda}(\bar{h})\), where \(S_{\lambda}:=\pi_{\lambda}\circ S\) and \(\pi_{\lambda}:\mathfrak{w}\to W_{\lambda}\) is the projection with respect to the decomposition \(\mathfrak{w}=\oplus_{\lambda}W_{\lambda}\). There are two maps \(\tilde{S}_{\lambda}:\mathfrak{n}/\mathfrak{w}\to Z_{\lambda}(\mathfrak{w})\) and \(S_{\lambda}^{\perp}:\mathfrak{n}/\mathfrak{w}\to Z_{\lambda}^{\perp}(\mathfrak{ w})\) such that \(S_{\lambda}(\bar{h})=\tilde{S}_{\lambda}(\bar{h})+S_{\lambda}^{\perp}(\bar{h})\).
We shall use the equation \(A\circ d(\chi_{g}|_{W})\circ A^{-1}=d(\chi_{G(g)}|_{W})\). Let \(\mu\in\sigma(D|_{\mathfrak{w}})\), \(w\in W_{\mu}\) and \(h\in H\) be arbitrary. We next calculate \(A\circ d(\chi_{h}|_{W})\circ A^{-1}(w)\) and \(d(\chi_{G(h)}|_{W})(w)\). By (2)
\[A\circ d(\chi_{h}|_{W})\circ A^{-1}(w)=w+A[h,A^{-1}w]+\sum_{i=2}^{\infty}\frac{1 }{i!}A(\operatorname{ad}h)^{i}(A^{-1}w)\]
and
\[d(\chi_{G(h)}|_{W})(w)=w+[G(h),w]+\sum_{i=2}^{\infty}\frac{1}{i!}(\operatorname{ad}G(h))^{i}(w).\]
From \(A\circ d(\chi_{g}|_{W})\circ A^{-1}=d(\chi_{G(g)}|_{W})\), we get
\[L:=A[h,A^{-1}w]+\sum_{i=2}^{\infty}\frac{1}{i!}A(\operatorname{ad}h)^{i}(A^{-1}w)=[G(h),w]+\sum_{i=2}^{\infty}\frac{1}{i!}(\operatorname{ad}G(h))^{i}(w)=:R. \tag{14}\]
Since \(\mathfrak{w}\) is an ideal in \(\mathfrak{n}\), every item in (14) lies in \(\mathfrak{w}\). By the BCH formula,
\[G(h)=B_{0}h*S(\bar{h})=B_{0}h+S(\bar{h})+\frac{1}{2}[B_{0}h,S(\bar{h})]+\cdots. \tag{15}\]
**First Claim**: \(S_{\lambda}^{\perp}=0\) for \(\lambda\in I\); in other words, \(S_{\lambda}(\bar{h})\in Z_{\lambda}(\mathfrak{w})\) for \(\lambda\in I\).
The proof is by induction. Let \(\lambda_{0}\in I\). Assume that \(S_{\lambda}^{\perp}=0\) for all \(\lambda\in I\) with \(\lambda<\lambda_{0}\). To simplify notation, we use \(w^{z}\) to denote an element of \(Z(\mathfrak{w})\), \(w^{>}\) to denote an element of \(\oplus_{\lambda>\lambda_{0}}W_{\lambda}\), use \(\bar{x}\), \(\bar{y}\) and so on to denote elements in \(\oplus_{\lambda\in\sigma(\bar{D})}W_{\lambda}\), and use subscripts to denote different such elements. Using this we can write \(S(\bar{h})=\bar{x}_{1}+w_{1}^{z}+S_{\lambda_{0}}(\bar{h})+w_{1}^{>}\). Then \([B_{0}h,S(\bar{h})]=\bar{x}_{2}+w_{2}^{z}+w_{2}^{>}\) and all the iterated brackets of \(B_{0}h\) and \(S(\bar{h})\) have this form. Hence we have
\[G(h)=B_{0}h+\bar{x}_{3}+w_{3}^{z}+S_{\lambda_{0}}(\bar{h})+w_{3}^{>}.\]
From this we get \([G(h),w]=[B_{0}h+\bar{x}_{3},w]+[S_{\lambda_{0}}(\bar{h}),w]+[w_{3}^{>},w]\) and for \(i\geq 2\), \((\operatorname{ad}G(h))^{i}(w)=y_{1}+y_{2}\), with \(y_{1}\in\oplus_{\lambda\in\sigma(\bar{D})}W_{\lambda+\mu}\) and \(y_{2}\in\oplus_{\lambda>\lambda_{0}+\mu}W_{\lambda}\). Since by assumption \(\lambda_{0}\in I\), we see that \(\pi_{\lambda_{0}+\mu}L=0\) and \(\pi_{\lambda_{0}+\mu}R=[S_{\lambda_{0}}(\bar{h}),w]\). It follows that \([S_{\lambda_{0}}(\bar{h}),w]=0\) for any \(w\in W_{\mu}\) and any \(\mu\in\sigma(D|_{\mathfrak{w}})\) and therefore \(S_{\lambda_{0}}(\bar{h})\in Z_{\lambda_{0}}(\mathfrak{w})\).
**Second Claim**: for each \(\lambda\in\sigma(\bar{D})\), there is a linear map \(B_{\lambda}:H\to\mathfrak{n}\) satisfying
\((a)_{\lambda}\). \(d\bar{B}\circ\pi|_{H}=\pi\circ B_{\lambda}\) and \(B_{\lambda}(H_{\lambda^{\prime}})\subset V_{\lambda^{\prime}}\) for any \(\lambda^{\prime}\in\sigma(\bar{D})\);
\((b)_{\lambda}\). \(B_{\lambda}|_{H_{\lambda^{\prime}}}=B_{0}|_{H_{\lambda^{\prime}}}\) for \(\lambda^{\prime}>\lambda\);
\((c)_{\lambda}\). \([B_{\lambda}h,Aw]=A[h,w]\) for any \(w\in\mathfrak{w}\), and \(h\in H_{\lambda^{\prime}}\) with \(\lambda^{\prime}\leq\lambda\);
\((d)_{\lambda}\). The map \(G\) can be written \(G(h*w)=B_{\lambda}h*S^{(\lambda)}(\bar{h})*Aw\), where \(S^{(\lambda)}:\mathfrak{n}/\mathfrak{w}\to\mathfrak{w}\) is a map satisfying \(S^{(\lambda)}_{\lambda^{\prime}}(\bar{h})\in Z_{\lambda^{\prime}}(\mathfrak{w})\) for \(\lambda^{\prime}\leq\lambda\), where \(S^{(\lambda)}_{\lambda^{\prime}}=\pi_{\lambda^{\prime}}\circ S^{(\lambda)}\).
The proof of the Second Claim is also by induction. We let \(\lambda_{0}\in\sigma(\bar{D})\) and assume that \(B_{\lambda}\) satisfying \((a)_{\lambda}-(d)_{\lambda}\) are defined for all \(\lambda<\lambda_{0}\). Let \(\lambda_{1}<\lambda_{0}\) be the largest \(\lambda\in\sigma(\bar{D})\) less than \(\lambda_{0}\). We shall first show that \({S^{(\lambda_{1})}_{\lambda_{0}}}^{\perp}(\bar{h})\) depends only on the \(H_{\lambda_{0}}\) component \(h_{\lambda_{0}}\) of \(h\) and is linear in \(h_{\lambda_{0}}\).
Let \(h\in H\) and \(w\in W_{\mu}\) for some \(\mu\in\sigma(D|_{\mathfrak{w}})\). We will apply \(\pi_{\lambda_{0}+\mu}\) to both sides of (14). Using the First Claim and the induction hypothesis we can write \(S^{(\lambda_{1})}(\bar{h})=w_{1}^{z}+S^{(\lambda_{1})}_{\lambda_{0}}(\bar{h})+w_ {1}^{>}\). From this we get \([B_{\lambda_{1}}h,S^{(\lambda_{1})}(\bar{h})]=w_{2}^{z}+w_{2}^{>}\); \(G(h)=B_{\lambda_{1}}h+w_{3}^{z}+S^{(\lambda_{1})}_{\lambda_{0}}(\bar{h})+w_{3} ^{>}\); \([G(h),w]=[B_{\lambda_{1}}h,w]+[S^{(\lambda_{1})}_{\lambda_{0}}(\bar{h}),w]+[w_{3} ^{>},w]\); and for \(i\geq 2\), \((\operatorname{ad}G(h))^{i}(w)=(\operatorname{ad}B_{\lambda_{1}}h)^{i}(w)+y_ {3}\) with \(y_{3}\in\oplus_{\lambda>\lambda_{0}+\mu}W_{\lambda}\). Hence we have
\[\pi_{\lambda_{0}+\mu}L=A[h_{\lambda_{0}},A^{-1}w]+\sum_{i\geq 2}\frac{1}{i!}\sum_{ \lambda_{j_{1}}+\cdots+\lambda_{j_{i}}=\lambda_{0}}A(\operatorname{ad}h_{ \lambda_{j_{1}}}\circ\cdots\circ\operatorname{ad}h_{\lambda_{j_{i}}}(A^{-1}w)),\]
\[\pi_{\lambda_{0}+\mu}R=[B_{\lambda_{1}}h_{\lambda_{0}},w]+[S^{(\lambda_{1})}_{\lambda_{0}}(\bar{h}),w]+\sum_{i\geq 2}\frac{1}{i!}\sum_{\lambda_{j_{1}}+\cdots+\lambda_{j_{i}}=\lambda_{0}}\operatorname{ad}B_{\lambda_{1}}h_{\lambda_{j_{1}}}\circ\cdots\circ\operatorname{ad}B_{\lambda_{1}}h_{\lambda_{j_{i}}}(w).\]
On the other hand \((c)_{\lambda_{1}}\) implies for \(i\geq 2\)
\[A(\operatorname{ad}h_{\lambda_{j_{1}}}\circ\cdots\circ\operatorname{ad}h_{\lambda_{j_{i}}}(A^{-1}w))=\operatorname{ad}B_{\lambda_{1}}h_{\lambda_{j_{1}}}\circ\cdots\circ\operatorname{ad}B_{\lambda_{1}}h_{\lambda_{j_{i}}}(w).\]
It follows that
\[A[h_{\lambda_{0}},A^{-1}w]=[B_{\lambda_{1}}h_{\lambda_{0}},w]+[S^{(\lambda_{1})}_{ \lambda_{0}}(\bar{h}),w]. \tag{16}\]
Since the two terms \(A[h_{\lambda_{0}},A^{-1}w]\) and \([B_{\lambda_{1}}h_{\lambda_{0}},w]\) depend only on the \(H_{\lambda_{0}}\) component \(h_{\lambda_{0}}\) of \(h\) and are linear in \(h_{\lambda_{0}}\), we see that \(S^{(\lambda_{1})}_{\lambda_{0}}{}^{\perp}(\bar{h})=S^{(\lambda_{1})}_{\lambda_ {0}}{}^{\perp}(\bar{h}_{\lambda_{0}})\) also depends only on \(h_{\lambda_{0}}\) and is linear in \(h_{\lambda_{0}}\).
We define \(B_{\lambda_{0}}\) as follows: \(B_{\lambda_{0}}|_{H_{\lambda}}=B_{\lambda_{1}}|_{H_{\lambda}}\) for \(\lambda\neq\lambda_{0}\) and \(B_{\lambda_{0}}h=B_{\lambda_{1}}h+S^{(\lambda_{1})}_{\lambda_{0}}{}^{\perp}( \bar{h})\) for \(h\in H_{\lambda_{0}}\). We need to verify \((a)_{\lambda_{0}}-(d)_{\lambda_{0}}\). The properties \((a)_{\lambda_{0}}\) and \((b)_{\lambda_{0}}\) are easy to see and \((c)_{\lambda_{0}}\) follows from \((c)_{\lambda_{1}}\), (16) and the definition of \(B_{\lambda_{0}}\). For \((d)_{\lambda_{0}}\): use \((d)_{\lambda_{1}}\) and write \(G\) as
\[G(h*w)=B_{\lambda_{1}}h*S^{(\lambda_{1})}(\bar{h})*Aw=B_{\lambda_{0}}h*S^{( \lambda_{0})}(\bar{h})*Aw,\]
where \(S^{(\lambda_{0})}(\bar{h})=(-B_{\lambda_{0}}h)*B_{\lambda_{1}}h*S^{(\lambda_{ 1})}(\bar{h})\). We need to show \(S^{(\lambda_{0})}_{\lambda^{\prime}}(\bar{h})\in Z_{\lambda^{\prime}}(\mathfrak{ w})\) for \(\lambda^{\prime}\leq\lambda_{0}\). By the definition of \(B_{\lambda_{0}}\) and the linearity of \(B_{\lambda_{0}}\), \(B_{\lambda_{1}}\) we obtain \(B_{\lambda_{0}}h=B_{\lambda_{1}}h+S^{(\lambda_{1})}_{\lambda_{0}}{}^{\perp}( \bar{h}_{\lambda_{0}})=B_{\lambda_{1}}h+S^{(\lambda_{1})}_{\lambda_{0}}{}^{ \perp}(\bar{h})\) for any \(h\in H\). Using this we get
\[[-B_{\lambda_{0}}h,B_{\lambda_{1}}h]=-[S^{(\lambda_{1})}_{\lambda_{0}}{}^{\perp }(\bar{h}),B_{\lambda_{1}}h]\in\oplus_{\lambda\geq(\lambda_{0}+\alpha)}W_{ \lambda}.\]
By the BCH formula, \((-B_{\lambda_{0}}h)*B_{\lambda_{1}}h=(-B_{\lambda_{0}}h)+B_{\lambda_{1}}h+ \frac{1}{2}[-B_{\lambda_{0}}h,B_{\lambda_{1}}h]+\cdots=-S^{(\lambda_{1})}_{ \lambda_{0}}{}^{\perp}(\bar{h})+w_{4}\), with \(w_{4}\in\oplus_{\lambda\geq(\lambda_{0}+\alpha)}W_{\lambda}\). By \((d)_{\lambda_{1}}\), \([-S^{(\lambda_{1})}_{\lambda_{0}}{}^{\perp}(\bar{h}),S^{(\lambda_{1})}(\bar{h} )]\in\oplus_{\lambda\geq 2\lambda_{0}}W_{\lambda}\). Finally,
\[S^{(\lambda_{0})}(\bar{h}) =(-B_{\lambda_{0}}h)*B_{\lambda_{1}}h*S^{(\lambda_{1})}(\bar{h})\] \[=-S^{(\lambda_{1})}_{\lambda_{0}}{}^{\perp}(\bar{h})+w_{4}+S^{( \lambda_{1})}(\bar{h})+\frac{1}{2}[-S^{(\lambda_{1})}_{\lambda_{0}}{}^{\perp} (\bar{h})+w_{4},S^{(\lambda_{1})}(\bar{h})]+\cdots\] \[=S^{(\lambda_{1})}(\bar{h})-S^{(\lambda_{1})}_{\lambda_{0}}{}^{ \perp}(\bar{h})+w_{5},\]
with \(w_{5}\in\oplus_{\lambda\geq(\lambda_{0}+\alpha)}W_{\lambda}\). From this, \((d)_{\lambda_{1}}\) and the First Claim we see that \(S^{(\lambda_{0})}_{\lambda}(\bar{h})=S^{(\lambda_{1})}_{\lambda}(\bar{h})\in Z _{\lambda}(\mathfrak{w})\) for \(\lambda<\lambda_{0}\), and
\[S^{(\lambda_{0})}_{\lambda_{0}}(\bar{h})=S^{(\lambda_{1})}_{\lambda_{0}}(\bar{h })-S^{(\lambda_{1})}_{\lambda_{0}}{}^{\perp}(\bar{h})=\tilde{S}^{(\lambda_{1} )}_{\lambda_{0}}(\bar{h})\in Z_{\lambda_{0}}(\mathfrak{w}).\]
This verifies \((d)_{\lambda_{0}}\) and completes the induction argument for the Second Claim.
Denote by \(\bar{\lambda}\) the largest eigenvalue of \(\bar{D}\). We set \(B=B_{\bar{\lambda}}\) and \(s=A^{-1}\circ S^{(\bar{\lambda})}\). We need to verify conditions 1-3 in the definition of a compatible expression. Condition 1 follows from \((a)_{\bar{\lambda}}\) and Condition 2 follows from \((c)_{\bar{\lambda}}\). From \((d)_{\bar{\lambda}}\) we get an expression for \(F\) (as \(G=F_{0}=L_{F(0)^{-1}}\circ F\)):
\[F(h*w)=F(0)*Bh*As(\bar{h})*Aw.\]
By the First Claim and \((d)_{\bar{\lambda}}\) we see that \(s(\bar{h})\in Z(\mathfrak{w})\). This allows us to switch \(As(\bar{h})\) and \(Aw\) to arrive at a compatible expression for \(F\).
|
2306.04483 | Versatile Parametric Classes of Covariance Functions that Interlace
Anisotropies and Hole Effects | Covariance functions are a fundamental tool for modeling the dependence
structure of spatial processes. This work investigates novel constructions for
covariance functions that enable the integration of anisotropies and hole
effects in complex and versatile ways, having the potential to provide more
accurate representations of dependence structures arising with real-world data.
We show that these constructions extend widely used covariance models,
including the Mat\'ern, Cauchy, compactly-supported hypergeometric and cardinal
sine models. We apply our results to a geophysical data set from a
rock-carbonate aquifer and demonstrate that the proposed models yield more
accurate predictions at unsampled locations compared to basic covariance
models. | Alfredo AlegrÃa, Xavier Emery | 2023-06-07T14:53:05Z | http://arxiv.org/abs/2306.04483v1 | # Versatile Parametric Classes of Covariance Functions that Interlace Anisotropies and Hole Effects
###### Abstract
Covariance functions are a fundamental tool for modeling the dependence structure of spatial processes. This work investigates novel constructions for covariance functions that enable the integration of anisotropies and hole effects in complex and versatile ways, having the potential to provide more accurate representations of dependence structures arising with real-world data. We show that these constructions extend widely used covariance models, including the Matern, Cauchy, compactly-supported hypergeometric and cardinal sine models. We apply our results to a geophysical data set from a rock-carbonate aquifer and demonstrate that the proposed models yield more accurate predictions at unsampled locations compared to basic covariance models.
_Keywords: Nonmonotonic covariance models; Matern covariance; Cauchy covariance; Gauss hypergeometric covariance; Cardinal sine covariance; Anisotropic random fields._
## 1 Introduction
Data indexed by spatial (hereafter, Euclidean) coordinates arise in many disciplines of the natural sciences, including climatology (Sang et al., 2011), oceanography (Wikle et al., 2013), environment (Rodrigues et al., 2015), ecology (Finley et al., 2011) and geosciences (Davis, 2002). Statistical and geostatistical models often assume the observed data to be a realization of a Gaussian random field, with the covariance function being the fundamental ingredient to capture the spatial dependence (Chiles and Delfiner, 2012), to understand the underlying spatial patterns and to make reliable predictions.
Currently, there is a fairly extensive catalog of parametric families of stationary covariance functions that allow modeling a large number of patterns appearing in real situations, such as long-memory, hole effects, periodicities, degree of mean square differentiability, anisotropies, among others. Classical textbooks, such as Gaetan and Guyon (2010) and Chiles and Delfiner (2012), provide extensive insights into the wide range of available models. While existing models can handle many common patterns found in real data sets, some data sets may present complex combinations of features that require the development of new specialized models. In particular, anisotropies and hole effects are two common properties that can manifest on the covariance structure of data. Anisotropy refers
to the directional dependence of spatial data, where the level of association varies across different directions. We refer the reader to Allard et al. (2016) and Koch et al. (2020) for discussions on various types of anisotropy. Hole effects, on the other hand, refer to the occurrence of negative covariance values at large distances, which can be attributed to the structured occurrence of high (low) values of a georeferenced variable surrounded by low (high) values of this variable (Chiles and Delfiner, 2012).
Although some basic constructions that incorporate both anisotropy and hole effects can be designed easily (some examples are provided in Section 2), more complex and sophisticated relationships may be required in practice. Our focus is on covariance models that feature both amenable expressions and interpretable parameters, and that are capable of achieving negative values of varying intensities depending on the spatial orientation. In particular, some models could display negative values only along specific spatial directions. We are motivated to study this type of models in order to have a flexible framework capable of capturing intricate dependence patterns present in real-world data, and enable more robust inference and prediction.
To accomplish this goal, we begin by examining the conditions under which the difference between two geometrically anisotropic stationary covariance functions is a valid covariance function. In a purely isotropic setting, Ma (2005), Buhmann and Jager (2020), Faouzi et al. (2020) and Posa (2023) utilized this methodology for constructing models with hole effects. Our findings thus expand upon these works by considering an anisotropic setting. Furthermore, we investigate an approach based on the difference between a merely isotropic model and the average of shifted isotropic models. The shift direction is a critical element of this formulation as it indicates the primary direction where the hole effect occurs. In addition, we study a construction that involves directional derivatives of a spatial process; thus, a significant hole effect is expected in a predominant direction (the sign changes of the directional derivative can amplify the transitions between high and low values). We also investigate how the aforementioned constructions can be coupled with popular existing covariance models, such as the Matern, Cauchy, compactly-supported hypergeometric and cardinal sine, to generalize these models to more versatile parametric functions.
The practical implications of this work will be explored through an application to a geophysical dataset. Our analysis will reveal that the proposed models lead to substantially improved predictions at unsampled locations in comparison with basic covariance models.
The article is organized as follows. Section 2 contains preliminary material on stationary spatial random fields, covariance functions and basic models that combine anisotropies and hole effects. Section 3 proposes general methodologies to construct models merging anisotropies and hole effects in a nontrivial manner. Section 4 offers explicit parametric families that use Matern, Cauchy, compactly-supported hypergeometric and cardinal sine models as a starting point. In Section 5, our findings are applied to a real data set. Section 6 presents conclusions and outlines potential avenues for future research.
## 2 Preliminaries
Let \(d\) be a positive integer and \(\{Z(\mathbf{x}):\mathbf{x}\in\mathbb{R}^{d}\}\) be a second-order zero-mean random field. The covariance function of such a random field is the mapping \(K:\mathbb{R}^{d}\times\mathbb{R}^{d}\to\mathbb{R}\) defined as \(K(\mathbf{x},\mathbf{x}^{\prime})=\operatorname{cov}\left[Z(\mathbf{x}),Z(\mathbf{x}^{\prime})\right]\). This is a positive semidefinite function, i.e., for all \(n\in\mathbb{N}\), \(v_{1},\ldots,v_{n}\in\mathbb{R}\) and
\(\mathbf{x}_{1},\ldots,\mathbf{x}_{n}\in\mathbb{R}^{d}\),
\[\sum_{i,j=1}^{n}v_{i}v_{j}K(\mathbf{x}_{i},\mathbf{x}_{j})\geq 0.\]
The mapping \(K\) is said to be stationary if there exists a function \(C:\mathbb{R}^{d}\to\mathbb{R}\) such that \(K(\mathbf{x},\mathbf{x}^{\prime})=C(\mathbf{x}-\mathbf{x}^{\prime})\), for all \(\mathbf{x},\mathbf{x}^{\prime}\in\mathbb{R}^{d}\). By abuse of language, \(C\) will be referred to as a stationary covariance function and we will say that \(C\) is positive semidefinite. Bochner's theorem (see, e.g., page 24 of Stein, 1999) provides a useful characterization of these mappings under an assumption of continuity: \(C\) is a continuous stationary covariance function if and only if it can be written as
\[C(\mathbf{h})=\int_{\mathbb{R}^{d}}\exp\left(\imath\mathbf{h}^{\top}\mathbf{\omega}\right) F(\mathrm{d}\mathbf{\omega}),\qquad\mathbf{h}\in\mathbb{R}^{d}, \tag{2.1}\]
for some nonnegative finite measure \(F\) (called spectral measure), with \(\imath\) standing for the imaginary unit. If \(F\) is absolutely continuous with respect to the Lebesgue measure, which happens if \(C\) is absolutely integrable, then \(F(\mathrm{d}\mathbf{\omega})=f(\mathbf{\omega})\mathrm{d}\mathbf{\omega}\), for some function \(f:\mathbb{R}^{d}\to\mathbb{R}\) known as the spectral density. In such a case, Fourier inversion yields
\[f(\mathbf{\omega})=\frac{1}{(2\pi)^{d}}\int_{\mathbb{R}^{d}}\exp\left(-\imath\mathbf{ \omega}^{\top}\mathbf{h}\right)C(\mathbf{h})\mathrm{d}\mathbf{h},\qquad\mathbf{\omega}\in \mathbb{R}^{d}. \tag{2.2}\]
A stationary covariance function is said to be isotropic if there exists a function \(\varphi:[0,\infty)\to\mathbb{R}\) such that \(K(\mathbf{x},\mathbf{x}^{\prime})=\varphi(\|\mathbf{x}-\mathbf{x}^{\prime}\|)\), for all \(\mathbf{x},\mathbf{x}^{\prime}\in\mathbb{R}^{d}\). The function \(\varphi\) is referred to as the isotropic part of \(K\). We denote by \(\Phi_{d}\) the set of continuous functions \(\varphi\) that are the isotropic part of some positive semidefinite function in \(\mathbb{R}^{d}\times\mathbb{R}^{d}\). Every member of \(\Phi_{d}\), for \(d\geq 2\), can be written as the Hankel transform of order \((d-2)/2\) of a nondecreasing bounded measure \(G_{d}\) on \([0,\infty)\) (Schoenberg, 1938), i.e.,
\[\varphi(h)=\int_{0}^{\infty}\Omega_{d}(hu)\mathrm{d}G_{d}(u),\qquad h\geq 0, \tag{2.3}\]
where \(\Omega_{d}(s)=2^{(d-2)/2}\Gamma(d/2)s^{-(d-2)/2}J_{(d-2)/2}(s)\), with \(\Gamma\) standing for the gamma function and \(J_{\nu}\) for the Bessel function of the first kind of order \(\nu\)(Olver et al., 2010). If the spectral measure \(F\) is absolutely continuous with respect to the Lebesgue measure, then so is \(G_{d}\) and one has
\[\varphi(h)=(2\pi)^{d/2}h^{(2-d)/2}\int_{0}^{\infty}J_{(d-2)/2}(uh)f_{d}(u)u^{ d/2}\mathrm{d}u,\qquad h\geq 0, \tag{2.4}\]
and
\[f_{d}(u)=\frac{1}{(2\pi)^{d/2}}u^{(2-d)/2}\int_{0}^{\infty}J_{(d-2)/2}(uh) \varphi(h)h^{d/2}\mathrm{d}h,\qquad u\geq 0, \tag{2.5}\]
where \(f_{d}\) is the radial part of \(f\) and will be referred to as the \(d\)-radial spectral density of \(\varphi\) (note that the expression of this radial density depends on the space dimension \(d\)): \(f(\mathbf{\omega})=f_{d}(\|\mathbf{\omega}\|)\) for all \(\mathbf{\omega}\in\mathbb{R}^{d}\).
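To make the Fourier pair (2.1)-(2.2) concrete, the following minimal sketch (ours, not part of the original analysis) recovers the spectral density of the exponential covariance \(C(h)=e^{-|h|}\) in dimension \(d=1\), whose closed form is \(f(\omega)=1/(\pi(1+\omega^{2}))\), by direct numerical integration.

```python
# Numerical check of (2.2) for C(h) = exp(-|h|) in d = 1 (illustrative only).
import numpy as np

h = np.linspace(-60.0, 60.0, 400_001)  # truncation of the real line
dh = h[1] - h[0]
C = np.exp(-np.abs(h))

def spectral_density(w):
    # the imaginary part vanishes because C is even, so a cosine transform suffices
    return np.sum(np.cos(w * h) * C) * dh / (2.0 * np.pi)

for w in (0.0, 1.0, 2.0):
    print(spectral_density(w), 1.0 / (np.pi * (1.0 + w**2)))  # numeric vs exact
```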
As described in the introduction, the isotropic part \(\varphi\) of an isotropic covariance function can attain negative values at large distances, a behavior commonly referred to as a hole effect. For simplicity, suppose that \(\int_{0}^{\infty}\mathrm{d}G_{d}(u)=1\); then one has the following lower bound for the members of \(\Phi_{d}\):
\[\varphi(h)\geq\inf_{s\geq 0}\Omega_{d}(s).\]
When \(d=2\) and \(d=3\), this lower bound is \(-0.403\) and \(-0.218\), respectively (Stein, 1999). As the spatial dimension \(d\) approaches infinity, the lower bound of the isotropic covariance function tends to zero, indicating that an isotropic hole effect becomes negligible with large spatial dimensions.
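The quoted bounds are straightforward to reproduce numerically. The sketch below (our own) evaluates \(\Omega_{2}(s)=J_{0}(s)\) and \(\Omega_{3}(s)=\sin(s)/s\) on a dense grid and recovers the constants \(-0.403\) and \(-0.218\).

```python
# Grid search for the infima of Omega_d, d = 2, 3 (illustrative sketch).
import numpy as np
from scipy.special import j0

s = np.linspace(1e-6, 50.0, 200_000)
print(f"inf Omega_2 ~ {j0(s).min():.3f}")            # ~ -0.403, near s ~ 3.83
print(f"inf Omega_3 ~ {(np.sin(s) / s).min():.3f}")  # ~ -0.218, near s ~ 4.49
```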
In the following sections, we aim to investigate parametric covariance models that interlace anisotropy and hole effect. Note that some elementary constructions can be developed:
* Suppose that \(\varphi\in\Phi_{d}\) has a hole effect, then \(C(\mathbf{h})=\varphi\left(\sqrt{\mathbf{h}^{\top}\mathbf{A}\mathbf{h}}\right)\) is a valid stationary covariance function, for any positive semidefinite matrix \(\mathbf{A}\). This is one of the most utilized strategies to introduce anisotropy from an initial isotropic model, known as geometric (if \(|\mathbf{A}|>0\), with \(|\cdot|\) denoting the determinant of a square matrix) or zonal (if \(|\mathbf{A}|=0\)) anisotropy. Thus, hole effects and geometric/zonal anisotropies can coexist in a single family. However, this construction is overly rigid because the hole effect is constrained to occur in (almost) all directions with the same sharpness; of course, depending on the direction, the hole effect is attained at different ranges.
* Constructions of the form \(C(\mathbf{h})=\varphi_{1}(\|\mathbf{h}\|)\varphi_{2}(|h_{i}|)\), with \(\varphi_{1}\in\Phi_{d}\), \(\varphi_{2}\in\Phi_{1}\) and \(h_{i}\) being the \(i\)th element of \(\mathbf{h}\), can exhibit hole effects in directions that are close to the \(i\)-th axis, provided that \(\varphi_{2}\) has a hole effect, see for instance Le Blevec et al. (2018). This approach also produces a pattern that is quite rigid, where the interval of negative values in all directions exhibiting a hole effect (primarily, in orientations approximately parallel to the \(i\)-th axis) has a similar length regardless of the direction considered.
Figure 2.1 displays examples of these basic constructions, where the aforementioned structures can be visualized. This manuscript investigates other constructions that allow for complex combinations of these features.
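As a minimal illustration (our own code, not the authors'), the two elementary constructions above translate directly into a few lines, here with the cardinal sine model \(\sin(t)/t\) playing the role of the hole-effect ingredient:

```python
# Sketch of the two elementary anisotropy + hole-effect constructions.
import numpy as np

def wave(t):
    """Cardinal sine covariance sin(t)/t, with value 1 at t = 0."""
    return np.sinc(np.asarray(t, dtype=float) / np.pi)

def geometric_anisotropy(h, A, phi):
    """C(h) = phi(sqrt(h^T A h)) for a positive semidefinite matrix A."""
    h = np.asarray(h, dtype=float)
    return phi(np.sqrt(h @ A @ h))

def product_model(h, phi1, phi2, i=1):
    """C(h) = phi1(||h||) * phi2(|h_i|): hole effect near the i-th axis."""
    h = np.asarray(h, dtype=float)
    return phi1(np.linalg.norm(h)) * phi2(abs(h[i]))

A = np.array([[1.0, 0.3], [0.3, 2.0]])  # an arbitrary anisotropy matrix
h = np.array([3.0, 1.0])
print(geometric_anisotropy(h, A, wave))
print(product_model(h, lambda t: np.exp(-t), wave))
```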
## 3 General Results
### Difference Between Geometrically Anisotropic Models
In this section, we will examine the conditions under which the difference between two geometrically anisotropic covariance functions remains positive semidefinite.
**Proposition 3.1**.: Let \(\varphi\) be a member of the class \(\Phi_{d}\) possessing a \(d\)-radial spectral density \(f_{d}\).
Consider scalars \(b_{1},b_{2}\geq 0\) and symmetric positive definite matrices \({\bf A}_{1}\) and \({\bf A}_{2}\). Thus,
\[{\cal T}^{(1)}_{{\bf A}_{1},{\bf A}_{2},b_{1},b_{2}}[\varphi]({\boldsymbol{h}})=b _{1}\,\varphi\left(\sqrt{{\boldsymbol{h}}^{\top}{\bf A}_{1}{\boldsymbol{h}}} \right)-b_{2}\,\varphi\left(\sqrt{{\boldsymbol{h}}^{\top}{\bf A}_{2}{ \boldsymbol{h}}}\right),\qquad{\boldsymbol{h}}\in\mathbb{R}^{d}, \tag{3.1}\]
is a stationary covariance function in \(\mathbb{R}^{d}\) if and only if
\[b_{1}\geq b_{2}\,\frac{|{\bf A}_{1}|^{1/2}}{|{\bf A}_{2}|^{1/2}}\sup_{{ \boldsymbol{\omega}}\in\mathbb{R}^{d}}\frac{f_{d}\left(\sqrt{{\boldsymbol{ \omega}}^{\top}{\bf A}_{2}^{-1}{\boldsymbol{\omega}}}\right)}{f_{d}\left(\sqrt {{\boldsymbol{\omega}}^{\top}{\bf A}_{1}^{-1}{\boldsymbol{\omega}}}\right)}. \tag{3.2}\]
**Proof 3.1**.: Based on Bochner's theorem, one must show that the inverse Fourier transform of (3.1), which is positively proportional to
\[b_{1}\int_{\mathbb{R}^{d}}\exp\left(-\imath{\boldsymbol{\omega}}^{\top}{ \boldsymbol{h}}\right)\varphi\left(\sqrt{{\boldsymbol{h}}^{\top}{\bf A}_{1}{ \boldsymbol{h}}}\right)\mathrm{d}{\boldsymbol{h}}-b_{2}\int_{\mathbb{R}^{d}} \exp\left(-\imath{\boldsymbol{\omega}}^{\top}{\boldsymbol{h}}\right)\varphi \left(\sqrt{{\boldsymbol{h}}^{\top}{\bf A}_{2}{\boldsymbol{h}}}\right) \mathrm{d}{\boldsymbol{h}}, \tag{3.3}\]
is nonnegative for every \({\boldsymbol{\omega}}\in\mathbb{R}^{d}\). A change of variable allows writing (3.3) in the following format
\[\frac{b_{1}}{|{\bf A}_{1}|^{1/2}}\int_{\mathbb{R}^{d}}\exp\left(-\imath\left[ {\bf A}_{1}^{-1/2}{\boldsymbol{\omega}}\right]^{\top}{\boldsymbol{v}}\right) \varphi\left(\sqrt{{\boldsymbol{v}}^{\top}{\boldsymbol{v}}}\right)\mathrm{d}{ \boldsymbol{v}}-\frac{b_{2}}{|{\bf A}_{2}|^{1/2}}\int_{\mathbb{R}^{d}}\exp \left(-\imath\left[{\bf A}_{2}^{-1/2}{\boldsymbol{\omega}}\right]^{\top}{ \boldsymbol{v}}\right)\varphi\left(\sqrt{{\boldsymbol{v}}^{\top}{\boldsymbol {v}}}\right)\mathrm{d}{\boldsymbol{v}}.\]
Thus, up to a positive factor, (3.3) can be written as
\[\frac{b_{1}}{|{\bf A}_{1}|^{1/2}}f_{d}\left(\sqrt{{\boldsymbol{\omega}}^{\top }{\bf A}_{1}^{-1}{\boldsymbol{\omega}}}\right)-\frac{b_{2}}{|{\bf A}_{2}|^{1/ 2}}f_{d}\left(\sqrt{{\boldsymbol{\omega}}^{\top}{\bf A}_{2}^{-1}{\boldsymbol {\omega}}}\right). \tag{3.4}\]
The proof is completed by noting that (3.4) is nonnegative, for all \({\boldsymbol{\omega}}\in\mathbb{R}^{d}\), if and only if (3.2) holds.
The term with a negative sign in (3.1) is the one that induces the hole effect, so matrix \({\bf A}_{2}\) is essential to characterize the predominant directions of the hole effect.
When the spectral density is radial and nonincreasing, the previous proposition can be simplified. Before stating the next result, we introduce the notation \({\bf A}_{1}\succeq{\bf A}_{2}\), which indicates that \({\bf A}_{1}-{\bf A}_{2}\) is a positive semidefinite matrix.
**Corollary 3.1**.: Let \(\varphi\) be a member of the class \(\Phi_{d}\) having a nonincreasing \(d\)-radial spectral density \(f_{d}\). Let \({\bf A}_{1}\) and \({\bf A}_{2}\) be positive definite matrices such that \({\bf A}_{1}\succeq{\bf A}_{2}\), and \(b_{1},b_{2}\geq 0\). Thus, (3.1) is a stationary covariance function in \(\mathbb{R}^{d}\) if and only if
\[b_{1}\geq b_{2}\,\frac{|{\bf A}_{1}|^{1/2}}{|{\bf A}_{2}|^{1/2}}. \tag{3.5}\]
**Proof 3.2**.: Condition \({\bf A}_{1}\succeq{\bf A}_{2}\) is equivalent to \({\bf A}_{2}^{-1}\succeq{\bf A}_{1}^{-1}\). Thus, \({\boldsymbol{\omega}}^{\top}{\bf A}_{2}^{-1}{\boldsymbol{\omega}}\geq{ \boldsymbol{\omega}}^{\top}{\bf A}_{1}^{-1}{\boldsymbol{\omega}}\) for all \({\boldsymbol{\omega}}\in\mathbb{R}^{d}\). Since \(f_{d}\) is nonincreasing,
\[f_{d}\left(\sqrt{{\boldsymbol{\omega}}^{\top}{\bf A}_{2}^{-1}{\boldsymbol{ \omega}}}\right)\leq f_{d}\left(\sqrt{{\boldsymbol{\omega}}^{\top}{\bf A}_{1}^{ -1}{\boldsymbol{\omega}}}\right),\qquad{\boldsymbol{\omega}}\in\mathbb{R}^{d}.\]
Consequently, the supremum on the right-hand side of (3.2) equals one, attained at \(\boldsymbol{\omega}=\boldsymbol{0}\).
**Remark 3.1**.: A sufficient condition for the \(d\)-radial spectral density \(f_{d}\) to be nonincreasing is that \(\varphi\) belongs to \(\Phi_{d+2}\) and possesses a \((d+2)\)-radial spectral density \(f_{d+2}\). Indeed, in such a case, \(\varphi\) is the Hankel transform of order \((d-2)/2\) of \(f_{d}\), as per (2.4), and also the Hankel transform of order \(d/2\) of \(f_{d+2}\). This entails that \(f_{d}\) is the montée of order \(2\) of \(f_{d+2}\) (Matheron, 1965, formula I.4.8):
\[f_{d}(u)=2\pi\int_{u}^{\infty}vf_{d+2}(v)\mathrm{d}v,\quad u\geq 0, \tag{3.6}\]
which is a nonincreasing function of \(u\) insofar as \(f_{d+2}\) is nonnegative.
The conditions in the previous corollary can be stated in terms of the eigenvalues of \(\mathbf{A}_{1}\) and \(\mathbf{A}_{2}\). Let us denote by \(\lambda_{j}(\mathbf{A}_{i})\), \(\lambda_{\min}(\mathbf{A}_{i})\) and \(\lambda_{\max}(\mathbf{A}_{i})\), the \(j\)-th, minimum and maximum eigenvalues of matrix \(\mathbf{A}_{i}\), respectively, for \(i=1,2\) and \(j=1,\ldots,d\).
**Corollary 3.2**.: Let \(\varphi\) be a member of the class \(\Phi_{d}\) having a nonincreasing \(d\)-radial spectral density. Let \(\mathbf{A}_{1}\) and \(\mathbf{A}_{2}\) be positive definite matrices such that \(\lambda_{\min}(\mathbf{A}_{1})\geq\lambda_{\max}(\mathbf{A}_{2})\), and \(b_{1},b_{2}\geq 0\). Thus, (3.1) is a stationary covariance function in \(\mathbb{R}^{d}\) if and only if
\[b_{1}\geq b_{2}\left(\prod_{j=1}^{d}\frac{\lambda_{j}(\mathbf{A}_{1})}{ \lambda_{j}(\mathbf{A}_{2})}\right)^{1/2}. \tag{3.7}\]
**Remark 3.2**.: When \(\mathbf{A}_{i}=a_{i}\mathbf{I}_{d}\), for \(i=1,2\), with \(a_{1}\geq a_{2}\) and \(\mathbf{I}_{d}\) being the \(d\times d\) identity matrix, (3.1) reduces to the isotropic model
\[h\mapsto b_{1}\,\varphi(\sqrt{a_{1}}h)-b_{2}\,\varphi(\sqrt{a_{2}}h), \tag{3.8}\]
with \(h=\|\boldsymbol{h}\|\geq 0\), and the respective validity condition (3.7) simplifies into
\[b_{1}\geq b_{2}\left(\frac{a_{1}}{a_{2}}\right)^{d/2}. \tag{3.9}\]
Our results align with prior literature concerning this topic in the purely isotropic case. Specifically, we recover Theorem 1(ii) in Ma (2005), and generalize Theorem 3.1 in Faouzi et al. (2020) and Corollaries 3-12 in Posa (2023). The results of this section can therefore be seen as an anisotropic extension of previous literature related to the difference between isotropic covariance models (or nested models) and the so-called Zastavnyi operators.
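For concreteness, the validity condition of Corollary 3.1 can be checked mechanically. The sketch below (ours; the matrices anticipate scenario I of Section 4) tests positive semidefiniteness of \(\mathbf{A}_{1}-\mathbf{A}_{2}\) and the determinant inequality (3.5).

```python
# Validity check for the difference model T^(1) under Corollary 3.1.
import numpy as np

def is_valid_difference(A1, A2, b1, b2, tol=1e-9):
    # A1 - A2 must be positive semidefinite
    if np.linalg.eigvalsh(A1 - A2).min() < -tol:
        return False
    # condition (3.5): b1 >= b2 |A1|^{1/2} / |A2|^{1/2}
    return b1 + tol >= b2 * np.sqrt(np.linalg.det(A1) / np.linalg.det(A2))

A1 = np.eye(2)
P = np.array([[np.cos(np.pi / 4), -np.sin(np.pi / 4)],
              [np.sin(np.pi / 4),  np.cos(np.pi / 4)]])
A2 = P @ np.diag([0.2, 0.8]) @ P.T
print(is_valid_difference(A1, A2, b1=2.5, b2=1.0))  # True: 2.5 = sqrt(1/0.16)
```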
### Construction Based on Shifted Isotropic Models
We propose here an alternative approach for constructing anisotropic covariance functions that exhibit negative values in specific orientations. We start with an isotropic model of the form (3.8); it is therefore crucial to satisfy both condition (3.9) and the requirement that \(\varphi\) has a nonincreasing \(d\)-radial spectral density, so that we start from an admissible covariance model. Then, we incorporate a shift in a given direction to produce an anisotropic structure.
**Proposition 3.2**.: Let \(\varphi\in\Phi_{d}\) possess a nonincreasing \(d\)-radial spectral density and consider constants \(a_{1},a_{2}>0\) and \(b_{1},b_{2}\geq 0\) such that (3.9) holds. Thus, for all \(\boldsymbol{\eta}\in\mathbb{R}^{d}\), the mapping
\[\mathcal{T}^{(2)}_{a_{1},a_{2},b_{1},b_{2},\boldsymbol{\eta}}[\varphi]( \boldsymbol{h})=b_{1}\,\varphi\,(\sqrt{a_{1}}\|\boldsymbol{h}\|)-\frac{b_{2}}{ 2}\big{[}\varphi(\sqrt{a_{2}}\|\boldsymbol{h}-\boldsymbol{\eta}\|)+\varphi( \sqrt{a_{2}}\|\boldsymbol{h}+\boldsymbol{\eta}\|)\big{]},\qquad\boldsymbol{h} \in\mathbb{R}^{d}, \tag{3.10}\]
is a stationary covariance function in \(\mathbb{R}^{d}\).
**Proof 3.3**.: Let \(f_{a_{i},d}\) denote the \(d\)-radial spectral density of \(\varphi(\sqrt{a_{i}}h)\), for \(i=1,2\). Note that
\[\frac{1}{(2\pi)^{d}}\int_{\mathbb{R}^{d}}\exp\left(-\imath\mathbf{ \omega}^{\top}\mathbf{h}\right)\varphi\left(\sqrt{a_{2}}\|\mathbf{h}-\mathbf{\eta}\|\right) \mathrm{d}\mathbf{h} =\frac{1}{(2\pi)^{d}}\int_{\mathbb{R}^{d}}\exp\left(-\imath\mathbf{ \omega}^{\top}\left[\mathbf{v}+\mathbf{\eta}\right]\right)\varphi\left(\sqrt{a_{2}}\| \mathbf{v}\|\right)\mathrm{d}\mathbf{v}\] \[=\exp\left(-\imath\mathbf{\omega}^{\top}\mathbf{\eta}\right)f_{a_{2},d}( \omega),\]
for all \(\mathbf{\omega}\in\mathbb{R}^{d}\), with \(\omega=\|\mathbf{\omega}\|\). Similarly,
\[\frac{1}{(2\pi)^{d}}\int_{\mathbb{R}^{d}}\exp\left(-\imath\mathbf{\omega}^{\top} \mathbf{h}\right)\varphi\left(\sqrt{a_{2}}\|\mathbf{h}+\mathbf{\eta}\|\right)\mathrm{d}\bm {h}=\exp\left(\imath\mathbf{\omega}^{\top}\mathbf{\eta}\right)f_{a_{2},d}(\omega).\]
Thus, the inverse Fourier transform of (3.10) can be written as
\[b_{1}f_{a_{1},d}(\omega)-\frac{b_{2}}{2}\left[\exp\left(-\imath\mathbf{\omega}^{ \top}\mathbf{\eta}\right)f_{a_{2},d}(\omega)+\exp\left(\imath\mathbf{\omega}^{\top} \mathbf{\eta}\right)f_{a_{2},d}(\omega)\right]=b_{1}f_{a_{1},d}(\omega)-b_{2}\cos \left(\mathbf{\omega}^{\top}\mathbf{\eta}\right)f_{a_{2},d}(\omega) \tag{3.11}\]
for all \(\mathbf{\omega}\in\mathbb{R}^{d}\). The right-hand side of (3.11) is lower-bounded by \(b_{1}f_{a_{1},d}(\omega)-b_{2}f_{a_{2},d}(\omega)\), where the latter expression corresponds to the \(d\)-radial spectral density of (3.8). This quantity is non-negative because condition (3.9) is satisfied, i.e., (3.8) is positive semidefinite. The proof is completed by invoking Bochner's theorem.
The interest of the above proposition lies in the fact that all the isotropic constructions of the form (3.8) can be adapted according to (3.10) to produce anisotropic models. When the separation vector \(\mathbf{h}\) is close to \(\pm\mathbf{\eta}\), the negative part of (3.10) becomes predominant; thus, the hole effect is more significant in that direction.
There are two limit cases of (3.10) worth noting. On the one hand, as the magnitude of \(\mathbf{\eta}\) approaches infinity, (3.10) tends to \(b_{1}\,\varphi(\sqrt{a_{1}}h)\) (a rescaled version of the initial covariance model). On the other hand, when the magnitude of \(\mathbf{\eta}\) approaches zero, the nested model (3.8) is recovered. Thus, this construction can encompass purely isotropic models, both with and without hole effect, as special cases.
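A minimal sketch of the shifted construction (3.10) follows (our own illustration), with the Cauchy model of Section 4 used as a plug-in \(\varphi\); the parameter values mirror scenario II below and satisfy condition (3.9) in \(d=2\).

```python
# Sketch of the shifted model T^(2) of Proposition 3.2.
import numpy as np

def cauchy(t, delta=1.0):
    return (1.0 + t**2) ** (-delta)

def shifted_model(h, eta, a1, a2, b1, b2, phi=cauchy):
    """T^(2)[phi](h); valid when b1 >= b2 (a1/a2)^(d/2), condition (3.9)."""
    h, eta = np.asarray(h, float), np.asarray(eta, float)
    iso = b1 * phi(np.sqrt(a1) * np.linalg.norm(h))
    shift = 0.5 * b2 * (phi(np.sqrt(a2) * np.linalg.norm(h - eta))
                        + phi(np.sqrt(a2) * np.linalg.norm(h + eta)))
    return iso - shift

# in d = 2, condition (3.9) reads b1 >= b2 (0.8/0.4)^1 = 2, met with equality
print(shifted_model([1.0, 1.0], eta=[1.0, 1.0], a1=0.8, a2=0.4, b1=2.0, b2=1.0))
```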
### Models with Derivative Information
Our focus now turns to the study of anisotropic models whose construction incorporates directional derivatives of an isotropic random field. In contrast to the previous strategies, this approach requires a covariance function that is twice differentiable at the origin as one of the initial ingredients, but it imposes no monotonicity condition on the \(d\)-radial spectral density.
**Proposition 3.3**.: Let \(\varphi_{1},\varphi_{2}\in\Phi_{d}\), with \(\varphi_{2}\) being twice differentiable at the origin, and \(\mathbf{u}\) be a unit vector in \(\mathbb{R}^{d}\). Consider constants \(a_{1},a_{2}>0\) and \(b_{1},b_{2}\geq 0\). Thus, the mapping
\[\mathcal{T}^{(3)}_{a_{1},a_{2},b_{1},b_{2},\mathbf{u}}[\varphi_{1},\varphi_{2}]( \mathbf{h})=b_{1}\varphi_{1}(\sqrt{a_{1}}\|\mathbf{h}\|)-b_{2}\left[\cos^{2}(\theta( \mathbf{h},\mathbf{u}))\varphi_{2}^{\prime\prime}(\sqrt{a_{2}}\|\mathbf{h}\|)+\sin^{2}( \theta(\mathbf{h},\mathbf{u}))\frac{\varphi_{2}^{\prime}(\sqrt{a_{2}}\|\mathbf{h}\|)}{ \sqrt{a_{2}}\|\mathbf{h}\|}\right], \tag{3.12}\]
where \(\mathbf{h}\in\mathbb{R}^{d}\), with \(\theta(\mathbf{h},\mathbf{u})\) being the angle between \(\mathbf{h}\) and \(\mathbf{u}\), is a stationary covariance function in \(\mathbb{R}^{d}\).
**Proof 3.4**.: We provide a constructive proof. Let us consider two independent zero-mean random fields on \(\mathbb{R}^{d}\), denoted as \(Y_{1}\) and \(Y_{2}\), which possess covariance functions \(\varphi_{1}\) and \(\varphi_{2}\) in \(\Phi_{d}\), respectively. Equation (5.29) in Chiles and Delfiner (2012) establishes that
\[\mathrm{cov}\left[\frac{\partial Y_{2}}{\partial\mathbf{u}}(\mathbf{x}),\frac{\partial Y_{2}}{\partial\mathbf{v}}(\mathbf{x}+\mathbf{h})\right]=-\frac{\left(\mathbf{h}^{\top}\mathbf{u}\right)\left(\mathbf{h}^{\top}\mathbf{v}\right)}{\|\mathbf{h}\|^{2}}\left[\varphi_{2}^{\prime\prime}(\|\mathbf{h}\|)-\frac{\varphi_{2}^{\prime}(\|\mathbf{h}\|)}{\|\mathbf{h}\|}\right]-\left(\mathbf{u}^{\top}\mathbf{v}\right)\frac{\varphi_{2}^{\prime}(\|\mathbf{h}\|)}{\|\mathbf{h}\|},\]
for all \(\mathbf{x},\mathbf{h}\in\mathbb{R}^{d}\) and any pair of unit vectors \(\mathbf{u}\) and \(\mathbf{v}\) in \(\mathbb{R}^{d}\), provided that \(\varphi_{2}\) is twice differentiable at the origin. Thus, a direct calculation shows that the covariance function of the random field \(\left\{(\partial Y_{2}/\partial\mathbf{u})(\mathbf{x}):\mathbf{x}\in\mathbb{R}^{d}\right\}\) is given by
\[\mathbf{h}\mapsto-\cos^{2}(\theta(\mathbf{h},\mathbf{u}))\varphi_{2}^{\prime\prime}(\|\bm {h}\|)-\sin^{2}(\theta(\mathbf{h},\mathbf{u}))\frac{\varphi_{2}^{\prime}(\|\mathbf{h}\|)} {\|\mathbf{h}\|}.\]
Based on previous calculations, one concludes that a random field defined according to
\[Z(\mathbf{x})=\sqrt{b_{1}}Y_{1}(\sqrt{a_{1}}\mathbf{x})+\sqrt{\frac{b_{2}}{a_{2}}} \frac{\partial Y_{2}}{\partial\mathbf{u}}(\sqrt{a_{2}}\mathbf{x}),\qquad\mathbf{x}\in \mathbb{R}^{d}, \tag{3.13}\]
has a covariance function given by (3.12), indicating that (3.12) is positive semidefinite.
The rationale behind this approach is that the changes in sign of the directional derivative in (3.13) can accentuate the transitions between large and small values of the random field \(Z\) in a given direction; thus, marked hole effects in the orientation determined by \(\mathbf{u}\) are expected. If \(\mathbf{h}\) is approximately proportional to \(\mathbf{u}\), the second-order derivative of \(\varphi_{2}\) gains greater significance in (3.12). Conversely, if \(\mathbf{h}\) is approximately orthogonal to \(\mathbf{u}\), the term involving the first-order derivative becomes more dominant.
The parameters involved in this formulation do not require any elaborate restriction, as the positive semidefiniteness is inherently ensured by construction. A special case of (3.12) arises when setting \(b_{1}=0\), where the dominant component of the covariance structure is the term within brackets, representing the covariance function of the directional derivative of a certain random field.
When the covariance functions of \(Y_{1}\) and \(Y_{2}\) are equal and given by \(\varphi_{1}=\varphi_{2}:=\varphi\), where \(\varphi\) is a function in \(\Phi_{d}\) that is twice differentiable at the origin, we can conveniently denote the expression (3.12) as \(\mathcal{T}^{(3)}_{a_{1},a_{2},b_{1},b_{2},\mathbf{u}}[\varphi]\).
**Remark 3.3**.: It is noteworthy that, in Proposition 3.3, one can substitute \(\varphi_{1}\) with a stationary covariance model, which need not be isotropic. The validity of this alternative model is guaranteed by following the same proof as before. This slight variation offers enhanced flexibility in spatial data modeling.
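As an illustration of Proposition 3.3 (a sketch of ours, not the authors' code), the Cauchy model admits closed-form first and second derivatives, so \(\mathcal{T}^{(3)}\) with \(\varphi_{1}=\varphi_{2}=\mathcal{C}_{\delta}\) can be evaluated directly; all parameter values below are illustrative only.

```python
# Sketch of the derivative-based model T^(3) of (3.12) with a Cauchy phi.
import numpy as np

def cauchy(t, delta=1.0):
    return (1.0 + t**2) ** (-delta)

def cauchy_d1_over_t(t, delta=1.0):
    # phi'(t)/t = -2 delta (1 + t^2)^(-delta - 1), finite at t = 0
    return -2.0 * delta * (1.0 + t**2) ** (-delta - 1)

def cauchy_d2(t, delta=1.0):
    # phi''(t) = -2 delta (1+t^2)^(-delta-1) + 4 delta (delta+1) t^2 (1+t^2)^(-delta-2)
    return (-2.0 * delta * (1.0 + t**2) ** (-delta - 1)
            + 4.0 * delta * (delta + 1.0) * t**2 * (1.0 + t**2) ** (-delta - 2))

def derivative_model(h, u, a1, a2, b1, b2):
    h, u = np.asarray(h, float), np.asarray(u, float)
    r = np.linalg.norm(h)
    cos2 = 1.0 if r == 0.0 else (h @ u) ** 2 / r**2  # cos^2(theta(h, u))
    t = np.sqrt(a2) * r
    return (b1 * cauchy(np.sqrt(a1) * r)
            - b2 * (cos2 * cauchy_d2(t) + (1.0 - cos2) * cauchy_d1_over_t(t)))

u = np.array([1.0, 1.0]) / np.sqrt(2.0)
print(derivative_model([2.0, 2.0], u, a1=1.0, a2=0.5, b1=1.0, b2=2.0))
```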
## 4 Explicit Parametric Families
### Matern, Cauchy and Compactly-Supported Hypergeometric Models
To provide concrete models derived from the findings presented in the previous section, we will now introduce three commonly used parametric families of covariance functions: the Matern, Cauchy and Gauss hypergeometric families.
1. The Matern family of covariance functions is given by (Stein, 1999) \[\mathcal{M}_{\nu}(t)=\frac{2^{1-\nu}}{\Gamma(\nu)}t^{\nu}\mathcal{K}_{\nu}(t ),\qquad t\geq 0,\] (4.1) where \(\mathcal{K}_{\nu}\) is the modified Bessel function of the second kind, with \(\nu>0\) being a shape parameter (Olver et al., 2010). The \(d\)-radial spectral density associated with this model, viewed as a function of \(\omega=\|\mathbf{\omega}\|\), is given by \[f_{d}^{\mathcal{M}}(\omega)=\frac{\Gamma(\nu+d/2)}{\Gamma(\nu)\pi^{d/2}}\frac{ 1}{(1+\omega^{2})^{\nu+d/2}},\qquad\omega\geq 0.\]
2. The Cauchy family of covariance functions is given by (see, e.g., Chiles and Delfiner, 2012) \[\mathcal{C}_{\delta}(t)=(t^{2}+1)^{-\delta},\qquad t\geq 0,\] (4.2) with \(\delta>0\) being a shape parameter. When \(\delta>(d-1)/4\), its \(d\)-radial spectral density adopts the explicit form (Lim and Teo, 2009) \[f_{d}^{\mathcal{C}}(\omega)=\frac{2^{1-d/2-\delta}}{\Gamma(\delta)\pi^{d/2}} \frac{\mathcal{K}_{d/2-\delta}(\omega)}{\omega^{d/2-\delta}},\qquad\omega\geq 0.\]
3. The Gauss hypergeometric family of covariance functions is given by (Emery and Alegria, 2022) \[\mathcal{H}_{\alpha,\beta,\gamma}(t)=(1-t^{2})_{+}^{\beta-\alpha+\gamma-d/2- 1}{}_{2}F_{1}(\beta-\alpha,\gamma-\alpha;\beta-\alpha+\gamma-d/2;(1-t^{2})_{+} ),\qquad t\geq 0,\] (4.3) with \({}_{2}F_{1}\) denoting the Gauss hypergeometric function (Olver et al., 2010), \((\cdot)_{+}\) denoting the positive part and \(\alpha,\beta,\gamma\) being shape parameters such that \(2\alpha>d\), \(2(\beta-\alpha)(\gamma-\alpha)\geq\alpha\) and \(2(\beta+\gamma)\geq 6\alpha+1\). Its \(d\)-radial spectral density is \[f_{d}^{\mathcal{H}}(\omega)=\kappa(\alpha;\beta,\gamma)_{1}F_{2}(\alpha;\beta,\gamma;-\omega^{2}/2),\qquad\omega\geq 0,\] with \(\kappa(\alpha;\beta,\gamma)\) a positive factor and \({}_{1}F_{2}\) a generalized hypergeometric function (Olver et al., 2010). This model encompasses the Euclid's hat (spherical), cubic, generalized Wendland and Askey covariances as particular cases.
Both \(\mathcal{M}_{\nu}\) and \(\mathcal{C}_{\delta}\) belong to the class \(\Phi_{d}\), for all \(d\geq 1\), and both \(f_{d}^{\mathcal{M}}\) and \(f_{d}^{\mathcal{C}}\) are decreasing functions. As for \(\mathcal{H}_{\alpha,\beta,\gamma}\), it belongs to \(\Phi_{d+2}\) if \(2\alpha>d+2\), \(2(\beta-\alpha)(\gamma-\alpha)\geq\alpha\) and \(2(\beta+\gamma)\geq 6\alpha+1\), in which case \(f_{d}^{\mathcal{H}}\) is a nonincreasing function (recall Remark 3.1). Thus, these three models are in the range of applicability of Propositions 3.1 and 3.2.
While the Cauchy model is infinitely differentiable at the origin (Chiles and Delfiner, 2012), and so is the Gauss hypergeometric model if \(2\alpha>d+2\) (Emery and Alegria, 2022), the Matern model is twice differentiable at the origin if and only if \(\nu>1\) (Stein, 1999) and, in this case, Proposition 3.3 can be applied.
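For readers who wish to experiment, the Matérn and Cauchy covariances and their \(d\)-radial spectral densities translate directly into code. The sketch below (ours) relies on SciPy's modified Bessel function of the second kind.

```python
# Matern (4.1) and Cauchy (4.2) covariances with their radial spectral densities.
import numpy as np
from scipy.special import kv, gamma

def matern(t, nu):
    t = np.atleast_1d(np.asarray(t, dtype=float))
    out = np.ones_like(t)                          # M_nu(0) = 1 by continuity
    pos = t > 0
    out[pos] = (2.0**(1 - nu) / gamma(nu)) * t[pos]**nu * kv(nu, t[pos])
    return out

def matern_spectral(w, nu, d):
    return gamma(nu + d / 2) / (gamma(nu) * np.pi**(d / 2)) * (1 + w**2)**(-(nu + d / 2))

def cauchy(t, delta):
    return (1.0 + np.asarray(t, dtype=float)**2) ** (-delta)

def cauchy_spectral(w, delta, d):
    # valid for delta > (d - 1)/4 and w > 0
    w = np.asarray(w, dtype=float)
    return (2.0**(1 - d / 2 - delta) / (gamma(delta) * np.pi**(d / 2))
            * kv(d / 2 - delta, w) / w**(d / 2 - delta))

print(matern(np.array([0.0, 1.0, 2.0]), nu=1.5))
print(cauchy_spectral(1.0, delta=1.0, d=2))
```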
In summary, we have the following corollaries.
**Corollary 4.1**.: Consider two positive definite matrices \(\mathbf{A}_{1}\) and \(\mathbf{A}_{2}\) such that \(\mathbf{A}_{1}\succeq\mathbf{A}_{2}\), and scalars \(b_{1},b_{2}\geq 0\). Thus, \(\mathcal{T}^{(1)}_{\mathbf{A}_{1},\mathbf{A}_{2},b_{1},b_{2}}[\mathcal{M}_{\nu}]\), \(\mathcal{T}^{(1)}_{\mathbf{A}_{1},\mathbf{A}_{2},b_{1},b_{2}}[\mathcal{C}_{ \delta}]\) and \(\mathcal{T}^{(1)}_{\mathbf{A}_{1},\mathbf{A}_{2},b_{1},b_{2}}[\mathcal{H}_{ \alpha,\beta,\gamma}]\), with \(\nu>0\), \(\delta>(d-1)/4\), \(2\alpha>d+2\), \(2(\beta-\alpha)(\gamma-\alpha)\geq\alpha\) and \(2(\beta+\gamma)\geq 6\alpha+1\), are stationary covariance functions in \(\mathbb{R}^{d}\) if and only if condition (3.5) holds.
**Corollary 4.2**.: Let \(a_{1},a_{2}>0\) and \(b_{1},b_{2}\geq 0\) be constants satisfying condition (3.9) and \(\boldsymbol{\eta}\in\mathbb{R}^{d}\). Thus, \(\mathcal{T}^{(2)}_{a_{1},a_{2},b_{1},b_{2},\boldsymbol{\eta}}[\mathcal{M}_{\nu}]\), \(\mathcal{T}^{(2)}_{a_{1},a_{2},b_{1},b_{2},\boldsymbol{\eta}}[\mathcal{C}_{ \delta}]\) and \(\mathcal{T}^{(2)}_{a_{1},a_{2},b_{1},b_{2},\boldsymbol{\eta}}[\mathcal{H}_{ \alpha,\beta,\gamma}]\), with \(\nu>0\), \(\delta>(d-1)/4\), \(2\alpha>d+2\), \(2(\beta-\alpha)(\gamma-\alpha)\geq\alpha\) and \(2(\beta+\gamma)\geq 6\alpha+1\), are stationary covariance functions in \(\mathbb{R}^{d}\).
**Corollary 4.3**.: Consider constants \(a_{1},a_{2}>0\) and \(b_{1},b_{2}\geq 0\), and a unit vector \(\boldsymbol{u}\in\mathbb{R}^{d}\). Thus, \(\mathcal{T}^{(3)}_{a_{1},a_{2},b_{1},b_{2},\boldsymbol{u}}[\mathcal{M}_{\nu}]\) with \(\nu>1\), \(\mathcal{T}^{(3)}_{a_{1},a_{2},b_{1},b_{2},\boldsymbol{u}}[\mathcal{C}_{ \delta}]\) and \(\mathcal{T}^{(3)}_{a_{1},a_{2},b_{1},b_{2},\boldsymbol{u}}[\mathcal{H}_{ \alpha,\beta,\gamma}]\) with \(2\alpha>d+2\), \(2(\beta-\alpha)(\gamma-\alpha)\geq\alpha\) and \(2(\beta+\gamma)\geq 6\alpha+1\), are stationary covariance functions in \(\mathbb{R}^{d}\).
In order to exhibit the versatility of the proposed models, we provide visual illustrations in dimension \(d=2\). These illustrations show the various shapes that can be achieved. We consider the following scenarios:
1. The models in Corollary 4.1, with \(\mathbf{A}_{1}=\mathbf{I}_{2}\) and \(\mathbf{A}_{2}=\mathbf{P}\operatorname{diag}(\mu_{1},\mu_{2})\,\mathbf{P}^{\top}\), with \(\mu_{1},\mu_{2}>0\) and \[\mathbf{P}=\begin{bmatrix}\cos(\pi/4)&-\sin(\pi/4)\\ \sin(\pi/4)&\cos(\pi/4)\end{bmatrix}\] being a rotation matrix. The conditions of Corollary 4.1 are satisfied if and only if \(\max(\mu_{1},\mu_{2})\leq 1\) and \(b_{1}\sqrt{\mu_{1}\mu_{2}}\geq b_{2}\). Thus, we fix \(b_{1}=2.5\), \(b_{2}=1\), \(\mu_{1}=0.2\) and \(\mu_{2}=0.8\).
2. The models in Corollary 4.2, with \(b_{1}=2\), \(b_{2}=1\), \(a_{1}=0.8\) and \(a_{2}=0.4\), with a shift vector given by \(\boldsymbol{\eta}=[1,1]^{\top}\).
3. The models in Corollary 4.3, with \(b_{1}=1\), \(b_{2}=2\), \(a_{1}=1\) and \(a_{2}=0.5\), and the unit vector \(\boldsymbol{u}=[1/\sqrt{2},1/\sqrt{2}]^{\top}\).
Figure 4.1 shows the contour plots of the Matern model with \(\nu=1.5\), the Cauchy model with \(\delta=1\) and the Gauss hypergeometric model with \(\alpha=3,\beta=7/2\) and \(\gamma=6\), after the application of the transformations described in Corollaries 4.1-4.3 under scenarios **I-III**, respectively, together with a normalization in order to obtain correlation functions. To improve the visualization of each individual model, we have chosen specific ranges for plotting. We consider \(\boldsymbol{h}=[h_{1},h_{2}]^{\top}\in[-10,10]^{2}\) for the first two models, and \(\boldsymbol{h}=[h_{1},h_{2}]^{\top}\in[-2,2]^{2}\) for the last model. All the covariance functions have been designed to present a hole effect around the northeast direction.
### Cardinal Sine Model
Our focus now turns to the cardinal sine (or wave) covariance function, defined through
\[\mathcal{W}(t)=\frac{\sin(t)}{t},\qquad t>0, \tag{4.4}\]
and \(\mathcal{W}(0)=1\). This model is a member of \(\Phi_{d}\), for \(d\leq 3\). When \(d=3\), this model does not possess a spectral density. However, for \(d\leq 2\), one has (Arroyo and Emery, 2021)
\[f_{d}^{\mathcal{W}}(\omega)=\frac{1}{2\pi^{(d-1)/2}\Gamma((3+d)/2)}(1-\omega^ {2})_{+}^{(1-d)/2},\qquad\omega\geq 0. \tag{4.5}\]
In particular, when \(d=2\) and \(0\leq\omega<1\), (4.5) is an increasing mapping. As a result, Propositions 3.1 and 3.2 are not applicable to this model. The conditions of Proposition 3.3, on the other hand, can be readily verified for \(d\leq 3\), leading to the subsequent corollary.
**Corollary 4.4**.: Let \(d\leq 3\). Consider constants \(a_{1},a_{2}>0\) and \(b_{1},b_{2}\geq 0\), and a unit vector \(\boldsymbol{u}\in\mathbb{R}^{d}\). Thus, \(\mathcal{T}^{(3)}_{a_{1},a_{2},b_{1},b_{2},\boldsymbol{u}}[\mathcal{W}]\) is a stationary covariance function in \(\mathbb{R}^{d}\).
Recall that Proposition 3.3 offers the flexibility to combine models from different parametric families. As an example, we can consider \(\mathcal{T}^{(3)}_{a_{1},a_{2},b_{1},b_{2},\boldsymbol{u}}[\mathcal{M}_{\nu}, \mathcal{W}]\), which constitutes a valid stationary covariance model for dimensions \(d\leq 3\).
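A sketch of \(\mathcal{T}^{(3)}_{a_{1},a_{2},b_{1},b_{2},\boldsymbol{u}}[\mathcal{W}]\) is given below (our own code); the derivatives of (4.4) are written in closed form and replaced by their series expansions near the origin for numerical stability. Parameter values are illustrative.

```python
# Sketch of T^(3)[W] from Corollary 4.4 (wave model with derivative terms).
import numpy as np

def W(t):
    """Cardinal sine covariance sin(t)/t."""
    return np.sinc(np.asarray(t, dtype=float) / np.pi)

def W_d1_over_t(t):
    """W'(t)/t = (t cos t - sin t)/t^3, with limit -1/3 at t = 0."""
    t = np.asarray(t, dtype=float)
    small = np.abs(t) < 1e-4
    safe = np.where(small, 1.0, t)
    val = (safe * np.cos(safe) - np.sin(safe)) / safe**3
    return np.where(small, -1.0 / 3.0 + t**2 / 30.0, val)

def W_d2(t):
    """W''(t) = -sin(t)/t - 2cos(t)/t^2 + 2sin(t)/t^3, with limit -1/3 at t = 0."""
    t = np.asarray(t, dtype=float)
    small = np.abs(t) < 1e-4
    safe = np.where(small, 1.0, t)
    val = -np.sin(safe) / safe - 2 * np.cos(safe) / safe**2 + 2 * np.sin(safe) / safe**3
    return np.where(small, -1.0 / 3.0 + t**2 / 10.0, val)

def wave_T3(h, u, a1, a2, b1, b2):
    h, u = np.asarray(h, float), np.asarray(u, float)
    r = np.linalg.norm(h)
    cos2 = 1.0 if r == 0.0 else (h @ u) ** 2 / r**2
    t = np.sqrt(a2) * r
    return b1 * W(np.sqrt(a1) * r) - b2 * (cos2 * W_d2(t) + (1.0 - cos2) * W_d1_over_t(t))

u = np.array([1.0, 1.0]) / np.sqrt(2.0)
print(wave_T3([3.0, 3.0], u, a1=1.0, a2=1.0, b1=1.0, b2=2.0))  # deep hole along u
```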
Figure 4.2 shows \(\mathcal{T}^{(3)}_{a_{1},a_{2},b_{1},b_{2},\boldsymbol{u}}[\mathcal{W}]\) and \(\mathcal{T}^{(3)}_{a_{1},a_{2},b_{1},b_{2},\boldsymbol{u}}[\mathcal{M}_{1/2},\mathcal{W}]\) in dimension \(d=2\), with parameters \(a_{1}=a_{2}=b_{1}=1\), \(b_{2}=2\) and \(\boldsymbol{u}=[1/\sqrt{2},1/\sqrt{2}]^{\top}\). While certain structural oscillations from the model (4.4) persist, the proposed models exhibit a notably amplified hole effect in the \(\boldsymbol{u}\) direction. Observe that \(\mathcal{T}^{(3)}_{a_{1},a_{2},b_{1},b_{2},\boldsymbol{u}}[\mathcal{W}]\) attains values below the lower bound that constrains isotropic models in \(\mathbb{R}^{2}\), which is possible only because the model is anisotropic.
Figure 4.1: Different combinations of anisotropies and hole effects for the transformed Matérn (top), the transformed Cauchy (middle) and the transformed Gauss hypergeometric (bottom) models. From left to right we consider the transformations introduced in Corollaries 4.1-4.3, respectively. The values of the parameters have been described in scenarios **I-III**.
## 5 Real Data Analysis
We consider a geophysical data set from a carbonate-rock aquifer located in Martin county, south Florida, and documented in Parra et al. (2006, 2009). The data set consists of a P-wave impedance vertical section obtained by inverting cross-well reflection seismic measurements, at a vertical resolution of 0.61 m (2 feet) and a horizontal resolution of 3.05 m (10 feet), totaling 17,145 data. The P-wave impedance can be used to delineate the lateral heterogeneities of the aquifer, to assess the fluid paths, and to map petrophysical properties such as the rock porosity, which is a key variable to forecast water production (Parra and Emery, 2013; Emery and Parra, 2013).
To reduce the number of data, we employ for our analysis a spatial resolution of 20 feet and 4 feet in the horizontal and vertical coordinates, respectively, which leads to a set of 4352 impedance data. Also, to remove the trend in the east coordinate and improve the description of the data by a stationary random field model, we utilize a smoothing spline approach. The estimated trend exhibits a distinct pattern, gradually transitioning from high to low values as one moves from west to east. In Figure 5.1, one can observe the original data, the trend that was fitted, the residuals, and the corresponding histogram. These residuals can be interpreted as the realization of a stationary zero-mean Gaussian random field. We randomly select and exclude 400 observations of the dataset (approximately 10% of the observations) for posterior validation purposes, while the remaining observations constitute the training set.
A significant hole effect is present in the vertical direction. This hole effect can be explained by the presence of major geological structures, corresponding to permeability barriers alternating vertically with high-porosity structures. The former are characterized by tight limestone and isolated vugs, whereas the latter are associated with interconnected matrix and vugs or with a combination of interconnected vugs surrounded by limestone (Parra et al., 2009). Cyclic behaviors in the vertical covariances or variograms of rock properties are often observed in carbonate sequences and can be explained by periodic processes of deposition due to eustatic sea level oscillations or to tectonic activities (Chiles and Delfiner, 2012; Le Blevec et al., 2020).
Taking into account this marked axial pattern, characterized by dissimilar scales along the east and depth coordinates, we consider the following models:
Figure 5.1: From top left to bottom right: original data set of impedance, fitted trend in the east direction, residuals and the corresponding histogram.
* **Model I.** A basic construction of the form \[C_{\text{basic}}(\mathbf{h};\sigma^{2},a_{1},a_{2})=\sigma^{2}\exp(-a_{1}\|\mathbf{h}\|) \frac{\sin(a_{2}|h_{2}|)}{a_{2}|h_{2}|},\] where \(\sigma^{2},a_{1}\) and \(a_{2}\) are positive parameters.
* **Model II.** We use the previous basic model as a building block and then incorporate derivative information using Proposition 3.3. The resulting model adopts the form \[C(\mathbf{h};\sigma^{2},a_{1},a_{2},a_{3})=\frac{3\sigma^{2}}{4}\left[C_{\text{basic}}(\mathbf{h};\sigma^{2},a_{1},a_{2})+C_{\text{derivative}}(\mathbf{h};a_{3})\right],\] where \[C_{\text{derivative}}(\mathbf{h};a_{3})=\cos^{2}(\theta(\mathbf{h},\mathbf{u}))\varphi^{\prime\prime}(\sqrt{a_{3}}\|\mathbf{h}\|)+\sin^{2}(\theta(\mathbf{h},\mathbf{u}))\frac{\varphi^{\prime}(\sqrt{a_{3}}\|\mathbf{h}\|)}{\sqrt{a_{3}}\|\mathbf{h}\|},\] with \(\mathbf{u}=[0,1]^{\top}\) fixed and \(\varphi\) of the form (4.4). Here, \(a_{3}>0\) is an additional scale parameter. This model is an example of the variant described in Remark 3.3. An illustrative implementation of both models is sketched below.
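The following is a minimal numerical sketch of the two models. Since the basis function \(\varphi\) of (4.4) is not reproduced here, its first and second derivatives are passed in as callables; the Gaussian-type stand-in used in the demo only makes the snippet executable and is not the \(\varphi\) of the paper. The value at \(\mathbf{h}=\mathbf{0}\) uses the limit \(\varphi^{\prime\prime}(0)\), which assumes \(\varphi^{\prime}(0)=0\).

```python
import numpy as np

def c_basic(h, sigma2, a1, a2):
    """Model I: exponential decay times a cardinal-sine hole effect in h2."""
    h = np.atleast_2d(h).astype(float)
    r = np.linalg.norm(h, axis=1)
    # np.sinc(x) = sin(pi x) / (pi x), so sin(t)/t = np.sinc(t / pi); sinc(0) = 1.
    hole = np.sinc(a2 * np.abs(h[:, 1]) / np.pi)
    return sigma2 * np.exp(-a1 * r) * hole

def c_model2(h, sigma2, a1, a2, a3, dphi, d2phi, u=np.array([0.0, 1.0])):
    """Model II, following the displayed formula literally."""
    h = np.atleast_2d(h).astype(float)
    r = np.linalg.norm(h, axis=1)
    s = np.sqrt(a3) * r
    pos = r > 0
    cos2 = np.zeros_like(r)
    cos2[pos] = ((h[pos] @ u) / r[pos]) ** 2       # cos^2 of the angle theta(h, u)
    ratio = np.full_like(r, d2phi(0.0))            # limit of dphi(s)/s, assuming dphi(0) = 0
    ratio[pos] = dphi(s[pos]) / s[pos]
    c_deriv = cos2 * d2phi(s) + (1.0 - cos2) * ratio
    c_deriv[~pos] = d2phi(0.0)                     # value at lag zero
    return (3.0 * sigma2 / 4.0) * (c_basic(h, sigma2, a1, a2) + c_deriv)

# Demo with the Table 5.1 CL estimates and a Gaussian stand-in phi(t) = exp(-t^2).
dphi = lambda t: -2.0 * np.asarray(t) * np.exp(-np.asarray(t) ** 2)
d2phi = lambda t: (4.0 * np.asarray(t) ** 2 - 2.0) * np.exp(-np.asarray(t) ** 2)
h = np.array([[0.0, 0.0], [10.0, 5.0], [0.0, 25.0]])
print(c_basic(h, 4.036e6, 1.105e-2, 0.717))
print(c_model2(h, 4.062e6, 3.299e-3, 0.526, 2.441, dphi, d2phi))
```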
For each model, we estimate the parameters through a composite likelihood (CL) method based on differences (Curriero and Lele, 1999; Varin et al., 2011). Table 5.1 shows the CL estimates together with the value of the objective function at the optimum. For comparison purposes, we also fit a modified version of Model II using an automated least squares (LS) procedure instead of the CL method. For this strategy, we set \(\sigma^{2}=4\times 10^{6}\), \(a_{1}=0.135\), \(a_{2}=0.818\) and \(a_{3}=0.067\), in order to obtain a model that matches the structural features of the directional empirical variograms. Figure 5.2 shows the fitted variogram models along three spatial orientations. By construction, the LS-based fit of Model II provides a more accurate description of the empirical variograms, but a poorer log-CL value (Table 5.1). In contrast, the CL-based fits of Models I and II do not perfectly match the empirical variograms, a situation that is commonly encountered in practice. To obtain a more comprehensive visualization of the fitted models, Figure 5.3 displays a global plot of the covariances.
To enhance our analysis, we conduct a split-sample study for model validation, using the 400 data points that were left out of the model fitting. We apply simple kriging with each model and evaluate the prediction accuracy using the root mean square error (RMSE) and the mean absolute error (MAE). Among the models tested, Model II fitted with the CL method demonstrates a clear advantage, with the RMSE and MAE reduced by 10% to 19% with respect to the other models (see Table 5.2). Figure 5.4 (left panel) presents boxplots of the absolute errors; Model II based on CL outperforms the other models in terms of prediction accuracy, as evidenced by its noticeably reduced quartiles and upper whisker. To gain insight into the dispersion of the prediction errors, Figure 5.4 (right panel) compares the actual versus predicted values in the validation study, based on Model II fitted through the CL method.
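A minimal sketch of this validation loop is given below, with synthetic stand-in data and a stand-in exponential covariance in place of the fitted models; the predictor is the usual simple-kriging form \(\mathbf{c}_{0}^{\top}\mathbf{C}^{-1}\mathbf{z}\) for zero-mean residuals.

```python
import numpy as np

rng = np.random.default_rng(1)
coords_tr = rng.uniform(0.0, 100.0, size=(200, 2))   # training locations (stand-in)
coords_val = rng.uniform(0.0, 100.0, size=(40, 2))   # validation locations (stand-in)
z_tr = rng.normal(0.0, 1.0, 200)                     # zero-mean residuals (stand-in)
z_val = rng.normal(0.0, 1.0, 40)

def cov(h):
    """Stand-in isotropic exponential covariance."""
    return np.exp(-0.05 * np.linalg.norm(np.atleast_2d(h), axis=1))

def simple_kriging(coords_tr, z_tr, coords_new, cov, nugget=1e-8):
    n = len(coords_tr)
    C = cov((coords_tr[:, None, :] - coords_tr[None, :, :]).reshape(-1, 2)).reshape(n, n)
    C = C + nugget * np.eye(n)                       # numerical stabilization
    c0 = cov((coords_new[:, None, :] - coords_tr[None, :, :]).reshape(-1, 2))
    c0 = c0.reshape(len(coords_new), n)
    return c0 @ np.linalg.solve(C, z_tr)             # c0^T C^{-1} z for all targets

pred = simple_kriging(coords_tr, z_tr, coords_val, cov)
rmse = np.sqrt(np.mean((pred - z_val) ** 2))
mae = np.mean(np.abs(pred - z_val))
print(round(float(rmse), 3), round(float(mae), 3))
```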
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline Model & \(\widehat{\sigma}^{2}\) & \(\widehat{a}_{1}\) & \(\widehat{a}_{2}\) & \(\widehat{a}_{3}\) & log-CL \\ \hline I & \(4.036\times 10^{6}\) & \(1.105\times 10^{-2}\) & 0.717 & \(-\) & \(-\)13,165,439 \\ II & \(4.062\times 10^{6}\) & \(3.299\times 10^{-3}\) & 0.526 & 2.441 & \(-\)13,165,205 \\ II (based on LS) & \(4\times 10^{6}\) & 0.135 & 0.818 & 0.067 & \(-\)13,188,178 \\ \hline \hline \end{tabular}
\end{table}
Table 5.1: Parameters and log-CL of fitted covariance models.
Figure 5.2: Empirical (black circles) and modeled (solid lines) directional variograms of impedance along directions dipping \(90^{\circ}\) (left), \(65^{\circ}\) (center) and \(40^{\circ}\) (right). Blue: Model I fitted through CL; Red: Model II fitted through CL; Violet: Model II fitted through LS.
\begin{table}
\begin{tabular}{c c c} \hline \hline Model & RMSE & MAE \\ \hline I (based on CL) & 741.55 & 546.12 \\ II (based on CL) & 662.27 & 457.97 \\ II (based on LS) & 767.04 & 564.76 \\ \hline \hline \end{tabular}
\end{table}
Table 5.2: Cross-validation scores: root mean square error (RMSE) and mean absolute error (MAE).
Figure 5.3: From left to right: Model I fitted through CL, Model II fitted through CL and Model II fitted through LS.
## 6 Conclusions
This work aimed to design new covariance models with complex characteristics. We restricted our attention to models that combine anisotropies and hole effects, and illustrated their practical impact with an application to a geophysical data set. We believe that the pursuit of increasingly flexible models, while maintaining a certain level of simplicity and parsimony, is an area that should continue to be explored. Some recent ideas in this direction can be found in Alegria et al. (2021), Ma and Bhadra (2022), Fuglstad et al. (2015) and Berild and Fuglstad (2023), among others.
We illustrated the use of the proposed constructions with well-established families of covariance functions, although our formulations have the potential to be effectively combined with many other parametric families of covariance functions, such as the powered exponential or the hyperbolic models, among others. In particular, employing compactly supported covariances (such as the Gauss hypergeometric covariance) as a starting point provides models that lead to sparse covariance matrices with quite distinctive structures, allowing for computationally efficient inference (Kaufman et al., 2008), prediction (Furrer et al., 2006) and simulation (Dietrich and Newsam, 1993) techniques.
Extending these results to the multivariate setting, where several coregionalized variables are jointly analyzed and the covariance functions are matrix-valued, presents an interesting area of exploration, albeit accompanied by significant challenges, as the complexity of the models intensifies due to the rapid growth in the number of parameters and the intricate restrictions imposed among them to ensure positive semidefiniteness.
## Acknowledgements
This work was supported by the National Agency for Research and Development of Chile (ANID), through grants Fondecyt 1210050 (A.A. and X.E.), UTFSM PI_LIR_23_11 (A.A.) and ANID PIA AFB220002 (X.E.).
Figure 5.4: (Left) Comparison of absolute prediction errors among the covariance models. (Right) Comparison of actual versus predicted values in the cross-validation study, based on Model II fitted through the CL method. |
2303.01325 | A Pathway Towards Responsible AI Generated Content | AI Generated Content (AIGC) has received tremendous attention within the past
few years, with content generated in the format of image, text, audio, video,
etc. Meanwhile, AIGC has become a double-edged sword and recently received much
criticism regarding its responsible usage. In this article, we focus on 8 main
concerns that may hinder the healthy development and deployment of AIGC in
practice, including risks from (1) privacy; (2) bias, toxicity, misinformation;
(3) intellectual property (IP); (4) robustness; (5) open source and
explanation; (6) technology abuse; (7) consent, credit, and compensation; (8)
environment. Additionally, we provide insights into the promising directions
for tackling these risks while constructing generative models, enabling AIGC to
be used more responsibly to truly benefit society. | Chen Chen, Jie Fu, Lingjuan Lyu | 2023-03-02T14:58:40Z | http://arxiv.org/abs/2303.01325v3 | # A Pathway Towards Responsible AI Generated Content
###### Abstract
AI Generated Content (AIGC) has received tremendous attention within the past few years, with content ranging from images and text to audio and video. Meanwhile, AIGC has become a double-edged sword and recently received much criticism regarding its responsible usage. In this vision paper, we focus on three main concerns that may hinder the healthy development and deployment of AIGC in practice: risks from privacy; bias, toxicity, and misinformation; and intellectual property (IP). By documenting known and potential risks, as well as possible misuse scenarios of AIGC, we aim to draw attention to these risks and misuses, help society to eliminate obstacles, and promote the more ethical and secure deployment of AIGC. Additionally, we provide insights into promising directions for tackling these risks while constructing generative models, enabling AIGC to be used responsibly to benefit society.
## 1 Introduction
**Foundation models**. The success of high-quality AI Generated Content (AIGC) is strongly correlated with the emergence and rapid advancement of large foundation models. These models, with their vast capacity, enable the rapid development of domain-specific models, which are commonly employed for the production of various types of content, including images, texts, audio, and video.
For instance, many text generators are built on the Generative Pre-trained Transformer (GPT) [1] or its derivatives, such as GPT-2 [1] and GPT-3 [2]. Similarly, numerous text-to-image generators rely on vision-language models such as CLIP [1] and OpenCLIP [23].
**AIGC models**. In recent years, generative modeling has made rapid advances and tremendous progress. OpenAI's DALL-E [17] was one of the first text-to-image models to capture widespread public attention. It is trained to generate digital images from text descriptions, referred to as "prompts", using a dataset of text-image pairs [2]. Its successor, DALL-E 2 [17], which can generate more complex and realistic images, was unveiled in April 2022, followed by Stable Diffusion [11], which was publicly released in August 2022. Google, as a rival to OpenAI, presented two text-to-image models that can generate photorealistic images: the diffusion-based model Imagen [2], and the Pathways Autoregressive Text-to-Image model (Parti) [20].
Diffusion models have been used not only for text-to-image tasks, but also for image-to-image [2, 18] and text-to-video models, such as Runway [12], Make-A-Video [21], Imagen Video [19], and Phenaski [20]. Stable Diffusion has been adapted for various applications, from medical imaging [16] to music generation [23, 15].
In addition to image and video generation, text generation is a popular generative domain, and OpenAI's GPT-3 [2] is a notable example of a large language model (LLM). With a simple text prompt, GPT-3 can produce a piece of writing or an entire essay. It can also assist programmers in writing code. OpenAI has further developed GPT-3.5, an improved version that is better at generating complex text and poetry. Additionally, OpenAI launched ChatGPT [1], a 175 billion parameter natural language processing (NLP) model that can produce responses in a conversational style. This model combines two popular AI topics: chatbots and GPT-3.5. ChatGPT is a specific chatbot use case wherein the chatbot interacts with a GPT information source.
Figure 1: The scope of responsible AIGC.
**AIGC dispute**. Despite its popularity, AIGC has raised concerns regarding privacy, bias, toxicity, misinformation, intellectual property (IP), and potential misuse of technology.
The recent release of ChatGPT has sparked much conversation surrounding its capabilities and potential risks, such as its ability to debug code or compose essays for university students [14]. It is important to consider whether AIGC models result in unique creative works or simply replicate content from their training sets. Ideally, AIGC should produce original and distinct outputs, but the source and intellectual property rights of the training data are often unknown due to the use of uncurated web-scale data [15]. Furthermore, the powerful memorization of large AIGC models [11, 12] poses a risk of reproducing data directly from the training data [13], which potentially violates privacy rights and raises legal concerns around copyright infringement and ownership. Most AIGC models rely on text encoders that are trained using large amounts of data from the internet, which may contain social biases, toxicity, and other limitations that are inherent in large language models.
The essential components of responsible AIGC are summarized in Figure 1, with particular focus given to the first three parts (e.g., privacy, bias, toxicity, misinformation, and intellectual property), which are highlighted in black. The remaining risks associated with responsible AIGC are discussed in Section 5, and other underlying issues may require further investigation. Table 1 lists recent AIGC models and their associated issues related to privacy, bias, toxicity, misinformation, and IP, noting which models have taken proactive actions.
## 2 Privacy
### Privacy leakage in foundation models
Large foundation models are known to be vulnerable to privacy risks, and it is possible that AIGC models that build upon these models could also be subject to privacy leakage. Previous research has demonstrated that large language models such as GPT-2 can be vulnerable to privacy attacks, as attackers can generate sequences from the trained model and identify those memorized from the training set [11]. Kandpal _et al._[12] have attributed the success of these privacy attacks to the presence of duplicated data in commonly used web-scraped training sets. It has been demonstrated that a sequence that appears multiple times in the training data is more likely to be generated than a sequence that occurred only once. This suggests that deduplication could be used as a potential countermeasure in privacy-sensitive applications.
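As a concrete illustration of the countermeasure, the toy sketch below removes exact duplicates from a text corpus by hashing lightly normalized strings. Production pipelines typically also apply near-duplicate detection (e.g., suffix arrays or MinHash), which is not shown; the corpus here is a hypothetical example.

```python
import hashlib

def deduplicate(texts):
    """Keep the first occurrence of each whitespace/case-normalized text."""
    seen, kept = set(), []
    for t in texts:
        key = hashlib.sha256(" ".join(t.lower().split()).encode("utf-8")).hexdigest()
        if key not in seen:
            seen.add(key)
            kept.append(t)
    return kept

corpus = ["My phone number is 555-0100.", "my  phone number is 555-0100.", "Hello."]
print(deduplicate(corpus))   # normalization collapses the first two entries
```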
### Privacy leakage in generative models
The replication behavior in Generative Adversarial Networks (GANs) has been studied extensively [13, 14, 15]. Due to the fact that AIGC models are trained on large-scale web-scraped data [13, 14, 15], the issue of overfitting and privacy leakage becomes especially relevant. For instance, Stable Diffusion memorized duplicate images in the training data [13]. Somepalli _et al._[15] demonstrated that Stable Diffusion blatantly copies images from its training data, and the generated images are simple combinations of the foreground and background objects of the training dataset (as shown in Figure 2). Moreover, the system occasionally displays the ability to reconstruct memories, producing objects that are semantically equivalent to the original without being identical in pixel form. The existence of such images raises concerns about data memorization and the ownership of diffusion images.
Similarly, recent research has shown that Google's Imagen can leak photos of real people and copyrighted images [14]. In his recent litigation [13], Matthew Butterick pointed out that because all visual information in the system is derived from copyrighted training images, the images produced are necessarily works derived from those training images, regardless of their outward appearance.
Figure 2: A comparison between training images and generated images (by Stable Diffusion). **Top row**: generated images. **Bottom row**: closest matches in the training dataset (LAION). The comparison shows that Stable Diffusion is able to replicate training data by combining foreground and background objects. Image source: [15].
DALL-E 2 has also encountered similar problems. It can sometimes reproduce images from its training data rather than creating new ones. OpenAI found that this image regurgitation occurs due to images being replicated many times in the dataset [11]. Similarly, ChatGPT itself recognizes its privacy leakage in its response, as illustrated by an example shown in Figure 3.
### Privacy actions
Although a complete resolution to the privacy issues mentioned above has not been achieved, companies and researchers have taken proactive steps to address these issues, such as introducing warning messages and detecting replicated content.
\begin{table}
\begin{tabular}{|p{60pt}|p{55pt}|p{45pt}|p{45pt}|p{65pt}|p{40pt}|p{60pt}|p{75pt}|p{50pt}|} \hline Model & Developer(s) & Initial release & Format & Main technique & Released to public by Mar. 2023 & Privacy & Bias, toxicity, misinformation & IP \\ \hline DALL-E, DALL-E 2 & OpenAI & Jan. 2021 / Apr. 2022 & Text-to-image & CLIP, diffusion model & No & Deduplication & Data filtering and reweighting & — \\ \hline Craiyon (DALL-E Mini) & Boris Dayma et al. & Jul. 2021 & Text-to-image & CLIP, diffusion model & No & Deduplication & — & — \\ \hline Stable Diffusion & CompVis; Runway; Stability AI & Aug. 2022 & Text-to-image & CLIP, diffusion model & Yes & — & Data filtering & — \\ \hline ChatGPT & OpenAI & Dec. 2022 & Text-to-text & GPT-3.5, reinforcement learning & No & Refusing to provide information (e.g., phone numbers) & Data filtering, building tools to screen harmful outputs, etc. & Classifier \\ \hline Point-E & OpenAI & Dec. 2022 & Text-to-3D model & GLIDE, diffusion model & No & — & — & — \\ \hline Midjourney’s algorithm & Midjourney & Mar. 2022 & Text-to-image & Unknown & No & — & — & DMCA takedown \\ \hline Imagen & Google Brain & Dec. 2022 & Text-to-image & BERT, T5, CLIP, diffusion model & No & — & Data filtering & — \\ \hline Parti & Google Brain & Dec. 2022 & Text-to-image & ViT-VQGAN, autoencoder & No & — & Prompt filtering, output filtering, and model re-calibration & Adding watermark \\ \hline Video Diffusion, Imagen Video & Google Brain & Dec. 2022 & Text-to-video & Diffusion model & No & — & Prompt filtering and output filtering & — \\ \hline Make-A-Video & Meta & Dec. 2022 & Text-to-video & CLIP, pseudo-3D convolutions, diffusion model & No & — & Data filtering & Adding watermark \\ \hline CogView, CogView 2 & Tsinghua, Alibaba, BAAI & May 2021 & Text-to-image & VQ-VAE, Transformer & No & — & — & — \\ \hline CogVideo & Tsinghua, BAAI & May 2022 & Text-to-video & CogView 2 & No & — & — & — \\ \hline \end{tabular}
\end{table}
Table 1: A summary of recent AIGC models and associated issues; a dash (—) indicates no documented measure.
At the industry level, Stability AI has recognized the limitations of Stable Diffusion, such as the potential for memorization of replicated images in the training data. To address this, they provide a website [1] to support the identification of such memorized images. In addition, the company Spawning AI has created a website called "Have I Been Trained" [15] to assist users in determining whether their photos or works have been used as AI training materials. OpenAI has taken steps to address privacy concerns by deduplicating the training data [16]. Furthermore, companies such as Microsoft and Amazon have implemented measures to prevent employee breaches of confidentiality by banning the sharing of sensitive data with ChatGPT, given that this information could be used as training data for future versions of ChatGPT [10].
Academic researchers, such as Somepalli _et al._[15], have studied image retrieval frameworks to identify content duplication, while Dockhorn _et al._[14] have proposed differentially private diffusion models to guarantee privacy in generative models.
Existing privacy measures remain inadequate to fully address these concerns. It is essential to explore more reliable detection systems for data replication in generative models, and to further investigate memorization and generalization in deep learning systems.
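One simple building block for such detection systems is a nearest-neighbor check in an embedding space: a generation whose closest training embedding exceeds a similarity threshold is flagged for review. The sketch below is only illustrative; the random vectors stand in for real image embeddings and the threshold is an assumption, whereas the retrieval frameworks cited above rely on stronger, purpose-trained copy detectors.

```python
import numpy as np

def flag_replicas(gen_emb, train_emb, threshold=0.95):
    """Flag generations whose nearest training embedding is suspiciously close."""
    gen = gen_emb / np.linalg.norm(gen_emb, axis=1, keepdims=True)
    train = train_emb / np.linalg.norm(train_emb, axis=1, keepdims=True)
    best = (gen @ train.T).max(axis=1)        # best cosine similarity per generation
    return best >= threshold, best

rng = np.random.default_rng(0)
train_emb = rng.normal(size=(1000, 512))      # stand-ins for training-image embeddings
gen_emb = np.vstack([train_emb[3] + 0.01, rng.normal(size=(1, 512))])  # one near-copy
flags, scores = flag_replicas(gen_emb, train_emb)
print(flags, np.round(scores, 3))             # the near-copy is flagged
```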
## 3 Bias, toxicity, misinformation
### Problematic datasets
Since the training data used in AI models are collected in the real world, they can unintentionally reinforce harmful stereotypes, exclude or marginalize certain groups, and contain toxic data sources, which can incite hate or violence and offend individuals [13]. For example, the LAION dataset [13], which is used to train diffusion models, has been criticized for containing problematic content related to social stereotyping, pornography, racist slurs, and violence.
Although some AIGC models like Imagen [11] try to filter out undesirable data, such as pornographic imagery and toxic language, the filtered data can still contain sexually explicit or violent content. Moreover, recent research [20, 1] has pointed out that these unfiltered datasets utilized for training frequently encompass social biases, repressive perspectives, and derogatory connections towards underrepresented communities. Google's Imagen Video [12] is trained on a combination of the LAION-400M image-text dataset and their internal dataset, and Google is concerned that its Imagen tool could be used to generate harmful content. However, the dataset still inherits social biases and stereotypes that are difficult to remove.
### Problematic AIGC models
Models trained, learned, or fine-tuned on the aforementioned problematic datasets without mitigation strategies can inherit harmful stereotypes, social biases, and toxicity, leading to unfair discrimination and harm to certain social groups [13]. Furthermore, there is a risk of misinformation when models provide inaccurate or false answers [13].
Stable Diffusion v1 was trained primarily on the LAION-2B data set, which only contains images with English descriptions [12]. As a result, the model was biased towards white, Western cultures, and prompts in other languages may not be adequately represented. Follow-up versions of the Stable Diffusion model were fine-tuned on filtered versions of the LAION dataset, but the bias issue persists [12]. Similarly, DALL-E and DALL-E 2 have been found to exhibit negative stereotypes against minoritized groups [14]. Google's Imagen [11] also encodes several social biases and stereotypes, such as generating images of people with lighter skin tones and aligning with Western gender stereotypes. These biases can lead to unfair discrimination and harm to certain social groups. Furthermore, even when generating non-human images, Imagen has been shown to encode social and cultural biases [17]. Due to these issues, most companies have decided not to make their AIGC models available to the public.
Beyond the above issues, there is also a risk of misinformation when AIGC models provide inaccurate or false answers. For example, the content generated by GPT and its derivatives may appear to be accurate and authoritative, but it could be completely inaccurate. Therefore, it can be used for misleading purposes in educational, legal, medical, weather-forecasting, or other settings. For example, the answers on medical dosages that ChatGPT provides could be inaccurate or incomplete, potentially leading the user to take dangerous or even life-threatening actions.
Figure 3: An answer to “What is the privacy risk of ChatGPT” by ChatGPT (Jan. 30, 2023 version).
Prompted misinformation on traffic laws could cause accidents and even death if drivers follow the false traffic rules. ChatGPT also exhibits verbosity and overuses certain phrases; for instance, it repeatedly states that it is a language model trained by OpenAI. These issues stem from biases inherent in the training data, as human trainers tend to prefer longer answers that appear more comprehensive [3].
To illustrate the inherent bias in AIGC models, we tested a toy example on Stable Diffusion v2.1. As shown in Figure 4, the images generated with the prompt "Three engineers running on the grassland" were all of male figures, and none depicted members of underrepresented racial minorities, indicating a lack of diversity in the generated images.
### Bias, toxicity, misinformation mitigation
The quality of the content generated by language models is inextricably linked to the quality of the training corpora. OpenAI took extra measures to ensure that any violent or sexual content was removed from the training data for DALL-E 2 by carefully filtering the original training dataset. However, filtering can introduce biases into the training data that can then be propagated to the downstream models. To address this issue, OpenAI developed pre-training techniques to mitigate the consequent filter-induced biases [15].
To ensure that AI-driven models reflect the current state of society, it is essential to regularly update the training corpora used in AIGC models with the most recent information. This will help prevent information lag and ensure that the models remain updated, relevant, and beneficial to society. Recent research [11] has shown that transformer models cannot accurately predict data from outside the time period covered by their training data: when the test data and training data come from different periods, increasing the model size does not improve performance. It is thus essential to collect new training data and update the model regularly.
One noticeable point is that while biases and stereotypes can be reduced in the source datasets, they can still be propagated or even exacerbated during the training and development of AIGC models. Therefore, it is crucial to evaluate the existence of bias, toxicity, and misinformation throughout the entire lifecycle of model training and development, rather than staying solely at the data source level. Additionally, there is a challenge in defining a truly fair and non-toxic dataset. The extent and nature of these issues within AIGC models have not yet been comprehensively investigated.
## 4 IP Protection
As AIGC continues to advance in sophistication and popularity, it raises questions about the origin of content for copyright purposes and whether AI-generated content should be entitled to the same intellectual property protections as content created by humans.
### Difficulty of IP infringement detection
**Traditional understanding of copyright.** Copyright law generally protects original works of authorship that are created by human authors and are fixed in a tangible form [15]. For a work to be eligible for copyright protection, it needs to be expressed in a tangible form, either physical or digital, such as a book, painting, or computer file.
**Difficulty of copyright definition in AIGC.** The ownership and protection of generated content have raised a significant amount of concern and debate. It remains unclear whether such generated content should be considered original works eligible for copyright protection under current laws.
There are many different notions of replication from AIGC. Somepalli _et al._[16] gave an (informal) definition as follows: _An image is considered to contain replicated content if it includes an object that is identical to an object in a training image, regardless of minor variations in appearance resulting from data augmentation, whether the object is in the foreground or background_.
In fact, addressing AI copyright issues is a complex task that involves several factors, including: (1) unclear regulations on data collection, usage, rights confirmation, and commercial use of data; (2) the need for a fair benefit distribution mechanism for contributors; (3) the lack of a unified legal understanding of AIGC copyright worldwide, with disputes over ownership still unresolved; and (4) difficulties in identifying all original works used to train AIGC models, as these models can generate an unlimited amount of content, making it impossible to test all of it.
### IP infringement examples
There is a risk of copyright infringement with the generated content if it copies existing works, whether intentionally or not, raising legal questions about IP infringement.
Figure 4: Images generated with the text "Three engineers running on the grassland" by Stable Diffusion v2.1. There are \(28\) people in the \(9\) images, all of whom are male. Moreover, none of them belong to underrepresented racial minorities. This illustrates a strong bias in Stable Diffusion.
In November 2022, Matthew Butterick filed a class action lawsuit against Microsoft's subsidiary GitHub, alleging that their product Copilot, a code-generating service, violates copyright law [14]. The lawsuit centers on Copilot's use of licensed code sections from the internet without attribution. Texas A&M professor Tim Davis also provided examples of his code being copied verbatim by Copilot [13]. Although Microsoft and OpenAI have acknowledged that Copilot is trained on open-source software in public GitHub repositories, Microsoft claims that the output of Copilot is merely a series of code "suggestions" and does not claim any rights in these suggestions. Microsoft also does not make any guarantees regarding the correctness, security, or copyright of the generated code.
For text-to-image models, several generative models have faced accusations of infringing on the creative work of artists. Somepalli _et al._[12] presented evidence suggesting that art-generating AI systems, such as Stable Diffusion, may copy from the data on which they were trained [23]. While Stable Diffusion disclaims any ownership of generated images and allows users to use them freely as long as the image content is legal and non-harmful, this freedom raises questions about ownership ethics. Generative models like Stable Diffusion are trained on billions of images from the Internet without the approval of the IP holders, which some argue is a violation of their rights.
### IP problem mitigation
To mitigate IP concerns, many AIGC companies have started implementing measures to accommodate content creators. Midjourney, for instance, has added a DMCA takedown policy to its terms of service, allowing artists to request the removal of their work from the dataset if they suspect copyright infringement [14]. Similarly, Stability AI plans to offer artists the option of excluding themselves from future versions of Stable Diffusion [15].
Furthermore, text watermarks, which have previously been used to protect the IP of language generation APIs [15, 16], can also be used to identify whether these AIGC tools have utilized samples from other sources without permission. This is evident in Stable Diffusion, which has generated images bearing the Getty Images watermark [20]. In light of the growing popularity of AIGC, the need for watermarking is becoming increasingly pressing. OpenAI is developing a watermark to identify text generated by its GPT model; it could be a valuable tool for educators and professors to detect plagiarism in assignments generated with such tools. Google has already applied a Parti watermark to all images it releases. John Kirchenbauer _et al._[17] proposed a watermark to detect whether text is generated by an AI model. However, they only tested it on the smaller open-source language model OPT-6.7B from Meta, leaving its performance on the larger and more widely used ChatGPT model unknown.
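To make the detection side of such a text watermark concrete, the sketch below follows the spirit of the Kirchenbauer _et al._ "green list" scheme: each token's list membership is re-derived from a hash seeded by the preceding token, and a one-proportion z-test checks whether green tokens are over-represented. The hashing scheme, the GAMMA value, and the use of string tokens are simplified assumptions; the published method operates on token ids and model logits.

```python
import hashlib
import math

GAMMA = 0.5  # fraction of the vocabulary marked "green" at each step

def is_green(prev_token: str, token: str) -> bool:
    # Pseudorandomly assign `token` to a green list seeded by `prev_token`.
    digest = hashlib.sha256(f"{prev_token}|{token}".encode("utf-8")).digest()
    return digest[0] / 256.0 < GAMMA

def detection_z(tokens):
    """One-proportion z-score: large positive values suggest a watermark."""
    hits = sum(is_green(p, t) for p, t in zip(tokens, tokens[1:]))
    n = len(tokens) - 1
    return (hits - GAMMA * n) / math.sqrt(n * GAMMA * (1.0 - GAMMA))

sample = "this plain sentence was written by a person and carries no watermark".split()
print(round(detection_z(sample), 2))   # close to 0 for unwatermarked text
```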
In addition to watermarking, OpenAI has released a classifier that can distinguish between text generated by AI and text written by humans. This tool has the potential to be extremely useful. However, it should not be relied on exclusively for critical decisions.
In general, the emergence of AIGC presents significant IP concerns and challenges that demand immediate attention. It is essential for technologists, lawyers, and policymakers to recognize these issues and work together to ensure that the intellectual property rights of human creators are protected.
## 5 Discussion
**Concerns on misuse**. Evaluating and mitigating risks associated with AIGC models and their potential harms is a complex and interdisciplinary challenge. In addition, it is important to tackle the problematic aspects of data encoded and propagated through these models, including hidden, harmful, and violent content. In fact, with the ability to generate highly realistic images and text that are difficult to distinguish from human-generated content, these models can be used for malicious purposes such as spreading fake news, hoaxes, and harassment. The foundation models that power AIGC have made it easier and cheaper to create deepfakes that are close to the original, posing additional risks and concerns.
In fact, many AIGC models are still far from satisfactory. Some models have gained negative reputations for producing useless, biased, or harmful information. For example, on the 4chan online forum, there are numerous discussions about images of naked celebrities and other forms of fake pornographic content generated by Stable Diffusion [23]. The misuse of these technologies could lead to the spread of misinformation, harm the reputations of individuals, or even break the law.
The potential negative impact of ChatGPT on education is significant, as students could use it to write homework or solve math problems, thus compromising the integrity of their work. Moreover, as ChatGPT is a chatbot, it lacks the necessary emotional connection that a human teacher can provide, which could lead to a diminished learning experience. In light of these concerns, New York City public schools have recently banned the use of ChatGPT [15]. Stack Overflow, a Q&A platform for coders and programmers, temporarily prohibited the sharing of ChatGPT information, acknowledging its potential to cause significant harm to the site and users who rely on it for accurate answers [12]. Writing and editing tools that rely on ChatGPT also face the risk of losing customers if they inadvertently introduce errors into the output.
Overall, the potential misuse of AIGC poses a threat to the creative industries. Therefore, it is crucial to use AIGC only in situations where the risk can be managed or corrected. To mitigate risks, it is also necessary to include governance mechanisms for AIGC models as soon as possible, such as establishing legal regulations.
**Vulnerability to poisoning attack**. AIGC models have made it easier to generate synthetic data, but it would be a disaster if the underlying foundation model were compromised. For example, a diffusion model with a hidden "backdoor" could carry out malicious actions when it encounters a specific trigger pattern during data generation [10, 11, 12]. This Trojan effect could cause catastrophic damage to downstream applications that depend on the compromised diffusion model. Unfortunately, research on the robustness of foundational and fine-tuned AIGC models is still limited.
**What about commercial usage: a vicious competition? Will AIGC replace humans and become a roadblock to human creativity?** Many AIGC models are being utilized for commercial art and graphic design. For example, PromptBase [12] is an early marketplace for DALL-E, Midjourney, Stable Diffusion & GPT-3 prompts. Microsoft is using DALL-E 2 to power a generative art feature that will be available in Microsoft Edge. Microsoft and OpenAI are collaborating on ChatGPT-Powered Bing [13]. Moreover, Microsoft is planning to integrate OpenAI's AIGC models into Word, PowerPoint, Outlook, and other applications to allow users to automatically generate text using simple prompts [14]. While using the generated works for profit or commercial purposes is not recommended, there are no mandatory legal restrictions at this stage.
The use of AIGC has faced criticism from those who fear that it will replace human jobs. Insider has listed several jobs that could potentially be replaced by ChatGPT, including coders, data analysts, journalists, legal assistants, traders, accountants, etc [15]. Some artists worry that the wide use of image generation tools such as Stable Diffusion could eventually make human artists, photographers, models, cinematographers, and actors commercially uncompetitive [1]. For example, the images generated by Stable Diffusion can be sold on the market. This creates direct competition and poses a significant threat to creators, such as writers, artists, and programmers, who could suffer permanent damage to their businesses [16]. Since Stable Diffusion can produce an unlimited number of infringing images, this threat is even more significant. However, David Holz, the founder of Midjourney, views artists as customers rather than competitors. Artists can use Midjourney to quickly prototype artistic concepts to show to clients before starting work themselves [10].
As AIGC models become more widespread, people may become too dependent on instant answers and less willing to think critically on their own, which could ultimately destroy human creativity and increase the risk of AI exerting control over humans. Overreliance on AIGC models could create opportunities for malicious attackers to exploit user trust and access their private information.
**Explainable AIGC**. The black-box nature of foundation models can lead to unsatisfactory results. It is frequently challenging to determine what information was used to generate a model's output, which makes it difficult to trace biases back to the underlying datasets. Explanation is a critical element in comprehending how and why AIGC creates these problems.
For example, social and cultural bias is introduced and potentially amplified at many stages of system development and deployment. However, how the biases are propagated through these models remain unclear. While deduplication can be an effective method of preventing memorization, it does not completely explain why or how models like DALL-E 2 memorize training data.
To address these issues, comprehensive explanations are necessary to weigh the risks against the benefits for specific use cases of AIGC.
**Responsible Open-sourcing**. The responsible open-sourcing of code is a matter of great concern due to the aforementioned risks. Most companies have chosen not to release their models or source code before resolving these risks. OpenAI has been criticized for not sharing more about how the most recent GPT-4 was created. Stable Diffusion is the only AI art generator that makes its source code and pretrained model (weights) publicly available [17]. The risk is that anyone can use Stable Diffusion for free, even for commercial or malicious purposes.
As the code and models behind AIGC are not transparent to the public, and their downstream applications are diverse and may have complex societal impacts, it is challenging to determine the potential harms they may cause. Therefore, the need for responsible open-sourcing becomes critical in determining whether the benefits of AIGC outweigh its potential risks in specific use cases.
**User feedback.** Gathering user feedback is also an essential element of responsible AIGC. Companies such as OpenAI actively seek feedback from users to identify harmful outputs that could arise in real-world scenarios, as well as to uncover and mitigate novel risks [1]. GPT-4, for instance, incorporated an additional safety reward signal during Reinforcement Learning from Human Feedback (RLHF) training to reduce harmful outputs by training the model to refuse requests for such content [1]. By involving users in the feedback loop, AIGC developers can better understand the potential consequences of their models and take corrective actions to minimize any negative impacts.
**Consent, credit, and compensation**. Many AIGC models are trained on datasets without obtaining consent or providing credit or compensation to the original data contributors. For example, Simon Willison and Andy Baio found that a large number of images in LAION were copied from DeviantArt and used to train Stable Diffusion [23]. This results in data contributors' works being learned by AI models and recreated by other users for profit, without their knowledge or permission. This practice damages the interests of the original data contributors. To avoid negative impacts, AIGC companies should obtain consent from data contributors and take proactive measures before training their models on original or augmented works. Failure to do so could result in lawsuits against AIGC. Therefore, AIGC companies must ensure that data collection and model training are conducted in an ethical and responsible manner.
A potential solution to the issue of using creators' works for AI training is to notify them from the beginning and give them the option to benefit from subsequent creations based on their works generated by the model. Additionally, creators who give their consent for their data to be used can be rewarded based on how their creations contribute to AIGC each time the tool is queried. By incentivizing creators, companies can encourage creators to contribute more and accelerate the development of AIGC. For example, a more user-friendly version of Copilot could allow voluntary participation or compensate coders for contributing to the training corpus [16].
**Environment impact.** The massive size of AIGC models, which can have billions or trillions of parameters, results in high environmental costs for both model training and operation. For example, GPT-3 has 175 billion parameters and requires significant computing resources to train. Narayanan _et al._[20] estimated that training GPT-3 with A100s would require 1,024 GPUs, 34 days, and cost 4.6 million dollars, with an expected energy consumption of 936 MWh [1]. This raises important questions about how to reduce the energy consumption and carbon emission of AIGC models.
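A quick back-of-the-envelope check of these figures is shown below; the per-GPU wall power (including node and cooling overhead) is an assumed value chosen to be plausible for A100 servers, not a reported measurement.

```python
gpus, days, kw_per_gpu = 1024, 34, 1.12   # ~1.12 kW per A100 incl. overhead (assumed)
gpu_hours = gpus * days * 24
energy_mwh = gpu_hours * kw_per_gpu / 1000.0
print(gpu_hours, round(energy_mwh))        # 835584 GPU-hours, ~936 MWh
```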
The upcoming GPT-4, with even more parameters than its predecessor, is expected to leave an even larger carbon footprint. Failing to take appropriate steps to mitigate the substantial energy costs of AIGC could lead to irreparable damage to our planet. It is crucial to address these concerns and explore sustainable alternatives.
**Fairness of benefits.** It is important to recognize that AIGC models may have varying impacts on different groups of people depending on their environmental and individual abilities, which could further exacerbate global inequities [21]. Addressing the issue of how to fairly distribute the benefits of AIGC models is an area that requires further exploration and attention.
**Conflict among multiple goals**. It is critical to ensure that the mitigation of one risk does not exacerbate another [21]. For example, approaches to mitigate the use of toxic language in language models can introduce biases in model predictions against marginalized communities [22, 23]. Therefore, it is essential to explore effective mitigation strategies that can simultaneously address multiple risks.
## 6 Conclusion
Although AIGC is still in its infancy, it is rapidly expanding and will remain active for the foreseeable future. Current AIGC technologies only scratch the surface of what AI can create in the field of art. While AIGC offers many opportunities, it also carries significant risks. To provide a thorough comprehension of these risks, we have given a synopsis of both current and potential threats in recent AIGC models, so that users and companies alike can be well aware of these risks and take appropriate actions to mitigate them.
In order to promote responsible usage of AIGC tools and mitigate associated risks, we propose several steps that companies and users can take. It is important for companies to incorporate responsible AI practices throughout all AIGC-related projects. Additionally, proactive measures should be taken to mitigate potential risks in data sources, models, and pre/post-processing steps. Without proper safeguards, AIGC development may face significant challenges and regulatory hurdles. Note that this vision paper is not exhaustive, and it is essential for the wider community to contribute to the understanding and implementation of responsible AIGC. To facilitate this, it is necessary to build comprehensive benchmarks for measuring and evaluating the risks associated with different AIGC models.
|
2310.16761 | IntenDD: A Unified Contrastive Learning Approach for Intent Detection
and Discovery | Identifying intents from dialogue utterances forms an integral component of
task-oriented dialogue systems. Intent-related tasks are typically formulated
either as a classification task, where the utterances are classified into
predefined categories or as a clustering task when new and previously unknown
intent categories need to be discovered from these utterances. Further, the
intent classification may be modeled in a multiclass (MC) or multilabel (ML)
setup. While typically these tasks are modeled as separate tasks, we propose
IntenDD, a unified approach leveraging a shared utterance encoding backbone.
IntenDD uses an entirely unsupervised contrastive learning strategy for
representation learning, where pseudo-labels for the unlabeled utterances are
generated based on their lexical features. Additionally, we introduce a
two-step post-processing setup for the classification tasks using modified
adsorption. Here, first, the residuals in the training data are propagated
followed by smoothing the labels both modeled in a transductive setting.
Through extensive evaluations on various benchmark datasets, we find that our
approach consistently outperforms competitive baselines across all three tasks.
On average, IntenDD reports percentage improvements of 2.32%, 1.26%, and 1.52%
in their respective metrics for few-shot MC, few-shot ML, and the intent
discovery tasks respectively. | Bhavuk Singhal, Ashim Gupta, Shivasankaran V P, Amrith Krishna | 2023-10-25T16:50:24Z | http://arxiv.org/abs/2310.16761v1 | # IntenDD: A Unified Contrastive Learning Approach for Intent Detection and Discovery
###### Abstract
Identifying intents from dialogue utterances forms an integral component of task-oriented dialogue systems. Intent-related tasks are typically formulated either as a classification task, where the utterances are classified into predefined categories, or as a clustering task, when new and previously unknown intent categories need to be discovered from these utterances. Further, the intent classification may be modeled in a multiclass (MC) or multilabel (ML) setup. While these tasks are typically modeled as separate tasks, we propose IntenDD, a unified approach leveraging a shared utterance encoding backbone. IntenDD uses an entirely unsupervised contrastive learning strategy for representation learning, where pseudo-labels for the unlabeled utterances are generated based on their lexical features. Additionally, we introduce a two-step post-processing setup for the classification tasks using modified adsorption. Here, the residuals in the training data are first propagated, followed by smoothing of the labels, both modeled in a transductive setting. Through extensive evaluations on various benchmark datasets, we find that our approach consistently outperforms competitive baselines across all three tasks. On average, IntenDD reports percentage improvements of 2.32%, 1.26%, and 1.52% in the respective metrics for the few-shot MC, few-shot ML, and intent discovery tasks, respectively.
## 1 Introduction
Intents form a core natural language understanding component in task-oriented dialogue (ToD) systems. Intent detection and discovery not only have immense utility but are also challenging due to numerous factors. Intent classes vary vastly from one use case to another, and often arise out of business needs specific to a particular product or organization. Further, modeling requirements might necessitate considering fine-grained and semantically similar concepts as separate intents (Zhang et al., 2021). Overall, intent-related tasks are typically expected to be scalable and resource-efficient, to quickly bootstrap to new tasks and domains; lightweight and modular, for maintainability across domains; and expressive, to handle large, related, and often overlapping intent scenarios (Vulic et al., 2022; Zhang et al., 2021).
IntenDD proposes a unified framework for intent detection and discovery from dialogue utterances in ToD systems. The framework enables the modeling of various intent-related tasks such as intent classification, both multiclass and multilabel, as well as intent discovery, both unsupervised and semi-supervised. In intent detection (classification), we expect every class to have a few labeled instances, say 5 or 10. However, in intent discovery, not all classes are expected to have labeled instances, and the data may even be completely unlabeled.
Recently, intent-related models have focused more on contrastive representation learning, owing to the limited availability of labeled data and the presence of a semantically similar and fine-grained label space (Kumar et al., 2022; Zhang et al., 2021). Accordingly, a common utterance encoder forms the backbone of IntenDD, irrespective of the task. The utterance encoder is learned by updating the parameters of a general-purpose pre-trained encoder using a two-step contrastive representation learning process. First, we adapt a general-purpose pre-trained encoder by using unlabelled information from various publicly available intent datasets. Second, we update the parameters of the encoder using utterances from the target dataset, on which the task needs to be performed, making the encoder specialize to the corpus. Here, we use both labeled and unlabelled utterances from the target dataset, where pseudo-labels are assigned to the latter.
For intent classification, both multiclass and multilabel, IntenDD consists of a three-step pipeline. It includes training a classifier that uses the
representation from the encoder as its feature representation, followed by two post-processing steps in a transductive setting. Specifically, a multilayer perceptron-based classifier is trained by stacking it on top of the utterance representation from our encoder. The post-processing steps treat the target corpus as a graph in a transductive setting. The first post-processing step involves propagating the residual errors in the training data to the neighbors. The second further performs label smoothing by propagating the labels obtained from the previous step. Both steps are performed using Modified Adsorption, an iterative algorithm that allows the information passing through a node to be controlled more tightly (Talukdar and Pereira, 2010).
Major contributions: IntenDD reports performance improvements over competitive baselines in all the tasks and settings we experimented with, including multiclass and multilabel classification in few-shot and high-data settings, and unsupervised and semi-supervised intent discovery. Our two-step post-processing setup for intent classification leads to statistically significant performance improvements over our base model. While existing intent models focus primarily on better representation learning and data augmentation, we show that classical transductive learning approaches can help improve the performance of intent models even in fully supervised settings. Finally, we show that careful construction of the graph structure in a transductive learning setting, in terms of both edge formation and edge weights, can further improve our outcomes.
## 2 IntenDD
IntenDD consists of a two-step representation learning module, a classification module, and an intent discovery module. We elaborate on each of these modules in this section.
### Continued Pretraining
We start with a standard general-purpose pre-trained model as the encoder and use it as a cross-encoder for continued pretraining (Gururangan et al., 2020). We follow Zhang et al. (2021) for our pretraining phase, where the model parameters are updated using a combination of a token-level masked language modeling loss and a sentence-level self-supervised contrastive loss. For a batch of \(K\) sentences, we compute the contrastive loss (Wu et al., 2020; Liu et al., 2021) as follows
\[\mathcal{L}_{sscl}=-\frac{1}{K}\sum_{i=1}^{K}\log\frac{\exp(\mathrm{sim}(h_{i},\bar{h}_{i})/\tau)}{\sum_{j=1}^{K}\exp(\mathrm{sim}(h_{i},\bar{h}_{j})/\tau)} \tag{1}\]
For a sentence \(x_{i}\), we obtain a masked version \(\bar{x}_{i}\), where a few tokens of \(x_{i}\) are randomly masked. Further, we mask tokens dynamically, such that each sentence has different masked positions across different training epochs. In \(\mathcal{L}_{sscl}\), \(h_{i}\) is the representation of the sentence \(x_{i}\) and \(\bar{h}_{i}\) is the representation of \(\bar{x}_{i}\). \(\tau\) is the temperature parameter that controls the penalty on negative samples, and \(\mathrm{sim}(\cdot,\cdot)\) denotes the cosine similarity between two vectors. The final loss is computed as \(\mathcal{L}_{pretraining}=\mathcal{L}_{sscl}+\lambda\mathcal{L}_{mlm}\), where \(\mathcal{L}_{mlm}\) is the token-level masked language modeling loss and \(\lambda\) is a weight hyper-parameter.
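A minimal PyTorch sketch of the contrastive term in Eq. (1) is given below. With unit-normalized vectors, cosine similarity reduces to a dot product, and the loss becomes a cross-entropy over the \(K\times K\) similarity matrix with matched pairs on the diagonal; the batch size, width, and temperature are illustrative values.

```python
import torch
import torch.nn.functional as F

def sscl_loss(h: torch.Tensor, h_bar: torch.Tensor, tau: float = 0.1) -> torch.Tensor:
    """Eq. (1): cross-entropy over the similarity matrix with diagonal targets."""
    h = F.normalize(h, dim=-1)                 # cosine similarity via unit vectors
    h_bar = F.normalize(h_bar, dim=-1)
    logits = h @ h_bar.t() / tau               # K x K scaled similarities
    targets = torch.arange(h.size(0), device=h.device)
    return F.cross_entropy(logits, targets)    # -log softmax of each matched pair

K, d = 8, 768                                  # illustrative batch size and width
loss = sscl_loss(torch.randn(K, d), torch.randn(K, d))
print(loss.item())
```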
### Corpus-specialized Representation Learning
The pretraining step uses unlabelled sentences from publicly available intent datasets which should ideally expose a pre-trained language model with utterances in the domain. Now, we consider contrastive representation learning using the target dataset on which the task needs to be performed.
Consider a dataset \(\mathcal{D}\) with a total of \(N\) unlabelled input utterances. Assuming \(\mathcal{D}\) to be completely unlabeled, we first assign pseudo-labels to each of the utterances in \(\mathcal{D}\). Using the pseudo-labels, we learn a corpus-level contrastive representation via the supervised contrastive loss (Khosla et al., 2020). The pseudo-labels are assigned by first finding clusters of utterances with the Louvain community detection algorithm (Blondel et al., 2008). Community detection assumes the construction of a graph structure. We form a connected, weighted, directed graph \(G_{\mathcal{D}}(V_{\mathcal{D}},E,W)\), where the input utterances in \(\mathcal{D}\) form the nodes of \(G_{\mathcal{D}}\). We identify lexical features in the form of word-level n-grams.
We identify keyphrases that are representative of the target corpus on which the representation learning is performed. The keyphrases are obtained by finding word-level n-grams that have a high association with the target corpus, as compared to the likelihood of finding those in other arbitrary corpora. Here, we obtain the pointwise mutual information (PMI) of the n-grams in the target corpus, based on the likelihood of the n-gram occurring in
the corpus, compared to a set of utterances formed via the union of the sentences in the target corpus and that in the corpora used during pretraining setup. Let \(\mathcal{P}\) be the union of all the sentences in the corpora used in the pretraining step. Now, the PMI is calculated as
\[\begin{split}\mathrm{PMI}(kp,\mathcal{D})&=\log\mathrm{df}(kp,\mathcal{P}\cup\mathcal{D})\\ &\quad\times\log\frac{\mathrm{df}(kp,\mathcal{D})\,|\mathcal{P}\cup\mathcal{D}|}{\mathrm{df}(kp,\mathcal{P}\cup\mathcal{D})\,|\mathcal{D}|}\end{split} \tag{2}\]
Here, \(\mathrm{df}(kp,\mathcal{D})\) is the count of utterances in \(\mathcal{D}\) that contain the keyphrase \(kp\), and \(\mathrm{df}(kp,\mathcal{P}\cup\mathcal{D})\) is the corresponding count over the combined collection of \(\mathcal{D}\) and \(\mathcal{P}\). We only consider keyphrases that are present at least five times in \(\mathcal{D}\). Moreover, the log frequency of the keyphrase count is multiplied with the PMI to avoid high scores for rare words (Jin et al., 2022). Further, the PMI value is multiplied by the square of the number of words in the n-gram, so as to assign higher scores to n-grams with larger values of \(n\) (Banerjee and Pedersen, 2002). We validated this decision during preliminary experiments, where we found that multiplying the PMI by the square of the number of words generally worked better for the datasets considered in this work. That said, this design choice may not carry over to other datasets, and its usefulness should be established empirically.
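The scoring described above can be sketched as follows; the corpora, the n-gram candidate, and the minimum-count handling are toy stand-ins rather than our actual extraction pipeline.

```python
import math

def doc_freq(ngram, corpus):
    """Number of utterances containing the n-gram (document frequency)."""
    return sum(1 for utt in corpus if ngram in utt)

def keyphrase_score(ngram, D, P, min_df=5):
    pool = P + D                                   # the combined collection P U D
    df_d, df_pool = doc_freq(ngram, D), doc_freq(ngram, pool)
    if df_d < min_df:                              # keep phrases seen >= 5 times in D
        return 0.0
    pmi = math.log(df_pool) * math.log(df_d * len(pool) / (df_pool * len(D)))
    n_words = len(ngram.split())
    return n_words ** 2 * pmi                      # favor longer n-grams

D = ["check my account balance please"] * 6 + ["transfer money now"]
P = ["what is the weather today", "play some music"] * 10
print(keyphrase_score("account balance", D, P))    # high: frequent in D, absent in P
```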
Now, the keyphrases are used to construct \(G_{\mathcal{D}}\). Two nodes have an edge between them if they share at least one keyphrase. The edge weight is the sum of the scores of the keyphrases common to the two nodes. The weight matrix \(W\) is an \(N\times N\) matrix representing the edge weights in the graph; \(W\) is row-normalized using min-max normalization, a form of feature scaling. The graph \(G_{\mathcal{D}}\) is then used to perform community detection using Louvain, a modularity-based community detection algorithm. Community membership is used to form clusters of inputs: all the nodes in \(G_{\mathcal{D}}\) that belong to the same cluster are assigned a common pseudo-label.
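A minimal sketch of constructing \(G_{\mathcal{D}}\) and assigning pseudo labels via Louvain; for simplicity it uses an undirected networkx graph and omits the min-max row normalization of \(W\). The input containers are assumptions of this illustration.

```python
import networkx as nx

def build_graph(utterance_kps: dict, kp_scores: dict) -> nx.Graph:
    """utterance_kps: utterance index -> set of keyphrases;
    kp_scores: keyphrase -> PMI-based score from the previous sketch."""
    G = nx.Graph()
    G.add_nodes_from(utterance_kps)
    items = list(utterance_kps.items())
    for i, (u, kps_u) in enumerate(items):
        for v, kps_v in items[i + 1:]:
            shared = kps_u & kps_v
            if shared:  # edge iff the two utterances share at least one keyphrase
                G.add_edge(u, v, weight=sum(kp_scores[kp] for kp in shared))
    return G

def pseudo_labels(G: nx.Graph) -> dict:
    comms = nx.community.louvain_communities(G, weight="weight", seed=0)
    return {node: cid for cid, comm in enumerate(comms) for node in comm}
```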
Louvain Method: a modularity-based graph partitioning approach for detecting hierarchical community structure Blondel et al. (2008). Here, each utterance is considered a node in a graph and the edge weights capture the strength of the relation between node pairs. The Louvain method iteratively maximizes the quality function it optimizes, generally modularity. While the approach may be started with any arbitrary partitioning of the graph, we start with each data point belonging to its own community (singleton communities). It then works iteratively in two phases. In the first phase, the algorithm tries to reassign each node to a neighbor's community as long as that reassignment leads to a gain in modularity. The second phase then aggregates the nodes within a community into a super node, creating a new graph where each community from the first phase becomes a node. The process continues iteratively until the modularity value can no longer be improved.
Until now, we have been assuming \(G_{\mathcal{D}}\) to be completely unlabeled. However, we are yet to discuss two crucial questions. The first is how to incorporate labeled information for an available subset of utterances in a semi-supervised setup. Here, we need to ensure that nodes belonging to the same true label do not get partitioned into separate clusters. We merge inputs with the same true label into a single node before constructing \(G_{\mathcal{D}}\), and initialize Louvain with the graph structure so obtained. Merging the utterances with a common label into a single node trivially ensures that no two utterances of the same label get partitioned into different clusters; hence, no two nodes with the same true label are assigned different pseudo labels. However, at this stage, the pseudo labels are obtained purely for representation learning. They are not intended to be representative of the real intent classes, but are rather simply a partition based on the keyphrases in the utterances. Finally, using the pseudo labels obtained via Louvain, we learn a corpus-level contrastive representation using supervised contrastive loss Khosla et al. (2020). During this representation learning each utterance is treated separately, and we do not retain the merging performed for community detection. The second question, keyphrase selection for constructing \(G_{\mathcal{D}}\), is addressed next.
Keyphrase selection for constructing \(G_{\mathcal{D}}\): We begin with a list of n-grams along with their feature scores. We employ recursive feature elimination (RFE), a greedy feature elimination approach, as our feature selection strategy. In RFE we start with a large set of features and greedily eliminate them, one at a time. We start with the top \(k\) features and perform community detection using Louvain. We then start with the least
promising feature among our selected features and check whether removing it increases the overall modularity of the graph compared to the modularity when the feature was included. The number of nodes in \(G_{\mathcal{D}}\) remains the same, though the number of edges and their weights depend on the features. A single run of the Louvain algorithm has a complexity of \(O(n\log n)\), where \(n\) is the number of nodes, so in the worst case the time complexity of the feature selection procedure is \(O(n\cdot d\cdot\log n)\), where \(d\) is the number of features. We perform the feature selection for a few fixed iterations. We additionally enforce the following constraints during feature selection. First, the graph needs to remain a single connected component; if the removal of a feature violates this, we keep the feature. Second, in all the tasks we consider, we assume knowledge of the total number of intents; hence a feature is also removed if its presence, even while contributing positively to modularity, increases the gap between the total number of true intent classes and the number of clusters Louvain produces.
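A schematic of this greedy elimination loop. Here `graph_from` is a hypothetical helper that restricts utterance keyphrases to the given feature set and calls `build_graph` from the earlier sketch; the acceptance test combines the modularity gain with the two constraints above in a simplified form.

```python
import networkx as nx

def rfe_select(features, utterances, kp_scores, n_intents, n_rounds=10):
    selected = list(features)
    for _ in range(n_rounds):
        G = graph_from(selected, utterances, kp_scores)
        part = nx.community.louvain_communities(G, weight="weight", seed=0)
        base_mod = nx.community.modularity(G, part, weight="weight")
        # try dropping the least promising feature first
        for f in sorted(selected, key=lambda f: kp_scores[f]):
            cand = [g for g in selected if g != f]
            H = graph_from(cand, utterances, kp_scores)
            if not nx.is_connected(H):          # constraint: stay connected
                continue
            p = nx.community.louvain_communities(H, weight="weight", seed=0)
            m = nx.community.modularity(H, p, weight="weight")
            gap_ok = abs(len(p) - n_intents) <= abs(len(part) - n_intents)
            if m > base_mod and gap_ok:         # drop f only if modularity improves
                selected = cand
                break
    return selected
```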
### Intent Discovery
We perform intent discovery in both unsupervised and semi-supervised setups. Intent discovery is performed via clustering. Here, we start with the same graph construction as was used for Louvain in §2.2. The weight matrix \(\mathbf{W}\) is row-normalized. Additionally, we obtain a similarity matrix \(\mathbf{A}\) based on the cosine similarity between the utterance-level encodings of two nodes. The encodings are obtained from the encoder learned in §2.2. We obtain a weighted average of the edge weights in \(\mathbf{W}\) and \(\mathbf{A}\). Specifically, the weights for the average are obtained via grid search, selecting the configuration that optimizes the silhouette score, an intrinsic measure of clustering quality. The new graph will be referred to as \(\mathcal{G}_{pred}\). With \(\mathcal{G}_{pred}\), we perform Louvain again for intent discovery. The labeled nodes in a semi-supervised setup are merged into a single node before running Louvain. When a new set of utterances arrives, these utterances are added as nodes in \(\mathcal{G}_{pred}\). Their corresponding values in \(\mathbf{A}\) are obtained from their representations given by our encoder (§2.2). The corresponding values in \(\mathbf{W}\) are obtained based on the existing set of n-grams, and no new feature selection is performed.
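A sketch of forming \(\mathcal{G}_{pred}\): the mixing weight between \(\mathbf{W}\) and \(\mathbf{A}\) is chosen by grid search on the silhouette score, with `cluster_fn` (e.g., Louvain on the graph induced by the combined matrix) left as a hypothetical callable.

```python
import numpy as np
from sklearn.metrics import silhouette_score

def combine(W: np.ndarray, A: np.ndarray, embeddings: np.ndarray,
            cluster_fn, grid=(0.0, 0.25, 0.5, 0.75, 1.0)):
    """Returns the best (alpha, combined matrix) by silhouette score."""
    best, best_score = None, -np.inf
    for alpha in grid:
        M = alpha * W + (1 - alpha) * A     # edge weights of G_pred
        labels = cluster_fn(M)              # cluster assignment for every node
        score = silhouette_score(embeddings, labels, metric="cosine")
        if score > best_score:
            best, best_score = (alpha, M), score
    return best
```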
### Intent Classification
Irrespective of whether the setup is multiclass or multilabel, our base classifier is a multilayer perceptron comprising a single hidden layer with non-linearity. It uses the utterance-level representation learned in §2.1 and §2.2 as its input feature, which remains frozen during the training of the classifier. The classifier is trained using cross-entropy loss with label smoothing (Vulic et al., 2022; Zhang et al., 2021c). The activation function at the output layer is set to softmax and sigmoid for multiclass and multilabel classification, respectively.
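A minimal PyTorch sketch of this base classifier; the hidden size and smoothing value are illustrative.

```python
import torch.nn as nn

class IntentMLP(nn.Module):
    """One hidden layer with non-linearity over frozen utterance representations."""
    def __init__(self, d_in: int, d_hidden: int, n_classes: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(d_in, d_hidden), nn.ReLU(),
                                 nn.Linear(d_hidden, n_classes))

    def forward(self, x):      # x: frozen encoder representations
        return self.net(x)     # softmax/sigmoid is applied inside the loss

# multiclass: label-smoothed cross-entropy; multilabel: nn.BCEWithLogitsLoss()
criterion = nn.CrossEntropyLoss(label_smoothing=0.1)
```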
Modified Adsorption (MAD) is a graph-based semi-supervised transductive learning approach (Talukdar and Crammer, 2009). MAD is a variant of the label propagation approach. While label propagation (Zhu et al., 2003) forces the unlabeled instances to agree with their neighboring labeled instances, MAD enables predictions on labeled instances to vary and incorporates node uncertainty (Yang et al., 2016). It is expressed as an unconstrained optimization problem and solved using an iterative algorithm that guarantees convergence to a local optimum (Talukdar and Pereira, 2010; Sun et al., 2016). The graph typically contains a few labeled nodes, referred to as seed nodes, and a large set of unlabelled nodes. The graph structure can be explicitly designed in MAD. The unlabelled nodes are typically assigned a dummy label. In MAD, a node is assigned a label distribution rather than a single hard label.
From a random walk perspective, it can be seen as a controlled random walk with three possible actions, each with predefined probabilities, all adding to one (Kirchhoff and Alexandrescu, 2011). The three actions involve a) continuing a random walk to the neighbors of a node based on the transition matrix probability, b) stopping and returning the label distribution for the node, and c) abandoning and returning an all-zero distribution or a high probability to the dummy label. Each of these components forms part of the MAD objective in the form of seed label loss, smoothness loss across edges, and the label prior loss. The objective is:
\[\arg\min_{\hat{\mathbf{Y}}}\sum_{l=1}^{K+1}\left[\left\|\mathbf{S}\hat{\mathbf{Y}}_{l}-\mathbf{S}\mathbf{Y}_{l}\right\|^{2}+\mu_{1}\sum_{i,j}\mathbf{M}_{ij}(\hat{\mathbf{Y}}_{il}-\hat{\mathbf{Y}}_{jl})^{2}+\mu_{2}\left\|\hat{\mathbf{Y}}_{l}-\mathbf{R}_{l}\right\|^{2}\right]\]
Here \(\mathbf{M}\) is the symmetrized weight matrix, \(\mathbf{Y}_{jl}\) is the initial weight assignment or seed weight for label \(l\) on node \(j\), and \(\hat{\mathbf{Y}}_{jl}\) is the updated weight of label \(l\) on node \(j\). \(\mathbf{S}\) is a diagonal matrix indicating seed nodes, and \(\mathbf{R}_{jl}\) is the regularization target for label \(l\) on node \(j\). Here, we assume a classification task with \(K\) labels, and MAD introduces a dummy label as an initial assignment for the unlabeled nodes.
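The following is a simplified Jacobi-style fixed-point iteration for this objective, a sketch rather than the exact update rules of Talukdar and Crammer (2009); the \(\mu\) values and iteration count are illustrative.

```python
import numpy as np

def mad(M, Y, seed_mask, R, mu1=1.0, mu2=0.01, n_iters=50):
    """M: (n, n) symmetrized weight matrix; Y: (n, K+1) seed label distributions;
    seed_mask: (n,) 1 for seed nodes, 0 otherwise; R: (n, K+1) prior targets
    (mass on the dummy label). Each update averages the seed label, the weighted
    neighborhood, and the regularization target."""
    Y_hat = Y.copy()
    deg = M.sum(axis=1, keepdims=True)          # (n, 1) weighted degrees
    S = seed_mask[:, None].astype(float)
    for _ in range(n_iters):
        neigh = M @ Y_hat                       # weighted neighbor label mass
        Y_hat = (S * Y + mu1 * neigh + mu2 * R) / (S + mu1 * deg + mu2)
    return Y_hat
```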
We follow Huang et al. (2021) and perform two post-processing steps. While the original approach uses label spreading Zhou et al. (2003) for both steps, we replace it with MAD. Moreover, our graphs are constructed by a combination of embedding-based similarity and n-gram based similarity as described in §2.3, i.e. \(\mathcal{G}_{pred}\). Both post-processing steps are applied on the same graph structure; however, the seed label initializations differ between the two.
Propagation of Residual Errors: We obtain the predictions from the base predictor, where each prediction is a distribution over the labels. Using the predictions, we compute the residual errors for the training nodes and propagate these errors through the edges of the graph. The unlabelled and validation nodes are initialized with a zero (dummy) value, and the seed nodes are initialized with their residuals; that is, \(\mathbf{Y}\) is initialized with the residual errors of the training nodes. With this initialization of \(\mathbf{Y}\), we apply MAD on \(\mathcal{G}_{pred}\). The key assumption here is that the errors in the base prediction are positively correlated with the similarity neighborhood in the graph, and hence the residuals need to be propagated Huang et al. (2021). At the end of the propagation, each node holds the smoothed errors as a distribution over the labels. To get the predictions after this step, the smoothed errors are added to the base predictor's predictions for each node.
Smoothing Label Distribution: The last step in our classification pipeline involves a smoothing step, where we make the fundamental assumption of homophily: adjacent nodes tend to have similar labels. Here, \(\mathbf{Y}\) is initialized as follows: seed nodes are given their ground-truth labels, while the validation nodes and the unlabelled nodes are initialized with the predictions obtained after the error propagation step. With this initialization, we perform MAD over \(\mathcal{G}_{pred}\). In multiclass classification, the label with the maximum value for each node is predicted as the final class. In multilabel classification, all the labels with a score above a threshold are predicted as the final labels.
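A compact sketch of the two post-processing steps together, reusing the `mad()` sketch above in the spirit of the correct-and-smooth recipe of Huang et al. (2021); the dummy label is omitted for brevity, and array shapes are assumptions of this illustration.

```python
import numpy as np

def correct_and_smooth(M, base_probs, y_true, train_mask, R, n_classes):
    """M: (n, n) graph weights; base_probs: (n, C) MLP predictions;
    y_true: integer labels for training nodes; train_mask: boolean (n,)."""
    # Step 1: propagate residual errors of the training nodes.
    onehot = np.eye(n_classes)[y_true[train_mask]]
    E = np.zeros_like(base_probs)
    E[train_mask] = onehot - base_probs[train_mask]   # residuals on seeds, 0 elsewhere
    smoothed_err = mad(M, E, train_mask, R)
    corrected = base_probs + smoothed_err             # add smoothed errors back
    # Step 2: smooth the label distribution (homophily assumption).
    Y = corrected.copy()
    Y[train_mask] = onehot                            # seeds get their true labels
    return mad(M, Y, train_mask, R)                   # argmax (or threshold) for output
```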
## 3 Experimental Setup
We perform experiments for the three intent related tasks - Intent Discovery, Multiclass Intent Detection, and Multi-label Intent Detection. Here, we provide training and modeling details that are common to all three tasks and then mention task-specific details such as the baselines and evaluation metrics at appropriate sections.
Pretraining Datasets. One feature of IntenDD is the unification of these three tasks via a common pretrained transformer backbone. This common pretraining step is performed on CLINC-150 Larson et al. (2019), BANKING77 Casanueva et al. (2020), HWU64 Liu et al. (2019), NLU++ Casanueva et al. (2022), and StackOverflow Xu et al. (2015). Following prior work on contrastive learning for intent detection by Zhang et al. (2021), we additionally include TOP Gupta et al. (2018), SNIPS Coucke et al. (2018), and ATIS Tur et al. (2010). Table 4 shows some of the relevant statistics for the datasets.
Training and Modeling Details. We choose RoBERTa Liu et al. (2019) with the base configuration as our common encoding backbone and pretrain it on the aforementioned datasets. For encoding the input utterances, we use a cross-encoder architecture as detailed by Mesgar et al. (2023). In this setup, the joint embedding for any pair of utterances \((p,q)\), needed for instance in contrastive learning, is obtained by encoding it as "[CLS] p [SEP] q", and the [CLS] representation is used as the representation for that pair. Mesgar et al. (2023) found that a cross-encoder works much better than a bi-encoder, where the utterances of a pair are embedded independently.
We perform all of our experiments using the transformers library Wolf et al. (2020) and the PyTorch framework Paszke et al. (2019). We train our models using the AdamW optimizer with the learning rate set to 2e-5, a warmup rate of 0.1, and a weight decay of 0.01. We pretrain our model for 15 epochs, and thereafter perform task-specific training for another 20 epochs. All experiments are performed on a machine with an NVIDIA A100 80GB GPU, and we choose the maximum batch size that
fits the GPU memory (\(=96\)). We perform a hyperparameter search for the temperature \(\tau\) and the weight \(\lambda\) over the ranges \(\tau\in\{0.1,0.3,0.5\}\) and \(\lambda\in\{0.01,0.03,0.05\}\).
## 4 Experiments and Results
### Intent Discovery
Datasets.We use three datasets for benchmarking IntenDD for Intent Discovery, namely, BANKING77, CLINC-150, and Stack Overflow. We assess the effectiveness of our proposed ID system in two practical scenarios: unsupervised ID and semi-supervised ID. To ensure clarity, we introduce the term Known Intent Ratio (KIR), which represents the ratio of known intents in the training data: the number of known intent categories (\(|\mathcal{I}_{k}|\)) divided by the sum of the known intent categories and unknown categories (\(|\mathcal{I}_{k}|+|\mathcal{I}_{u}|\)). In this context, a value of \(|\mathcal{I}_{k}|=0\) corresponds to unsupervised ID, indicating the absence of any known intent classes. For semi-supervised ID, we adopt the approach outlined in previous works Kumar et al. (2022); Zhang et al. (2021), conducting experiments using three KIR values: \(\{25\%,50\%,75\%\}\).
Evaluation Metrics.Following previous work Zhang et al. (2021), we report three metrics, namely Clustering Accuracy (**ACC**) Yang et al. (2010), Normalized Mutual Information (**NMI**) Strehl and Ghosh (2002), Adjusted Rand Index (**ARI**) Hubert and Arabie (1985). All metrics range between 0 and 100 and larger values are more desirable.
Baselines.We follow the recent work of Kumar et al. (2022) to select suitable baselines for unsupervised and semi-supervised scenarios. Due to space constraints, we detail these in the appendix.
Results. We report all the intent discovery results in Table 1. To begin with, it is important to highlight that our proposed method IntenDD consistently surpasses all baseline techniques in both unsupervised and semi-supervised settings across all three datasets. Specifically, in the entirely unsupervised scenario, SBERT-KM emerges as the most formidable baseline, yet our approach significantly outperforms it. Note that the fundamental distinction between IntenDD and SBERT-KM lies in our graph construction strategy for clustering: our strategy relies on a combination of semantic similarity (via embeddings) and n-gram based similarity (via keyphrases), underscoring the importance of incorporating both similarity measures.
Furthermore, while our approach demonstrates notable enhancements across all configurations, these improvements are particularly pronounced when the amount of labeled data is limited, resulting in an average increase of nearly 3% in accuracy for KIR values of 0% and 25%.
### Multiclass Intent Detection
Datasets and Evaluation Metric.Following Zhang et al. (2021), we perform few-shot intent detection and select three challenging datasets for our experiments, namely, CLINC-150, BANKING77, and HWU64. We use the same training and test splits as specified in that paper, and use detection accuracy as our evaluation metric.
Baselines. Due to space constraints, we provide detailed descriptions of all baselines in the appendix (please refer to §A.1). We use the following baselines: RoBERTa-base Zhang et al. (2020), CONVBERT Mehri et al. (2020), CONVBERT + Combined Mehri and Eric (2021), DNNC Zhang et al. (2020), and CPFT (Contrastive Pre-training and Fine-Tuning) Zhang et al. (2021). CPFT is the current state-of-the-art, employing self-supervised contrastive pre-training on multiple intent detection datasets, followed by fine-tuning using supervised contrastive learning.
Results. Table 2 shows the results of our experiments for multiclass intent detection. Our proposal, IntenDD, demonstrates superior performance across all three setups when compared to the baseline models in the 5-shot, 10-shot, and full-data scenarios. In the 5-shot setting, IntenDD exhibits an average absolute improvement of 2.47%, with the highest absolute improvement of 4.31% observed on the BANKING77 dataset. Across all the datasets, IntenDD achieves average absolute improvements of 1.31% and 0.71% in the 10-shot and full-data settings, respectively.
IntenDD currently does not incorporate any augmented data in its experimental setup. We do not compare our work with data augmentation methods as they are orthogonal to ours. One such example is ICDA Lin et al. (2023), where a large language model (OPT-66B) Zhang et al. (2022) is used to augment the intent detection datasets in few-shot data settings. Nevertheless, we find that our method performs better than ICDA; we report this comparison in Appendix B.1.
**Is Modified Adsorption important for Intent Detection?** IntenDD uses a pipeline of three classification setups: one using the MLP, and two in a transductive setting using Modified Adsorption (MAD).
\begin{table}
\begin{tabular}{l l r r r r r r r r r} \hline \hline \multirow{2}{*}{} & \multirow{2}{*}{Method} & \multicolumn{3}{c}{CLINC} & \multicolumn{3}{c}{BANKING} & \multicolumn{3}{c}{Stack Overflow} \\ \cline{3-11} & & ACC & NMI & ARI & ACC & NMI & ARI & ACC & NMI & ARI \\ \hline \multirow{7}{*}{Unsupervised} & BERT-KM & 45.06 & 70.89 & 26.86 & 29.55 & 54.57 & 12.18 & 13.85 & 11.60 & 1.60 \\ & DAC & 55.94 & 78.40 & 40.49 & 27.41 & 47.35 & 14.24 & 16.30 & 14.71 & 2.76 \\ & DCN & 49.29 & 75.66 & 31.15 & 41.99 & 67.54 & 26.81 & 57.09 & 61.34 & 34.98 \\ & DEC & 46.89 & 74.83 & 27.46 & 41.29 & 67.78 & 27.21 & 57.09 & 61.32 & 21.17 \\ & SAE-KM & 46.75 & 73.13 & 29.95 & 38.92 & 63.79 & 22.85 & 37.16 & 48.72 & 23.36 \\ & SBERT-KM & 61.04 & 82.22 & 48.56 & 55.72 & 74.68 & 42.77 & - & - & - \\ & IntenDD (Ours) & **63.87** & **83.12** & **51.76** & **58.74** & **75.91** & **47.88** & **79.32** & **73.88** & **62.49** \\ \hline \multirow{5}{*}{KIR = 25\%} & CDAC+ & 64.64 & 84.25 & 50.35 & 48.71 & 69.78 & 35.09 & 74.30 & 74.33 & 39.44 \\ & DeepAligned & 73.71 & 88.71 & 64.27 & 48.88 & 70.45 & 36.81 & 69.66 & 70.23 & 53.69 \\ & DSSCC\({}_{\text{BERT}}\) & 75.72 & 89.12 & 66.72 & 55.52 & 72.73 & 42.11 & - & - & - \\ & DSSCC\({}_{\text{SBERT}}\) & 80.36 & 91.43 & 72.83 & 64.93 & **80.17** & 53.60 & 81.72 & 76.57 & 68.00 \\ & IntenDD (Ours) & **83.11** & **92.32** & **76.31** & **67.50** & 76.79 & **57.85** & **84.82** & **78.93** & **71.64** \\ \hline \multirow{5}{*}{KIR = 50\%} & CDAC+ & 69.02 & 86.18 & 54.15 & 53.34 & 71.53 & 40.42 & 76.30 & 76.18 & 41.92 \\ & DeepAligned & 80.22 & 91.63 & 72.34 & 59.23 & 76.52 & 47.82 & 72.89 & 74.49 & 57.96 \\ & DSSCC\({}_{\text{BERT}}\) & 81.46 & 91.39 & 73.48 & 63.08 & 77.60 & 50.64 & - & - & - \\ & DSSCC\({}_{\text{SBERT}}\) & 83.49 & 92.78 & 76.80 & 69.38 & 82.68 & 58.95 & 82.43 & 77.30 & 68.94 \\ & IntenDD (Ours) & **84.57** & **93.91** & **78.42** & **71.16** & **84.56** & **63.17** & **85.01** & **79.14** & **72.49** \\ \hline \multirow{5}{*}{KIR = 75\%} & CDAC+ & 69.89 & 86.65 & 54.33 & 53.83 & 72.25 & 40.97 & 75.34 & 76.68 & 43.97 \\ & DeepAligned & 86.01 & 94.03 & 79.82 & 64.90 & 79.56 & 53.64 & 74.51 & 76.24 & 59.45 \\ & DSSCC\({}_{\text{BERT}}\) & 87.91 & 93.87 & 81.09 & 69.82 & 81.24 & 58.09 & - & - & - \\ & DSSCC\({}_{\text{SBERT}}\) & 88.47 & 94.50 & 82.40 & 75.15 & 85.04 & 64.83 & 82.65 & 77.08 & 68.67 \\ & IntenDD (Ours) & **90.99** & **96.29** & **83.62** & **77.08** & **87.39** & **68.69** & **85.47** & **77.12** & **72.90** \\ \hline \hline \end{tabular}
\end{table}
Table 1: **Results for Intent Discovery. The first set of results is in a completely unsupervised setting, while the others are for settings where some of the intent categories are known. KIR denotes the Known Intent Ratio. In all the experiments involving known intent classes, we assume the proportion of labeled examples to be 10% (Kumar et al., 2022). Baseline results are taken from Kumar et al. (2022), and those marked with - have not been reported in the literature. The DSSCC paper does not report results for DSSCC\({}_{\text{BERT}}\) on Stack Overflow, and we could not get access to their code to independently run that model. The best results for each dataset and setting are marked in bold. We note that our proposed method consistently outperforms recent baselines by a significant margin.**
\begin{table}
\begin{tabular}{l r r r r r r r r r} \hline \hline Method & \multicolumn{3}{c}{BANKING77} & \multicolumn{3}{c}{HWU64} & \multicolumn{3}{c}{CLINC150} \\ \cline{2-10} & 5 & 10 & Full & 5 & 10 & Full & 5 & 10 & Full \\ \hline RoBERTa & 74.65 & 84.67 & 93.08 & 76.75 & 83.42 & 90.97 & 88.27 & 91.21 & 96.46 \\ CONVBERT & - & 83.63 & 92.95 & - & 83.77 & 90.43 & - & 92.10 & 97.07 \\ + MLM & - & 83.99 & 93.44 & - & 84.52 & 92.38 & - & 92.75 & 97.11 \\ + MLM + Example & - & 84.09 & 94.06 & - & 83.44 & 92.47 & - & 92.35 & 97.11 \\ + Combined & - & 85.95 & 93.83 & - & 86.28 & 93.03 & - & 97.97 & 97.31 \\ DNNC & 80.40 & 86.71 & - & 80.46 & 84.72 & - & 91.02 & 93.76 & - \\ CPFT & 80.86 & 87.20 & - & 82.03 & 87.13 & - & 92.34 & 94.18 & - \\ IntenDD-MLP (Ours) & 82.17 & 88.70 & 93.63 & 81.27 & 85.32 & 92.89 & 91.34 & 93.66 & 96.92 \\ IntenDD-EP (Ours) & 83.25 & 88.96 & 94.18 & 83.17 & 86.35 & 93.31 & 92.70 & 92.24 & 97.93 \\ IntenDD (Ours) & **85.34** & **89.62** & **94.86** & **84.11** & **88.37** & **93.64** & **93.52** & **94.71** & **98.03** \\ \hline \hline \end{tabular}
\end{table}
Table 2: **Results for Multiclass Intent Detection. We report intent detection accuracy for three data settings. We use the baseline numbers from (Lin et al., 2023). The best results for each dataset and setting are marked in bold.**
We perform ablation experiments with these components and report results in Table 2, where we report results from three systems obtained by progressively adding one component at a time. IntenDD-MLP denotes the results without the two Modified Adsorption steps, and IntenDD-EP denotes the results with MAD but only the residual propagation step (i.e., without label smoothing). We observe consistent performance improvements from each component of the pipeline. Notably, the label propagation step leads to the more significant improvements, and these gains are observed not only in the few-shot setups but also in the full-data scenarios.
### Multilabel Intent Detection
Datasets and Evaluation Metric. Following Vulic et al. (2022), we use three datasets for multilabel intent detection: BANKING77, MixATIS, and the HOTELS subset of the NLU++ benchmark. MixATIS is a multilabel dataset synthetically obtained by concatenating single-label instances from the ATIS dataset. We do not perform experiments with InsuranceFAQ from that paper since it is internal data. We report standard evaluation metrics: F1 and exact match accuracy (Acc). We report results on all datasets in two settings, the _low-data_ and the _high-data_ regimes, again replicating the experimental settings from Vulic et al. (2022).
Baselines. Our main baseline is the MultiConVFiT model proposed by Vulic et al. (2022), in two variants: MultiConVFiT (FT), where full fine-tuning, including updating the encoder parameters, is performed; and the more efficient MultiConVFiT (Ad), where an adapter is used instead of updating all parameters. In addition, two other baselines are adapted from ConVFiT (Vulic et al., 2021): DRoB and mini-LM. Please refer to Vulic et al. (2022) for more details on these methods.
Results. The results of our experiments are shown in Table 3. First, the results demonstrate consistent gains achieved by our method across all three datasets. Notably, in low-data scenarios, we observe an average increase of approximately 1% in F-scores. As anticipated, the performance enhancements are more substantial in low-data settings. However, it is noteworthy that our model outperforms MultiConVFiT even in the high-data setup.
We find the results of our base predictor and our final classifier to be statistically significant for all the settings of multi-class and multi-label intent detection using the t-test (\(p<0.05\)).
## 5 Conclusion
In summary, this paper presents a novel approach, IntenDD, for intent detection and discovery in task-oriented dialogue systems. By leveraging a shared utterance encoding backbone, IntenDD unifies intent classification and novel intent discovery tasks. Through unsupervised contrastive learning, the proposed approach learns representations by generating pseudo labels based on lexical features of unlabeled utterances. Additionally, the paper introduces a two-step post-processing setup using modified adsorption for the classification tasks. While work on intent classification typically focuses on contrastive representation learning or data augmentation, we show that a two-step post-processing setup in a transductive setting leads to statistically significant improvements over our base classifier, often rivaling or on par with data augmentation approaches. Extensive evaluations on diverse benchmark datasets demonstrate the consistent improvements achieved by our system over competitive baselines.
## 6 Limitations
While our research provides valuable insights and contributions, we acknowledge certain limitations that should be considered. In this section, we discuss two main limitations that arise from our work.
First, a limitation of our proposed intent discovery algorithm is its reliance on prior knowledge of the number of intent clusters. This assumption may not hold in real-world scenarios where the underlying intent structure is unknown or may change dynamically. The requirement of knowing the exact number of intent clusters can be impractical and unrealistic, limiting the generalizability of our approach. However, we recognize that this limitation can be addressed through modifications to our algorithm. Future investigations should explore techniques that allow for automated or adaptive determination of the number of intent clusters, making the approach more robust and applicable to diverse real-world settings.
The second limitation of our research lies in the reliance on the construction of a graph using extracted keyphrases during the contrastive pretraining
step, which is a common requirement across all three tasks explored in our study. While this graph construction step facilitates the representation learning process, it introduces a constraint on the flexibility of modifying the graph structure: even a minor modification to the graph construction would necessitate retraining all systems, which can be time-consuming and resource-intensive. Currently, we mitigate the need to cover new utterances (with no overlapping keyphrases) by simply relying on similarity from the encoder representation itself. However, this may still lead to concept drift over time, and the representation might then need to be updated by retraining all the modules in IntenDD. In future work, we intend to explore alternative approaches that offer more flexibility in graph construction, allowing for easier modifications without the need for extensive retraining. By addressing this limitation, we aim to enhance the adaptability and scalability of our framework.
|
2302.08996 | Neuro-symbolic Meta Reinforcement Learning for Trading | We model short-duration (e.g. day) trading in financial markets as a
sequential decision-making problem under uncertainty, with the added
complication of continual concept-drift. We, therefore, employ meta
reinforcement learning via the RL2 algorithm. It is also known that human
traders often rely on frequently occurring symbolic patterns in price series.
We employ logical program induction to discover symbolic patterns that occur
frequently as well as recently, and explore whether using such features
improves the performance of our meta reinforcement learning algorithm. We
report experiments on real data indicating that meta-RL is better than vanilla
RL and also benefits from learned symbolic features. | S I Harini, Gautam Shroff, Ashwin Srinivasan, Prayushi Faldu, Lovekesh Vig | 2023-01-15T16:38:43Z | http://arxiv.org/abs/2302.08996v1 | # Neuro-symbolic Meta Reinforcement Learning for Trading
###### Abstract
We model short-duration (e.g. day) trading in financial markets as a sequential decision-making problem under uncertainty, with the added complication of continual concept-drift. We, therefore, employ meta reinforcement learning via the RL2 algorithm. It is also known that human traders often rely on frequently occurring symbolic patterns in price series. We employ logical program induction to discover symbolic patterns that occur frequently as well as recently, and explore whether using such features improves the performance of our meta reinforcement learning algorithm. We report experiments on real data indicating that meta-RL is better than vanilla RL and also benefits from learned symbolic features.
## Introduction
Deep learning techniques have achieved human-like and even superhuman performance in a number of areas: video games, strategy games, robotics, etc. In many of these areas, the spectrum of human performance varies widely, from average to expert. Human traders in financial markets also differ greatly in skill and performance. The consistent success of expert traders is unlikely to be due to chance alone; it is more likely that such traders are explicitly or implicitly relying on patterns in the data they see.
External events in the world clearly affect prices in financial markets. Thus, accurately forecasting financial prices over the medium term, e.g., months, weeks or even days, is a challenging if not hopeless task, even for deep-learning techniques; see [23]. The impact of external events is reduced when trading over a shorter time frame, such as sub-second high-frequency trading as well as, to an extent, intra-day trading. However, intra-day price variations are the result of complex feedback between buyers and sellers, as modeled in [22]. Such feedback results in near-chaotic, but not random, behavior, i.e., the auto-correlation in the price series is not zero: the price series has 'memory'. Thus, short-duration price series, while near chaotic, do seem to exhibit patterns, albeit not consistently, which is what human traders exploit.
It is known that deep-learning techniques can indeed track chaotic systems, such as the Lorenz equations [1]; thus, it is not unreasonable to expect deep learning could be effective in near chaotic environments such as financial markets. Indeed, there have been many attempts at applying machine learning, and, more recently, deep learning to trading, some of which are mentioned in the next section.
Even so, applying deep learning in financial markets faces challenges: First, data is scarcer than one may expect - there are only so many days of past history, and so many financial instruments, even if one assumes all instruments behave alike. This is far less data than, say, text or images on the internet. Second, price series exhibit patterns that change over time, and it is well known that markets continually change their 'regime', e.g. from 'trending' to 'mean-reverting', as well as their volatility.
In this paper we (a) model trading as one of sequential decision-making under uncertainty, (b) apply **deep meta reinforcement learning** to make trading decisions and (c) investigate whether the incorporation of features based on **hand-crafted patterns** that are often used by human traders improves performance. Further, we observe a **meta-pattern** in such hand-crafted patterns which we use to automatically learn a large number of similar features using techniques borrowed from **inductive logic programming**, and investigate whether these add to the effectiveness of our meta-RL based trading agent. We present _preliminary_ results on real data that indicate that both meta reinforcement learning and logical features, both hand-crafted and learned, are more effective than vanilla RL or primary price features alone. We conclude with ideas for future exploration.
## Background/Related Work
### Meta-learning
Meta-learning approaches seek to learn in situations where training data is scarce, either inherently or due to rapid distribution shifts that render older data less relevant to the current task, as is the case for short-duration trading. Meta-learning techniques 'learn to learn' by training on many related tasks, so that performance on similar future tasks is improved. Optimization-based meta-learning, exemplified by the MAML algorithm [20], attempts to learn a good parameter initialization such that a few steps of gradient descent starting from there are sufficient to adapt rapidly to a new task, even with very few data samples. While MAML and related meta-learning techniques do apply in the reinforcement learning setting, they still
require training on a new task, albeit with limited data. In the case of trading, where an episode is ideally an entire day, this is of limited use, since a model so adapted could only be used the next day, and without further adaptation as the day progresses. (Metric-based meta-learning techniques, such as matching networks [22], do not easily apply in reinforcement learning.) The third class of traditional meta-learning techniques is model-based, wherein adaptation to new data takes place within the activations of a network's hidden states rather than via any gradient-based updates. The RL\({}^{2}\) algorithm [23], which we use here, is also a model-based meta-learning technique. In such a technique, adaptation to new data, in our case new rewards, takes place within the network activations as rewards arrive, making such a technique most applicable in our scenario.
### Machine-learning for Trading
Recent works applying machine learning to trading based on price signals alone have also used deep neural networks as well as reinforcement learning: 'Deep Momentum Networks' [15] as well as 'Momentum Transformer' [26] formulate the trading task as one of suggesting the position to take, e.g., 1 for a long (i.e., buy) position, -1 for a short (i.e., sell) position, and 0 for no position. Exiting a buy/sell position takes place when a 0 action follows the previous 1 actions, etc. Neural networks are trained to directly optimize volatility-adjusted expected returns, adjusted for transaction costs, over a trading period. Transaction costs are computed by tracking when positions are entered and exited, i.e., when actions change from one time step to the next. The former paper uses MLPs and LSTMs, while the latter uses transformers. Both works are essentially applying vanilla REINFORCE to the MDP formulation of the trading problem. Deep Reinforcement Learning in Trading [16] uses the same formulation as the above two works, but applies more refined reinforcement learning techniques, e.g., policy-gradient, actor-critic, and deep-Q-learning algorithms. In contrast to the formulation used in all three works above, we also model actions as buy (1), sell (-1), or do nothing (0), but these can _only_ be taken when no position (i.e., zero shares) is held. As soon as a position is taken, our training environment computes when this position exits due to the pre-defined stop-loss/target being met or the day end being reached, at which point a reward is returned to the RL agent. We postulate that such a formulation makes for easier learning, since the agent only needs to deal with one kind of situation, i.e. when it holds no position. The downside is that exit conditions (i.e., stop-loss/targets/end-of-day) are fixed in advance, rather than determined based on price movements. Note, however, that such exit conditions could also be outputs of the policy, i.e., varying stop-losses/targets for each buy or sell based on current volatility, or whatever the agent finds useful; nevertheless, we have not reported experiments with this enhancement here.
### Inductive Logic Programming
Inductive logic programming (ILP) [20] investigates the inductive construction of first-order clausal theories from examples and background knowledge. ILP is ordinarily employed in a supervised learning setting, where positive (and usually negative) examples of a target concept are given in terms of base features. Also supplied is background knowledge in the form of facts as well as logical rules, typically in a logic programming language, i.e., Prolog. The ILP process involves constructing a theory that explains the examples provided with the desired accuracy, support, and confidence. At each stage in this process, possible theories are tested against target examples via resolution using background knowledge.
In our case we do not have target examples or concepts; instead, price data is translated into Prolog facts, and feature _templates_, or 'meta-rules', are added as background knowledge using techniques introduced in [20]. Thereafter, starting from instances selected randomly as in [20], features with high support in the data are discovered as in [1], via (SLD) resolution1 using the supplied background facts and meta-rules.
Footnote 1: [https://en.wikipedia.org/wiki/SLD_resolution](https://en.wikipedia.org/wiki/SLD_resolution)
## Methodology
### Task Formulation and Learning Environment
Each task represents a trading day for a particular symbol (i.e., stock). Data arrives each minute with the open, high, low, and close prices for the past minute, along with technical analysis indicators (as detailed in a subsequent section). The agent issues buy/sell/do-nothing actions based on the data seen so far for the day. Once an order (i.e., buy or sell) is placed, the agent does not see any data (or reward) until the price moves by an amount determined by pre-defined stop-loss or target values; e.g., if these are each 1%, the agent receives a reward (positive or negative) when the close price changes by 1% from the point at which the order was placed. At this point, the agent resumes receiving data every minute until it places another order. Alternatively, if the day ends, the agent receives a reward based on the final closing price of the day. As noted earlier, this formulation differs from that in prior works [15, 20, 21].
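A minimal gym-style sketch of this formulation; the fast-forward loop implements the stop-loss/target/end-of-day exit, and the 1% thresholds are merely the example values above, not the actual environment code.

```python
import numpy as np

class IntradayEnv:
    def __init__(self, close: np.ndarray, features: np.ndarray,
                 target: float = 0.01, stop: float = 0.01):
        self.close, self.features = close, features
        self.target, self.stop = target, stop
        self.t = 0

    def reset(self):
        self.t = 0
        return self.features[0]

    def step(self, action: int):          # action in {-1, 0, 1}
        if action == 0:                   # do nothing: advance one minute
            self.t += 1
            return self.features[self.t], 0.0, self.t >= len(self.close) - 1
        entry, t = self.close[self.t], self.t
        while t < len(self.close) - 1:    # fast-forward until exit condition
            t += 1
            r = action * (self.close[t] - entry) / entry
            if r >= self.target or r <= -self.stop:
                break                     # target or stop-loss hit
        self.t = t                        # else: exited at end of day
        reward = action * (self.close[t] - entry) / entry
        return self.features[self.t], reward, self.t >= len(self.close) - 1
```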
### Meta-reinforcement Learning: RL\({}^{2}\)
To deal with continual distribution shift, we employ the meta reinforcement learning approach RL\({}^{2}\) from [23]. In a standard RL formulation the agent predicts the next action based on the current state (or history of states, in the case of a recurrent network) and subsequently receives a reward. In RL\({}^{2}\), the previous action and reward are also input to the network, and a recurrent network is used. As a result, changes in the {state,action,reward} distribution are visible to the agent as it takes actions. Note that the agent is trained over many trials where it encounters sequences of tasks with possibly different {state, action, reward} distributions. Thus, in principle, the agent can learn to adapt to a new distribution when encountering a new task.
The RL\({}^{2}\) agent is trained on past data comprising trials, where each trial is a sequence of tasks. In the trading context, this entails training the agent over many day-symbol combinations, and then testing on a new day (for one of the symbols already seen; unseen symbols could also be used, though here we use seen symbols). In principle, the meta-RL agent should rapidly adapt to the reward pattern it experiences in its first few orders, even if these differ from the recent past. We use PPO to train the meta-RL agent with an LSTM policy (whereas [1] used TRPO and a GRU agent), since PPO is known to be more stable during training and LSTMs are more expressive than GRUs.
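A sketch of the corresponding policy network: the LSTM consumes the current observation concatenated with the previous action (one-hot) and previous reward, so adaptation happens inside the recurrent state; layer sizes are illustrative.

```python
import torch
import torch.nn as nn

class RL2Policy(nn.Module):
    def __init__(self, obs_dim: int, n_actions: int = 3, hidden: int = 128):
        super().__init__()
        # input = observation + one-hot previous action + scalar previous reward
        self.lstm = nn.LSTM(obs_dim + n_actions + 1, hidden, batch_first=True)
        self.pi = nn.Linear(hidden, n_actions)   # PPO actor head
        self.v = nn.Linear(hidden, 1)            # PPO critic head

    def forward(self, obs, prev_action_onehot, prev_reward, state=None):
        # obs: (B, T, obs_dim); prev_reward: (B, T, 1)
        x = torch.cat([obs, prev_action_onehot, prev_reward], dim=-1)
        h, state = self.lstm(x, state)
        return self.pi(h), self.v(h), state
```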
### Hand-crafted Features
Human traders use price patterns as signals on which to base their trading decisions. Two such patterns are depicted in Figure 1. The 'three crows' pattern involves three successive observations over which both opening prices and closing prices decrease sequentially. Similarly, the 'four horsemen' pattern involves a sequence of four rising open/close prices.
In order to explore whether such features add additional value we detect the presence or absence of such patterns and append two Boolean features to the state at each time step to indicate if and which of these holds: thus these features would be \((0,0)\) for all time steps except at time step 17 where a \((0,1)\) would indicate the presence of 'four horsemen', and at 6,7, and 8 where \((1,0)\) would indicate the presence of 'three crows' over three recent steps.
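A sketch of the two Boolean detectors, assuming the strict-monotonicity reading of the pattern descriptions above; `o` and `c` are per-minute open and close arrays.

```python
import numpy as np

def three_crows(o: np.ndarray, c: np.ndarray, t: int) -> bool:
    # three successive bars with both opens and closes strictly decreasing
    if t < 2:
        return False
    return bool(np.all(np.diff(o[t - 2:t + 1]) < 0)
                and np.all(np.diff(c[t - 2:t + 1]) < 0))

def four_horsemen(o: np.ndarray, c: np.ndarray, t: int) -> bool:
    # four successive bars with both opens and closes strictly increasing
    if t < 3:
        return False
    return bool(np.all(np.diff(o[t - 3:t + 1]) > 0)
                and np.all(np.diff(c[t - 3:t + 1]) > 0))
```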
### Learned Logical Features
The two handcrafted features above are based on patterns involving increasing/decreasing sequences of primary features, viz., open and close prices respectively. We postulate that increasing and decreasing sequences of other primary features might also form useful features. For example, increasing/decreasing sequences of highs, lows, or even moving averages or other technical indicators.
Further, it is also possible that increasing/decreasing sequences of _derived_ features, in particular, differences between primary features may be useful. For example, whether the difference between open and close is narrowing or widening may be indicative of decreasing or increasing volatility, which in turn should be useful for deciding an appropriate action. The same can be said for differences in highs and lows, opens and highs, etc. It is also reasonable to consider differences between technical indicators as well, e.g., differences between moving averages of different lengths; the crossing of such moving averages is known to be used by traders, so narrowing differences would point to a possible impending crossing.
Of course, exhaustively enumerating all such differences for each time step would be both computationally inefficient and likely to confuse the neural network model. Instead, we use techniques borrowed from inductive logic programming to enumerate such features in a principled manner and filter these first based on the frequency of occurrence in the training data, and then by importance using standard feature-importance determinants.
**Meta-patterns** are defined in Prolog to capture the concept of 'runs', i.e., a sequence of continually increasing/decreasing values (of one or more primary/derived features) of arbitrary length. Thereafter randomized search followed by resolution in Prolog is used to enumerate frequently occurring patterns (i.e., those that occur in the training data more often than a given support value). Background clauses are included to define the concept of a 'derived' feature as the difference between two primary features. Randomness is used in the search process to select features to test for. Search proceeds until the number of high support patterns found reaches a pre-determined maximum limit.
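A Python analogue of this procedure (the paper itself uses Prolog meta-rules with randomized search; this sketch enumerates deterministically over a few fixed run lengths, which is an assumption of the illustration):

```python
import itertools
import numpy as np

def has_run(x: np.ndarray, t: int, length: int, sign: int) -> bool:
    """True if x ends (at step t) with a strictly monotone run of given length;
    sign = +1 for increasing, -1 for decreasing."""
    if t < length - 1:
        return False
    return bool(np.all(sign * np.diff(x[t - length + 1:t + 1]) > 0))

def frequent_patterns(frames: dict, windows: list, min_support=0.05, max_patterns=200):
    """frames: feature name -> 1-D series; derived features are pairwise differences."""
    series = dict(frames)
    for a, b in itertools.combinations(frames, 2):
        series[f"{a}-{b}"] = frames[a] - frames[b]   # e.g. open-close, high-low
    patterns = []
    for name, x in series.items():
        for length, sign in itertools.product((3, 4, 5), (1, -1)):
            hits = sum(has_run(x, t, length, sign) for t in windows)
            if hits / len(windows) >= min_support:   # keep high-support patterns
                patterns.append((name, length, sign))
                if len(patterns) >= max_patterns:
                    return patterns
    return patterns
```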
The above procedure yields a very large number of patterns, which are used to augment the price data with pattern-based features determined by the presence or absence of one or more patterns at each time step. These features are used to train a random forest regression model to predict reward; as a side-effect, the random forest model returns the importance of each feature towards predicting reward. These importance values are used to rank pattern-based features. Finally, a small number of top-ranking features are used to augment the meta-RL agent's neural network model.
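A sketch of the importance-based filtering step; hyper-parameters are illustrative.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def rank_patterns(X: np.ndarray, rewards: np.ndarray, top_k: int = 20):
    """X: (n_steps, n_patterns) 0/1 pattern indicators; rewards: realized rewards.
    Returns the indices of the top_k patterns by impurity-based importance."""
    rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, rewards)
    order = np.argsort(rf.feature_importances_)[::-1]
    return order[:top_k]
```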
## Evaluation
### Data
Data received by the agent at each time step is price data, i.e., open, high, low, close, and volume. These are normalized by dividing the volume column by the first non-zero volume at the beginning of the episode, and the price columns by the close price at the beginning of the episode. These normalized values are then used to compute technical analysis indicators such as simple moving averages, the relative strength indicator, etc. We use a total of 15 such technical analysis features in addition to the open, high, low, and close prices and volume.
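A sketch of this per-episode normalization, assuming minute bars in a pandas DataFrame with open/high/low/close/volume columns:

```python
import pandas as pd

def normalize_day(df: pd.DataFrame) -> pd.DataFrame:
    out = df.copy()
    base_vol = df.loc[df["volume"] > 0, "volume"].iloc[0]   # first non-zero volume
    out["volume"] = df["volume"] / base_vol
    base_close = df["close"].iloc[0]                        # close at episode start
    for col in ("open", "high", "low", "close"):
        out[col] = df[col] / base_close
    return out
```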
### Results
The agent is trained on \(n\) symbols for \(m\) days and test scores on the day \(m+1\) are calculated. The results averaged over multiple non-overlapping subsets of symbols and days are as shown in Table 1 and Table 2. Each table entry indicates the % average daily return achieved on the test day when trained using the data of the given number of symbols and previous days. Table 1 shows the performance of vanilla reinforcement learning (i.e., wherein the previous action and reward is _not_ fed back into the neural network, vs meta reinforcement learning, and Table 2 shows the performance
Figure 1: Example of hand-crafted features
when using different sets of features as inputs to the RL-agent's neural network.
## Discussion
We draw the following indicative conclusions from the results presented above:
1. Meta reinforcement learning improves over vanilla reinforcement learning, indicating that distribution shift may be impacting the latter.
2. Logical features, both hand-crafted as well as learned, improve performance over using primary features alone.
3. **Learned logical features add value over and above hand-crafted features alone.**
4. Training using too much past data (15 days) is inferior to training using a moderate amount of past data (5 or 10 days). This may be further evidence of distribution shift over longer time periods.
5. Training on more symbols is better, indicating that distribution shifts take place more over days than across different symbols.
## Conclusions and Future Work
We submit that meta reinforcement learning is a promising direction to explore for building trading agents using deep learning. Also, logical features learned using meta-patterns inspired by hand-crafted features may be useful.
Many recent advances in deep learning are worth exploring in the context of building trading agents: Language models have proven to be few-shot learners even for numerical data expressed symbolically (Hegselmann et al., 2022); could logical features as we have used here form the basis for exploring whether such systems could be applied in the trading arena? Recently, de-noising diffusion models have also been used for planning (Janner et al., 2022); trying such approaches in trading may be worth exploring as well.
|
2304.13792 | Kappa distribution from particle correlations in non-equilibrium,
steady-state plasmas | Kappa-distributed velocities in plasmas are common in a wide variety of
settings, from low-density to high-density plasmas. To date, they have been
found mainly in space plasmas, but are recently being considered also in the
modelling of laboratory plasmas. Despite being routinely employed, the origin
of the kappa distribution remains, to this day, unclear. For instance,
deviations from the Maxwell-Boltzmann distribution are sometimes regarded as a
signature of the non-additivity of the thermodynamic entropy, although there
are alternative frameworks such as superstatistics where such an assumption is
not needed. In this work we recover the kappa distribution for particle
velocities from the formalism of non-equilibrium steady-states, assuming only a
single requirement on the dependence between the kinetic energy of a test
particle and that of its immediate environment. Our results go beyond the
standard derivation based on superstatistics, as we do not require any
assumption about the existence of temperature or its statistical distribution,
instead obtaining them from the requirement on kinetic energies. All of this
suggests that this family of distributions may be more common than usually
assumed, widening its domain of application in particular to the description of
plasmas from fusion experiments. Furthermore, we show that a description of
kappa-distributed plasma is simpler in terms of features of the
superstatistical inverse temperature distribution rather than the traditional
parameters $\kappa$ and the thermal velocity $v_{\text{th}}$. | Sergio Davis, Gonzalo Avaria, Biswajit Bora, Jalaj Jain, José Moreno, Cristian Pavez, Leopoldo Soto | 2023-04-26T19:17:33Z | http://arxiv.org/abs/2304.13792v2 | # A derivation of the kappa distribution in non-equilibrium, steady-state plasmas
###### Abstract
Kappa-distributed velocities in plasmas are common in a wide variety of settings, from low-density to high-density plasmas, and appear in both space and laboratory plasmas. Despite being widely used as a model, the origin of the kappa distribution remains, to this day, unclear. For instance, deviations from the Maxwell-Boltzmann distribution are sometimes regarded as a signature of the non-additivity of the thermodynamic entropy, although there are alternative frameworks such as superstatistics where such an assumption is not needed. In this work we recover the kappa distribution for particle velocities in a non-equilibrium, steady-state plasma from the theory of superstatistics, assuming only a single requirement on the dependence between the kinetic energy of a test particle and that of its immediate environment. Most importantly, unlike previous derivations based on superstatistics, we do not make any assumption about temperature or its statistical distribution, instead obtaining them from the requirement on kinetic energies. Our results suggest that a description of kappa-distributed plasma is simpler in terms of features of the inverse temperature distribution rather than the traditional parameters \(\kappa\) and the thermal velocity \(v_{\rm th}\).
## I Introduction
Modelling the velocity distribution of particles in a non-equilibrium, steady-state plasma is an interesting challenge from both theoretical and practical points of view [1; 2; 3]. In general, particles in a steady state plasma do not follow the well-known Maxwell-Boltzmann distribution of velocities, but instead, their velocities are described by more general families of distributions. Among them, the kappa distribution [4] is ubiquitous, appearing mostly in the weakly-collisional space plasmas [5; 6] while also being used in modelling the energy distribution of suprathermal ions in fusion plasmas [7; 8].
For the velocity \(\mathbf{v}\) of a particle of mass \(m\), the kappa distribution is commonly written in the form
\[P(\mathbf{v}|\kappa,v_{\rm th})=\frac{1}{\eta_{\kappa}(v_{\rm th})}\left[1+\frac{ 1}{\kappa-\frac{3}{2}}\frac{\mathbf{v}^{2}}{v_{\rm th}^{2}}\right]^{-(\kappa+1)} \tag{1}\]
where \(\kappa\geq 0\) is a shape parameter, sometimes referred to as the _spectral index_, \(v_{\rm th}\) is the thermal velocity [9],
\[v_{\rm th}:=\sqrt{\frac{2k_{B}T}{m}}, \tag{2}\]
and \(\eta_{\kappa}(v_{\rm th})\) is a normalization constant given by
\[\eta_{\kappa}(v_{\rm th}):=\left(\sqrt{\pi(\kappa-3/2)}v_{\rm th}\right)^{3} \frac{\Gamma(\kappa-1/2)}{\Gamma(\kappa+1)}. \tag{3}\]
In the limit \(\kappa\to\infty\), the kappa distribution in Eq. (1) reduces to the Maxwell-Boltzmann distribution,
\[P(\mathbf{v}|m,T)=\left(\sqrt{\frac{m}{2\pi k_{B}T}}\right)^{3}\exp\Big{(}-\frac{ m\mathbf{v}^{2}}{2k_{B}T}\Big{)}, \tag{4}\]
precisely the distribution expected in equilibrium at temperature \(T\). However, for finite \(\kappa\) the interpretation of the parameter \(T\) in Eq. (2) is not straightforward [10; 11], mainly because there are multiple admissible definitions of temperature and not all of them agree with \(T\).
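As a quick numerical sanity check (our own illustration, not taken from the paper), the script below evaluates the one-dimensional marginal of Eq. (1), which carries exponent \(-\kappa\) and normalization \(\Gamma(\kappa)/[\Gamma(\kappa-1/2)\sqrt{\pi(\kappa-3/2)}\,v_{\rm th}]\), and confirms convergence to the corresponding Maxwell-Boltzmann marginal as \(\kappa\to\infty\):

```python
# Illustrative check that the 1-D kappa marginal tends to the Maxwell-Boltzmann
# marginal as kappa -> infinity; units with m = k_B T = 1, so v_th = sqrt(2).
import numpy as np
from scipy.special import gammaln

VTH = np.sqrt(2.0)

def kappa_pdf(v, kappa, vth=VTH):
    # log-gamma avoids overflow of Gamma(kappa) for large kappa
    norm = np.exp(gammaln(kappa) - gammaln(kappa - 0.5))
    norm /= np.sqrt(np.pi * (kappa - 1.5)) * vth
    return norm * (1.0 + v**2 / ((kappa - 1.5) * vth**2)) ** (-kappa)

def maxwell_pdf(v, vth=VTH):
    return np.exp(-((v / vth) ** 2)) / (np.sqrt(np.pi) * vth)

v = np.linspace(-5.0, 5.0, 1001)
for k in (2.0, 5.0, 50.0, 500.0):
    err = np.max(np.abs(kappa_pdf(v, k) - maxwell_pdf(v)))
    print(f"kappa = {k:6.1f}   max |P_kappa - P_MB| = {err:.2e}")
```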
Although the presence of kappa distributions in plasmas has been traditionally explained [12; 13] by the use of non-extensive statistical mechanics, also known as Tsallis statistics [14], more recent frameworks such as superstatistics [15; 16] can recover them in a direct manner. Moreover, recently we have shown [17] that superstatistics arises as a natural description for collisionless plasmas in non-equilibrium steady states, providing support to recent efforts [18; 19] in establishing a foundational basis for steady-state distributions in plasmas using superstatistics as a starting point.
Despite these advances, superstatistics still requires the assumption of a gamma distribution for the inverse temperature \(\beta:=1/(k_{B}T)\) in order to recover Tsallis statistics, and in particular, the kappa distributions. Motivated by this still unexplained assumption, in this work we delve deeper into the formalism established in Ref. [17], by connecting it with more recent theoretical developments [20; 21] on the structure of superstatistics. In particular, we show that the assumption of a gamma distribution for \(\beta\) can be replaced by a simple assumption on the dependence between the kinetic energy of a test particle and that of its surrounding environment.
In the following section we provide a brief account of the superstatistical formalism and we connect it with a generalized definition of temperature for steady states [22], namely the _fundamental inverse temperature_ function \(\beta_{F}\).
## II Non-equilibrium steady states and superstatistics
Steady states are a special kind of non-equilibrium states which are time-independent, that is, where the non-equilibrium probability density of microstates \(p(\mathbf{\Gamma};t)\) at a time \(t\) reduces to \(p(\mathbf{\Gamma})\). In particular, we will consider steady states where \(p(\mathbf{\Gamma})\) depends on \(\mathbf{\Gamma}\) only through the Hamiltonian \(\mathcal{H}(\mathbf{\Gamma})\), and we will write their probability density as
\[P(\mathbf{\Gamma}|S)=\rho(\mathcal{H}(\mathbf{\Gamma})), \tag{5}\]
where \(\rho\) is the _ensemble function_, and \(S\) denotes the set of parameters that uniquely define the steady state. Superstatistics is a natural extension of statistical mechanics to this kind of steady states, where the canonical ensemble
\[P(\mathbf{\Gamma}|\beta)=\frac{\exp(-\beta\mathcal{H}(\mathbf{\Gamma}))}{Z(\beta)} \tag{6}\]
is replaced by a superposition of canonical ensembles at different temperatures. The inverse temperature \(\beta\) is promoted from a constant to a random variable, such that its joint distribution with the microstates is given by
\[P(\mathbf{\Gamma},\beta|S)=P(\mathbf{\Gamma}|\beta)P(\beta|S)=\left[\frac{\exp\big{(}- \beta\mathcal{H}(\mathbf{\Gamma})\big{)}}{Z(\beta)}\right]P(\beta|S), \tag{7}\]
in agreement [23] with the product rule of probability theory. By marginalization of \(\beta\), the distribution of microstates becomes
\[P(\mathbf{\Gamma}|S)=\int_{0}^{\infty}d\beta P(\beta|S)\left[\frac{\exp(-\beta \mathcal{H}(\mathbf{\Gamma}))}{Z(\beta)}\right], \tag{8}\]
which has the form of Eq. (5) with an ensemble function
\[\rho(E)=\int_{0}^{\infty}d\beta f(\beta)\exp(-\beta E), \tag{9}\]
that is the Laplace transform of the _superstatistical weight function_\(f(\beta)\), defined by
\[f(\beta):=\frac{P(\beta|S)}{Z(\beta)}. \tag{10}\]
Using this definition we can write \(\rho(E)=\mathcal{L}\{f\}(E)\) and, conversely, \(f(\beta)=\mathcal{L}^{-1}\{\rho\}(\beta)\). An important consequence of this is that \(\rho\) is completely determined by \(f\) and vice versa; as the latter depends on both the inverse temperature distribution and the partition function, these two aspects together define the form of the statistical ensemble \(P(\mathbf{\Gamma}|S)\).
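As an illustrative closed-form example of this Laplace-transform pair (a standard superstatistics result, stated here in our notation): if the weight function \(f(\beta)\) is gamma-shaped, then

\[f(\beta)=\frac{b^{a}}{\Gamma(a)}\,\beta^{a-1}e^{-b\beta}\qquad\Longrightarrow\qquad\rho(E)=\frac{b^{a}}{\Gamma(a)}\int_{0}^{\infty}d\beta\,\beta^{a-1}e^{-\beta(b+E)}=\left(1+\frac{E}{b}\right)^{-a},\]

using \(\int_{0}^{\infty}d\beta\,\beta^{a-1}e^{-\beta(b+E)}=\Gamma(a)(b+E)^{-a}\). Choosing \(a=\kappa+1\) and \(b=(\kappa-\tfrac{3}{2})\,mv_{\rm th}^{2}/2\), with \(E=m\mathbf{v}^{2}/2\), reproduces exactly the kappa form of Eq. (1).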
Let us now consider a composite system, divided into subsystems \(A\) and \(B\) such that \(\mathbf{\Gamma}=(\mathbf{\Gamma}_{A},\mathbf{\Gamma}_{B})\), and where the Hamiltonian of the entire system is of the form
\[\mathcal{H}(\mathbf{\Gamma}_{A},\mathbf{\Gamma}_{B})=\mathcal{H}_{A}(\mathbf{\Gamma}_{A})+ \mathcal{H}_{B}(\mathbf{\Gamma}_{B}). \tag{11}\]
It is easy to show that, in this case, \(P(\beta|S)\) is a universal property of the entire system and its parts, unlike \(f(\beta)\) which is dependent on the details of the subsystem. We can see this as follows. When the composite system is described by an inverse temperature distribution \(P(\beta|S)\) we have
\[P(\mathbf{\Gamma}_{A},\mathbf{\Gamma}_{B}|S)=\int_{0}^{\infty}d\beta P(\beta|S)\left[ \frac{\exp(-\beta(\mathcal{H}_{A}+\mathcal{H}_{B}))}{Z_{AB}(\beta)}\right], \tag{12}\]
and the marginal distribution of \(\mathbf{\Gamma}_{A}\) is given by
\[\begin{split} P(\mathbf{\Gamma}_{A}|S)&=\int d\mathbf{ \Gamma}_{B}\int_{0}^{\infty}d\beta P(\beta|S)\left[\frac{\exp(-\beta( \mathcal{H}_{A}+\mathcal{H}_{B}))}{Z_{AB}(\beta)}\right]\\ &=\int_{0}^{\infty}d\beta P(\beta|S)\exp\big{(}-\beta\mathcal{H} _{A}(\mathbf{\Gamma}_{A})\big{)}\int d\mathbf{\Gamma}_{B}\left[\frac{\exp\big{(}-\beta \mathcal{H}_{B}(\mathbf{\Gamma}_{B})\big{)}}{Z_{AB}(\beta)}\right]\\ &=\int_{0}^{\infty}d\beta P(\beta|S)\frac{Z_{B}(\beta)}{Z_{AB}( \beta)}\exp\big{(}-\beta\mathcal{H}_{A}(\mathbf{\Gamma}_{A})\big{)},\end{split} \tag{13}\]
that is,
\[P(\mathbf{\Gamma}_{A}|S)=\int_{0}^{\infty}d\beta P(\beta|S)\left[\frac{\exp(-\beta \mathcal{H}_{A}(\mathbf{\Gamma}_{A}))}{Z_{A}(\beta)}\right], \tag{14}\]
where we have used the well-known factorization of the partition function for additive systems, \(Z_{AB}(\beta)=Z_{A}(\beta)Z_{B}(\beta)\). We see from Eq. (14) and Eq. (12) that the subsystem \(\mathbf{\Gamma}_{A}\) is governed by the same inverse temperature distribution \(P(\beta|S)\) as the composite system \((\mathbf{\Gamma}_{A},\mathbf{\Gamma}_{B})\) and, because the choice of \(A\) and \(B\) is arbitrary, it follows that any possible subsystem is governed by the same \(P(\beta|S)\). In the following we will use this fact to recover subsystem-independent parameters for the kappa distribution describing the velocity of a single particle.
We will now define the _fundamental inverse temperature_ function \(\beta_{F}\), motivated by the conditional distribution of \(\beta\) given a fixed energy \(E\). First, note that the distribution of energy in a steady state given by Eq. (5) is
\[P(E|S)=\left\langle\delta(E-\mathcal{H})\right\rangle_{S}=\int d\mathbf{\Gamma} \rho(\mathcal{H}(\mathbf{\Gamma}))\delta(E-\mathcal{H}(\mathbf{\Gamma}))=\rho(E) \Omega(E), \tag{15}\]
where \(\Omega(E):=\int d\mathbf{\Gamma}\delta(E-\mathcal{H}(\mathbf{\Gamma}))\) is the density of states associated to \(\mathcal{H}\). Now, from Bayes' theorem [24; 25] we obtain
\[P(\beta|E,S)=\frac{P(\beta|S)P(E|\beta,S)}{P(E|S)} \tag{16}\]
and, because exact knowledge of \(\beta\) supersedes the state of knowledge \(S\), we can replace \(P(E|\beta,S)\) in the numerator with the usual canonical distribution of energy,
\[P(E|\beta)=\frac{\exp(-\beta E)\Omega(E)}{Z(\beta)}, \tag{17}\]
a particular case of Eq. (15) with \(\rho(E)=\exp(-\beta E)/Z(\beta)\). Therefore, replacing Eq. (17) and Eq. (15) into Eq. (16) and cancelling the factor \(\Omega(E)\), we have
\[P(\beta|E,S)=\frac{f(\beta)\exp(-\beta E)}{\rho(E)}, \tag{18}\]
and we immediately see that Eq. (9) ensures that the left-hand side is a properly normalized distribution. The fluctuation-dissipation theorem [26] associated to \(P(\beta|E,S)\) is
\[\frac{\partial}{\partial E}\big{\langle}\omega\big{\rangle}_{E,S}=\left\langle \frac{\partial\omega}{\partial E}\right\rangle_{E,S}+\left\langle\omega\frac{ \partial}{\partial E}\ln P(\beta|E,S)\right\rangle_{E,S} \tag{19}\]
which, by replacing Eq. (18), becomes
\[\frac{\partial}{\partial E}\big{\langle}\omega\big{\rangle}_{E,S}=\bigg{\langle} \frac{\partial\omega}{\partial E}\bigg{\rangle}_{E,S}+\bigg{\langle}\omega( \beta_{F}-\beta)\bigg{\rangle}_{E,S} \tag{20}\]
where we have defined the fundamental inverse temperature function \(\beta_{F}(E)\) by
\[\beta_{F}(E):=-\frac{\partial}{\partial E}\ln\rho(E). \tag{21}\]
Two consequences of the fluctuation-dissipation relation in Eq. (20) are straightforward to obtain. First, by using \(\omega=1\) and recalling that \(\big{\langle}f\big{\rangle}_{E,S}=f(E)\) for any function \(f(E)\) of the energy, we immediately see that
\[\beta_{F}(E)=\big{\langle}\beta\big{\rangle}_{E,S}, \tag{22}\]
which then gives meaning to the fundamental inverse temperature in superstatistics: it is the conditional expectation of the superstatistical inverse temperature given the energy of the system. Second, by taking expectation of Eq. (22) under \(S\) on both sides, we obtain
\[\big{\langle}\beta_{F}\big{\rangle}_{S}=\big{\langle}\beta\big{\rangle}_{S}, \tag{23}\]
that is, the expectation values of \(\beta_{F}\) and \(\beta\) coincide, and we can use this common value to define the inverse temperature \(\beta_{S}\) of the ensemble \(S\) without ambiguity as
\[\beta_{S}:=\big{\langle}\beta_{F}\big{\rangle}_{S}. \tag{24}\]
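The identity \(\beta_{F}(E)=\langle\beta\rangle_{E,S}\) of Eq. (22) can be illustrated numerically. In the following sketch (an illustration, not part of the formalism) we take an ideal-gas-like density of states \(\Omega(E)\propto E^{a-1}\) and a gamma distribution for \(\beta\), for which \(\beta_{F}(E)=(a+c)/(1/b+E)\); the parameters \(a\), \(c\), \(b\) are arbitrary choices:

```python
# Monte Carlo illustration of Eq. (22): the conditional mean of beta
# given the energy E reproduces beta_F(E) = -d/dE ln rho(E).
import numpy as np

rng = np.random.default_rng(0)
a, c, b = 4.5, 3.0, 1.0               # Omega ~ E^(a-1); beta ~ Gamma(c, b)
beta = rng.gamma(c, b, size=400_000)  # beta ~ P(beta|S)
E = rng.gamma(a, 1.0 / beta)          # E | beta canonical: Gamma(a, 1/beta)

E0 = 2.0                              # condition on a narrow energy bin
mask = np.abs(E - E0) < 0.05
print(beta[mask].mean(), (a + c) / (1.0 / b + E0))  # should nearly coincide
```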
In the following sections, we will recover the kappa distribution for the single-particle velocity from superstatistics plus just one additional assumption. Furthermore, we will show how a superstatistical approximation produces a distribution \(P(\beta|S)\) as the thermodynamic limit of the distribution of the inverse fundamental temperature, \(P(\beta_{F}|S)\), thus proving a deeper connection between the superstatistical parameter \(\beta\) and the function \(\beta_{F}\).
## III The Kappa Distribution in Steady State Plasmas
The total energy of a system of \(N\) classical, non-relativistic interacting particles forming a plasma in a steady state can be written as
\[E(\mathbf{r}_{1},\dots,\mathbf{r}_{N},\mathbf{v}_{1},\dots,\mathbf{v}_{N})=\sum_{i=1}^{N} \frac{m_{i}\mathbf{v}_{i}^{2}}{2}+\Phi(\mathbf{r}_{1},\dots,\mathbf{r}_{N}), \tag{25}\]
in such a way that the details of the interaction with the (self-consistent) electromagnetic fields are contained inside the potential energy function \(\Phi\). This _energy function_ \(E\) is different from the Hamiltonian \(\mathcal{H}\), as the latter should be written in terms of momenta instead of velocities. However, in a steady state the joint probability of positions and velocities actually depends only on the energy function \(E\) (as we have shown earlier [17]), that is, it is of the form
\[P(\mathbf{R},\mathbf{V}|S)=\rho(E(\mathbf{R},\mathbf{V});S), \tag{26}\]
where we have introduced the shortcut notation \(\mathbf{R}:=(\mathbf{r}_{1},\dots,\mathbf{r}_{N})\) and \(\mathbf{V}:=(\mathbf{v}_{1},\dots,\mathbf{v}_{N})\). The joint distribution of velocities can be obtained by marginalization of the particle positions,
\[P(\mathbf{v}_{1},\dots,\mathbf{v}_{N}|S)=\int d\mathbf{R}\,\rho\big{(}E(\mathbf{R},\mathbf{v}_{1 },\dots,\mathbf{v}_{N});S\big{)}=p_{N}\Big{(}\sum_{i=1}^{N}\tfrac{m_{i}\mathbf{v}_{i}^ {2}}{2}\Big{)}, \tag{27}\]
where this relation defines the \(N\)-particle ensemble function of velocities \(p_{N}\). Moreover, the single-particle velocity distribution, which is our main target in this work, is given by marginalization in \(P(\mathbf{v}_{1},\dots,\mathbf{v}_{N}|S)\) of the remaining \(N-1\) particle velocities,
\[P(\mathbf{v}_{1}|S)=\int d\mathbf{v}_{2}\dots d\mathbf{v}_{N}P(\mathbf{v}_{1},\dots,\mathbf{v}_{ N}|S)=p_{1}\Big{(}\frac{m_{1}\mathbf{v}_{1}^{2}}{2}\Big{)}. \tag{28}\]
Here it is important to note that Eq. (26) together with the form of the energy function in Eq. (25) will only lead to isotropic velocity distributions because then \(P(\mathbf{v}_{1}|S)\) depends on \(\mathbf{v}_{1}\) through its magnitude, according to Eq. (28). By comparing \(P(\mathbf{v}_{1}|S)\) with the kappa distribution in Eq. (1), we see that our single-particle ensemble function \(p_{1}\) must be given by
\[p_{1}(k_{1})=\frac{1}{\eta_{\kappa}(v_{\rm th})}\left[1+\frac{2k_{1}}{m_{1}v_{ \rm th}^{2}(\kappa-\frac{3}{2})}\right]^{-(\kappa+1)}, \tag{29}\]
where \(k_{1}:=m_{1}\mathbf{v}_{1}^{2}/2\) is the kinetic energy of the particle with \(i=1\). In the next section, we will arrive at the kappa form for \(p_{1}(k_{1})\) using a single requirement on the dependence between the kinetic energy \(k_{1}\) of a particle and the kinetic energy \(K\) of its surrounding environment.
## IV Derivation of the kappa distribution
In the following analysis, we will be considering a group of \(n\leq N\) particles as a subsystem, regarding only their kinetic energy. Without loss of generality we can take the first particle as a test particle with kinetic energy \(k_{1}\), and the remaining \(n-1\) particles as its environment with kinetic energy
\[K:=\sum_{i=2}^{n}\frac{m_{i}\mathbf{v}_{i}^{2}}{2}. \tag{30}\]
Then, the energy \(\mathcal{K}\) of the subsystem is directly \(\mathcal{K}:=k_{1}+K\). Recalling that the density of states of kinetic energy for a group of \(n\) particles is given by
\[\Omega_{n}(\mathcal{K}):=\int d\mathbf{v}_{1}\ldots d\mathbf{v}_{n}\delta\Big{(} \mathcal{K}-\sum_{i=1}^{n}\frac{m_{i}\mathbf{v}_{i}^{2}}{2}\Big{)}=W_{n}\; \mathcal{K}^{\frac{3n}{2}-1} \tag{31}\]
where we have defined the constants
\[W_{n}:=\frac{(2\pi)^{\frac{3n}{2}}M^{-\frac{3}{2}}}{\Gamma\big{(}\frac{3n}{2} \big{)}} \tag{32}\]
and \(M:=\prod_{i=1}^{n}m_{i}\), the partition function associated to \(\Omega_{n}\) is its Laplace transform,
\[Z_{n}(\beta;M)=\int_{0}^{\infty}d\mathcal{K}\Omega_{n}(\mathcal{K})\exp(- \beta\mathcal{K})=W_{n}\beta^{-\frac{3n}{2}}\Gamma\Big{(}\frac{3n}{2}\Big{)}= \big{(}2\pi\big{)}^{\frac{3n}{2}}M^{-\frac{3}{2}}\beta^{-\frac{3n}{2}}, \tag{33}\]
which contains the single-particle partition function \(Z_{1}(\beta;m)\) as a particular case with \(n=1\) and \(M=m\),
\[Z_{1}(\beta;m)=\left(\sqrt{\frac{2\pi}{m}}\right)^{3}\beta^{-\frac{3}{2}}. \tag{34}\]
Now we will show that only one condition is sufficient to obtain the kappa distribution for a single particle in a plasma, namely that the most probable kinetic energy \(k^{*}\) of the test particle given the kinetic energy \(K\) of its \((n-1)\)-particle environment is linear in \(K\). In more precise terms, we require that
\[k^{*}:=\operatorname*{argmax}_{k_{1}}P(k_{1}|K,S)=\gamma_{n}+\alpha_{n}K, \tag{35}\]
where the parameters \(\gamma_{n}\) and \(\alpha_{n}\) are functions of \(n\). In order to show that Eq. (35) leads to the kappa distribution, let us first compute the joint distribution \(P(k_{1},K|S)\) of test particle plus environment, which is given by
\[\begin{split} P(k_{1},K|S)&=\left\langle\delta \Big{(}k_{1}-\frac{m_{1}\mathbf{v}_{1}^{2}}{2}\Big{)}\delta\Big{(}K-\sum_{i=2}^{n} \frac{m_{i}\mathbf{v}_{i}^{2}}{2}\Big{)}\right\rangle_{S}\\ &=\int d\mathbf{v}_{1}\ldots d\mathbf{v}_{n}p_{n}\Big{(}\sum_{i=1}^{n} \frac{m_{i}\mathbf{v}_{i}^{2}}{2}\Big{)}\delta\Big{(}k_{1}-\frac{m_{1}\mathbf{v}_{1}^ {2}}{2}\Big{)}\delta\Big{(}K-\sum_{i=2}^{n}\frac{m_{i}\mathbf{v}_{i}^{2}}{2} \Big{)}\\ &=p_{n}(k_{1}+K)\Bigg{[}\int d\mathbf{v}_{1}\delta\Big{(}k_{1}-\frac {m_{1}\mathbf{v}_{1}^{2}}{2}\Big{)}\Bigg{]}\left[\int d\mathbf{v}_{2}\ldots d\mathbf{v}_ {n}\delta\Big{(}K-\sum_{i=2}^{n}\frac{m_{i}\mathbf{v}_{i}^{2}}{2}\Big{)}\right], \end{split} \tag{36}\]
which, by using the definition of \(\Omega_{n}\) in Eq. (31), becomes
\[P(k_{1},K|S)=p_{n}(k_{1}+K)\Omega_{1}(k_{1})\Omega_{n-1}(K). \tag{37}\]
The conditional distribution \(P(k_{1}|K,S)\) appearing in Eq. (35) can then be obtained as
\[P(k_{1}|K,S)=\frac{P(k_{1},K|S)}{P(K|S)}=\frac{p_{n}(k_{1}+K)\Omega_{1}(k_{1})} {p_{n-1}(K)} \tag{38}\]
where a factor \(\Omega_{n-1}(K)\) has been cancelled, and the single-particle density of states \(\Omega_{1}(k_{1})\) is readily obtained from Eq. (31) with \(n=1\),
\[\Omega_{1}(k_{1})=\frac{2}{\sqrt{\pi}}\Big{(}\frac{2\pi}{m}\Big{)}^{3/2}\sqrt{ k_{1}}. \tag{39}\]
Now, because \(k^{*}\) is the argument of the maximum of \(P(k_{1}|K,S)\) according to Eq. (35), it follows that \(k^{*}\) is the solution of the extremum equation
\[0=\left[\frac{\partial}{\partial k_{1}}\ln P(k_{1}|K,S)\right]_{k_{1}=k^{*}}, \tag{40}\]
and by replacing Eq. (38) and Eq. (39) we obtain
\[\beta_{F}^{(n)}(k^{*}+K)=\frac{1}{2k^{*}}, \tag{41}\]
where \(\beta_{F}^{(n)}\) is the fundamental inverse temperature of the group of \(n\) particles, defined by
\[\beta_{F}^{(n)}(\mathcal{K}):=-\frac{\partial}{\partial\mathcal{K}}\ln p_{n}( \mathcal{K}). \tag{42}\]
We can replace \(k^{*}\) in Eq. (41) in terms of \(K\) using Eq. (35) and, after some algebra, obtain
\[\beta_{F}^{(n)}(\mathcal{K})=\frac{\alpha_{n}+1}{2(\gamma_{n}+\alpha_{n} \mathcal{K})}, \tag{43}\]
from which we can recover the \(n\)-particle ensemble function \(p_{n}\) by integration,
\[p_{n}(\mathcal{K})=p_{n}(0)\exp\left(-\frac{\alpha_{n}+1}{2}\int_{0}^{ \mathcal{K}}\frac{d\epsilon}{\gamma_{n}+\alpha_{n}\epsilon}\right)=p_{n}(0) \left[1+\Big{(}\frac{\alpha_{n}}{\gamma_{n}}\Big{)}\mathcal{K}\right]^{-\frac{ 1}{2\alpha_{n}}-\frac{1}{2}}, \tag{44}\]
where \(p_{n}(0)\) is a normalization constant to be determined. By marginalizing \(K\) in Eq. (37) and using Eq. (15) as
\[P(k_{1}|S)=p_{1}(k_{1})\Omega_{1}(k_{1}) \tag{45}\]
we see that
\[p_{1}(k_{1})=\int_{0}^{\infty}dKp_{n}(k_{1}+K)\Omega_{n-1}(K). \tag{46}\]
Now, making use of the definite integral
\[\int_{0}^{\infty}dy\;y^{m}\Big{[}1+r(x+y)\Big{]}^{-c}=r^{-m-1}B\big{(}c-m-1,m+ 1\big{)}\cdot\Big{[}1+rx\Big{]}^{m+1-c} \tag{47}\]
for \(x>0\), \(r>0\), \(m>-1\) and \(c>m+1\) with \(B(a,b):=\int_{0}^{1}dt\,t^{a-1}(1-t)^{b-1}\) the Beta function, we finally arrive at
\[p_{1}(k_{1})=p_{1}(0)\left[1+\Big{(}\frac{\alpha_{n}}{\gamma_{n}}\Big{)}k_{1} \right]^{\frac{3n}{2}-\frac{1}{2\alpha_{n}}-2}. \tag{48}\]
By comparing Eq. (48) and Eq. (29) we see that we have recovered the kappa distribution for the test particle. However, the dependence of \(\alpha_{n}\) and \(\gamma_{n}\) on \(n\) is not yet known. Because superstatistics imposes, through Eq. (9), that
\[p_{1}(k_{1})=\left(\sqrt{\frac{m}{2\pi}}\right)^{3}\int_{0}^{\infty}d\beta P( \beta|S)\exp(-\beta k_{1})\beta^{\frac{3}{2}} \tag{49}\]
and we have already shown that \(P(\beta|S)\) is size-independent, \(p_{1}(k_{1})\) must also be size-independent, even though \(\alpha_{n}\) and \(\gamma_{n}\) are functions of \(n\). This allows us to define new size-independent parameters \(u\) and \(\beta_{S}\) such that
\[\frac{1}{u} :=\frac{1}{2}-\frac{3n}{2}+\frac{1}{2\alpha_{n}}, \tag{50a}\] \[\beta_{S} :=\frac{\alpha_{n}}{u\,\gamma_{n}}, \tag{50b}\]
and whose meaning will be revealed shortly. In terms of these parameters, we can rewrite Eq. (48) as
\[p_{1}(k_{1})=p_{1}(0)\Big{[}1+(u\beta_{S})k_{1}\Big{]}^{-(\frac{1}{u}+\frac{ 3}{2})}. \tag{51}\]
Comparison with Eq. (29) gives the usual parameters \(\kappa\) and \(v_{\text{th}}\) of the kappa distribution for a single particle in terms of \(u\) and \(\beta_{S}\) as
\[\kappa =\frac{1}{u}+\frac{1}{2}, \tag{52a}\] \[\frac{mv_{\text{th}}^{2}}{2} =\frac{1}{(1-u)\beta_{S}}, \tag{52b}\]
and we can use these new parameters \(u\) and \(\beta_{S}\) to rewrite the fundamental inverse temperature \(\beta_{F}^{(n)}(\mathcal{K})\) in Eq. (43) as
\[\beta_{F}^{(n)}(\mathcal{K})=\Big{(}1+\frac{3nu}{2}\Big{)}\left[\frac{\beta_{ S}}{1+u\beta_{S}\mathcal{K}}\right]. \tag{53}\]
We see that \(u\to 0\), that is, \(\kappa\to\infty\), reduces \(\beta_{F}^{(n)}(\mathcal{K})\) to the constant function equal to \(\beta_{S}\) for all \(\mathcal{K}\), thus recovering the canonical ensemble. Replacing Eq. (51) and Eq. (39) into Eq. (45) we obtain the single-particle energy distribution, which after normalization yields
\[P(k_{1}|u,\beta_{S})=\frac{\left(\sqrt{u\beta_{S}}\right)^{3}}{B\Big{(}\frac{ 3}{2},\frac{1}{u}\Big{)}}\Big{[}1+(u\beta_{S})k_{1}\Big{]}^{-(\frac{1}{u}+ \frac{3}{2})}\sqrt{k_{1}}, \tag{54}\]
a result that fixes the normalization constant \(p_{1}(0)\) to be

\[p_{1}(0)=\left(\sqrt{\frac{m\,u\beta_{S}}{2\pi}}\right)^{3}\frac{\Gamma(\frac{1}{u}+\frac{3}{2})}{\Gamma(\frac{1}{u})}, \tag{55}\]
in full agreement with \(p_{1}(0)=\eta_{\kappa}^{-1}\) as it appears in Eq. (3). The mean and relative variance of \(P(k_{1}|u,\beta_{S})\) in Eq. (54) are given by
\[\left\langle k_{1}\right\rangle_{u,\beta_{S}} =\frac{3}{2\beta_{S}(1-u)}, \tag{56a}\] \[\frac{\left\langle(\delta k_{1})^{2}\right\rangle_{u,\beta_{S}}} {\left\langle k_{1}\right\rangle_{u,\beta_{S}}^{2}} =\frac{2+u}{3(1-2u)}, \tag{56b}\]
and from these two equations we can, in principle, determine \(u\) and \(\beta_{S}\) from the observed statistics of \(k_{1}\). Note that the relative variance in Eq. (56b) increases monotonically with \(u\) from its value of \(2/3\) for \(u=0\). Additionally, we see
that in order to keep \(\left\langle(\delta k_{1})^{2}\right\rangle_{u,\beta_{S}}\) a non-negative quantity, it is required that \(u<1/2\), that is, the spectral index \(\kappa\) must be larger than \(5/2\). Again, in the limit \(u\to 0\) we can confirm, using
\[\lim_{u\to 0}\Big{[}1+(u\beta_{S})k_{1}\Big{]}^{-(\frac{1}{u}+\frac{3}{2})}=\exp( -\beta_{S}k_{1}),\]
that \(P(k_{1}|u,\beta_{S})\) in Eq. (54) reduces to the Maxwell-Boltzmann distribution of single-particle energies,
\[P(k_{1}|\beta)=\left(\frac{2}{\sqrt{\pi}}\right)\beta^{\frac{3}{2}}\exp(-\beta k _{1})\sqrt{k_{1}} \tag{57}\]
with \(\beta=\beta_{S}\). Similarly, using Eq. (15) as \(P(\mathcal{K}|u,\beta_{S},n)=p_{n}(\mathcal{K})\Omega_{n}(\mathcal{K})\) we obtain the energy distribution for the group of \(n\) particles as
\[P(\mathcal{K}|u,\beta_{S},n)=\frac{\left(\sqrt{u\beta_{S}}\right)^{3n}}{B \Big{(}\frac{3n}{2},\frac{1}{u}\Big{)}}\Big{[}1+u\beta_{S}\mathcal{K}\Big{]}^ {-\left(\frac{1}{u}+\frac{3n}{2}\right)}\mathcal{K}^{\frac{3n}{2}-1}, \tag{58}\]
and we can verify that
\[\left\langle\mathcal{K}\right\rangle_{u,\beta_{S}}=\frac{3n}{2\beta_{S}(1-u)} =n\big{\langle}k_{1}\big{\rangle}_{u,\beta_{S}}, \tag{59}\]
hence the mean kinetic energy is an extensive quantity for all \(n>1\) and for all \(u\). By simple inspection we can also confirm that Eq. (58) includes Eq. (54) as a particular case with \(n=1\) and \(\mathcal{K}\to k_{1}\).
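The moments (56a)-(56b) are easy to confirm by direct quadrature of the density (54). A minimal sketch, with illustrative values of \(u\) and \(\beta_{S}\):

```python
# Quadrature check of the mean (56a) and relative variance (56b) of the
# single-particle energy distribution (54).
import numpy as np
from scipy.integrate import quad
from scipy.special import beta as Beta

u, bS = 0.3, 1.0
norm = np.sqrt(u * bS)**3 / Beta(1.5, 1.0 / u)
P = lambda k: norm * (1 + u * bS * k)**(-(1.0 / u + 1.5)) * np.sqrt(k)

m1 = quad(lambda k: k * P(k), 0, np.inf)[0]
m2 = quad(lambda k: k * k * P(k), 0, np.inf)[0]
print(m1, 3 / (2 * bS * (1 - u)))                           # Eq. (56a)
print((m2 - m1 * m1) / m1**2, (2 + u) / (3 * (1 - 2 * u)))  # Eq. (56b)
```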
We can gain further insight on the relationship between \(k^{*}\) and \(K\) if we write our original requirement in Eq. (35) in terms of \(u\), \(\beta_{S}\) and \(n\) as
\[k^{*}(K)=\frac{1+u\beta_{S}K}{\beta_{S}\big{(}[3n-1]u+2\big{)}}. \tag{60}\]
We readily see that the only case where \(k^{*}\) is independent of \(K\) corresponds to \(u=0\), that is, to the canonical ensemble with
\[\beta_{S}=\frac{1}{2k^{*}}, \tag{61}\]
while for \(u>0\) in the thermodynamic limit we have
\[\lim_{n\to\infty}k^{*}(K)=\lim_{n\to\infty}\frac{K}{3(n-1)}=\frac{k}{3}, \tag{62}\]
where we have defined \(k:=\lim_{n\to\infty}K/(n-1)\) as the average kinetic energy of the environment. This is in agreement with the mode and mean of the Maxwell-Boltzmann distribution of energies in Eq. (57), namely
\[k^{*}(\beta)=\frac{1}{2\beta}=\frac{1}{3}\big{\langle}k_{1}\big{\rangle}_{ \beta}. \tag{63}\]
On the other hand, the joint distribution \(P(k_{1},K|u,\beta_{S})\) in Eq. (37) yields the covariance between \(k_{1}\) and \(K\) as
\[\left\langle\delta k_{1}\delta K\right\rangle_{u,\beta_{S}}=\frac{9u(n-1)}{4 \beta_{S}^{2}(1-u)^{2}(1-2u)}\geq 0, \tag{64}\]
with equality only for \(u=0\). We can check that this covariance increases monotonically with \(u\), and that \(k_{1}\) and \(K\) are statistically independent if and only if \(u=0\).
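Because the superstatistical ensemble (7) is an exact mixture of canonical ensembles, the joint statistics of \(k_{1}\) and \(K\), and in particular the covariance (64), can be reproduced by hierarchical sampling: draw \(\beta\) from the gamma distribution anticipated in Eq. (68) of the next section, then draw independent canonical kinetic energies. A minimal sketch with illustrative \(u\), \(\beta_{S}\) and \(n\):

```python
# Hierarchical Monte Carlo check of the covariance formula (64).
import numpy as np

rng = np.random.default_rng(1)
u, bS, n, N = 0.2, 1.0, 5, 1_000_000
beta = rng.gamma(1.0 / u, u * bS, size=N)             # Eq. (68)
k = rng.gamma(1.5, 1.0 / beta[:, None], size=(N, n))  # k_i | beta canonical
k1, K = k[:, 0], k[:, 1:].sum(axis=1)

cov_mc = np.cov(k1, K)[0, 1]
cov_th = 9 * u * (n - 1) / (4 * bS**2 * (1 - u)**2 * (1 - 2 * u))
print(cov_mc, cov_th)  # agree within Monte Carlo error; zero only for u = 0
```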
## V Statistical distribution of inverse temperatures
The superstatistical distribution of the inverse temperature \(\beta\), namely \(P(\beta|u,\beta_{S})\), can now be determined by using Eq. (10) in the form
\[P(\beta|u,\beta_{S})=f_{1}(\beta)Z_{1}(\beta) \tag{65}\]
with \(f_{1}=\mathcal{L}^{-1}\{p_{1}\}\) the inverse Laplace transform of the single-particle ensemble function \(p_{1}\) in Eq. (51). Because the inverse Laplace transform is unique if it exists, and recalling the Euler integral
\[\int_{0}^{\infty}d\beta\exp(-\beta A)\beta^{R-1}=\Gamma(R)A^{-R}, \tag{66}\]
we obtain for \(A=k_{1}+1/(u\beta_{S})\) and \(R=1/u+3/2\), that
\[f_{1}(\beta)=\frac{p_{1}(0)}{u\beta_{S}\Gamma(\frac{3}{2}+\frac{1}{u})}\exp \left(-\frac{\beta}{u\beta_{S}}\right)\left(\frac{\beta}{u\beta_{S}}\right)^{ \frac{1}{u}+\frac{1}{2}}. \tag{67}\]
After multiplying by \(Z_{1}(\beta)\) in Eq. (34) and replacing Eq. (55), we obtain the properly normalized probability distribution for \(\beta\) as
\[P(\beta|u,\beta_{S})=\frac{1}{u\beta_{S}\ \Gamma(1/u)}\exp\left(-\frac{\beta} {u\beta_{S}}\right)\left(\frac{\beta}{u\beta_{S}}\right)^{\frac{1}{u}-1}, \tag{68}\]
which is a gamma distribution with mean and variance given by
\[\left\langle\beta\right\rangle_{u,\beta_{S}} =\beta_{S}, \tag{69a}\] \[\left\langle(\delta\beta)^{2}\right\rangle_{u,\beta_{S}} =u(\beta_{S})^{2}. \tag{69b}\]
Here we see that \(\beta_{S}\) is directly the mean superstatistical inverse temperature, in agreement with Eq. (24) and Eq. (23), while \(u\) is the relative variance of \(\beta\); together with the earlier requirement \(u<1/2\), this shows that we must have \(0\leq u<1/2\). The most probable inverse temperature is given by
\[\beta_{S}^{*}:=\beta_{S}(1-u), \tag{70}\]
and it is clear that \(u\to 0\) recovers the canonical ensemble, because
\[\left\langle(\delta\beta)^{2}\right\rangle_{u,\beta_{S}} \to 0, \tag{71}\] \[\beta_{S}^{*} \to\beta_{S}, \tag{72}\]
which together imply \(P(\beta|u,\beta_{S})\to\delta(\beta-\beta_{S})\), in agreement with the limit \(\kappa\to\infty\) of the kappa distribution, i.e. the Maxwell-Boltzmann distribution. Furthermore, using Eq. (70) and letting \(k_{B}T_{S}^{*}:=1/\beta_{S}^{*}\), we can rewrite Eq. (52b) as
\[v_{\rm th}=\sqrt{\frac{2k_{B}T_{S}^{*}}{m}}, \tag{73}\]
which agrees with Eq. (2) if we interpret the parameter \(T\) appearing in the kappa distribution as \(T_{S}^{*}\) of the superstatistical description. The conditional distribution of inverse temperature given \(K\) follows from Bayes' theorem as
\[P(\beta|K,u,\beta_{S},n)=\frac{P(\beta|u,\beta_{S})P(K|\beta)}{P(K|u,\beta_{ S},n)}=\frac{P(\beta|u,\beta_{S})\exp(-\beta K)}{Z_{n-1}(\beta)\,p_{n-1}(K)}, \tag{74}\]
where we have cancelled a factor \(\Omega_{n-1}(K)\). This is also a gamma distribution, written explicitly as
\[P(\beta|K,u,\beta_{S},n)=\frac{\left[1+u\beta_{S}K\right]^{\frac{1}{u}+\frac{ 3(n-1)}{2}}}{u\beta_{S}\Gamma\big{(}\frac{1}{u}+\frac{3(n-1)}{2}\big{)}}\exp \left(-\frac{\beta}{u\beta_{S}}\Big{[}1+u\beta_{S}K\Big{]}\right)\left(\frac{ \beta}{u\beta_{S}}\right)^{\frac{1}{u}+\frac{3(n-1)}{2}-1}, \tag{75}\]
but, unlike \(P(\beta|u,\beta_{S})\) in Eq. (68), this distribution is explicitly dependent on the size \(n\). The mean inverse temperature given \(K\) is
\[\left\langle\beta\right\rangle_{K,u,\beta_{S},n}=\left(1+\frac{3(n-1)u}{2} \right)\left[\frac{\beta_{S}}{1+u\beta_{S}K}\right], \tag{76}\]
and, by comparing with Eq. (53), we can verify that Eq. (22) holds, in the form
\[\left\langle\beta\right\rangle_{K,u,\beta_{S},n}=\beta_{F}^{(n-1)}(K). \tag{77}\]
This means \(\left\langle\beta\right\rangle_{K,u,\beta_{S},n}\) also reduces to \(\beta_{S}\) in the limit \(u\to 0\) with finite \(n\), becoming independent of \(K\). In the thermodynamic limit, that is, when \(n\to\infty\), we have that
\[\lim_{n\to\infty}\left\langle\beta\right\rangle_{K,u,\beta_{S},n}=\lim_{n\to \infty}\frac{3(n-1)}{2K}=\frac{3}{2k} \tag{78}\]
for \(u>0\). The relative variance of \(P(\beta|K,u,\beta_{S},n)\) is
\[\frac{\left\langle\left(\delta\beta\right)^{2}\right\rangle_{K,u,\beta_{S},n} }{\left\langle\beta\right\rangle_{K,u,\beta_{S},n}^{2}}=\frac{2u}{2+3(n-1)u}, \tag{79}\]
and vanishes both in the limit \(u\to 0\) and in the thermodynamic limit with \(u>0\), unlike the relative variance of \(P(\beta|u,\beta_{S})\) which is independent of \(n\). This last result, combined with Eq. (78), implies that
\[\lim_{n\to\infty}P(\beta|K,u,\beta_{S},n)=\delta\Big{(}\beta-\frac{3}{2k} \Big{)}. \tag{80}\]
We can interpret this result as the following statement: in the thermodynamic limit, the kinetic energy of a group of particles uniquely fixes its superstatistical temperature, and this temperature becomes exactly the fundamental temperature.
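The concentration expressed by Eq. (80) is already visible in the relative variance (79), which decays like \(1/n\). For illustration, at \(u=0.2\):

```python
# Relative variance (79) of P(beta|K,u,beta_S,n) for growing n: the
# conditional distribution of beta sharpens into the delta of Eq. (80).
u = 0.2
for n in (2, 10, 100, 10_000):
    print(n, 2 * u / (2 + 3 * (n - 1) * u))
# -> 0.154, 0.054, 0.0065, 6.7e-5
```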
## VI Connection with the superstatistical approximation for kinetic energies
In this section we will recast our earlier result [17] on the distribution of velocities for a particle in a collisionless plasma, noting that it is actually valid for the marginal distribution of velocities of any system in a steady state. From Eq. (46) we know that the single-particle ensemble function \(p_{1}(k_{1})\) can be written as
\[p_{1}(k_{1})=\int_{0}^{\infty}dKp_{n}(k_{1}+K)\Omega_{n-1}(K)\left[\int_{0}^{ \infty}d\beta\delta\big{(}\beta-\beta_{F}^{(n)}(K)\big{)}\right] \tag{81}\]
where in the square brackets on the right-hand side we have introduced a factor of \(1\) as an integral over a parameter \(\beta\). Rearranging the integrals and inserting a factor \(1=\exp(-\beta k_{1})\exp\big{(}\beta_{F}^{(n)}(K)k_{1}\big{)}\) by virtue of the delta function, we obtain
\[p_{1}(k_{1})=\int_{0}^{\infty}d\beta\exp(-\beta k_{1})\left[\int_{0}^{\infty} dK\exp\big{(}\beta_{F}^{(n)}(K)k_{1}\big{)}p_{n}(k_{1}+K)\Omega_{n-1}(K) \delta\big{(}\beta-\beta_{F}^{(n)}(K)\big{)}\right] \tag{82}\]
which is superstatistics with a single-particle weight function
\[f_{1}(\beta)=\int_{0}^{\infty}dK\exp\big{(}\beta_{F}^{(n)}(K)k_{1}\big{)}p_{ n}(k_{1}+K)\Omega_{n-1}(K)\delta\big{(}\beta-\beta_{F}^{(n)}(K)\big{)}. \tag{83}\]
The probability distribution of \(\beta\) is then obtained by multiplying \(f_{1}(\beta)\) by \(Z_{1}(\beta;m)\),
\[\begin{split} P(\beta|S)&=\left[\int_{0}^{\infty} dk^{\prime}\Omega_{1}(k^{\prime})\exp(-\beta k^{\prime})\right]\int_{0}^{ \infty}dK\exp\big{(}\beta_{F}^{(n)}(K)k_{1}\big{)}p_{n}(k_{1}+K)\Omega_{n-1}(K )\delta\big{(}\beta-\beta_{F}^{(n)}(K)\big{)}\\ &=\int_{0}^{\infty}dK\int_{0}^{\infty}dk^{\prime}p_{n}(k_{1}+K) \Omega_{1}(k^{\prime})\Omega_{n-1}(K)\exp\big{(}-\beta_{F}^{(n)}(K)[k^{\prime}-k_{1}]\big{)}\delta\big{(}\beta-\beta_{F}^{(n)}(K)\big{)},\end{split} \tag{84}\]
which we can now approximate by taking \(k_{1}\ll K\), leading to
\[\begin{split} P(\beta|S)&\approx\int_{0}^{\infty} dK\int_{0}^{\infty}dk^{\prime}p_{n}(k^{\prime}+K)\Omega_{1}(k^{\prime})\Omega_{n-1}(K )\delta\big{(}\beta-\beta_{F}^{(n)}(K)\big{)}\\ &=\int_{0}^{\infty}dK\int_{0}^{\infty}dk^{\prime}P(k_{1}=k^{\prime },K|S)\delta\big{(}\beta-\beta_{F}^{(n)}(K)\big{)}\\ &=\int_{0}^{\infty}dKP(K|S)\delta\big{(}\beta-\beta_{F}^{(n)}(K) \big{)},\end{split} \tag{85}\]
where we have used Eq. (37) and the first-order approximation
\[\ln p_{n}(K+\Delta k)\approx\ln p_{n}(K)-\beta_{F}^{(n)}(K)\Delta k \tag{86}\]
for \(\Delta k\ll K\). Defining
\[\mathcal{B}_{F}(K):=\beta_{F}^{(n-1)}(K) \tag{87}\]
and using the approximation \(\mathcal{B}_{F}(K)\approx\beta_{F}^{(n)}(K)\) for \(n\gg 1\) we finally arrive at the general result,
\[P(\beta|S)=\lim_{n\to\infty}\int_{0}^{\infty}dKP(K|S)\delta\big{(}\beta- \mathcal{B}_{F}(K)\big{)}, \tag{88}\]
that is,
\[P(\beta|S)=\lim_{n\to\infty}P\big{(}\mathcal{B}_{F}=\beta\big{|}S\big{)}. \tag{89}\]
The result in Eq. (89) is a generalization of a property recently derived by Gravanis _et al_ [27], and taken as an interpretation of the superstatistical parameter \(\beta\), namely that, in the thermodynamic limit,
\[\beta\sim\frac{3N}{2\mathcal{K}}\]
in the sense that they are equivalent as random variables. In fact in our case we see, from Eq. (87) and the thermodynamic limit of Eq. (53), that
\[\lim_{n\to\infty}\mathcal{B}_{F}(K)=\frac{3}{2k}. \tag{90}\]
Here we would like to put special emphasis on the fact that the superstatistical \(\beta\) is, in general, a _statistical parameter_ and cannot be understood as the value of a phase space function for an arbitrary superstatistical model, in accordance with an earlier proof of impossibility [28]. Instead, the limit in Eq. (89) clearly shows that the distribution of \(\beta\) converges to the distribution of the fundamental inverse temperature _of the environment_ of the test particle. Replacing Eq. (90) into Eq. (89) gives
\[P(\beta|u,\beta_{S})=P\Big{(}\frac{3}{2k}=\beta\Big{|}u,\beta_{S}\Big{)}= \frac{3}{2\beta^{2}}\ P\Big{(}k=\frac{3}{2\beta}\Big{|}u,\beta_{S}\Big{)}, \tag{91}\]
after using the transformation property [29] of probability densities from a variable \(X\) to a function \(f(X)\),
\[P\big{(}f=F\big{|}I\big{)}=P\big{(}X=f^{-1}(F)\big{|}I\big{)}\Bigg{|}\frac{df^{ -1}(F)}{dF}\Bigg{|}, \tag{92}\]
where \(f^{-1}(F)\) is the inverse function to \(f(X)\) such that \(f(f^{-1}(F))=F\) for all \(F\). We can use Eq. (91) to obtain the distribution \(P(k|u,\beta_{S})\) by replacing Eq. (68), leading to
\[P(k|u,\beta_{S})=\bigg{[}\frac{2\beta^{2}}{3}P(\beta|u,\beta_{S})\bigg{]}_{ \beta=3/(2k)}=\frac{2u\beta_{S}}{3\Gamma(1/u)}\exp\Big{(}-\frac{3}{2ku\beta_{ S}}\Big{)}\Big{(}\frac{3}{2ku\beta_{S}}\Big{)}^{\frac{1}{u}+1}, \tag{93}\]
which is an inverse gamma distribution, and we can verify that it agrees with
\[P(k|u,\beta_{S})=\lim_{n\to\infty}nP(\mathcal{K}=nk|u,\beta_{S},n) \tag{94}\]
by using Eq. (58). An alternative way of deriving Eq. (91) is by using the marginalization rule
\[P(\beta|S)=\int_{0}^{\infty}dKP(\beta|K,S)P(K|S) \tag{95}\]
and replacing Eq. (80), which gives
\[\begin{split} P(\beta|u,\beta_{S})&=\lim_{n\to\infty} \int_{0}^{\infty}dKP(\beta|K,u,\beta_{S})P(K|u,\beta_{S})\\ &=\lim_{n\to\infty}\int_{0}^{\infty}dK\delta\Big{(}\beta-\frac{3} {2k}\Big{)}P(K|u,\beta_{S})\\ &=\lim_{n\to\infty}n\int_{0}^{\infty}dk\delta\Big{(}\beta-\frac{3 }{2k}\Big{)}\frac{1}{n}P(k|u,\beta_{S})\\ &=\frac{3}{2\beta^{2}}P\Big{(}k=\frac{3}{2\beta}\Big{|}u,\beta_{S }\Big{)},\end{split} \tag{96}\]
where we have again used the property in Eq. (92) as
\[P(aX=F|I)=\frac{1}{a}P\left(X=\frac{F}{a}\Big{|}I\right).\]
Please note that the distribution \(P(k|u,\beta_{S})\) in Eq. (93) is not the same as the distribution of the single-particle kinetic energy \(P(k_{1}|u,\beta_{S})\) in Eq. (54) because, as random variables,
\[k_{1}\neq\lim_{n\to\infty}\frac{1}{n-1}\sum_{i=2}^{n}k_{i}.\]
In fact, although both distributions have the same mean,
\[\left\langle k\right\rangle_{u,\beta_{S}}=\left\langle k_{1}\right\rangle_{u, \beta_{S}}=\frac{3}{2\beta_{S}(1-u)}, \tag{97}\]
according to Eq. (56b) their variances are different, and are related by the inequality
\[\left\langle(\delta k)^{2}\right\rangle_{u,\beta_{S}}=\left(\frac{u}{1-2u} \right)\left\langle k\right\rangle^{2}_{u,\beta_{S}}<\left\langle(\delta k_{1 })^{2}\right\rangle_{u,\beta_{S}}. \tag{98}\]
It is interesting to note that the variance of \(k\), a random variable itself defined as the average of an infinite number of random variables, only vanishes for \(u=0\), seemingly in contradiction with the law of large numbers (LLN). It is precisely because for \(u>0\) the kinetic energies \(k_{2},k_{3},\ldots,k_{n}\) are correlated that the LLN does not apply.
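This failure of the LLN is easy to visualize with the hierarchical sampler used above: conditionally on \(\beta\) the average \(K/(n-1)\) does concentrate at \(3/(2\beta)\), but the residual spread of \(\beta\) itself keeps the unconditional relative variance at the value \(u/(1-2u)\) of Eq. (98). A minimal sketch with illustrative parameter values:

```python
# The variance of the environment average K/(n-1) saturates instead of
# vanishing as n grows, illustrating the breakdown of the LLN for u > 0.
import numpy as np

rng = np.random.default_rng(2)
u, bS, N = 0.2, 1.0, 200_000
for n in (10, 100, 1000):
    beta = rng.gamma(1.0 / u, u * bS, size=N)
    kbar = rng.gamma(1.5 * (n - 1), 1.0 / beta) / (n - 1)  # K/(n-1) | beta
    print(n, kbar.var() / kbar.mean()**2, u / (1 - 2 * u))
```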
## VII Summary and discussion
We have shown that the kappa distribution for particle velocities in a plasma can be recovered from superstatistics plus a single assumption, namely Eq. (35), which imposes linearity of the most probable kinetic energy \(k^{*}\) of a test particle as a function of the kinetic energy \(K\) of its environment. Our results do not rely on the concept of entropy or its maximization, on non-additivity, or on any similar notion, and do not assume any particular distribution of temperature _a priori_. Nevertheless, in such a plasma the inverse temperature \(\beta\) does have a well-defined distribution, namely the gamma distribution \(P(\beta|u,\beta_{S})\) in Eq. (68). Although this distribution describes a statistical parameter rather than a phase space observable, it is closely related to the distribution of the fundamental inverse temperature \(\beta_{F}\) for a group of \(n\) particles in the thermodynamic limit, a general result that is consistent with the insight provided by Gravanis _et al_ [27] on the interpretation of the superstatistical \(\beta\). This insight, however, has to be carefully stated in order to understand temperature as a property of the environment surrounding the system of interest.
Our result shows that the kappa distribution can arise whenever there are kinetic energy correlations, suggesting that it may be realized in more diverse experimental conditions than are currently considered. Relevant new scenarios to be explored may include laser-produced plasmas [30], Z-pinches [31] and in particular plasma focus devices [32; 33; 34], where a rich phenomenology has been observed, including dense plasma [35], plasma shocks [36], plasma filaments [37] and supersonic plasma jets [32; 33; 34; 35].
## Acknowledgements
S.D. gratefully acknowledges funding from ANID FONDECYT grant 1220651. |
2306.05789 | Reflective Conditions for Radiative Transfer in Integral Form with
H-Matrices | In a recent article the authors showed that the Radiative Transfer equations
with multiple frequencies and scattering can be formulated as a nonlinear
integral system. In the present article, the formulation is extended to handle
reflective boundary conditions. The fixed point method to solve the system is
shown to be monotone. The discretization is done with a $P^1$ Finite Element
Method. The convolution integrals are precomputed at every vertex of the mesh
and stored in compressed hierarchical matrices, using Partially Pivoted
Adaptive Cross-Approximation. Then the fixed point iterations involve only
matrix-vector products. The method is $O(N\sqrt[3]{N}\ln N)$, with respect to
the number of vertices, when everything is smooth. A numerical implementation
is proposed and tested on two examples. As there are some analogies with ray
tracing, the programming is complex. | Olivier Pironneau, Pierre-Henri Tournier | 2023-06-09T10:05:48Z | http://arxiv.org/abs/2306.05789v1 | # Reflective Conditions for Radiative Transfer in Integral Form with H-Matrices
###### Abstract
In a recent article the authors showed that the Radiative Transfer equations with multiple frequencies and scattering can be formulated as a nonlinear integral system. In the present article, the formulation is extended to handle reflective boundary conditions. The fixed point method to solve the system is shown to be monotone. The discretization is done with a \(P^{1}\) Finite Element Method. The convolution integrals are precomputed at every vertex of the mesh and stored in compressed hierarchical matrices, using Partially Pivoted Adaptive Cross-Approximation. Then the fixed point iterations involve only matrix-vector products. The method is \(O(N\sqrt[3]{N}\ln N)\), with respect to the number of vertices, when everything is smooth. A numerical implementation is proposed and tested on two examples. As there are some analogies with ray tracing, the programming is complex.
**Keywords** MSC classification 85A25, 37N30, 31A10, 35Q30, 68P30, 74S05, Radiative Transfer, Reflective boundaries, Integral equation, H-Matrix, Finite Element Methods.
## Introduction
The Radiative Transport Equations (RTE) describe the behavior of electromagnetic radiation in a domain \(\Omega\) as it interacts with matter [14]. They are used to model a wide range of physical phenomena, including the propagation of light through plasma, tomography [18], atmospheric media [13], etc.
The RTE are derived from the basic principles of quantum and statistical mechanics; they consist of a partial differential equation (PDE) that describes the distribution of radiation intensity in space, time and frequency, coupled with a budget balance equation (BBE) for the electronic temperature. The PDE takes into
account both absorption and scattering of radiation by matter, as well as emission of radiation by sources, which, in the present case, will be restricted to the boundaries of the emitting material.
In [6],[7] the authors have shown that the PDE can be converted into an integral equation for the total radiation at each point in the domain and that the coupling with the BBE can be handled by fixed point iterations. The method also leads to a general proof of existence, uniqueness and regularity of the solution. The difference with earlier studies such as [4] lies in the coupling with the equation for the temperature, the BBE, or even the PDE for the temperature when diffusion is important.
In [5] the authors have presented an implementation of the method using H-Matrix compression, a crucial ingredient which makes the evaluation of the integrals \(O(N\sqrt[3]{N}\ln N)\) with respect to the number of vertices \(N\) in the 3D mesh which discretizes the domain \(\Omega\); the \(N\ln N\) factor is the complexity of the H-Matrix approximation, while the \(\sqrt[3]{N}\) factor comes from the line integral required by each matrix element. Compared with a brute force solution of the equations as in [10], the integral method keeps a manageable computing time for problems with frequency dependent parameters. However, it did not handle reflective boundary conditions [17].
H-Matrix compression [8],[1],[3] is a mathematical technique used to efficiently represent and manipulate large matrices that arise in a variety of applications. The technique uses a hierarchically structured representation of the matrices allowing fast and accurate numerical computations when the integrals have a convolution-type integrand which decays with the distance.
H-Matrix compression works by first defining a hierarchical geometric partitioning of the matrix into smaller and smaller submatrices. This so-called hierarchical _block tree_ is then traversed recursively and _far-field_ interaction blocks which verify a _geometric admissibility condition[1]_ are compressed by using a low rank approximation. The resulting H-Matrix allows for efficient matrix-vector multiplications, among other operations of linear algebra. The technique is particularly important and popular for computational electromagnetics in integral form such as boundary element methods.
With the _Partially Pivoted Adaptive Cross-Approximation_ (ACA) [3] only the needed coefficients of the matrices are computed (\(r\) rows and \(r\) columns, where \(r\) is the rank of the approximation). However the theory requires geometrical smoothness [2].
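To make the ACA idea concrete, the following self-contained Python sketch builds a low-rank approximation of a smooth far-field block from only a few of its rows and columns. It is a simplified illustration of partially pivoted ACA (with an absolute stopping tolerance), not htool's actual implementation:

```python
import numpy as np

def aca(get_row, get_col, tol=1e-8, max_rank=60):
    """Partially pivoted ACA: build A ~ U @ V from a few rows/columns.
    get_row(i) / get_col(j) return one exact row / column of the block."""
    U, V, used = [], [], {0}
    i = 0
    for _ in range(max_rank):
        row = get_row(i)
        for u, v in zip(U, V):
            row = row - u[i] * v          # residual of the pivot row
        j = int(np.argmax(np.abs(row)))   # column pivot
        if abs(row[j]) < 1e-14:
            break
        v_new = row / row[j]
        col = get_col(j)
        for u, v in zip(U, V):
            col = col - v[j] * u          # residual of the pivot column
        U.append(col)
        V.append(v_new)
        if np.linalg.norm(col) * np.linalg.norm(v_new) < tol:
            break
        order = np.argsort(-np.abs(col))  # next row pivot among unused rows
        i = next(int(t) for t in order if int(t) not in used)
        used.add(i)
    return np.array(U).T, np.array(V)

# usage: a smooth kernel between two well-separated point clusters
x = np.linspace(0.0, 1.0, 200)
y = np.linspace(3.0, 4.0, 180)
A = 1.0 / (1.0 + np.abs(x[:, None] - y[None, :]))
U, V = aca(lambda i: A[i].copy(), lambda j: A[:, j].copy())
print(U.shape[1], np.linalg.norm(A - U @ V) / np.linalg.norm(A))
```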
We have extended the implementation done in [5] using FreeFEM [9] and htool\({}^{1}\); htool is a parallel C++ toolbox implementing H-Matrices, used in particular for the boundary element method in electromagnetism. FreeFEM is a popular open-source software package for solving PDE systems by the finite element method (FEM).
Footnote 1: [https://github.com/htool-ddm/htool](https://github.com/htool-ddm/htool)
FreeFEM provides a wide range of pre-built FEM, as well as tools for mesh generation. It has a dedicated high-level programming language that allows users to meet their specific needs. FreeFEM also supports parallel computing with MPI.
One of the main advantages of FreeFEM for the present study is its ability to handle complex geometries and boundary conditions, especially thanks to its powerful automatic interpolation from volume to surface meshes.
Adding reflective conditions (RC) to the FreeFEM code presented in [5] turned out to require overcoming the following difficulties:
* Integrate the RC into the integral formulation of the problem
* Show that the fixed point iterations are still monotone.
* Find a formulation compatible with the use of H-Matrices
* Implement the method in the FreeFEM language.
This paper presents the solutions found to overcome these four difficulties. It ends with a numerical test proposed in [11].
## 1 The Radiative Transfer Equations
The problem is formulated in a domain \(\Omega\subset\mathbb{R}^{3}\) with boundary \(\Gamma\). The unit sphere in \(\mathbb{R}^{3}\) is called \(\mathbb{S}^{2}\). One must find the radiation (called light from now on) intensity \(I_{\nu}(\mathbf{x},\boldsymbol{\omega})\) at all points \(\mathbf{x}\in\Omega\), for all directions all \(\boldsymbol{\omega}\in\mathbb{S}^{2}\) and all frequencies \(\nu\in\mathbb{R}_{+}\), satisfying:
\[\boldsymbol{\omega}\cdot\nabla I_{\nu}+\kappa_{\nu}I_{\nu}= \kappa_{\nu}(1-a_{\nu})B_{\nu}(T)+\kappa_{\nu}a_{\nu}J_{\nu}\,,\quad J_{\nu}:= \tfrac{1}{4\pi}\,\int_{\mathbb{S}^{2}}I_{\nu}\mathrm{d}\boldsymbol{\omega}\,, \tag{1}\] \[\int_{0}^{\infty}\kappa_{\nu}(1-a_{\nu})(J_{\nu}-B_{\nu}(T)) \mathrm{d}\nu=0\,,\] (2) \[I_{\nu}(\mathbf{x},\boldsymbol{\omega})=R_{\nu}(\mathbf{x}, \boldsymbol{\omega})I_{\nu}(\mathbf{x},\boldsymbol{\omega}-2(\mathbf{n}\cdot \boldsymbol{\omega})\mathbf{n})+Q_{\nu}(\mathbf{x},\boldsymbol{\omega}),\] \[\text{ on }\Sigma:=\{(\mathbf{x},\boldsymbol{\omega})\in\Gamma \times\mathbb{S}^{2}\ :\ \boldsymbol{\omega}\cdot\mathbf{n}(\mathbf{x})<0\,\}, \tag{3}\]
where \(B_{\nu}(T)=\frac{\nu^{3}}{\mathrm{e}^{\nu/T}-1}\) is the (rescaled) Planck function. In the RC (3), \(R_{\nu}\) is the portion of light which is reflected and \(Q_{\nu}\) is the light source; \(\mathbf{n}(\mathbf{x})\) is the outer normal of \(\Gamma\) at \(\mathbf{x}\). \(\kappa_{\nu}>0\) and \(a_{\nu}\in[0,1]\) are the absorption and scattering coefficients; in general they depend on \(\nu\) and \(\mathbf{x}\).
**Example 1**: _If an object \(\mathcal{O}\) inside a box \(\mathcal{B}\) radiates because it is at temperature \(T_{0}\), then, \(\Omega=\mathcal{B}\backslash\mathcal{O}\), \(Q_{\nu}=Q^{0}[\boldsymbol{\omega}\cdot\mathbf{n}]_{-}B_{\nu}(T_{0})\) on \(\mathcal{O}\) and zero elsewhere and \(\Sigma\subset\partial\mathcal{B}\times\mathbb{S}^{2}\). As usual \([f]_{-}=-\min(f,0)\)._
### An Integral Formulation
For clarity we drop the subscript \(\nu\) on \(\kappa,\ a\) and \(I\). Assume that \(\Omega\) is bounded and convex (see remark 1). Let
\[S_{\nu}(\mathbf{x})=\kappa(1-a)B_{\nu}(T)+\kappa aJ_{\nu}, \tag{4}\]
For a given \(\mathbf{x}\) and \(\boldsymbol{\omega}\), let \(\tau_{\mathbf{x},\boldsymbol{\omega}}\) be such that \((\mathbf{x}_{\Sigma}(\mathbf{x},\boldsymbol{\omega})\coloneqq\mathbf{x}-\tau_{ \mathbf{x},\boldsymbol{\omega}}\boldsymbol{\omega},\boldsymbol{\omega})\in\Sigma\); the method of characteristics tells us that
\[I(\mathbf{x},\boldsymbol{\omega})=I(\mathbf{x}_{\Sigma}(\mathbf{x},\boldsymbol {\omega}),\boldsymbol{\omega})\mathrm{e}^{-\int_{0}^{\tau_{\mathbf{x}, \boldsymbol{\omega}}}\kappa(\mathbf{x}-\boldsymbol{\omega}s)\mathrm{d}s}+\int_{ 0}^{\tau_{\mathbf{x},\boldsymbol{\omega}}}\mathrm{e}^{-\int_{0}^{s}\kappa( \mathbf{x}-\boldsymbol{\omega}s^{\prime})\mathrm{d}s^{\prime}}S_{\nu}( \mathbf{x}-\boldsymbol{\omega}s)\mathrm{d}s. \tag{5}\]
Notice that \(\tau_{\mathbf{x},\boldsymbol{\omega}}=|\mathbf{x}_{\Sigma}-\mathbf{x}|\) (see Figure 1), therefore, let
\[\begin{split} J_{\nu}(\mathbf{x})&\coloneqq\tfrac{1} {4\pi}\,\int_{\mathbb{S}^{2}}I(\mathbf{x},\boldsymbol{\omega})\mathrm{d} \omega=S_{\nu}^{E}(\mathbf{x})+\mathcal{J}[S_{\nu}](\mathbf{x})\quad\text{ with }\\ S_{\nu}^{E}(\mathbf{x})&\coloneqq\tfrac{1}{4\pi}\, \int_{\mathbb{S}^{2}}I(\mathbf{x}_{\Sigma}(\mathbf{x},\boldsymbol{\omega}), \boldsymbol{\omega})\mathrm{e}^{-\int_{0}^{\tau_{\mathbf{x},\boldsymbol{\omega }}}\kappa(\mathbf{x}-\boldsymbol{\omega}s)\mathrm{d}s}\mathrm{d}\omega,\\ \mathcal{J}[S](\mathbf{x})&\coloneqq\tfrac{1}{4\pi} \,\int_{\mathbb{S}^{2}}\int_{0}^{\tau_{\mathbf{x},\boldsymbol{\omega}}} \mathrm{e}^{-\int_{0}^{s}\kappa(\mathbf{x}-\boldsymbol{\omega}s^{\prime}) \mathrm{d}s^{\prime}}S(\mathbf{x}-\boldsymbol{\omega}s)\mathrm{d}s\mathrm{d} \omega\\ &\qquad\qquad\qquad=\tfrac{1}{4\pi}\,\int_{\Omega}S(\mathbf{y}) \frac{\mathrm{e}^{-\int_{[\mathbf{x},\mathbf{y}]}\kappa}}{|\mathbf{y}- \mathbf{x}|^{2}}\mathrm{d}\mathbf{y},\end{split} \tag{6}\]
where \(\boldsymbol{\omega}^{\prime}(\boldsymbol{\omega})\coloneqq\boldsymbol{\omega}- 2(\mathbf{n}\cdot\boldsymbol{\omega})\mathbf{n}\) and \(\int_{[\mathbf{x},\mathbf{y}]}f:=|\mathbf{y}-\mathbf{x}|\int_{0}^{1}f(s \mathbf{y}+(1-s)\mathbf{x})\mathrm{d}s.\)
To justify the last formula we refer to the following lemma with \(\Psi(\mathbf{x},\mathbf{y})=S(\mathbf{y})\mathrm{e}^{-\int_{[\mathbf{x}, \mathbf{y}]}\kappa}\). Again, for clarity, we drop the first argument \(\mathbf{x}\).
**Lemma 1**: _Let \(\Omega\) be a convex bounded open set of \(\mathbb{R}^{3}\); let \(\Gamma\) be its boundary. Let \(\Psi:\Omega\mapsto\mathbb{R}\) be continuous. Let \(\tau_{\mathbf{x},\boldsymbol{\omega}}\geq 0\) be such that \(\mathbf{x}-\tau_{\mathbf{x},\boldsymbol{\omega}}\boldsymbol{\omega}\in\Gamma\), \(\mathbf{x}\in\Omega\). Then_
\[\int_{\mathbb{S}^{2}}\int_{0}^{\tau_{\mathbf{x},\boldsymbol{\omega}}}\Psi( \mathbf{x}-\boldsymbol{\omega}s)\mathrm{d}s\mathrm{d}\omega=\int_{\Omega}\frac {\Psi(\mathbf{y})}{|\mathbf{y}-\mathbf{x}|^{2}}\mathrm{d}\mathbf{y}.\]
_Proof_: Denote \(\tilde{\Psi}\) the extension of \(\Psi\) by zero outside \(\Omega\). Let \(\boldsymbol{\omega}=(\cos\theta\cos\varphi,\sin\theta\cos\varphi,\sin\varphi)^{T}\), \(\theta\in\left(0,2\pi\right)\), \(\varphi\in\left(-\tfrac{\pi}{2},\tfrac{\pi}{2}\right)\), so that \(\mathrm{d}\omega=\cos\varphi\,\mathrm{d}\varphi\,\mathrm{d}\theta\). Consider a partition of the semi-infinite line starting at \(\mathbf{x}\) in direction \(-\boldsymbol{\omega}\) into segments of size \(\delta s\) and denote \(\mathbf{x}_{n}=\mathbf{x}-n\delta s\boldsymbol{\omega}\). Then

\[\begin{split}\int_{\mathbb{S}^{2}}\int_{0}^{\tau_{\mathbf{x},\boldsymbol{\omega}}}\tilde{\Psi}(\mathbf{x}-\boldsymbol{\omega}s)\mathrm{d}s\mathrm{d}\omega=\lim_{\delta s\to 0}\sum_{n>0}\delta s\int_{0}^{2\pi}\int_{-\frac{\pi}{2}}^{\frac{\pi}{2}}\tilde{\Psi}(\mathbf{x}_{n})\cos\varphi\,\mathrm{d}\varphi\,\mathrm{d}\theta\\ =\lim_{\delta s\to 0}\sum_{n>0}\int_{0}^{2\pi}\int_{-\frac{\pi}{2}}^{\frac{\pi}{2}}\frac{\tilde{\Psi}(\mathbf{x}_{n})}{|\mathbf{x}-\mathbf{x}_{n}|^{2}}|\mathbf{x}-\mathbf{x}_{n}|^{2}|\mathbf{x}_{n+1}-\mathbf{x}_{n}|\cos\varphi\,\mathrm{d}\varphi\,\mathrm{d}\theta.\end{split} \tag{7}\]

We note that \(|\mathbf{x}-\mathbf{x}_{n}|^{2}|\mathbf{x}_{n+1}-\mathbf{x}_{n}|\cos\varphi\,\mathrm{d}\theta\mathrm{d}\varphi\) is the elementary volume in the sector \(\mathrm{d}\theta\mathrm{d}\varphi\) between the spheres centered at \(\mathbf{x}\) and of radii \(|\mathbf{x}-\mathbf{x}_{n}|\) and \(|\mathbf{x}-\mathbf{x}_{n+1}|\). Therefore the right-hand side is the integral over \(\mathbf{y}\in\mathbb{R}^{3}\) of \(\frac{\tilde{\Psi}(\mathbf{y})}{|\mathbf{x}-\mathbf{y}|^{2}}\). \(\Box\)
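Lemma 1 can also be checked numerically: taking \(\Psi(\mathbf{y})=|\mathbf{y}-\mathbf{x}|^{2}\) makes both sides equal to the volume of \(\Omega\), so for the unit ball \(\frac{1}{3}\int_{\mathbb{S}^{2}}\tau_{\mathbf{x},\boldsymbol{\omega}}^{3}\mathrm{d}\omega\) must give \(4\pi/3\) for any interior \(\mathbf{x}\). A minimal Monte Carlo sketch (the choice of \(\mathbf{x}\) is arbitrary):

```python
# Sanity check of Lemma 1 on the unit ball with Psi(y) = |y - x|^2:
# the direction integral of tau^3 / 3 reproduces the ball volume.
import numpy as np

rng = np.random.default_rng(3)
x = np.array([0.4, -0.2, 0.1])                 # interior point, |x| < 1
w = rng.normal(size=(1_000_000, 3))
w /= np.linalg.norm(w, axis=1, keepdims=True)  # uniform directions on S^2
b = w @ x
tau = b + np.sqrt(b * b + 1.0 - x @ x)         # |x - tau*w| = 1, tau > 0
print(4 * np.pi * np.mean(tau**3) / 3, 4 * np.pi / 3)
```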
**Remark 1**: _When \(\Omega\) is not convex, one may apply the lemma to its convex closure \(\bar{\Omega}\) with \(\kappa\) extended to \(+\infty\) in \(\bar{\Omega}\backslash\Omega\)._
**Remark 2**: _When \(R_{\nu}\equiv 0\), \(S^{E}\) is given by (6) with \(Q_{\nu}\) in place of \(I\). As (2) defines a map \(\mathcal{T}:J\mapsto T\),_
\[T(\mathbf{x})=\mathcal{T}[J_{\nu}](\mathbf{x}),\ \ \forall\mathbf{x}\in\Omega,\]
_then, (4), (6) is a nonlinear integral formulation for \(J\):_
\[J_{\nu}(\mathbf{x})=S_{\nu}^{E}(\mathbf{x})+\mathcal{J}[\kappa(1-a)B_{\nu}( \mathcal{T}[J_{\nu}])+\kappa aJ_{\nu}](\mathbf{x}),\ \ \ \forall\mathbf{x}\in\Omega. \tag{8}\]
The following fixed point method was shown in [6] to be monotone and convergent:
\[J_{\nu}^{k+1}(\mathbf{x})=S_{\nu}^{E}(\mathbf{x})+\mathcal{J}[\kappa(1-a)B_{ \nu}(\mathcal{T}[J_{\nu}^{k}](\mathbf{x}))+\kappa aJ_{\nu}^{k}](\mathbf{x}),\ \ k=0,1,\ldots \tag{9}\]
Let us extend these properties to the RTE with RC. For clarity let \(\mathbf{x}_{\Sigma}\) be short for \(\mathbf{x}_{\Sigma}(\mathbf{x},\boldsymbol{\omega})\) and let
\[\boldsymbol{\omega}^{\prime}(\boldsymbol{\omega}):=\boldsymbol{\omega}-2 \boldsymbol{\omega}\cdot\mathbf{n}(\mathbf{x}_{\Sigma})\;\mathbf{n}(\mathbf{x}_{\Sigma}),\ \ \ \mathbf{x}^{\prime}_{\Sigma}:=\mathbf{x}_{\Sigma}(\mathbf{x}_{\Sigma}(\mathbf{x}, \boldsymbol{\omega}),\boldsymbol{\omega}^{\prime})\ \ \ \ \text{with}\ \boldsymbol{\omega}:=\frac{\mathbf{x}-\mathbf{x}_{\Sigma}}{|\mathbf{x}- \mathbf{x}_{\Sigma}|}.\]
Let us insert (5) and (3) in (6). Then,
\[S_{\nu}^{E}(\mathbf{x})=S_{\nu,1}^{E}+S_{\nu,2}^{E}+S_{\nu,3}^{E}\ \text{with}\] \[S_{\nu,1}^{E}(\mathbf{x}):=\tfrac{1}{4\pi}\int_{\mathbb{S}^{2}}Q_{\nu}(\mathbf{x}_{\Sigma},\boldsymbol{\omega})\mathrm{e}^{-\int_{0}^{\tau_{\mathbf{x},\boldsymbol{\omega}}}\kappa(\mathbf{x}-\boldsymbol{\omega}s)\mathrm{d}s}\mathrm{d}\omega,\] \[S_{\nu,2}^{E}(\mathbf{x}):=\tfrac{1}{4\pi}\int_{\mathbb{S}^{2}}R_{\nu}(\mathbf{x}_{\Sigma},\boldsymbol{\omega})Q_{\nu}(\mathbf{x}^{\prime}_{\Sigma},\boldsymbol{\omega}^{\prime})\left[\mathrm{e}^{-\int_{0}^{\tau_{\mathbf{x}_{\Sigma},\boldsymbol{\omega}^{\prime}}}\kappa(\mathbf{x}_{\Sigma}-\boldsymbol{\omega}^{\prime}s)\mathrm{d}s}\,\mathrm{e}^{-\int_{0}^{\tau_{\mathbf{x},\boldsymbol{\omega}}}\kappa(\mathbf{x}-\boldsymbol{\omega}s)\mathrm{d}s}\right]\mathrm{d}\omega,\] \[S_{\nu,3}^{E}(\mathbf{x}):=\tfrac{1}{4\pi}\int_{\mathbb{S}^{2}}\Big{[}R_{\nu}(\mathbf{x}_{\Sigma},\boldsymbol{\omega})\mathrm{e}^{-\int_{0}^{\tau_{\mathbf{x},\boldsymbol{\omega}}}\kappa(\mathbf{x}-\boldsymbol{\omega}s^{\prime})\mathrm{d}s^{\prime}}\int_{0}^{\tau_{\mathbf{x}_{\Sigma},\boldsymbol{\omega}^{\prime}}}\mathrm{e}^{-\int_{0}^{s}\kappa(\mathbf{x}_{\Sigma}-\boldsymbol{\omega}^{\prime}s^{\prime})\mathrm{d}s^{\prime}}S_{\nu}(\mathbf{x}_{\Sigma}-\boldsymbol{\omega}^{\prime}s)\mathrm{d}s\Big{]}\mathrm{d}\omega.\]
**Hypothesis 1**: _Let us rule out multiple reflections and focal points:_
1. _If_ \(R_{\nu}(\mathbf{x}_{\Sigma}(\mathbf{x},\boldsymbol{\omega}),\boldsymbol{\omega })>0\)_, then_ \(R_{\nu}(\mathbf{x}_{\Sigma}(\mathbf{x}_{\Sigma}(\mathbf{x},\boldsymbol{\omega}), \boldsymbol{\omega}^{\prime}),\boldsymbol{\omega})=0\)_._
2. _Given_ \(\mathbf{x}\) _and_ \(\mathbf{y}\)_, there is only a finite number_ \(M\) _of_ \(\mathbf{x}^{\prime}_{n}\in\Gamma\) _such that_ \([\mathbf{x}^{\prime}_{n},\mathbf{y}]\) _is the reflected ray of_ \([\mathbf{x},\mathbf{x}^{\prime}_{n}]\)_. Note that_ \(\mathbf{x}^{\prime}_{n}\) _depends on_ \(\mathbf{x}\) _and_ \(\mathbf{y}\)_._
**Proposition 1**: _Under Hypothesis 1_
\[S_{\nu,3}^{E}(\mathbf{x}):=\sum_{n=1}^{M}\tfrac{1}{4\pi}\int_{\Omega}R_{\nu}( \mathbf{x}^{\prime}_{n},\tfrac{\mathbf{x}-\mathbf{x}^{\prime}_{n}}{|\mathbf{x}- \mathbf{x}^{\prime}_{n}|})\frac{\mathrm{e}^{-\int_{[\mathbf{x},\mathbf{x}^{ \prime}_{n}]\cup[\mathbf{x}^{\prime}_{n},\mathbf{y}]}\kappa}}{(|\mathbf{x}- \mathbf{x}^{\prime}_{n}|+|\mathbf{x}^{\prime}_{n}-\mathbf{y}|)^{2}}S(\mathbf{y })\mathrm{d}\mathbf{y}.\]
_Proof_ Let \(\mathbf{x}(s):=\mathbf{x}_{\Sigma}-\omega^{\prime}s\). By Lemma 1,
\[\int_{\mathbb{S}^{2}}\int_{0}^{\tau_{\mathbf{x}_{\Sigma},\boldsymbol{\omega}^{\prime}}}\Big{[}R_{\nu}(\mathbf{x}_{\Sigma},\boldsymbol{\omega})\mathrm{e}^{-\int_{[\mathbf{x},\mathbf{x}_{\Sigma}]\cup[\mathbf{x}_{\Sigma},\mathbf{x}(s)]}\kappa}S(\mathbf{x}(s))\mathrm{d}s\ \Big{]}\mathrm{d}\omega\] \[=\int_{\Omega}R_{\nu}(\mathbf{x}_{\Sigma},\boldsymbol{\omega})S(\mathbf{y})\frac{\mathrm{e}^{-\int_{[\mathbf{x},\mathbf{x}_{\Sigma}]\cup[\mathbf{x}_{\Sigma},\mathbf{y}]}\kappa}}{(|\mathbf{x}-\mathbf{x}_{\Sigma}|+|\mathbf{x}_{\Sigma}-\mathbf{y}|)^{2}}\mathrm{d}\mathbf{y},\]
provided that \([\mathbf{x}_{\Sigma},\mathbf{y}]\) is reflected from \([\mathbf{x},\mathbf{x}_{\Sigma}]\). Now, by hypothesis, if \(\mathbf{x}\) and \(\mathbf{y}\) are given in \(\Omega\) there are only a finite number of \(\mathbf{x}_{\Sigma}\in\Gamma\) for which \([\mathbf{x}_{\Sigma},\mathbf{y}]\) is reflected from \([\mathbf{x},\mathbf{x}_{\Sigma}]\), (see Figure 1). \(\Box\)
**Proposition 2**: _Let Hypothesis 1 hold. Then the source terms from the boundaries are_
\[S^{E}_{\nu,1}(\mathbf{x})=\tfrac{1}{4\pi}\,\int_{\Gamma}Q_{\nu}( \mathbf{y},\tfrac{\mathbf{y}-\mathbf{x}}{|\mathbf{y}-\mathbf{x}|})\frac{[( \mathbf{y}-\mathbf{x})\cdot\mathbf{n}(\mathbf{y})]_{-}}{|\mathbf{y}-\mathbf{x} |^{3}}\mathrm{e}^{-\int_{[\mathbf{x},\mathbf{y}]}\kappa}\mathrm{d}\Gamma( \mathbf{y}), \tag{10}\] \[S^{E}_{\nu,2}(\mathbf{x})=\sum_{n=1}^{M}\tfrac{1}{4\pi}\,\int_{ \Gamma}R_{\nu}(\mathbf{x}^{\prime}_{n},\tfrac{\mathbf{x}-\mathbf{x}^{\prime}_ {n}}{|\mathbf{x}-\mathbf{x}^{\prime}_{n}|})Q_{\nu}(\mathbf{y},\tfrac{\mathbf{ x}^{\prime}_{n}-\mathbf{y}}{|\mathbf{x}^{\prime}_{n}-\mathbf{y}|})\] \[\frac{[(\mathbf{x}^{\prime}_{n}-\mathbf{y})\cdot\mathbf{n}( \mathbf{y})]_{-}\mathrm{e}^{-\int_{[\mathbf{x},\mathbf{x}^{\prime}_{n}]} \cup[\mathbf{x}^{\prime}_{n},\mathbf{y}]}\kappa}{|\mathbf{x}^{\prime}_{n}- \mathbf{y}|\;(|\mathbf{x}-\mathbf{x}^{\prime}_{n}|+|\mathbf{x}^{\prime}_{n}- \mathbf{y}|)^{2}}\mathrm{d}\Gamma(\mathbf{y}). \tag{11}\]
Recall that \(\mathbf{x}^{\prime}_{n}\) depends on \(\mathbf{y}\).
_Proof_ : Recall that a solid angle integral at \(\mathbf{x}\) of a surface \(\Sigma\) is
\[\int_{\mathbb{S}^{2}}f(\mathbf{x},\mathbf{x}^{\prime})\mathrm{d}\boldsymbol{ \omega}(\mathbf{x}^{\prime})=\int_{\Sigma}f(\mathbf{x},\mathbf{x}^{\prime}) \frac{[(\mathbf{x}-\mathbf{x}^{\prime})\cdot\mathbf{n}(\mathbf{x}^{\prime})]_ {-}}{|\mathbf{x}-\mathbf{x}^{\prime}|}\frac{\mathrm{d}\Sigma(\mathbf{x}^{ \prime})}{|\mathbf{x}-\mathbf{x}^{\prime}|^{2}}.\]
Therefore, from the definition of \(S^{E}_{\nu,1}\) above we see that (10) holds.
To prove (11) we start from the definition of \(S^{E}_{\nu,2}\) above. For clarity let us assume that \(Q_{\nu}\) and \(R_{\nu}\) do not depend on \(\boldsymbol{\omega}\).
Observe that if a ray from \(\mathbf{x}\) in the direction \(-\boldsymbol{\omega}\) does not hit, after reflection at \(\mathbf{x}^{\prime}\) on some \(\Gamma_{R}\), a boundary \(\Gamma_{Q}\) at \(\mathbf{y}\) where \(Q_{\nu}(\mathbf{y})\) is non zero, then \(\boldsymbol{\omega}\) does not contribute to \(S^{E}_{\nu,2}\). Thus, we can use the solid angle of \(\Gamma_{Q}\). However the solid angle is not seen from \(\mathbf{x}\) but from \(\bar{\mathbf{x}}\), the symmetric of \(\mathbf{x}\) with respect to the tangent plane to \(\Gamma_{R}\) at \(\mathbf{x}^{\prime}\). As the distance from \(\bar{\mathbf{x}}\) to \(\mathbf{y}\) is also \(|\mathbf{x}-\mathbf{x}^{\prime}|+|\mathbf{x}^{\prime}-\mathbf{y}|\), we obtain (11). \(\Box\)
**Corollary 1**: \[J_{\nu}(\mathbf{x})=\bar{S}^{E}_{\nu}(\mathbf{x})+\bar{\mathcal{J}}[S_{\nu}] (\mathbf{x}),\] (12)
_with \(\bar{S}^{E}_{\nu}(\mathbf{x}):=S^{E}_{\nu,1}(\mathbf{x})+S^{E}_{\nu,2}( \mathbf{x})\) given by Proposition 2 and_
\[\bar{\mathcal{J}}[S](\mathbf{x})=\tfrac{1}{4\pi}\,\int_{\Omega}\! \left[\frac{\mathrm{e}^{-\int_{[\mathbf{x},\mathbf{y}]}\kappa}}{|\mathbf{y}- \mathbf{x}|^{2}}+\sum_{n=1}^{M}\frac{\mathrm{e}^{-\int_{[\mathbf{x},\mathbf{x} ^{\prime}_{n}]}\cup[\mathbf{x}^{\prime}_{n},\mathbf{y}]}\kappa}{(|\mathbf{x}- \mathbf{x}^{\prime}_{n}|+|\mathbf{x}^{\prime}_{n}-\mathbf{y}|)^{2}}R_{\nu}( \mathbf{x}^{\prime}_{n},\tfrac{\mathbf{x}-\mathbf{x}^{\prime}_{n}}{|\mathbf{x} -\mathbf{x}^{\prime}_{n}|})\right]\!S(\mathbf{y})\mathrm{d}\mathbf{y}. \tag{13}\]
### Example
Assume that \(\Gamma=\Gamma_{Q}\cup\Gamma_{R}\) and \(Q_{\nu}(\mathbf{x},\boldsymbol{\omega})=[\boldsymbol{\omega}\cdot\mathbf{n}(\mathbf{x})]_{-}\)\(Q^{0}\) with \(Q^{0}>0\) on \(\Gamma_{Q}\) and \(0\) on \(\Gamma_{R}\). Assume \(R_{\nu}(\mathbf{x},\boldsymbol{\omega})=R^{0}\) with \(R^{0}>0\) on \(\Gamma_{R}\) and \(0\) on \(\Gamma_{Q}\). Assume that
there is never more than one reflection point on \(\Gamma_{R}\), i.e. \(M=1\). Then
\[\bar{S}_{\nu}^{E}({\bf x})=\frac{Q^{0}}{4\pi}\,\int_{\Gamma_{Q}} \left[\left(\frac{[({\bf y}-{\bf x})\cdot{\bf n}({\bf y})]_{-}}{|{\bf y}-{\bf x }|^{2}}\right)^{2}{\rm e}^{-\int_{[{\bf x},{\bf y}]}\kappa}\right.\] \[+\left.R^{0}\frac{([({\bf x}_{1}^{\prime}-{\bf y})\cdot{\bf n}({ \bf y})]_{-})^{2}{\rm e}^{-\int_{[{\bf x},{\bf x}_{1}^{\prime}]\cup[{\bf x}_{1 }^{\prime},{\bf y}]}\kappa}}{|{\bf x}_{1}^{\prime}-{\bf y}|^{2}\,\,(|{\bf x }-{\bf x}_{1}^{\prime}|+|{\bf x}_{1}^{\prime}-{\bf y}|)^{2}}\right]{\rm d} \Gamma({\bf y}),\] \[\bar{\cal J}[S]({\bf x})=\frac{1}{4\pi}\,\int_{\Omega}\Biggl{[} \frac{{\rm e}^{-\int_{[{\bf x},{\bf y}]}\kappa}}{|{\bf y}-{\bf x}|^{2}}+R^{0}\frac{{\rm e}^{-\int_{[{\bf x},{\bf x}_{1}^{\prime}]\cup[{\bf x}_{1}^{\prime},{\bf y}]}\kappa}}{(|{ \bf x}-{\bf x}_{1}^{\prime}|+|{\bf x}_{1}^{\prime}-{\bf y}|)^{2}}\Biggr{]}S({ \bf y}){\rm d}{\bf y}.\]
### Fixed Point Iterations
Consider the fixed point iterations initialized with \(T^{0}\) and \(J^{0}=0\).
**Algorithm** For \(k=0,1,\ldots\):
Set \(S_{\nu}^{k}({\bf x})=\kappa(1-a)B_{\nu}(T^{k})+\kappa aJ_{\nu}^{k}\).
Set \(J_{\nu}^{k+1}({\bf x})=\bar{S}_{\nu}^{E}({\bf x})+\bar{\cal J}[S_{\nu}^{k}]({ \bf x})\).
Compute \(T^{k+1}\) by solving (using Newton algorithm) for each \({\bf x}\in\Omega\)
\[\int_{0}^{\infty}\kappa_{\nu}(1-a_{\nu})(J_{\nu}^{k+1}-B_{\nu}(T^{k+1})){\rm d }\nu=0.\]
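For illustration, once the discrete kernel (the matrix \(G\) of Section 2, or its H-Matrix mat-vec) and the boundary source \(\bar{S}^{E}\) have been precomputed, the loop can be written compactly. The sketch below is a schematic Python transcription, not the FreeFEM implementation; it uses the gray simplification (frequency-independent \(\kappa\), \(a\)), for which the Newton solve reduces to the closed form \(T=(\tfrac{15}{\pi^{4}}\int J_{\nu}\mathrm{d}\nu)^{1/4}\) since \(\int_{0}^{\infty}B_{\nu}(T)\mathrm{d}\nu=\pi^{4}T^{4}/15\):

```python
# Schematic fixed-point loop; G (N x N kernel matrix with kappa absorbed,
# as in G_kappa^{ij} below), SE (N x Nnu boundary source) and nu / w_nu
# (frequency quadrature) are assumed precomputed. Gray case: the
# temperature update is closed-form instead of a per-vertex Newton solve.
import numpy as np

def fixed_point(G, SE, nu, w_nu, a, n_iter=50):
    N, Nnu = SE.shape
    B = lambda T: nu[None, :]**3 / np.expm1(nu[None, :] / T[:, None])
    T = np.full(N, 1e-2)             # start below T*: monotone increase
    J = np.zeros((N, Nnu))
    for _ in range(n_iter):
        S = a * J + (1 - a) * B(T)   # source S_nu, kappa inside G
        J = SE + G @ S               # one sweep of the integral operator
        T = (15 / np.pi**4 * (J * w_nu).sum(axis=1))**0.25
    return J, T
```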
**Proposition 3**: _Let \(\{J_{\nu}^{*},T^{*}\}\) be the solution. If \(T^{0}({\bf x})>T^{*}({\bf x}),\ \forall{\bf x}\in\Omega\) then the iterations are monotone decreasing: \(T^{k}({\bf x})>T^{k+1}>T^{*}({\bf x}),\ \forall{\bf x}\in\Omega\). Conversely
Figure 1: In this configuration the source \(\Gamma_{Q}\) is the upper square. An RC is imposed on the lower plane \(\Gamma_{R}\). \(S_{\nu}^{E}\) has an integral of the solid angle of the upper square seen from \({\bf x}\) plus an integral of the solid angle of the upper square seen from \(\bar{\bf x}\), the symmetric of \({\bf x}\) with respect to \(\Gamma_{R}\).
_if \(T^{0}({\bf x})<T^{*}({\bf x}),\ \forall{\bf x}\in\Omega\) then the iterations are monotone increasing: \(T^{k}({\bf x})<T^{k+1}<T^{*}({\bf x}),\ \forall{\bf x}\in\Omega\)._
_Proof_: Let us prove it for the monotone increasing sequence.
By subtracting the definition \(J^{k}_{\nu}\) from that of \(J^{k+1}_{\nu}\) and using the linearity of \(\bar{\cal J}\), we obtain
\[J^{k+1}_{\nu}({\bf x})-J^{k}_{\nu}({\bf x})=\bar{\cal J}[S^{k}_{\nu}-S^{k-1}_{ \nu}]({\bf x}).\]
As \(\bar{\mathcal{J}}\) is a strictly positive operator, if \(S^{k}_{\nu}>S^{k-1}_{\nu}\) for all \(\mathbf{x}\), then \(J^{k+1}_{\nu}(\mathbf{x})>J^{k}_{\nu}(\mathbf{x})\). The equation for \(T^{k+1}\) is also monotone in the sense that

\[J^{k+1}_{\nu}(\mathbf{x})>J^{k}_{\nu}(\mathbf{x})\quad\Longrightarrow\quad B_{\nu}(T^{k+1})>B_{\nu}(T^{k})\quad\Longrightarrow\quad T^{k+1}>T^{k},\]
because \(B_{\nu}\) is increasing in \(T\).
Conclusion: if \(T^{1}>T^{0}\) and \(S^{1}>S^{0}\) then \(T^{k+1}>T^{k}\) for all \(k\). One sure way to impose it is to choose \(T^{0}=0\) and \(J^{0}=0\).
To prove that \(T^{k}<T^{*}\) we observe that
\[J^{k}_{\nu}({\bf x})-J^{*}_{\nu}({\bf x})=\bar{\cal J}[S^{k-1}_{ \nu}-S^{*}_{\nu}]({\bf x}).\] \[\mbox{Hence }S^{k-1}_{\nu}<S^{*}_{\nu}\quad\Longrightarrow\quad J ^{k}_{\nu}({\bf x})<J^{*}_{\nu}({\bf x})\quad\Longrightarrow\quad T^{k}<T^{*}.\]
\(\Box\)
Discretization seems to preserve this property (see Figure 2).
**Remark 3**: _Hence, convergence and uniqueness can probably be proved as in [7], but there are technical difficulties of functional analysis that are beyond the scope of this article._
## 2 FEM discretization and Compressed H-Matrices
For clarity, consider Example 1.2. As \(Q^{0}\) and \(R^{0}\) take different values on \(\Gamma_{Q}\) and \(\Gamma_{R}\), we write \(Q^{0}(\mathbf{x})\) and \(R^{0}(\mathbf{x})\).
The domain \(\Omega\) is discretized by a tetrahedral mesh; the boundary \(\Gamma\) is discretized by a triangular mesh, not necessarily conforming with the volume mesh.

Let \(\{\mathbf{x}^{j}\}_{1}^{N}\) be the vertices of the tetrahedra of \(\Omega\) and \(\{\tilde{\mathbf{x}}^{l}\}_{1}^{L}\) the vertices of the triangles of \(\Gamma\).
A continuous \(P^{1}\) interpolation of \(J\) on the tetrahedral mesh is:
\[J({\bf x})=\sum_{1}^{N}J_{j}\hat{w}^{j}({\bf x})\ \mbox{where }\hat{w}^{j} \mbox{ is the $P^{1}$- Finite Element hat function of vertex }{\bf x}^{j}.\]
Then
\[S_{\nu,j}:=aJ_{\nu,j}+(1-a)B_{\nu}(T_{j}),\quad J_{\nu,i}:=\bar{S}_{\nu,i}^{E}+ \sum_{j}G_{\kappa}^{ij}S_{\nu,j}\quad\text{ where }\]
\[G_{\kappa}^{ij}=\tfrac{1}{4\pi}\,\int_{\Omega}\!\left[\kappa\frac{\mathrm{e}^{-\int_{[\mathbf{x}^{i},\mathbf{y}]}\kappa}}{|\mathbf{x}^{i}-\mathbf{y}|^{2}}+\sum_{n=1}^{M}R^{0}(\mathbf{x}_{n}^{\prime})\frac{\mathrm{e}^{-\int_{[\mathbf{x}^{i},\mathbf{x}_{n}^{\prime}]\cup[\mathbf{x}_{n}^{\prime},\mathbf{y}]}\kappa}}{(|\mathbf{x}^{i}-\mathbf{x}_{n}^{\prime}|+|\mathbf{x}_{n}^{\prime}-\mathbf{y}|)^{2}}\right]\!\hat{w}^{j}(\mathbf{y})\mathrm{d}\mathbf{y}\]
and where \(\bar{S}_{\nu,i}^{E}=\frac{1}{4\pi}\,\int_{\Gamma}Q^{0}(\mathbf{y})\!\left[\left(\frac{\left[(\mathbf{x}^{i}-\mathbf{y})\cdot\mathbf{n}(\mathbf{y})\right]_{-}}{|\mathbf{x}^{i}-\mathbf{y}|^{2}}\right)^{2}\mathrm{e}^{-\int_{[\mathbf{x}^{i},\mathbf{y}]}\kappa}\right.\)

\[\left.+\sum_{n=1}^{M}R^{0}(\mathbf{x}_{n}^{\prime})\frac{\left(\left[(\mathbf{x}_{n}^{\prime}-\mathbf{y})\cdot\mathbf{n}(\mathbf{y})\right]_{-}\right)^{2}\mathrm{e}^{-\int_{[\mathbf{x}^{i},\mathbf{x}_{n}^{\prime}]\cup[\mathbf{x}_{n}^{\prime},\mathbf{y}]}\kappa}}{|\mathbf{x}_{n}^{\prime}-\mathbf{y}|^{2}(|\mathbf{x}^{i}-\mathbf{x}_{n}^{\prime}|+|\mathbf{x}_{n}^{\prime}-\mathbf{y}|)^{2}}\right]\!\mathrm{d}\Gamma(\mathbf{y}).\]
The integrals are approximated with quadrature at points \(\left\{\mathbf{x}_{q}^{j}\right\}_{1}^{M_{q}}\). The points are inside the elements; consequently \(|\mathbf{x}^{i}-\mathbf{x}_{q}^{j}|\) is never zero. A formula of degree \(5\), with \(M_{q}=14\), is used when \(|\mathbf{x}^{i}-\mathbf{y}|\) is small and of degree \(2\), with \(M_{q}=4\), otherwise; the results do not change when higher degrees are used. Fortunately when \(\mathbf{x}^{i}\) is close to \(\Gamma\) an analytical formula can be used [7].
To compute \(\mathbf{x}_{n}^{\prime}\) such that \([\mathbf{y},\mathbf{x}_{n}^{\prime}]\) is the reflected ray of \([\mathbf{x}_{n}^{\prime},\mathbf{x}^{i}]\) a loop on all the elements of the reflecting boundaries is necessary. This can be expensive, but in the case of planar reflective boundaries the symmetric point \(\bar{\mathbf{x}}^{i}\) is easy to compute and so is the intersection of \([\bar{\mathbf{x}}^{i},\mathbf{y}]\) with the reflective boundary.
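For a single planar reflective boundary, the computation just described can be sketched as follows; this is a minimal sketch with our own names, where the plane is given by a point `p0` and a normal `n`, and the identity \(|\mathbf{x}-\mathbf{x}^{\prime}|+|\mathbf{x}^{\prime}-\mathbf{y}|=|\bar{\mathbf{x}}-\mathbf{y}|\) is used.

```python
import numpy as np

def reflection_point(x, y, p0, n):
    """Return the point x' on the plane through p0 with normal n such
    that [y, x'] is the reflected ray of [x', x], or None if the
    segment [xbar, y] misses the plane; xbar is the mirror image of x."""
    x, y, p0, n = (np.asarray(a, dtype=float) for a in (x, y, p0, n))
    n = n / np.linalg.norm(n)
    xbar = x - 2.0 * np.dot(x - p0, n) * n        # mirror image of x
    denom = np.dot(y - xbar, n)
    if abs(denom) < 1e-14:                        # segment parallel to plane
        return None
    t = np.dot(p0 - xbar, n) / denom
    if not 0.0 <= t <= 1.0:                       # intersection outside [xbar, y]
        return None
    return xbar + t * (y - xbar)
```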
Finally, to the vector \(\{\bar{S}_{\nu,i}^{E}\}_{i=1}^{N}\) we associate a matrix \(\{\bar{S}_{i,l}^{E}\}_{i,l=1}^{N,L}\) by replacing \(Q^{0}(\mathbf{y})\) above by \(\tilde{w}^{l}(\mathbf{y})\). Then:
\[Q^{0}(\mathbf{y})=\sum_{1}^{L}Q_{l}^{0}\tilde{w}^{l}(\mathbf{y})\implies\bar {S}_{\nu,i}^{E}=\sum_{1}^{L}\bar{S}_{i,l}^{E}Q_{l}^{0}.\]
### Compression
For each \(\nu\) we have two large dense matrices, \(\{G_{\kappa}^{ij}\}_{i,j=1}^{N,N}\) and \(\{\bar{S}_{i,l}^{E}\}_{i,l=1}^{N,L}\).
**Remark 4**: _Note that for each value of \(\nu\) two matrices are needed. However on close inspection it is really two matrices for each value of \(\kappa_{\nu}\). Very often, less than ten values are sufficient to represent a general \(\kappa_{\nu}\) by a piece-wise constant interpolation on these values._
These matrices can be compressed as \(\mathcal{H}\)-matrices [2],[15],[16] (and the references therein) so that the matrix-vector product has complexity \(O(N\ln N)\).
The method works best when the kernel in the integrals decays with the distance between \(\mathbf{x}^{i}\) and \(\mathbf{y}\). In all matrices the kernel decays with the square of the distance. The \(\mathcal{H}\)-matrix approximation views \(\mathbf{G}\) as a hierarchical tree of square blocks. The blocks correspond to interactions between clusters of points near \(\mathbf{x}^{j}\) and near \(\mathbf{x}^{i}\). A far-field interaction block can be approximated by a low-rank matrix because its singular value decomposition (SVD) has fast decaying singular values. We use the _Partially Pivoted Adaptive Cross-Approximation_ (ACA) [3] to approximate the first terms of the SVD of the blocks, because only \(r\) rows and \(r\) columns are needed instead of the whole block, where \(r\) is the rank of the approximation. The rank is a function of a user-defined parameter \(\epsilon\) connected to the relative Frobenius norm error. Another criterion must be met: if \(R_{1}\) (resp. \(R_{2}\)) is the radius of a cluster of points centered at \(\mathbf{x}_{1}\) (resp. \(\mathbf{x}_{2}\)), then one goes down the hierarchical tree until the corresponding block satisfies \(\max(R_{1},R_{2})<\eta|\mathbf{x}_{1}-\mathbf{x}_{2}|\), where \(\eta\) is a user-defined parameter. If a leaf is reached, the block is not compressed and all its elements are computed.
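A minimal sketch of the partially pivoted ACA is given below; it is our own illustration, where `get_row`/`get_col` are assumed callbacks returning one row or column of the block, and the stopping test uses the standard heuristic Frobenius-norm estimate that neglects cross terms.

```python
import numpy as np

def aca_block(get_row, get_col, m, n, eps=1e-6, max_rank=64):
    """Partially pivoted ACA of an m x n block: returns U (m,k), V (k,n)
    with block ~= U @ V, touching only k rows and k columns."""
    U, V = [], []
    norm2 = 0.0                 # running estimate of ||U V||_F^2
    used_rows, i = {0}, 0
    for _ in range(min(max_rank, m, n)):
        r = np.asarray(get_row(i), dtype=float).copy()
        for u, v in zip(U, V):
            r -= u[i] * v                      # residual of row i
        j = int(np.argmax(np.abs(r)))
        if np.abs(r[j]) < 1e-300:              # (near-)zero residual row
            break
        v_new = r / r[j]
        c = np.asarray(get_col(j), dtype=float).copy()
        for u, v in zip(U, V):
            c -= v[j] * u                      # residual of column j
        U.append(c)
        V.append(v_new)
        inc = np.dot(c, c) * np.dot(v_new, v_new)
        norm2 += inc
        if inc <= eps**2 * norm2:              # relative Frobenius criterion
            break
        # next pivot row: largest entry of the new column among unused rows
        order = np.argsort(-np.abs(c))
        i = next((int(k) for k in order if int(k) not in used_rows), -1)
        if i < 0:
            break
        used_rows.add(i)
    if not U:
        return np.zeros((m, 0)), np.zeros((0, n))
    return np.array(U).T, np.array(V)
```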
The precision is not guaranteed if the jump of \([(\mathbf{x}-\mathbf{y})\cdot\mathbf{n}(\mathbf{y})]_{-}\) from one triangular face to another is large. A similar singularity caused by normals is analyzed for a double-layer potential formulation in [2] (Example 3.38, p. 148) and a remedy is proposed. To check whether this remedy is needed here we ran two cases, one without compression and one with 97% compression. No difference was observed.
## 3 An Academic Test
Figure 2: Values of \(J\), for the academic test, along the \(y\) axis at \(x=z=15\), computed with an RC. Convergence versus iteration number \(n\). When the scaled temperature is initialized to \(T^{0}=0.001\) at \(n=0\), the convergence is monotonically increasing. When \(T^{0}=0.44\), the convergence is monotonically decreasing.

In [11] a semi-analytic solution of the RTE is given for the geometry shown in Figure 3. In this test \(a=0\) and \(\kappa\) is a function of \(\mathbf{x}\) but not of \(\nu\). Hence the grey formulation can be used for \(\bar{I}=\int_{0}^{\infty}I_{\nu}\mathrm{d}\nu\). By averaging (14) in \(\nu\) and due to the Stefan-Boltzmann relation, the following holds:
\[\begin{array}{l}\int_{0}^{\infty}B_{\nu}(T)\mathrm{d}\nu=\sigma T^{4}\text{ with }\sigma=\frac{\pi^{4}}{15}\quad\Longrightarrow\\ \bar{J}^{k+1}(\mathbf{x})=\bar{S}^{E}(\mathbf{x})+\bar{\mathcal{J}}[\kappa \sigma(T^{k})^{4}](\mathbf{x}),\quad\kappa\sigma(T^{k+1})^{4}=\kappa\bar{J}^{ k+1}.\end{array} \tag{15}\]
### The Geometry
The outer container is \(D=(0,60)\times(0,100)\times(0,60)\), in cm. A cube \(C=[0,10]^{3}\), inside \(D\), radiates with intensity \(Q^{0}=0.1\). A rectangular cylinder prolonging the radiating cube, \((0,10)\times(10,100)\times(0,10)\), has a low absorption \(\kappa=10^{-4}\), while the rest has \(\kappa=0.1\). In Kobayashi's test case 1A there is no scattering, and the three planes containing the origin reflect radiation perfectly (\(R_{\nu}=1\)): \((O,x,z)\), \((O,x,y)\), \((O,y,z)\).
Unfortunately the present method cannot handle volumetric radiating regions. Consequently we have kept the geometry, but only the 3 faces of \(C\) inside \(D\) radiate in all directions \(\boldsymbol{\omega}\) with intensity \(Q^{0}[\boldsymbol{\omega}\cdot\mathbf{n}]_{-}\), where \(\mathbf{n}\) is the normal to the cube's face pointing inside the cube. The domain is \(\Omega=D\backslash C\) (see Figure 3). We refer to this case as Test-3.
### Results
To assess the precision of the method we consider first only one reflective plane, \(\Gamma_{R}=(0,y,z)\), and a constant \(\kappa=0.1\). We refer to this case as Test-1. Test-2 is Test-1 with \(\kappa\) as in Test-3.
First we verify, on Test-3, that the convergence is monotone increasing if \(T^{0}\) is small and monotone decreasing if \(T^{0}\) is large (Figure 2). Note that the monotone increasing sequence converges faster.
Next, we compare the results, on Test-1, with a computation on a domain \(\bar{D}=(-60,60)\times(0,100)\times(0,60)\) which is \(D\) plus the symmetric of \(D\) with respect to the plane \((0,y,z)\). This is because reflection on a plane is equivalent to extending the domain by symmetry with respect to that plane.
Figure 4 shows, on Test-1, level surfaces of \(J\) computed on the symmetrized domain (but restricted to the original domain) and compared with the same level surfaces but computed with the RC. Surfaces with similar colors should be near each other. In fact the difference is not visible except near \(z=0\).
Figure 3: A small cube (colored blue and red on the figure) radiates normally to its faces in a medium which has a very small absorption coefficient \(\kappa=10^{-4}\) in the cylinder prolonging the cube and \(\kappa=0.1\) elsewhere.
Figure 5 shows, on Test-2, the level surfaces of \(J\) computed with the RC, and the same level surfaces computed without any RC on the \((O,y,z)\) plane. It is seen that surfaces with similar colors are far from each other. By comparing Figure 4 with Figure 5 we see that the RC does almost the same as symmetry and that no RC at all is a non-viable approximation for this problem.
Figure 4: Level surfaces of \(J\) using a log scale computed with \(\kappa=0.1\) and only one reflective plane, \((0,y,z)\), facing us, slightly to the left. Comparison between a computation done with the RC and a computation done on a symmetrized domain, double in size. Surfaces of equal colors are so near each other that it is hard to distinguish them.

Figure 5: Same as in Figure 4 but the RC is not used in one computation. Surfaces of equal colors are far from those using the RC, indicating the absolute necessity of an RC. Here \(\kappa\) is as in Test-3.

Similarly, Figure 6 shows \(x\mapsto J(x,25,25)\), computed on the symmetrized domain, with the radiative condition, or without it.
Finally, Figure 7 shows \(x\mapsto J(x,25,25)\) computed with the RC on 4 meshes. The same first 4 meshes are used in Table 1, where the theoretical complexity \(N\sqrt[3]{N}\ln N\) is approximately observed. The compression ratios for the surface and the volume matrices are shown too.
Test 1A of [11], denoted here Test-3, has been computed, i.e. non-constant \(\kappa\) and 3 reflective planes. The level surfaces of \(J\) are shown in Figure 10. Convergence versus mesh size on the line \((x,25,25)\) is shown in Figure 7.
The comparison with the data in [11] on the line \((5,y,5)\) is shown in Figure 8. But since the radiative sources are different (volumetric in Kobayashi's case and surface-based in ours), we have scaled the result with Kobayashi's value at \(x=5,y=15,z=5\).
Finally the \(L^{2}\) error is computed by using the finest mesh, \(N=195974\), as a reference solution. The results are displayed in Figure 9, which shows the \(L^{2}\)-error versus \(h:=\sqrt[3]{N}\), in log-log scale.
Figure 8: Values of \(J\) versus \(y\geq 15\) at \(x=z=5\) and comparison with the values given in [11]. A scaling is applied so that the radiation intensities coincide at \(y=15\) (because [11] uses a volumetric source and the present method handles only surface sources).
## 4 The Chamonix Valley
In [5] the temperature in the Chamonix Valley due to sunlight was studied. With units in 10 km, the emitting domain (the ground) is a rectangle \([-0.2,3.32]\times[-3.35,0.163]\), with the Chamonix city at \((1.5,-1.5)\). The 3D domain is the emitting domain extruded above the ground up to \(z=1\), i.e. 10 km altitude. The Mont Blanc, in the lower left part of the map, is 4807 m high. The domain is discretized into tetrahedra by a 3D automatic mesh generator from a surface mesh.
Naturally the results are affected by the domain truncation because points near the boundaries receive less than half the scattered light. Now RCs can be applied to the 4 vertical planes of the truncation. Note that near the corners there is still a light deficiency, which could be corrected by allowing multiple reflections (a programming challenge).
### Settings
The ground surface radiates, proportionally to the vertical component of the normal \(\mathbf{n}_{z}\), the light of a black body at temperature \(300^{\circ}\)C at all frequencies (but mostly infrared). The intensity was set at \(Q^{0}=2.5\) so as to obtain meaningful temperatures, but since the Earth is not in thermal equilibrium with the sunlight it receives, this choice is arbitrary. In any case rescaling is easily done as \(J\) is proportional to \(Q^{0}\).
All mountains are covered with snow above 2500m. The snow-covered ground emits only \(0.3Q^{0}\).
The ground surface mesh has 95K vertices and the volume mesh has 786K vertices. In general 12 fixed point iterations are sufficient to find the temperature \(T\) from \(J_{\nu}\) and these decrease the error by 6 orders of magnitude.
The surface-to-volume matrix compressed to level 0.942. The volume-to-volume matrix compressed to level 0.982.
### Test 1: the Grey Case
In this test \(\kappa\) depends on the altitude, but not on \(\nu\): \(\kappa=\frac{1}{2}(1-az)\), with \(a=\frac{3}{4}\), except in the cloud. The cloud is a layer between altitudes \(z_{m}=0.3\) (i.e. 3000 m) and \(z_{M}=0.7\) (i.e. 7000 m) where \(\kappa\) is multiplied by a Gaussian random number of mean 0.2 and variance 0.8. Scattering occurs only in the cloud, with \(a=0.3(z-z_{m})_{+}(z_{M}-z)_{+}/(4(z_{M}-z_{m})^{2})\).
The program ran on a French national supercomputer in 5'42" using 1920 processors and 12 threads per MPI process. The surface-to-volume matrix was constructed in 38.5" with RC and 13.53" without. The volume-to-volume matrix took 145" with RC and 95.6" without.
Figure 11: Ground temperatures (in \(^{\circ}\)C) computed with RC on the 4 vertical boundaries (left) and without them (right). The third figure displays the difference: \(T\) with RC minus \(T\) without RC.

Figure 12: Ground and vertical temperatures (in \(^{\circ}\)C) computed with RC in the valley of Chamonix. The mesh is shown in blue on the ground and the intersection of the mesh with the vertical plane is shown in white.
Figure 11 shows the computed temperatures (in Celsius) on the ground with and without RC. The difference between the two temperature fields is also displayed: it is noticeably hotter everywhere, by a few degrees, when computed with RC. Finally, Figure 12 shows the mesh and the temperatures on the ground and on a vertical plane across the Chamonix valley. The parabolic shape of the mountains increases the temperatures above the ground.
### Test 2: the General Case
The setting does not change except that now \(\kappa\) varies with altitude as before, with \(a=\frac{3}{4}\), and its dependence on \(\nu\) is read from the Gemini website [12] (see [5] for details). 583 points are needed to discretize \(\nu\), but only 8 values are retained for a piecewise constant approximation of \(\kappa_{\nu}\) in the exponentials in the matrices. The computing time is roughly 8 times that of the grey case.
The temperature versus altitude above the Chamonix city is plotted on Figure 13. The sudden temperature increase just above the ground persists under mesh refinement near the ground. The same computation was done in the same domain with the same mesh but on a flat ground \(z=0\) (the domain is then a parallelepiped). In that case there is no sudden increase, implying that the sudden increase is due to the radiation in a U-shaped valley.
In a second computation the Gemini values for \(\kappa_{\nu}\) are modified to be 1 in the range \(\nu=(3/18,3/14)\) to simulate an increase of \(\texttt{CO}_{2}\) in the atmosphere. On Figure 13, it is seen that, in this configuration, the \(\texttt{CO}_{2}\) increases the temperature near the ground and decreases it at high altitude.
The light intensity \(J\) at \((1.5,-1.5,0.5)\) versus wavelength \(c/\nu\) (\(c\approx 3\) is the scaled speed of light) is plotted in Figure 14. Notice that the computation
captures complex details due to the discontinuities of \(\nu\mapsto\kappa_{\nu}\).
## Conclusion
Compressed \(\mathcal{H}\)-matrices are an ideal tool for the RTE in integral form, because the complexity of the method is \(O(N\sqrt[3]{N}\ln N)\), where \(N\) is the number of vertices in the 3D mesh, and because they can handle frequency-dependent absorption and scattering coefficients at the expense of a finite number of compressed matrices and a finite number of matrix-vector products.
In the present study, the integral nonlinear formulation of RTE studied in [5] has been extended to handle reflective boundary conditions. The monotonicity property of the iterative solver is kept. The discretization with the finite element method is the same. However it is much harder to write a general computer code because of the complexity of possible multiple reflections, as in ray tracing. Hence in this article the numerical validation has been done only for a finite number of plane reflective boundaries and with at most one reflection per ray. For the academic test case and for the Chamonix valley it is essential to add reflective conditions for accuracy.
### Acknowledgement
We would like to warmly thank Frederic Hecht for his constant goodwill in adapting FreeFEM to our needs. The FreeFEM++ app can be downloaded from www.freefem.org.
Some computations were made on the machine Joliot-Curie of the national computing center TGCC-GENCI under allocation A0120607330.
|
2304.02847 | Robustmix: Improving Robustness by Regularizing the Frequency Bias of
Deep Nets | Deep networks have achieved impressive results on a range of well-curated
benchmark datasets. Surprisingly, their performance remains sensitive to
perturbations that have little effect on human performance. In this work, we
propose a novel extension of Mixup called Robustmix that regularizes networks
to classify based on lower-frequency spatial features. We show that this type
of regularization improves robustness on a range of benchmarks such as
Imagenet-C and Stylized Imagenet. It adds little computational overhead and,
furthermore, does not require a priori knowledge of a large set of image
transformations. We find that this approach further complements recent advances
in model architecture and data augmentation, attaining a state-of-the-art mCE
of 44.8 with an EfficientNet-B8 model and RandAugment, which is a reduction of
16 mCE compared to the baseline. | Jonas Ngnawe, Marianne ABEMGNIGNI NJIFON, Jonathan Heek, Yann Dauphin | 2023-04-06T03:24:00Z | http://arxiv.org/abs/2304.02847v1 | # Robustmix: Improving Robustness by Regularizing the Frequency Bias of Deep Nets
###### Abstract
Deep networks have achieved impressive results on a range of well-curated benchmark datasets. Surprisingly, their performance remains sensitive to perturbations that have little effect on human performance. In this work, we propose a novel extension of Mixup called Robustmix that regularizes networks to classify based on lower-frequency spatial features. We show that this type of regularization improves robustness on a range of benchmarks such as ImageNet-C and Stylized ImageNet. It adds little computational overhead and furthermore does not require a priori knowledge of a large set of image transformations. We find that this approach further complements recent advances in model architecture and data augmentation attaining a state-of-the-art mean corruption error (mCE) of 44.8 with an EfficientNet-B8 model and RandAugment, which is a reduction of 16 mCE compared to the baseline.
## 1 Introduction
Deep neural networks have achieved state-of-the-art accuracy across a range of benchmark tasks such as image segmentation (Ren et al., 2015) and speech recognition (Hannun et al., 2014). These successes have led to the widespread adoption of neural networks in many real-life applications. However, while these networks perform well on curated benchmark datasets, their performance can suffer greatly in the presence of small data corruptions (Szegedy et al., 2014; Goodfellow et al., 2014; Moosavi-Dezfooli et al., 2017; Athalye et al., 2018; Hendrycks and Dietterich, 2018). This poses significant challenges to the application of deep networks.
Hendrycks and Dietterich (2018) show that the accuracy of a standard model on ImageNet can drop from 76% to 20% when evaluated on images corrupted with small visual transformations. This shows modern networks are not robust to certain small shifts in the data distribution. That is a concern because such shifts are common in many real-life applications. Secondly, Szegedy et al. (2014) show the existence of _adversarial_ perturbations which are imperceptible to humans but have a disproportionate effect on the predictions of a network. This raises significant concerns about the safety of using deep networks in critical applications such as self-driving cars (Sitawarin et al., 2018).
These problems have led to numerous proposals to improve the robustness of deep networks. Some of these methods such as those proposed by Hendrycks et al. (2019) require a priori knowledge of the visual transformations in the test domain. Others, such as Geirhos et al. (2018) use a deep network to generate transformations which comes with significant computation cost.
This paper proposes a new data augmentation technique to improve the robustness of deep networks by regularizing their frequency bias. This new regularization technique is based on Mixup and has many advantages compared to related robustness regularizers: (1) it does not require knowledge of a large set of a priori transformations, (2) it is inexpensive, and (3) it has few hyper-parameters. The key idea is to bias the network to rely more on lower spatial frequencies to make predictions.
We demonstrate on ImageNet-C that this method works well with recent advances and reaches a state-of-the-art mCE of 44.8 and a clean accuracy of 85.0% with EfficientNet-B8 and RandAugment (Cubuk et al., 2019). This is an improvement of 16 mCE compared to the baseline EfficientNet-B8 and matches ViT-L/16 (Dosovitskiy et al., 2020), which is trained on \(300\times\) more data. We find that our implementation of the method with the DCT transform adds negligible overhead in our experiments. We find that Robustmix improves accuracy on Stylized-ImageNet by up to 15 points, and we show that it can increase adversarial robustness.
## 2 Related Work
The proposed approach can be seen as a generalization of Mixup (Zhang et al., 2018), a data augmentation method that regularizes models by training them on linear interpolations of two input examples and their respective labels. These new examples are generated as follows
\[\tilde{x} =\texttt{mix}(x_{1},x_{2},\lambda), \text{where }x_{1},x_{2}\text{ are input images}\] \[\tilde{y} =\texttt{mix}(y_{1},y_{2},\lambda), \text{where }y_{1},y_{2}\text{ are labels}\]
with mix being the linear interpolation function
\[\texttt{mix}(x_{1},x_{2},\lambda)=\lambda x_{1}+(1-\lambda)x_{2} \tag{1}\]
where \(\lambda\sim\mathrm{Beta}(\alpha,\alpha)\), \(\alpha\) is the Mixup coefficient hyper-parameter.
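As a reference point for the extension introduced below, a minimal NumPy sketch of this interpolation (with our own function name) is:

```python
import numpy as np

def mixup(x1, x2, y1, y2, alpha=0.2, rng=None):
    """One Mixup draw (eq. (1)): the same convex combination is applied
    to the inputs and to the one-hot label vectors."""
    rng = np.random.default_rng() if rng is None else rng
    lam = rng.beta(alpha, alpha)
    return lam * x1 + (1.0 - lam) * x2, lam * y1 + (1.0 - lam) * y2
```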
Zhang et al. (2018) show that Mixup improves the accuracy of networks and can also improve their robustness. In the past years, several versions of Mixup were proposed, with applications in fairness (Chuang and Mroueh, 2021), 3D reconstruction (Cheng et al., 2022), semi-supervised learning (Beckham et al., 2019), as well as robustness (Mai et al., 2021; Yun et al., 2019; Faramarzi et al., 2020; Kim et al., 2020; Verma et al., 2019). The novel version we propose here is frequency-based and does not include additional learnable parameters.
Augmix (Hendrycks et al., 2019) is a data augmentation technique to improve robustness by training on a mix of known image transformations. It adds little computational overhead but requires knowledge of a diverse set of domain-specific transformations. Hendrycks et al. (2019) mixes a set of 9 different augmentations to reach \(68.4\) mCE on ImageNet. In contrast, the proposed method does not rely on specific image augmentations but on the more general principle that natural images are a kind of signal where most of the energy is concentrated in the lower frequencies.
The idea of frequency filtering is popular in Deep learning frameworks and has numerous applications including unsupervised domain adaptation (Yang and Soatto (2020)) and adversarial perturbation attacks (Guo et al. (2018); Li et al. (2021)). Unlike the latter papers which focus on measuring the accuracy of a model after an adversarial attack, we focus on common (noise) corruptions by measuring mCE as a robustness assessment.
Zhang (2019) uses low-pass filters directly inside the model to improve the frequency response of the network. Wang et al. (2019) use a differentiable neural network to extract textural information from images without modeling the lower frequencies. Our method also makes use of low-pass filtering but does not completely remove high-frequency features. Additionally, we only use frequency filtering during training, and therefore no computational overhead is incurred during evaluation.
## 3 Method
In this section, we introduce a novel extension of Mixup called _Robustmix_ that increases robustness by regularizing the network to focus more on the low-frequency features in the signal.
**Motivation** Wang et al. (2020) suggest that convolutional networks trade robustness for accuracy in their use of high-frequency image features. Such features can be perturbed in ways that change the
prediction of the model, even though humans cannot perceive the change. This can lead models to make puzzling mistakes, such as with adversarial examples. Our aim is to increase robustness while retaining accuracy by regularizing how high-frequency information is used by the model.
**Robustmix** We propose to regularize the sensitivity of the model to each frequency band by extending Mixup's linear interpolations with a new type of band interpolation. The key insight is that we can condition the sensitivity to each band using images that mix the frequency bands of two different images. Suppose that we mix the lower-frequency band of an image of a boathouse with the high-frequency band of an image of a dog. We can encourage sensitivity to the lower band by training the model to predict "dog" for this mixed image. However, this approach is too simplistic because it completely disregards the impact of the image in the high band. Indeed, the ablation study in section 4.4 shows that it is insufficient.
Instead, we interpolate the label of such mixed images according to an estimate of the importance of each frequency band. We propose to use the relative amount of energy in each band as an estimate of the importance. Thus the sensitivity of the model to high-frequency features will be proportional to their energy contribution in natural images. And as we can see in Figure 2, most of the spectral energy in natural images is concentrated in the lower end of the spectrum. This should limit the ability of high-frequency perturbations to change the prediction unilaterally.
Figure 1: Illustration of the method. In order to better illustrate the method, we display the Fourier spectrum of the images next to them. We can see that even though 90% of the higher frequencies belong to the image of a dog, Robustmix assigns more weight to the boathouse label because it assigns more weight to the lower frequencies.
Figure 2: Plot of the cumulative energy in ImageNet images as a function of the frequency cutoff.
Furthermore, we use linear interpolations of images like in mixup within each band instead of raw images. This closely reflects the more common case where the features in the bands are merely corrupted instead of entirely swapped. It also has the benefit of encouraging linearity inside the same frequency band.
Specifically, the mixing formula for Robustmix is given by
\[\tilde{x} =\texttt{Low}(\texttt{mix}(x_{1},x_{2},\lambda_{L}),c)+\texttt{High}(\texttt{mix}(x_{1},x_{2},\lambda_{H}),c) \tag{2}\] \[\tilde{y} =\lambda_{c}\,\texttt{mix}(y_{1},y_{2},\lambda_{L})+(1-\lambda_{c})\,\texttt{mix}(y_{1},y_{2},\lambda_{H}) \tag{3}\]
where \(\lambda_{L},\lambda_{H}\sim\operatorname{Beta}(\alpha,\alpha)\), \(\alpha\) is the Mixup coefficient hyper-parameter, and \(\texttt{Low}(\cdot,c),\texttt{High}(\cdot,c)\) are a low-pass and a high-pass filter, respectively, with a uniformly sampled cutoff frequency \(c\in[0,1]\). The coefficient \(\lambda_{c}\) determines how much weight is given to the lower-frequency band. It is given by the relative amount of energy in the lower-frequency band for natural images
\[\lambda_{c}=\frac{E[\|\texttt{Low}(x_{i},c)\|^{2}]}{E[\|x_{i}\|^{2}]}. \tag{4}\]
This coefficient can be efficiently computed on a mini-batch of examples.
**Implementation** Computational overhead is an important consideration for data augmentation techniques since training deep networks is computationally intensive and practitioners have limited computational budgets. We note that many popular techniques such as Mixup (Zhang et al., 2018) add little overhead.
The frequency separation is implemented using a Discrete Cosine Transform (DCT) to avoid the complex multiplication required by a Discrete Fourier Transform. We multiply the images with the 224x224 DCT matrix directly because the spatial dimensions are relatively small and (non-complex) matrix multiplication is well-optimized on modern accelerators. A batch of images is transformed into frequency space and the low and high-pass filtered images must be transformed back to image space. Additionally, we must apply the DCT transform over the x and y dimension separately. Thus, 6 DCT matrix multiplications are required which results in \(0.2\) GFLOPs per image. In contrast, just the forward pass of ResNet50 requires \(3.87\) GFLOPs (Hasanpour et al., 2016).
In our implementation of Robustmix, we reorder commutative operations (low pass and mixing) in order to compute the DCT only a single time per minibatch. The pseudocode is provided in Algorithm 1, where reverse is a function that reverses the rows of its input matrix.
```
Input: Minibatch of inputs \(X\in\mathbb{R}^{N\times H\times W\times D}\) and labels \(Y\in\mathbb{R}^{N\times C}\), \(\alpha\in\mathbb{R}\)
Output: Augmented minibatch of inputs \(\tilde{X}\in\mathbb{R}^{N\times H\times W\times D}\) and labels \(\tilde{Y}\in\mathbb{R}^{N\times C}\)
\(\lambda_{L},\lambda_{H}\sim\operatorname{Beta}(\alpha,\alpha)\) and \(c\sim U(0,1)\)
\(L\leftarrow\texttt{Low}(X,c)\)
\(H\gets X-L\)
\(\lambda_{c}\leftarrow\|L\|^{2}/\|X\|^{2}\)
\(\tilde{X}\leftarrow\texttt{mix}(L,\operatorname{reverse}(L),\lambda_{L})+\texttt{mix}(H,\operatorname{reverse}(H),\lambda_{H})\)
\(\tilde{Y}\leftarrow\texttt{mix}(Y,\operatorname{reverse}(Y),\lambda_{c}\lambda_{L}+(1-\lambda_{c})\lambda_{H})\)
```
**Algorithm 1** Robustmix
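A minimal NumPy/SciPy sketch of Algorithm 1 is given below; it is our own illustration, not the authors' released code, and it assumes channel-last batches of shape \((N,H,W,D)\), one-hot labels, and a simple square DCT mask for the cutoff \(c\).

```python
import numpy as np
from scipy.fft import dct, idct

def low_pass(x, c):
    """2D DCT low-pass: keep the fraction c of the lowest-frequency
    coefficients along each spatial axis of x with shape (N, H, W, D)."""
    X = dct(dct(x, axis=1, norm='ortho'), axis=2, norm='ortho')
    h, w = x.shape[1], x.shape[2]
    kh, kw = int(np.ceil(c * h)), int(np.ceil(c * w))
    mask = np.zeros((1, h, w, 1))
    mask[:, :kh, :kw, :] = 1.0
    return idct(idct(X * mask, axis=2, norm='ortho'), axis=1, norm='ortho')

def robustmix(x, y, alpha=0.2, rng=None):
    """One Robustmix step (Algorithm 1) on a minibatch x (N,H,W,D) with
    one-hot labels y (N,C); 'reverse' pairs example i with N-1-i."""
    rng = np.random.default_rng() if rng is None else rng
    lam_L, lam_H = rng.beta(alpha, alpha), rng.beta(alpha, alpha)
    c = rng.uniform()
    L = low_pass(x, c)
    H = x - L                                        # high-pass complement
    lam_c = (L**2).sum() / ((x**2).sum() + 1e-12)    # energy weight, eq. (4)
    mix = lambda a, b, lam: lam * a + (1.0 - lam) * b
    x_mixed = mix(L, L[::-1], lam_L) + mix(H, H[::-1], lam_H)
    y_mixed = mix(y, y[::-1], lam_c * lam_L + (1.0 - lam_c) * lam_H)
    return x_mixed, y_mixed
```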
## 4 Results
### Datasets and Metrics
The results presented in this paper rely on the mCE measurement on ImageNet-C, the clean accuracies on ImageNet and Stylized-ImageNet (SIN), as well as the shape bias on SIN. These measurements are found in a range of papers studying robustness (Hendrycks and Dietterich, 2018; Hendrycks et al., 2019; Geirhos et al., 2018; Laugros et al., 2020). The Stylized-ImageNet benchmark is meant to distinguish between a bias towards shape or texture. We believe our results on Stylized-ImageNet complement the standard robustness results because they show that the inductive bias is more human-like, in the sense that it is more sensitive to shape than texture.
**ImageNet.** ImageNet (Deng et al., 2009) is a classification dataset that contains 1.28 million training images and 50000 validation images with 1000 classes. We evaluate the common classification accuracy, which will be referred to as clean accuracy. We use the standard ResNet preprocessing, resulting in images of size 224x224 (He et al., 2015). The standard models, without any additional data augmentation, will be referred to as the baseline.
**ImageNet-C.** This dataset is made of 15 types of corruption drawn from four main categories: noise, blur, weather, and digital (Hendrycks and Dietterich, 2018). These corruptions are applied to the validation images of ImageNet at 5 different intensities or levels of severity. Following (Hendrycks and Dietterich, 2018), we evaluate the robustness of our method by reporting its **mean corruption error (mCE)** normalized with respect to AlexNet errors:
\[\text{mCE}=\frac{\sum\limits_{\text{corruption }c}\text{CE}_{c}}{\text{Total Number of Corruptions}},\]
\[\text{with CE}_{c}=\frac{\sum\limits_{\text{severity }s}E_{c,s}}{\sum_{s}E_{c,s}^{\text{ AlexNet}}}\]
**Stylized-ImageNet.** Stylized-ImageNet (SIN) is constructed from ImageNet by replacing the texture in the original image using style transfer, such that the texture gives a misleading cue about the image label (Geirhos et al., 2018). The 1000 classes from ImageNet are reduced to 16 shape categories, for instance, all labels for dog species are grouped under one "dog" label, same for "chair", "car", etc. There are 1280 generated cue conflict images (80 per category). With SIN, we evaluate the classification accuracy (SIN accuracy) and measure the model's shape bias. Following Geirhos et al. (2018), the model's bias towards shape versus texture is measured as
\[\text{shape bias}=\frac{\text{correct shapes}}{\text{correct shapes + correct textures}}.\]
### Experimental Setup
We chose to run evaluations on residual networks (ResNet-50 and ResNet-152) and EfficientNets (EfficientNet-B0, EfficientNet-B1, EfficientNet-B5, and EfficientNet-B8). Experiments were run on 8x8 TPUv3 instances for the bigger EfficientNets (EfficientNet-B5 and EfficientNet-B8), and the other experiments were run on 4x4 TPUv3 slices. For the ResNet models, we use the same standard training setup outlined in Goyal et al. (2017). However, we use a cosine learning rate schedule (Loshchilov and Hutter, 2016) with a single cycle for ResNets trained for 600 epochs.
### Robustness Results
**ImageNet-C** First, we evaluate the effectiveness of the proposed method in improving robustness to the visual corruptions considered in ImageNet-C. In Table 1, we can see that Robustmix consistently improves robustness to the considered transformations, with a 15-point decrease in mCE over the baseline for ResNet-50. Robustmix with ResNet-50 achieves 61.2 mCE without degrading accuracy on the clean dataset compared to the baseline. In fact, we find a small improvement of 0.8% in clean accuracy over the baseline. While Mixup yields a larger gain of 1.9% in clean accuracy, we find that Robustmix improves mCE by up to 6 points more than Mixup. These results also compare favorably to Augmix, which needs to be combined with training on Stylized ImageNet (SIN) to reduce the mCE by 12 points, and this improvement comes at a significant cost to the accuracy due to the use of the Stylized ImageNet dataset. A similar trade-off between accuracy and robustness can be observed in Figure 3. We observe that Mixup consistently produces lower clean error for smaller models, but the accuracy gap with Robustmix disappears as the model gets bigger.
While it is not directly comparable to ViT-L/16 due to its use of \(300\times\) more data, we see that EfficientNet-B8 with Robustmix and RandAugment has better robustness at \(44.8\) mCE. It is also competitive with DeepAugment (Hendrycks et al., 2020), which requires training additional specialized image-to-image models on tasks such as super-resolution to produce augmented images. By comparison, our approach does not rely on extra data or extra trained models.
Our experiments also show that Robustmix combines well with RandAugment (RA), further improving both accuracy and mCE. We removed augmentations from RA that overlap with corruptions in ImageNet-C (contrast, color, brightness, sharpness, and Cut-out) (Hendrycks et al., 2019).
In our cross-validation of \(\alpha\), we found that small values (less than \(0.2\)) perform poorly both on accuracy and mCE. Values of \(\alpha\) with \(0.2\leq\alpha\leq 0.5\) not only give the best accuracies and mCEs but also the best trade-off of mCE versus accuracy, as bigger values of \(\alpha\) give good accuracy but do not do as well on mCE. In our experiments, we found that we typically achieve good results with a frequency cutoff \(c\) sampled in \([0,1]\) as described in Algorithm 1. However, for ResNet-50 trained with a training budget that is too limited (200 instead of 600 epochs) and for its smaller versions (ResNet-18 and ResNet-34), it can be beneficial to impose a minimum cutoff \(c\geq\tau\) by sampling in the interval \([\tau,1]\). The minimum cutoff determines the range at which band mixing will occur. We can remove band interpolation entirely and recover standard Mixup by setting \(\tau=1\). For ResNet-50 with too few training epochs, we found that a good value for the minimum is \(0.1\), but much better results can be achieved with 600 epochs without any modifications to Algorithm 1.
**Stylized-ImageNet.** We confirm that our method indeed increases both the accuracy on Stylized ImageNet and the shape bias, as shown in Table 3. For ResNet-50, Robustmix almost doubles the shape bias over the baseline (from 19 to 37) and improves it by 63% over Mixup, while the relative improvements in SIN accuracy are 72% and 33% over the baseline and Mixup respectively. The same holds for EfficientNet-B5, which improves the shape bias by nearly 50% and SIN accuracy by nearly 60% over the baseline.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline Method & \begin{tabular}{c} Clean \\ Accuracy \\ \end{tabular} & mCE & Size & \begin{tabular}{c} Extra \\ Data \\ \end{tabular} \\ \hline ResNet-50 Baseline (200 epochs) & 76.3 & 76.9 & 26M & 0 \\ ResNet-50 Baseline (600 epochs) & 76.3 & 78.1 & 26M & 0 \\ ResNet-50 BlurPool (Zhang, 2019) & 77.0 & 73.4 & 26M & 0 \\ ResNet-50 Mixup (200 epochs) & 77.5 & 68.1 & 26M & 0 \\ ResNet-50 Mixup (600 epochs) & **78.2** & 67.5 & 26M & 0 \\ ResNet-50 Augmix & 77.6 & 68.4 & 26M & 0 \\ ResNet-50 Augmix + SIN & 74.8 & 64.9 & 26M & 0 \\ ResNet-50 Robustmix (600 epochs) & 77.1 & **61.2** & 26M & 0 \\ \hline EfficientNet-B0 Baseline & 76.8 & 72.4 & 5.3M & 0 \\ EfficientNet-B0 Mixup (\(\alpha=0.2\)) & **77.1** & 68.3 & 5.3M & 0 \\ EfficientNet-B0 Robustmix (\(\alpha=0.2\)) & 76.8 & **61.9** & 5.3M & 0 \\ \hline EfficientNet-B1 Baseline & 78.1 & 69.4 & 7.8M & 0 \\ EfficientNet-B1 Mixup (\(\alpha=0.2\)) & **78.9** & 64.7 & 7.8M & 0 \\ EfficientNet-B1 Robustmix (\(\alpha=0.2\)) & 78.7 & **57.8** & 7.8M & 0 \\ \hline EfficientNet-B5 Baseline & 82.7 & 65.6 & 30M & 0 \\ EfficientNet-B5 Mixup (\(\alpha=0.2\)) & 83.3 & 58.9 & 30M & 0 \\ EfficientNet-B5 Robustmix (\(\alpha=0.2\)) & 83.3 & 51.7 & 30M & 0 \\ EfficientNet-B5 RandAug+Robustmix (\(\alpha=0.2\)) & **83.8** & **48.7** & 30M & 0 \\ \hline BiT m-r101x3 (Kolesnikov et al., 2020) & 84.7 & 58.27 & 387.9M & 12.7M \\ ResNeXt-101 \(32\times 8d\)+DeepAugment+AugMix (Hendrycks et al., 2020) & 79.9 & **44.5** & 88.8M &
\begin{tabular}{c} Extra \\ models \\ \end{tabular} \\ ViT-L/16 (Dosovitskiy et al., 2020) & **85.2** & 45.5 & 304.7M & 300M \\ RVT-\(B^{*}\)(Mao et al., 2022) & **82.7** & 46.8 & 91.8M & PAAS+Patch-wise\({}^{2}\) \\ \hline EfficientNet-B8 Baseline & 83.4 & 60.8 & 87.4M & 0 \\ EfficientNet-B8 Robustmix (\(\alpha=0.4\)) & 84.4 & 49.8 & 87.4M & 0 \\ EfficientNet-B8 RandAug+Robustmix (\(\alpha=0.4\)) & **85.0** & **44.8** & 87.4M & 0 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Comparison of various models based on ImageNet accuracy and ImageNet-C robustness (mCE). The robustness results for BiT and ViT are as reported by Paul & Chen (2021)(Table 3).
### Ablation Study
In order to measure the effect of Robustmix, we apply some simplifications both with respect to the image mixing and the labeling. The results are compiled in Table 2. It can be noticed from the first two lines that ablating the energy weighting results in a significant degradation of mCE, even though there is a slight accuracy improvement. However, keeping the energy weighting but not applying the in-band mixups is largely detrimental both to accuracy and robustness. These results show that Robustmix achieves a better combination of mCE and accuracy than these ablations.
### Analysis and Discussion
**Low-frequency bias** In order to quantify the degree to which models rely on lower frequencies, we measure how much accuracy drops as we remove higher-frequency information with a low-pass filter. Figure 4 shows that Robustmix is comparatively more robust to the removal of high frequencies. This indicates that models trained with Robustmix rely significantly less on these high-frequency features to make accurate predictions.
\begin{table}
\begin{tabular}{l|l|l|l l} \hline \hline Method & Mixed Image & Label & \begin{tabular}{l} Test \\ Accuracy \\ \end{tabular} & mCE \\ \hline \begin{tabular}{l} Robustmix- Full \\ (inband mixups and energy weighting) \\ \end{tabular} & Equation 2 & Equation 3 & 77.1 & 61.2 \\ \hline \multirow{2}{*}{\begin{tabular}{l} Robustmix without energy weighting \\ \end{tabular} } & Equation 2 & \begin{tabular}{l} Equation 3 \\ with \(\lambda_{c}\) \\ replaced by \(c\) \\ \end{tabular} & 77.6 & 67.7 \\ \hline \begin{tabular}{l} Robustmix without inband mixups \\ and with energy weighting \\ (\(\lambda_{L}=1,\lambda_{H}=0\)) \\ \end{tabular} & \begin{tabular}{l} Low\((x_{1},c)+\texttt{High}(x_{2},c)\) \\ \end{tabular} & \(\lambda_{c}y_{1}+(1-\lambda_{c})y_{2}\) & 68.6 & 75.3 \\ \hline \begin{tabular}{l} Robustmix without inband mixups \\ and without energy weighing \\ (\(\lambda_{L}=1,\lambda_{H}=0\) \\ and cutoff c as label coefficient) \\ \end{tabular} &
\begin{tabular}{l} Low\((x_{1},c)+\texttt{High}(x_{2},c)\) \\ \end{tabular} & \(cy_{1}+(1-c)y_{2}\) & 74.8 & 77.5 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Comparison of Robustmix with simplified cases. The results are reported on ResNet50.
Figure 3: Highlighting the tradeoff between mCE and Clean Error for various models.
## 5 Conclusion
In this paper, we have introduced a new method to improve robustness called Robustmix which regularizes models to focus more on lower spatial frequencies to make predictions. We have shown that this method yields improved robustness on a range of benchmarks including ImageNet-C and Stylized ImageNet. In particular, this approach attains an mCE of 44.8 on ImageNet-C with EfficientNet-B8, which is competitive with models trained on \(300\times\) more data.
Our method offers a promising new research direction for robustness with a number of open challenges. We have used a standard DCT-based low-pass filter on images and L2 energy metric to determine the contribution of each label. This leaves many alternatives to be explored, such as different data modalities like audio; more advanced frequency separation techniques like Wavelets; and alternative contribution metrics for mixing labels.
Figure 4: Test accuracy on ImageNet samples passed through a low-pass filter with increasing cut-off. As expected, we observe that Robustmix is more robust to the removal of high frequencies than Mixup. The comparison is done here on ResNet-50 models.
\begin{table}
\begin{tabular}{l l l} \hline \hline \multirow{2}{*}{Method/Parameters} & SIN & Shape \\ & Accuracy & Bias \\ \hline ResNet-50 Baseline & 15.6 & 19.25 \\ ResNet-50 Mixup & 20.1 & 22.7 \\ ResNet-50 Robustmix & **26.8** & **37.0** \\ \hline EfficientNet-B5 Baseline & 25.3 & 44.4 \\ EfficientNet-B5 Mixup & 28.75 & 48.3 \\ EfficientNet-B5 Robustmix & **40.3** & **66.1** \\ \hline \hline \end{tabular}
\end{table}
Table 3: Accuracy and shape bias computed on Stylized ImageNet. |
2302.00306 | Uniqueness and homogeneity of non-separable Urysohn universal
ultrametric spaces | Urysohn constructed a separable complete universal metric space homogeneous
for all finite subspaces, which is today called the Urysohn universal metric
space. Some authors have recently investigated an ultrametric analogue of this
space. The isometry problem of such ultrametric spaces is our main subject in
this paper. We first introduce the new notion of petaloid ultrametric spaces,
which is intended to be a standard class of non-separable Urysohn universal
ultrametric spaces. Next we prove that all petaloid spaces are isometric to
each other and homogeneous for all finite subspaces (and compact subspaces).
Moreover, we show that the following spaces are petaloid, and hence they are
isometric to each other and homogeneous: (1) The space of all continuous
functions, whose images contain the zero, from the Cantor set into the space of
non-negative real numbers equipped with the nearly discrete topology, (2) the
space of all continuous ultrametrics on a zero-dimensional infinite compact
metrizable space, (3) the non-Archimedean Gromov--Hausdorff space, and (4) the
space of all maps from the set of non-negative real numbers into the set of
natural numbers whose supports are finite or decreasing sequences convergent to
the zero. | Yoshito Ishiki | 2023-02-01T08:16:51Z | http://arxiv.org/abs/2302.00306v4 | # Uniqueness and homogeneity of non-separable Urysohn universal ultrametric spaces
###### Abstract.
Urysohn constructed a separable complete universal metric space homogeneous for all finite subspaces, which is today called the Urysohn universal metric space. Some authors have recently investigated an ultrametric analogue of this space. The isometry problem of such ultrametric spaces is our main subject in this paper. We first introduce the new notion of petaloid ultrametric spaces, which is intended to be a standard class of non-separable Urysohn universal ultrametric spaces. Next we prove that all petaloid spaces are isometric to each other and homogeneous for all finite subspaces (and compact subspaces). Moreover, we show that the following spaces are petaloid, and hence they are isometric to each other and homogeneous: (1) The space of all continuous functions, whose images contain the zero, from the Cantor set into the space of non-negative real numbers equipped with the nearly discrete topology, (2) the space of all continuous ultrametrics on a zero-dimensional infinite compact metrizable space, (3) the non-Archimedean Gromov-Hausdorff space, and (4) the space of all maps from the set of non-negative real numbers into the set of natural numbers whose supports are finite or decreasing sequences convergent to the zero.
Key words and phrases: Urysohn universal ultrametric space. 2020 Mathematics Subject Classification: Primary 54E35, Secondary 51F99
## 1. Introduction
For a class \(\mathcal{C}\) of metric spaces, a metric space \((X,d)\) is said to be _\(\mathcal{C}\)-injective_ if for all \((A,a)\) and \((B,b)\) in \(\mathcal{C}\) and for all isometric embeddings \(\phi\colon(A,a)\to(B,b)\) and \(\psi\colon(A,a)\to(X,d)\), there exists an isometric embedding \(\theta\colon(B,b)\to(X,d)\) such that \(\theta\circ\phi=\psi\). Let \(\mathcal{F}\) denote the class of all finite metric spaces. Urysohn [11] constructed a separable complete \(\mathcal{F}\)-injective metric space, which is today called the _Urysohn universal (metric) space_. A remarkable fact is that all separable complete \(\mathcal{F}\)-injective metric spaces are isometric to each other. This isometry theorem is proven by the so-called back-and-forth argument, which is a variant of mathematical induction. Some authors have
recently investigated a non-Archimedean analogue of the Urysohn universal space (see [5], [12], [2], and [13]), which is also our main subject of the paper.
A metric \(d\) on a set \(X\) is said to be an _ultrametric_ if for all \(x,y,z\in X\), it satisfies the _strong triangle inequality_\(d(x,y)\leq d(x,z)\lor d(z,y)\), where \(\vee\) stands for the maximum operator on \(\mathbb{R}\). If a pseudo-metric \(d\) satisfies the strong triangle inequality, then \(d\) is called a _pseudo-ultrametric_.
A set \(R\) is said to be a _range set_ if it is a subset of \([0,\infty)\) and \(0\in R\). If an ultrametric \(d\) (resp. pseudo-ultrametric) on a set \(X\) satisfies \(d(x,y)\in R\) for all \(x,y\in X\), then we call \(d\) an \(R\)_-ultrametric_ or an \(R\)_-valued ultrametric_ (resp. an \(R\)_-pseudo-ultrametric_ or an \(R\)_-valued pseudo-ultrametric_).
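As a concrete illustration (ours, not from the paper), the 2-adic metric \(d(x,y)=2^{-v_{2}(x-y)}\) on the integers is an \(R\)-valued ultrametric for the range set \(R=\{0\}\cup\{2^{-k}\mid k\in\mathbb{Z}_{\geq 0}\}\); the sketch below spot-checks the strong triangle inequality:

```python
import itertools

def v2(n):
    """2-adic valuation of a nonzero integer n."""
    k = 0
    while n % 2 == 0:
        n //= 2
        k += 1
    return k

def d2(x, y):
    """2-adic ultrametric on the integers: d(x, y) = 2^(-v2(x - y))."""
    return 0.0 if x == y else 2.0 ** (-v2(x - y))

# spot-check the strong triangle inequality d(x,y) <= max(d(x,z), d(z,y))
assert all(d2(x, y) <= max(d2(x, z), d2(z, y))
           for x, y, z in itertools.product(range(-8, 9), repeat=3))
```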
For a range set \(R\), there are several constructions of complete \(\mathcal{N}(R)\)-injective \(R\)-valued ultrametric spaces. For example, in [5], the author provided some new constructions of complete \(\mathcal{N}(R)\)-injective \(R\)-ultrametric spaces (see also [12], [2], and [13]). If \(R\) is countable, then, by the back-and-forth argument, a separable complete \(\mathcal{N}(R)\)-injective \(R\)-ultrametric space is unique up to isometry, whichever construction we choose. However, when \(R\) is uncountable, it was not known whether \(\mathcal{N}(R)\)-injective spaces are isometric to each other. In this paper, we solve the isometry problem of Urysohn universal ultrametric spaces in the non-separable case.
By observing and comparing constructions of \(\mathcal{N}(R)\)-injective spaces, we reach the new notion of \(R\)-petaloid spaces, which is intended to be a standard class of non-separable Urysohn universal spaces.
Before explaining petaloid spaces, we prepare some notations and notions. A subset \(E\) of \([0,\infty)\) is said to be _semi-sporadic_ if there exists a strictly decreasing sequence \(\{a_{i}\}_{i\in\mathbb{Z}_{\geq 0}}\) in \((0,\infty)\) such that \(\lim_{i\to\infty}a_{i}=0\) and \(E=\{0\}\cup\{\,a_{i}\mid i\in\mathbb{Z}_{\geq 0}\,\}\). A subset of \([0,\infty)\) is said to be _tenuous_ if it is finite or semi-sporadic (see [5]). For a range set \(R\), we denote by \(\mathbf{TEN}(R)\) the set of all tenuous range subsets of \(R\). In this paper, we often denote a restricted metric by the same symbol as the ambient one. For a metric space \((X,d)\) and a subset \(A\) of \(X\), we write \(d(x,A)=\inf_{a\in A}d(x,a)\).
**Definition 1.1**.: Let \(R\) be an uncountable range set. We say that a metric space \((X,d)\) is \(R\)_-petaloid_ if it is an \(R\)-valued ultrametric space and there exists a family \(\{\Pi(X,S)\}_{S\in\mathbf{TEN}(R)}\) of subspaces of \(X\) satisfying the following properties:
1. The set \(\Pi(X,\{0\})\) is a singleton.
2. If \(S\in\mathbf{TEN}(R)\) satisfies \(S\neq\{0\}\), then \((\Pi(X,S),d)\) is a separable complete \(\mathcal{N}(S)\)-injective \(S\)-ultrametric space.
3. We have \(\bigcup_{S\in\mathbf{TEN}(R)}\Pi(X,S)=X\).
4. If \(S,T\in\mathbf{TEN}(R)\), then \(\Pi(X,S)\cap\Pi(X,T)=\Pi(X,S\cap T)\).
5. If \(S,T\in\mathbf{TEN}(R)\) and \(x\in\Pi(X,T)\), then \(d(x,\Pi(X,S))\) belongs to \((T\setminus S)\cup\{0\}\).
6. For all \(S\in\mathbf{TEN}(R)\) and for all \(x\in X\), there exists \(p\in\Pi(X,S)\) such that \(d(x,\Pi(X,S))=d(x,p)\).
We call the family \(\{\Pi(X,S)\}_{S\in\mathbf{TEN}(R)}\) an \(R\)_-petal of \(X\)_, and call \(\Pi(X,S)\) the \(S\)-piece of the \(R\)-petal \(\{\Pi(X,S)\}_{S\in\mathbf{TEN}(R)}\). We simply write \(\Pi(S)=\Pi(X,S)\) when the whole space is clear by the context.
Section 2 presents some basic statements on petaloid spaces. In particular, it is shown that all petaloid spaces are complete.
The next is our first main result:
**Theorem 1.1**.: _Let \(R\) be an uncountable range set. If \((X,d)\) and \((Y,e)\) are \(R\)-petaloid ultrametric spaces, then \((X,d)\) and \((Y,e)\) are isometric to each other. Namely, an \(R\)-petaloid ultrametric space is unique up to isometry._
A metric space \((X,d)\) is said to be _ultrahomogeneous_ (resp. _compactly ultrahomogeneous_) if for every finite subset (resp. compact subset) \(A\) of \(X\) and for every isometric embedding \(\phi\colon A\to X\), there exists an isometric bijection \(F\colon X\to X\) such that \(F|_{A}=\phi\). Remark that the usual Urysohn universal metric space and the separable \(\mathcal{N}(R)\)-injective complete \(R\)-ultrametric spaces are ultrahomogeneous and compactly ultrahomogeneous, which is also proven by the back-and-forth argument. For the ultrahomogeneity, see [6, Theorem 3.2] and [2, Proposition 2.7]. For the compact ultrahomogeneity, see [2, Corollary 6.18], [4], [1], and [6, Subsection 4.5].
Our second result states that all petaloid spaces are compactly ultrahomogeneous despite their non-separability.
**Theorem 1.2**.: _For every uncountable range set \(R\), every \(R\)-petaloid ultrametric space is compactly ultrahomogeneous. Consequently, it is ultrahomogeneous._
The proofs of Theorems 1.1 and 1.2 will be presented in Section 3, and both of them are reduced to Theorem 3.3, asserting that for every uncountable range set \(R\), every \(S\in\mathbf{TEN}(R)\), and all \(R\)-petaloid spaces \((X,d)\) and \((Y,e)\), every isometric bijection \(f\colon\Pi(X,S)\to\Pi(Y,S)\) can be extended to an isometric bijection \(F\colon X\to Y\). In the proof of Theorem 3.3, to obtain an isometric bijection \(F\colon X\to Y\), we construct isometric bijections between \(\Pi(X,T)\) and \(\Pi(Y,T)\) for all \(T\in\mathbf{TEN}(R)\) such that \(T\setminus S\) is finite using the back-and-forth argument, and we glue them together by transfinite induction.
In the next theorem, we show that injective spaces constructed in [5], [13], and [2] become naturally petaloid. The proof and the precise definition of the spaces will be given in Section 4.
**Theorem 1.3**.: _For every uncountable range set \(R\), all the following spaces are \(R\)-petaloid._
1. _The space_ \((\mathrm{C}_{0}(\Gamma,R),\triangledown)\) _of all continuous functions_ \(f\) _from the Cantor set_ \(\Gamma\) _into the space_ \(R\) _equipped with the nearly discrete topology such that_ \(0\in f(\Gamma)\)_._
2. _The ultrametric space_ \((\mathrm{C}\mathrm{pu}(X,R),\mathcal{UD}_{X}^{R})\) _of all_ \(R\)_-valued continuous pseudo-ultrametrics on a compact ultrametrizable space_ \(X\) _with an accumulation point._
3. _The non-Archimedean Gromov-Hausdorff space_ \((\mathcal{U}_{R},\mathcal{N}\mathcal{A})\) _associated with_ \(R\)_._
4. _The space_ \((\mathrm{G}(R,\omega_{0}),\triangle)\) _of all maps_ \(f\colon R\to\omega_{0}\) _such that_ \(f(0)=0\) _and the support of_ \(f\) _is tenuous, where_ \(\omega_{0}\) _is the set of all natural numbers._
_Remark 1.1_.: Intriguingly, it is unknown whether petaloid ultrametric spaces can be obtained by the classical constructions, i.e., the method using the Katětov function spaces, and the Urysohn-type amalgamation (the way of the Fraïssé limit).
## 2. Preliminaries
From the properties (P4) or (P5), we deduce the following:
**Lemma 2.1**.: _Let \(R\) be an uncountable range set, and \((X,d)\) be an \(R\)-petaloid space. If \(S,T\in\mathbf{TEN}(R)\) satisfy \(S\subseteq T\), then \(\Pi(X,S)\subseteq\Pi(X,T)\)._
The proof of the next lemma is presented in [5, Lemma 2.12].
**Lemma 2.2**.: _Let \(K\) be a subset of \([0,\infty)\). Then \(K\) is tenuous if and only if \(K\) is a closed subset of \([0,\infty)\) with respect to the Euclidean topology and satisfies that \(K\cap[r,\infty)\) is finite for all \(r\in(0,\infty)\)._
By the definition of the tenuous sets, we obtain:
**Lemma 2.3**.: _If \(\{S_{i}\}_{i=0}^{k}\) is a finite family of tenuous subsets of \([0,\infty)\), then the union \(\bigcup_{i=0}^{k}S_{i}\) is also tenuous._
In what follows, without referring to Lemma 2.3, we freely use the fact that the union of finitely many tenuous sets is tenuous.
We now discuss the compactness of petaloid spaces. Let \(R\) be an uncountable range set, and \((X,d)\) be an \(R\)-petaloid ultrametric space. For all \(x\in X\), we define the _trace_ \(\mathrm{Tr}(x)\) of \(x\) by \(\mathrm{Tr}(x)=\bigcap\{\,S\in\mathbf{TEN}(R)\mid x\in\Pi(S)\,\}\). Note that \(\mathrm{Tr}(x)\in\mathbf{TEN}(R)\) and \(\mathrm{Tr}(x)\subseteq S\) whenever \(x\in\Pi(S)\).
**Lemma 2.4**.: _Let \(R\) be an uncountable range set, and \(X\) be an \(R\)-petaloid ultrametric space. Then the following are true:_
1. _For all_ \(x\in X\)_, and for all_ \(r\in(0,\infty)\)_, there exists_ \(S\in\mathbf{TEN}(R)\) _such that_ \(\mathrm{Tr}(x)\cap(r,\infty)=S\cap(r,\infty)\) _and_ \(x\in\Pi(S)\)_._
2. _For all_ \(x\in X\)_, we have_ \(x\in\Pi(\mathrm{Tr}(x))\)_._
Proof.: First we show (1). By Lemma 2.2, the set \(\operatorname{Tr}(x)\cap(r,\infty)\) is finite. From the definition of \(\operatorname{Tr}(x)\), there exist finitely many sets \(S_{0},\dots,S_{k}\in\mathbf{TEN}(R)\) with \(x\in\Pi(S_{i})\) for each \(i\) such that \(\operatorname{Tr}(x)\cap(r,\infty)=\left(\bigcap_{i=0}^{k}S_{i}\right)\cap(r,\infty)\). Put \(S=\bigcap_{i=0}^{k}S_{i}\). According to the property (P4), the set \(S\) satisfies \(x\in\Pi(S)\). Thus \(S\) is a set as desired.
Second we verify (2). For all \(i\in\mathbb{Z}_{\geq 0}\), put \(t_{i}=2^{-i}\). According to the statement (1), we can find a family \(\{S_{i}\}_{i\in\mathbb{Z}_{\geq 0}}\) of tenuous range subsets of \(R\) such that \(\bigcap_{i\in\mathbb{Z}_{\geq 0}}S_{i}=\operatorname{Tr}(x)\), and for each \(i\in\mathbb{Z}_{\geq 0}\), we have \(x\in\Pi(S_{i})\) and \(S_{i}\cap(t_{i},\infty)=\operatorname{Tr}(x)\cap(t_{i},\infty)\). Combining the fact that \(x\in\Pi(S_{i})\) and the property (P5), we obtain \(d(x,\Pi(\operatorname{Tr}(x)))\leq t_{i}\). Since \(t_{i}\to 0\) as \(i\to\infty\), we conclude that \(x\in\Pi(\operatorname{Tr}(x))\).
We next describe the relationship between traces and distances to petals.
**Lemma 2.5**.: _Let \(R\) be an uncountable range set, and \(X\) be an \(R\)-petaloid ultrametric space. If \(S\in\mathbf{TEN}(R)\) and \(x\not\in\Pi(S)\), then we obtain \(d(x,\Pi(S))=\min\{\,t\in\operatorname{Tr}(x)\mid\operatorname{Tr}(x)\cap(t, \infty)\subseteq S\cap(t,\infty)\,\}\)._
Proof.: Put \(u=d(x,\Pi(S))\). By \(x\not\in\Pi(S)\) and the property (P5), we have \(u\in\operatorname{Tr}(x)\setminus S\). For the sake of contradiction, suppose that \(\operatorname{Tr}(x)\cap(u,\infty)\not\subseteq S\cap(u,\infty)\). Then there exists \(s\in\operatorname{Tr}(x)\cap(u,\infty)\) such that \(s\not\in S\). In particular, we have \(u<s\). Put \(A=(\operatorname{Tr}(x)\setminus\{s\})\cup S\). From the definition of \(\operatorname{Tr}(x)\) and \(s\in\operatorname{Tr}(x)\), it follows that \(x\not\in\Pi(A)\). Put \(v=d(x,\Pi(A))\). Since \(\Pi(S)\subseteq\Pi(A)\) (see Lemma 2.1), we obtain \(v\leq u\). The property (P5) yields \(v\in\operatorname{Tr}(x)\setminus A=\{s\}\), and hence \(s\leq u\). This contradicts \(u<s\). Thus we have \(\operatorname{Tr}(x)\cap(u,\infty)\subseteq S\cap(u,\infty)\). The minimality of \(u\) follows from \(u\in\operatorname{Tr}(x)\setminus S\) and \(\operatorname{Tr}(x)\cap(u,\infty)\subseteq S\cap(u,\infty)\).
**Corollary 2.6**.: _Let \(R\) be an uncountable range set, and \(X\) be an \(R\)-petaloid ultrametric space. If \(x,y\in X\) and \(w\in R\) satisfy \(x\neq y\) and \(d(x,y)\leq w\), then we have \(\operatorname{Tr}(x)\cap(w,\infty)=\operatorname{Tr}(y)\cap(w,\infty)\)._
**Lemma 2.7**.: _Let \(\{S_{i}\}_{i\in\mathbb{Z}_{\geq 0}}\) be a family of tenuous range sets. If there exists a sequence \(\{t_{i}\}_{i\in\mathbb{Z}_{\geq 0}}\) in \((0,\infty)\) such that for all \(i\in\mathbb{Z}_{\geq 0}\), we have \(t_{i+1}\leq t_{i}\) and \(S_{i}\cap(t_{i},\infty)=S_{i+1}\cap(t_{i},\infty)\), and \(t_{i}\to 0\) as \(i\to\infty\), then the union \(\bigcup_{i\in\mathbb{Z}_{\geq 0}}S_{i}\) is also a tenuous range set._
Proof.: Put \(S=\bigcup_{i\in\mathbb{Z}_{\geq 0}}S_{i}\). For all \(r\in(0,\infty)\), take \(N\in\mathbb{Z}_{\geq 0}\) with \(t_{N}<r\) and \(t_{N+1}<t_{N}\). In this setting, by induction, we notice that if \(n\in\mathbb{Z}_{\geq 0}\) satisfies \(n\geq N\), then \(S_{n}\cap[r,\infty)=S_{N}\cap[r,\infty)\). Thus we obtain \(S\cap[r,\infty)=\bigcup_{i=0}^{N}(S_{i}\cap[r,\infty))\). Due to Lemma 2.2, each \(S_{i}\cap[r,\infty)\) is finite, and hence so is \(S\cap[r,\infty)\). By the properties that \(0\in S\) and \(S_{i}\cap[r,\infty)\) is finite for all \(r\in(0,\infty)\), we see that the set \(S\) is closed in \([0,\infty)\) with respect to the Euclidean topology. Using Lemma 2.2 again, the set \(S\) is tenuous.
For a metric space \((X,d)\) and \(\epsilon\in(0,\infty)\), a subset \(A\) of \(X\) is said to be an \(\epsilon\)-_net of_ \(X\) if \(A\) is finite and, for all \(x\in X\), there exists \(a\in A\) with \(d(x,a)\leq\epsilon\). We say that a metric space \((X,d)\) is _totally bounded_ if for all \(\epsilon\in(0,\infty)\), there exists an \(\epsilon\)-net of \(X\).
**Proposition 2.8**.: _Let \(R\) be an uncountable range set, and \((X,d)\) be an \(R\)-petaloid ultrametric space. If \(K\) is a totally bounded subset of \(X\), then there exists \(S\in\mathbf{TEN}(R)\) such that \(K\subseteq\Pi(S)\)._
Proof.: For each \(n\in\mathbb{Z}_{\geq 0}\), let \(L_{n}\) be a \((2^{-n})\)-net of \(K\). We may assume that \(L_{n}\subseteq L_{n+1}\) for all \(n\in\mathbb{Z}_{\geq 0}\). Put \(T_{n}=\bigcup_{x\in L_{n}}\operatorname{Tr}(x)\). Then \(T_{n}\in\mathbf{TEN}(R)\) and \(T_{n}\subseteq T_{n+1}\) for all \(n\in\mathbb{Z}_{\geq 0}\). Since \(L_{n}\) is a \((2^{-n})\)-net of \(L_{n+1}\), for each \(x\in L_{n+1}\), there exists \(a\in L_{n}\) with \(d(x,a)\leq 2^{-n}\). According to Corollary 2.6, we have \(\operatorname{Tr}(x)\cap(2^{-n},\infty)=\operatorname{Tr}(a)\cap(2^{-n},\infty)\). Thus \(T_{n}\cap(2^{-n},\infty)=T_{n+1}\cap(2^{-n},\infty)\) for all \(n\in\mathbb{Z}_{\geq 0}\). Put \(S=\bigcup_{n\in\mathbb{Z}_{\geq 0}}T_{n}\). Therefore Lemma 2.7 shows that the set \(S\) belongs to \(\mathbf{TEN}(R)\). From the definition of the traces, it follows that \(\bigcup_{n\in\mathbb{Z}_{\geq 0}}L_{n}\subseteq\Pi(S)\). Since \(\bigcup_{n\in\mathbb{Z}_{\geq 0}}L_{n}\) is dense in \(K\) and \(\Pi(X,S)\) is complete (see (P2)), we can conclude that \(K\subseteq\Pi(S)\).
We now prove that all petaloid spaces are complete.
**Proposition 2.9**.: _Let \(R\) be an uncountable range set, and \((X,d)\) be an \(R\)-petaloid ultrametric space. Then \((X,d)\) is complete._
Proof.: Let \(\{x_{i}\}_{i\in\mathbb{Z}_{\geq 0}}\) be a Cauchy sequence in \((X,d)\). Then the set \(K=\{\,x_{i}\mid i\in\mathbb{Z}_{\geq 0}\,\}\) is totally bounded, and Proposition 2.8 enables us to take \(S\in\mathbf{TEN}(R)\) such that \(K\subseteq\Pi(S)\). Since \(\Pi(S)\) is complete, the sequence \(\{x_{i}\}_{i\in\mathbb{Z}_{\geq 0}}\) has a limit. Thus \((X,d)\) itself is complete.
To confirm the naturality of petaloid spaces, we observe that they are injective with respect to all finite \(R\)-ultrametric spaces.
**Proposition 2.10**.: _Let \(R\) be an uncountable range set. Then all \(R\)-petaloid spaces are \(\mathcal{N}(R)\)-injective._
Proof.: Let \((X,d)\) be an arbitrary \(R\)-petaloid space. Let \((A,e)\) and \((B,e)\) be finite metric spaces in \(\mathcal{N}(R)\) with \(A\subseteq B\). Take an isometric embedding \(\phi\colon A\to X\). Put \(S=e(B\times B)\cup\bigcup_{x\in A}\operatorname{Tr}(\phi(x))\). Since \(B\) is finite, we notice that \(S\in\mathbf{TEN}(R)\) and \((B,e)\) belongs to \(\mathcal{N}(S)\). By the definition of the trace, we also have \(\phi(A)\subseteq\Pi(S)\). The property (P2) implies that \(\Pi(S)\) is \(\mathcal{N}(S)\)-injective, and hence there exists an isometric embedding \(F\colon B\to\Pi(S)\) such that \(F|_{A}=\phi\). Therefore we conclude that \((X,d)\) is \(\mathcal{N}(R)\)-injective.
For range sets \(R\) and \(S\) with \(S\subseteq R\), we denote by \(\mathbf{F}(R/S)\) the set of all \(T\in\mathbf{TEN}(R)\) such that \(S\subseteq T\) and \(T\setminus S\) is finite. The following lemma plays an important role in the proofs of our main theorems. The proof is deduced from Lemma 2.5 and the property (P3).
**Lemma 2.11**.: _If \(R\) is an uncountable range set, \(S\in\mathbf{TEN}(R)\), and \((X,d)\) is an \(R\)-petaloid ultrametric space, then \(\bigcup_{T\in\mathbf{F}(R/S)}\Pi(T)\) is dense in \(X\)._
Proof.: Take \(x\in X\). For all \(n\in\mathbb{Z}_{\geq 0}\), we put \(T_{n}=S\cup(\operatorname{Tr}(x)\cap[2^{-n},\infty))\). Then \(T_{n}\in\mathbf{F}(R/S)\). By the property (P5), we have \(d(x,\Pi(X,T_{n}))\leq 2^{-n}\) for all \(n\in\mathbb{Z}_{\geq 0}\). Thus, the set \(\bigcup_{T\in\mathbf{F}(R/S)}\Pi(T)\) is dense in \(X\).
## 3. Proofs of uniqueness and homogeneity
The following lemma is a generalization of the injectivity.
**Lemma 3.1**.: _Let \(R\) be an uncountable range set and \(X\) and \(Y\) be \(R\)-petaloid ultrametric spaces. Let \(k\in\mathbb{Z}_{\geq 0}\), \(T_{0},\ldots,T_{k}\in\mathbf{TEN}(R)\) and \(S\in\mathbf{TEN}(R)\) such that \(\bigcup_{i=0}^{k}T_{i}\subseteq S\). Let \(A\sqcup\{\omega\}\) be a finite subset of \(\Pi(X,S)\) and \(B\) be a finite subset of \(\Pi(Y,S)\). Put \(G=\bigcup_{i=0}^{k}\Pi(X,T_{i})\) and \(H=\bigcup_{i=0}^{k}\Pi(Y,T_{i})\). If \(f\colon G\cup A\to H\cup B\) is an isometric bijection such that \(f(G)=H\) and \(f(A)=B\), then there exists \(\theta\in\Pi(Y,S)\) for which \(d(f(x),\theta)=d(x,\omega)\) for all \(x\in G\cup A\). Namely, we obtain an isometric bijection \(F\colon G\cup A\cup\{\omega\}\to H\cup B\cup\{\theta\}\) such that \(F|_{G\cup A}=f\)._
Proof.: If \(\omega\in G\), then putting \(\theta=f(\omega)\) proves the lemma. We may assume that \(\omega\not\in G\). For each \(i\in\{0,\ldots,k\}\), the property (P6) enables us to take \(p_{i}\in\Pi(X,T_{i})\) such that \(d(\omega,\Pi(X,T_{i}))=d(\omega,p_{i})\). Put \(P=\{p_{0},\ldots,p_{k}\}\), \(Q=\{f(p_{0}),\ldots,f(p_{k})\}\), and \(\phi=f|_{A\cup P}\). Then \(\phi\colon A\cup P\to\Pi(Y,S)\) is an isometric embedding. Due to (P2), we can apply the \(\mathcal{N}(S)\)-injectivity of \(\Pi(Y,S)\) to \(\phi\), \(A\cup P\), and \((A\cup P)\cup\{\omega\}\). Then we obtain \(\theta\in\Pi(Y,S)\) such that \(e(\phi(x),\theta)=d(x,\omega)\) for all \(x\in P\cup A\).
We now prove that \(e(f(x),\theta)=d(x,\omega)\) for all \(x\in G\cup A\). Take an arbitrary point \(x\in G\cup A\). If \(x\in A\cup P\), then we have \(e(f(x),\theta)=d(x,\omega)\). If \(x\not\in A\cup P\), then we can take \(j\in\{0,\ldots,k\}\) such that \(x\in\Pi(X,T_{j})\). Put \(c=d(\omega,\Pi(X,T_{j}))(=d(\omega,p_{j}))\). Then we have \(e(\theta,f(p_{j}))=c\), and the assumption that \(\omega\not\in G\) shows \(c>0\). The property (P5) implies that \(c\not\in T_{j}\). In particular, since \(e(f(x),f(p_{j}))=d(x,p_{j})\) takes a value in \(T_{j}\), we obtain \(e(f(x),f(p_{j}))\neq c\). We divide the proof into two parts.
Case 1. \([e(f(x),f(p_{j}))<c]\): In this case, \(e(f(x),f(p_{j}))<e(\theta,f(p_{j}))\) and \(d(x,p_{j})<d(\omega,p_{j})\). The strong triangle inequality shows that \(e(\theta,f(x))=e(\theta,f(p_{j}))\) and \(d(\omega,x)=d(\omega,p_{j})\). Hence \(e(f(x),\theta)=d(x,\omega)\).
Case 2. \([c<e(f(x),f(p_{j}))]\): We have \(e(\theta,f(p_{j}))<e(f(x),f(p_{j}))\) and \(d(\omega,p_{j})<d(x,p_{j})\). Using the strong triangle inequality again, we also have \(e(f(x),f(p_{j}))=e(f(x),\theta)\) and \(d(x,p_{j})=d(x,\omega)\). Hence \(e(f(x),\theta)=d(x,\omega)\). This finishes the proof.
**Lemma 3.2**.: _Let \(R\) be an uncountable range set and \(X\) and \(Y\) be \(R\)-petaloid ultrametric spaces. Let \(k\in\mathbb{Z}_{\geq 0}\), \(T_{0},\ldots,T_{k}\in\mathbf{TEN}(R)\) and \(S\in\mathbf{TEN}(R)\) such that \(\bigcup_{i=0}^{k}T_{i}\subseteq S\). For each \(i\in\{0,\ldots,k\}\), let \(g_{i}\colon\Pi(X,T_{i})\to\Pi(Y,T_{i})\) be an isometric bijection. Assume that the following condition is satisfied:_
* (G) _For all_ \(i,j\in\{0,\ldots,k\}\)_, we have_ \(g_{i}(x)=g_{j}(x)\) _for all_ \(x\in\Pi(X,T_{i}\cap T_{j})\)_._
_Then we can glue \(\{g_{i}\}_{i=0}^{k}\) together and extend it, i.e., there exists an isometric bijection \(g\colon\Pi(X,S)\to\Pi(Y,S)\) such that \(g|_{\Pi(X,T_{i})}=g_{i}\) for all \(i\in\{0,\ldots,k\}\)._
Proof.: Put \(G=\bigcup_{i=0}^{k}\Pi(X,T_{i})\) and \(H=\bigcup_{i=0}^{k}\Pi(Y,T_{i})\). We notice that \(\Pi(X,S)\setminus G\) and \(\Pi(Y,S)\setminus H\) are separable, and we take countable dense subsets \(A=\{a_{i}\}_{i\in\mathbb{Z}_{\geq 1}}\) and \(B=\{b_{i}\}_{i\in\mathbb{Z}_{\geq 1}}\) of \(\Pi(X,S)\setminus G\) and \(\Pi(Y,S)\setminus H\), respectively. We put \(A_{i}=\{a_{1},\ldots,a_{i}\}\) and \(B_{i}=\{b_{1},\ldots,b_{i}\}\) for all \(i\in\mathbb{Z}_{\geq 1}\), and we put \(A_{0}=B_{0}=\emptyset\). Define a map \(u\colon G\to H\) by \(u(x)=g_{i}(x)\) if \(x\in\Pi(X,T_{i})\). By the assumption (G), the map \(u\) is well-defined. To construct an isometric bijection \(g\), we use the back-and-forth argument, in which we repeat two types of operations alternately: in the first operation, we extend the domain of an isometric map, and in the second operation, we extend the codomain using the inverse map. Namely, we construct a sequence \(\{w_{i}\colon P_{i}\to Q_{i}\}_{i\in\mathbb{Z}_{\geq 0}}\) of maps such that
* (W1) for all \(i\in\mathbb{Z}_{\geq 0}\), \(P_{i}\) and \(Q_{i}\) are subsets of \(\Pi(X,S)\) and \(\Pi(Y,S)\), respectively, satisfying that \(G\cup A_{i}\subseteq P_{i}\) and \(H\cup B_{i}\subseteq Q_{i}\);
* (W2) each \(w_{i}\) is an isometric bijection;
* (W3) for all \(i\in\mathbb{Z}_{\geq 0}\), we have \(w_{i}|_{G}=u\);
* (W4) for all \(i\in\mathbb{Z}_{\geq 0}\), we have \(P_{i}\subseteq P_{i+1}\), \(Q_{i}\subseteq Q_{i+1}\), and \(w_{i+1}|_{P_{i}}=w_{i}\).
First, we define \(P_{0}=G\), \(Q_{0}=H\), and \(w_{0}=u\). Next, fix \(k\in\mathbb{Z}_{\geq 0}\) and assume that we have already obtained \(P_{k}\), \(Q_{k}\), and an isometric bijection \(w_{k}\colon P_{k}\to Q_{k}\) such that \(w_{k}|_{G}=u\), \(G\cup A_{k}\subseteq P_{k}\), and \(H\cup B_{k}\subseteq Q_{k}\). Let \(v_{k+1}\colon P_{k}\cup\{a_{k+1}\}\to Q_{k}\cup\{v_{k+1}(a_{k+1})\}\) be the extension of \(w_{k}\) provided by Lemma 3.1. Applying the same argument to \(v_{k+1}^{-1}\), we obtain an isometric map \(m_{k+1}\colon Q_{k}\cup\{v_{k+1}(a_{k+1})\}\cup\{b_{k+1}\}\to P_{k}\cup\{a_{k+1}\}\cup\{m_{k+1}(b_{k+1})\}\). Then we define \(w_{k+1}=m_{k+1}^{-1}\), \(P_{k+1}=P_{k}\cup\{a_{k+1}\}\cup\{m_{k+1}(b_{k+1})\}\), and \(Q_{k+1}=Q_{k}\cup\{v_{k+1}(a_{k+1})\}\cup\{b_{k+1}\}\).
Therefore we obtain sequences \(\{w_{i}\}_{i\in\mathbb{Z}_{\geq 0}}\), \(\{P_{i}\}_{i\in\mathbb{Z}_{\geq 0}}\), and \(\{Q_{i}\}_{i\in\mathbb{Z}_{\geq 0}}\) as required. Put \(K=\bigcup_{i\in\mathbb{Z}_{\geq 0}}P_{i}\) and \(L=\bigcup_{i\in\mathbb{Z}_{\geq 0}}Q_{i}\). We define \(h\colon K\to L\) by \(h(x)=w_{k}(x)\), where \(k\in\mathbb{Z}_{\geq 0}\) satisfies \(x\in P_{k}\). Due to the conditions (W1)--(W4), the map \(h\) is a well-defined isometric bijection. Since \(K\) and \(L\) are dense in \(\Pi(X,S)\) and \(\Pi(Y,S)\), respectively, a canonical argument with Cauchy sequences, together with the completeness of \(\Pi(X,S)\) and \(\Pi(Y,S)\), extends \(h\) to an isometric bijection \(g\colon\Pi(X,S)\to\Pi(Y,S)\) such that \(g|_{G}=u\). This proves the lemma.
**Theorem 3.3**.: _Let \(R\) be an uncountable range set, and let \((X,d)\) and \((Y,e)\) be \(R\)-petaloid ultrametric spaces. Let \(f\colon\Pi(X,S)\to\Pi(Y,S)\) be
an isometric bijection, where \(S\in\mathbf{TEN}(R)\). Then there exists an isometric bijection \(F\colon X\to Y\) such that \(F|_{\Pi(X,S)}=f\)._
Proof.: Let \(\kappa\) be the cardinal of \(\mathbf{F}(R/S)\). We represent \(\mathbf{F}(R/S)=\{S_{\alpha}\}_{\alpha<\kappa}\) such that \(S_{0}=S\) and \(S_{\alpha}\neq S_{\beta}\) for all distinct \(\alpha,\beta<\kappa\). We shall construct a family \(\{g_{\alpha}\colon\Pi(X,S_{\alpha})\to\Pi(Y,S_{\alpha})\}_{\alpha<\kappa}\) of isometric bijections such that
(C) If \(\alpha,\beta<\kappa\) satisfy \(S_{\beta}\subseteq S_{\alpha}\), then \(g_{\alpha}|_{\Pi(X,S_{\beta})}=g_{\beta}\).
We use transfinite induction. If \(\alpha=0\), we define \(g_{0}=f\). We next consider general steps. At the \(\alpha\)-th step (\(\alpha<\kappa\)), we will define not only \(g_{\alpha}\) but also \(g_{\gamma}\) for all \(\gamma<\kappa\) for which \(S_{\gamma}\subseteq S_{\alpha}\). Fix \(\alpha<\kappa\) and assume that we have already passed through the \(\beta\)-th step for all \(\beta<\alpha\). Then we walk up the \(\alpha\)-th step as follows:
Case 1. [if for all \(\beta<\alpha\), we have \(\Pi(X,S_{\alpha})\not\subseteq\Pi(X,S_{\beta})\)]: In this case, we denote by \(\Lambda\) the set of all \(\beta<\kappa\) such that \(g_{\beta}\) is already defined and \(S_{\beta}\subseteq S_{\alpha}\). Since \(S_{\alpha}\setminus S\) is a finite set, so is \(\Lambda\). By the condition (C) with respect to all \(\gamma<\kappa\) for which \(g_{\gamma}\) is already defined, the set \(S_{\alpha}\), the family \(\{S_{\beta}\}_{\beta\in\Lambda}\), and the maps \(\{g_{\beta}\}_{\beta\in\Lambda}\) satisfy the assumptions of Lemma 3.2 (especially, the condition (G)). Then we obtain an isometric bijection \(g_{\alpha}\colon\Pi(X,S_{\alpha})\to\Pi(Y,S_{\alpha})\) such that \(g_{\alpha}|_{\Pi(X,S_{\beta})}=g_{\beta}\) for all \(\beta\in\Lambda\). Moreover, for all \(\gamma<\kappa\) with \(\alpha<\gamma\) such that \(S_{\gamma}\subseteq S_{\alpha}\), we define \(g_{\gamma}=g_{\alpha}|_{\Pi(X,S_{\gamma})}\).
Case 2. [there exists \(\beta<\alpha\) such that \(\Pi(X,S_{\alpha})\subseteq\Pi(X,S_{\beta})\)]: Take the minimal ordinal \(\beta<\alpha\) such that \(S_{\alpha}\subseteq S_{\beta}\). Then, at the \(\beta\)-th step, the map \(g_{\alpha}\) has already been defined by \(g_{\alpha}=g_{\beta}|_{\Pi(X,S_{\alpha})}\).
Therefore, we obtain a family \(\{g_{\alpha}\colon\Pi(X,S_{\alpha})\to\Pi(Y,S_{\alpha})\}_{\alpha<\kappa}\) of isometric bijections with the condition (C). Put \(K=\bigcup_{T\in\mathbf{F}(R/S)}\Pi(X,T)\) and \(L=\bigcup_{T\in\mathbf{F}(R/S)}\Pi(Y,T)\). We define a map \(G\colon K\to L\) by \(G(x)=g_{\alpha}(x)\) if \(x\in\Pi(X,S_{\alpha})\). By the condition (C), the map \(G\) is well-defined. Lemma 2.11 shows that \(K\) and \(L\) are dense in \(X\) and \(Y\), respectively. In a canonical way using Cauchy sequences, owing to Proposition 2.9, we can obtain \(F\colon X\to Y\) satisfying that \(F|_{K}=G\). Then \(F\) is an isometric bijection as required.
Using Theorem 3.3, we can prove our main results.
Proof of Theorem 1.1.: Let \(R\) be an uncountable range set, and \((X,d)\) and \((Y,e)\) be \(R\)-petaloid ultrametric spaces. Put \(S=\{0\}\) and let \(f\colon\Pi(X,S)\to\Pi(Y,S)\) be the trivial map (see (P1)). Applying Theorem 3.3 to \(S\) and \(f\), we obtain an isometric bijection \(F\colon X\to Y\).
Proof of Theorem 1.2.: Let \(R\) be an uncountable range set, and \((X,d)\) be an \(R\)-petaloid ultrametric space. Assume that \(A\) is a compact subset of \(X\) and \(f\colon A\to X\) is an isometric embedding. Using Proposition 2.8, we take \(S\in\mathbf{TEN}(R)\) with \(A\subseteq\Pi(X,S)\) and \(f(A)\subseteq\Pi(X,S)\). Since \(\Pi(X,S)\) is compactly ultrahomogeneous (see [2, Corollary 6.18]), we obtain an isometric bijection \(g\colon\Pi(X,S)\to\Pi(X,S)\) with \(g|_{A}=f\).
Theorem 3.3 implies that there exists an isometric bijection \(F\colon X\to X\) such that \(F|_{\Pi(S)}=g\). The map \(F\) is as desired, and hence the proof of Theorem 1.2 is finished.
## 4. Examples
We first briefly review the four constructions of injective spaces.
For a range set \(R\), we define an ultrametric \(M_{R}\) on \(R\) by \(M_{R}(x,y)=x\lor y\) if \(x\neq y\); otherwise, \(0\). We call it the _nearly discrete (ultra)metric_ and call the topology generated by \(M_{R}\) the _nearly discrete topology_. For a topological space \(X\) and a range set \(R\), we denote by \(\mathrm{C}_{0}(X,R)\) the set of all continuous maps \(f\colon X\to R\) from \(X\) to \((R,M_{R})\) such that \(0\in f(X)\). For \(f,g\in\mathrm{C}_{0}(X,R)\), we define \(\triangledown(f,g)\) by the infimum of all \(\epsilon\in(0,\infty)\) such that \(f(x)\leq g(x)\vee\epsilon\) and \(g(x)\leq f(x)\vee\epsilon\) for all \(x\in X\).
Let \(X\) be a topological space and \(R\) a range set. We denote by \(\mathrm{Cpu}(X,R)\) the set of all continuous \(R\)-valued pseudo-ultrametrics \(d\colon X\times X\to R\) on \(X\), where \(X\times X\) and \(R\) are equipped with the product topology and the Euclidean topology, respectively. For a range set \(S\) and \(d,e\in\mathrm{Cpu}(X,S)\), we define \(\mathcal{UD}^{S}_{X}(d,e)\) as the infimum of all \(\epsilon\in S\) such that \(d(x,y)\leq e(x,y)\vee\epsilon\) and \(e(x,y)\leq d(x,y)\vee\epsilon\) for all \(x,y\in X\). For more details on \(\mathrm{C}_{0}(X,R)\) and \(\mathrm{Cpu}(X,R)\), we refer the readers to [5].
For a range set \(R\), we denote by \(\mathcal{U}_{R}\) the set of all isometry classes of compact \(R\)-ultrametric spaces and denote by \(\mathcal{NA}\) the non-Archimedean Gromov-Hausdorff distance on \(\mathcal{U}_{R}\), i.e., the value \(\mathcal{NA}((X,d),(Y,e))\) is the infimum of all \(\mathcal{HD}(i(X),j(Y);Z,h)\), where \((Z,h)\) is an \(R\)-valued ultrametric space, \(\mathcal{HD}(*,*;Z,h)\) is the Hausdorff distance associated with \((Z,h)\), and \(i\colon X\to Z\) and \(j\colon Y\to Z\) are isometric embeddings. We call \((\mathcal{U}_{R},\mathcal{NA})\) the Gromov-Hausdorff \(R\)-ultrametric space (for more details, see [14], [10] and [13]).
We denote by \(\omega_{0}\) the set of all non-negative integers. Let \(R\) be a range set. We also denote by \(\mathrm{G}(R,\omega_{0})\) the set of all functions \(f\colon R\to\omega_{0}\) such that \(f(0)=0\) and the set \(\{0\}\cup\{\,x\in R\mid f(x)\neq 0\,\}\) is tenuous. For \(f,g\in\mathrm{G}(R,\omega_{0})\), we define an \(R\)-ultrametric \(\triangle\) on \(\mathrm{G}(R,\omega_{0})\) by \(\triangle(f,g)=\max\{\,r\in R\mid f(r)\neq g(r)\,\}\) if \(f\neq g\); otherwise, \(\triangle(f,g)=0\). For more information, we refer the readers to [2] and [9].
**Theorem 4.1**.: _For every range set \(R\), the following spaces are all \(\mathcal{N}(R)\)-injective and complete. Moreover, if \(R\) is finite or countable, they are separable._
1. _The space_ \((\mathrm{C}_{0}(\Gamma,R),\triangledown)\) _of all continuous functions from the Cantor set_ \(\Gamma\) _into_ \((R,M_{R})\)_._
2. _The ultrametric space_ \((\mathrm{Cpu}(X,R),\mathcal{UD}^{R}_{X})\) _of all_ \(R\)_-valued continuous pseudo-ultrametrics on a compact ultrametrizable space_ \(X\) _with an accumulation point._
3. _The Gromov-Hausdorff_ \(R\)_-ultrametric space_ \((\mathcal{U}_{R},\mathcal{NA})\)_._
4. _The_ \(R\)_-ultrametric space_ \((\mathrm{G}(R,\omega_{0}),\triangle)\)_._
Proof.: The cases (1) and (2) are proven in [5, Theorem 3.2 and Theorem 4.2], respectively. For the separability, see [5, Proposition 3.5 and Proposition 4.3]. The case (3) is proven by Wan [13]. The case (4) can be found in [2, Section 6], [9, Proposition 11] and [8, Subsection 1.3].
We now prove that all the spaces described above are petaloid.
Proof of Theorem 1.3.: Let \(R\) be an uncountable range set, and \(X\) be a compact ultrametrizable space with an accumulation point. Let \(\Gamma\) stand for the Cantor set. For every \(S\in\mathbf{TEN}(R)\), we define the petals of the spaces stated in the theorem by \(\Pi(\mathrm{C}_{0}(\Gamma,R),S)=\mathrm{C}_{0}(\Gamma,S)\), \(\Pi(\mathrm{Cpu}(X,R),S)=\mathrm{Cpu}(X,S)\), \(\Pi(\mathcal{U}_{R},S)=\mathcal{U}_{S}\), and \(\Pi(\mathrm{G}(R,\omega_{0}),S)=\mathrm{G}(S,\omega_{0})\). Then the families satisfy the properties (P1), (P3), and (P4). Theorem 4.1 implies that they enjoy the property (P2).
We now consider the case of \((\mathrm{G}(R,\omega_{0}),\vartriangle)\). To verify the properties (P5) and (P6), we need the following claim that is deduced from the definition of \(\vartriangle\):
**Claim 4.2**.: _Let \(R\) be a range set. Let \(r\in R\setminus\{0\}\), and \(f,g\in\mathrm{G}(R,\omega_{0})\). Then \(r=\vartriangle(f,g)\) if and only if \(f(r)\neq g(r)\) and \(f(x)=g(x)\) whenever \(r<x\)._
Let \(S\in\mathbf{TEN}(R)\). For every \(f\in\mathrm{G}(R,\omega_{0})\), put \(T=\{0\}\cup\{\,x\in R\mid f(x)\neq 0\,\}\in\mathbf{TEN}(R)\). If \(T\subseteq S\), then \(\vartriangle(f,\mathrm{G}(S,\omega_{0}))=\vartriangle(f,f)=0\). If \(T\not\subseteq S\), then let \(r\) be the maximum of \(T\setminus S\). Thus we have \(T\cap(r,\infty)\subseteq S\cap(r,\infty)\). We define \(g\in\mathrm{G}(R,\omega_{0})\) by
\[g(x)=\begin{cases}f(x)&\text{if }r<x;\\ f(r)+1&\text{if }x=r;\\ 0&\text{otherwise.}\end{cases}\]
Then by Claim 4.2, we have \(r=\vartriangle(f,g)\) and \(\vartriangle(f,g)=\vartriangle(f,\mathrm{G}(S,\omega_{0}))\). This proves the property (P5), and also proves (P6).
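To make the construction above concrete, the following self-contained Python sketch encodes finitely supported elements of \(\mathrm{G}(R,\omega_{0})\) as dictionaries over their supports, implements \(\vartriangle\), builds the point \(g\) displayed above, and checks that \(\vartriangle(f,g)=r\) as in Claim 4.2. The dictionary encoding and the sample values are our own illustrative choices, not part of the paper.

```python
# A minimal numerical sketch (not from the paper) of the construction used in
# the proof of (P5) for (G(R, omega_0), triangle). A finitely supported map
# f: R -> omega_0 with f(0) = 0 is encoded as a dict {r: f(r)} over its support.

def tri(f, g):
    """triangle(f, g) = max{ r : f(r) != g(r) }, and 0 if f = g."""
    diff = {r for r in set(f) | set(g) if f.get(r, 0) != g.get(r, 0)}
    return max(diff) if diff else 0.0

def witness(f, S):
    """The point g displayed in the proof above, for a tenuous range set S."""
    outside = [r for r in f if r not in S]
    if not outside:                      # T subseteq S: take g = f
        return dict(f)
    r = max(outside)                     # r = max(T \ S)
    g = {x: f[x] for x in f if x > r}    # g(x) = f(x) for r < x ...
    g[r] = f.get(r, 0) + 1               # ... g(r) = f(r) + 1 ...
    return g                             # ... and g(x) = 0 otherwise

f = {0.8: 2, 0.5: 1, 0.25: 3}            # sample f with support {0.8, 0.5, 0.25}
S = {0.0, 0.8, 0.5, 0.1}                 # a finite tenuous range set
g = witness(f, S)                        # {0.8: 2, 0.5: 1, 0.25: 4}
r = max(x for x in f if x not in S)      # here r = 0.25
assert tri(f, g) == r                    # Claim 4.2: f, g differ at r, agree above r
h = {0.8: 2, 0.1: 5}                     # strong triangle inequality spot check
assert tri(f, h) <= max(tri(f, g), tri(g, h))
print("triangle(f, g) =", tri(f, g))
```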
Remark that using the statements corresponding to Claim 4.2, such as [5, Corollary 2.17], [5, Corollary 2.30], and [7, Theorem 5.1], in a similar way to the case of \((\mathrm{G}(R,\omega_{0}),\vartriangle)\), we can prove that the spaces \((\mathrm{C}_{0}(\Gamma,R),\triangledown)\), \((\mathrm{Cpu}(X,R),\mathcal{UD}_{X}^{R})\), and \((\mathcal{U}_{R},\mathcal{NA})\) satisfy (P5) and (P6), respectively, and hence they are \(R\)-petaloid. Since there is a small gap, we give a little explanation of the case of \((\mathrm{C}_{0}(\Gamma,R),\triangledown)\), for example. Let \(S\in\mathbf{TEN}(R)\) and \(f\in\mathrm{C}_{0}(\Gamma,R)\), and put \(T=f(\Gamma)\in\mathbf{TEN}(R)\). If \(T\subseteq S\), then put \(g=f\). If \(T\not\subseteq S\), then let \(r\) be the maximum of \(T\setminus S\), and take non-empty clopen subsets \(A\) and \(B\) of \(\Gamma\) such that \(A\cap B=\emptyset\) and \(A\cup B=f^{-1}([0,r])\). We then define \(g\in\Pi(\mathrm{C}_{0}(\Gamma,R),S)\) by
\[g(x)=\begin{cases}f(x)&\text{if }r<f(x);\\ r&\text{if }x\in A;\\ 0&\text{if }x\in B.\end{cases}\]
Thus, by [5, Corollary 2.17], we have \(\triangledown(f,g)=\triangledown(f,\Pi(\mathrm{C}_{0}(\Gamma,R),S))\).
This completes the proof of Theorem 1.3.
**Question 4.3**.: Let \(R\) be a range set, and \(\Gamma\) be the Cantor set. In this setting, we see that \((\mathcal{U}_{R},\mathcal{NA})\) and \((\operatorname{Cpu}(\Gamma,R),\mathcal{UD}_{\Gamma}^{R})\) are isometric to each other by Theorem 1.1. Does there exist a natural isometric bijection between them?
**Question 4.4**.: Let \(R\) be an uncountable range set, and \((X,d)\) be an \(R\)-petaloid space. We denote by \(\operatorname{Isom}(X,d)\) the group of isometric bijections from \(X\) onto itself. What is the isometry group \(\operatorname{Isom}(X,d)\)? Namely, what interesting properties does \(\operatorname{Isom}(X,d)\) satisfy?
**Question 4.5**.: Let \((X,d)\) and \((Y,e)\) be complete \(\mathcal{N}(R)\)-injective \(R\)-valued ultrametric spaces with the same topological weight. In this setting, are \((X,d)\) and \((Y,e)\) isometric to each other? If not, is there a necessary and sufficient condition for them to be isometric? The author thinks that counterexamples can be found, since there exists a complete \(\mathcal{F}\)-injective metric space \((X,d)\) such that \(\operatorname{Isom}(X,d)\) is trivial (see [3]).
|
2303.17553 | Topological Circular Dichroism in Chiral Multifold Semimetals | Uncovering the physical contents of the nontrivial topology of quantum states
is a critical problem in condensed matter physics. Here, we study the
topological circular dichroism in chiral semimetals using linear response
theory and first-principles calculations. We show that, when the low-energy
spectrum respects emergent SO(3) rotational symmetry, topological circular
dichroism is forbidden for Weyl fermions, and thus is unique to chiral
multifold fermions. This is a result of the selection rule that is imposed by
the emergent symmetry under the combination of particle-hole conjugation and
spatial inversion. Using first-principles calculations, we predict that
topological circular dichroism occurs in CoSi for photon energy below about 0.2
eV. Our work demonstrates the existence of a response property of
unconventional fermions that is fundamentally different from the response of
Dirac and Weyl fermions, motivating further study to uncover other unique
responses. | Junyeong Ahn, Barun Ghosh | 2023-03-30T17:21:48Z | http://arxiv.org/abs/2303.17553v2 | # Topological Circular Dichroism in Chiral Multifold Semimetals
###### Abstract
Uncovering the physical contents of the nontrivial topology of quantum states is a critical problem in condensed matter physics. Here, we study the topological circular dichroism in chiral semimetals using linear response theory and first-principles calculations. We show that topological circular dichroism is forbidden for Weyl fermions, and thus is unique to chiral multifold fermions. This is a result of the selection rule that is imposed by the emergent symmetry under the combination of particle-hole conjugation and spatial inversion. We discuss a photogalvanic mechanism through this topological circular dichroism combined with the quantized circular photogalvanic effect. Finally, using first-principles calculations, we predict that topological circular dichroism occurs in CoSi for photon energy below about 0.2 eV. Our work demonstrates the existence of a response property of unconventional fermions that is fundamentally different from the response of Dirac and Weyl fermions, motivating further study to uncover other unique responses.
_Introduction.--_ The interaction between chiral materials and circularly polarized light is a topic of broad interest in fundamental sciences [1; 2; 3; 4; 5; 6; 7; 8; 9; 10]. Because chiral materials have a definite left- or right-handed crystalline structure, they respond differently to the left and right circularly polarized light. Natural optical activity (i.e., optical rotation and circular dichroism with time-reversal symmetry) and the circular photogalvanic effect are such phenomena due to the light-helicity dependence in the refractive index and DC photocurrent, respectively.
The quantization of the circular photogalvanic effect in chiral topological semimetals has gained attention recently [7; 8; 9; 11; 12; 13]. In three-dimensional chiral crystals, a band-crossing point carries a quantized magnetic monopole charge in momentum space, which is the Chern number [14; 15]. While the magnetic monopoles appear in pairs in the Brillouin zone by the fermion doubling theorem [16], the monopole and the anti-monopole are, in general, not at the same energy because there is no symmetry to relate them in chiral crystals. The uncompensated monopole charge of a chiral fermion near the Fermi level can manifest through physical responses. The quantized circular photogalvanic effect is a rare example of topological optical responses originating from the monopole charge of a chiral fermion.
More recently, another topological optical phenomenon was discovered in chiral topological semimetals [17; 18]. It was proposed that linearly dispersing chiral fermions show topological circular dichroism, where the helicity-dependent absorption of light is determined only by universal quantities, including fundamental constants and the ratio between the sample thickness and the light wavelength. While this discovery provides another exciting example of topological optical responses, the results in Ref. [17; 18] need further investigation because they were derived from physical arguments using Fermi's Golden rule without rigorous derivations.
In this paper, we investigate topological circular dichroism in chiral topological semimetals using linear response theory and first-principles calculations. Remarkably, we find that topological circular dichroism does not appear for Weyl fermions, which are chiral fermions with twofold degenerate band-crossing points, and is thus unique to chiral multifold fermions having three- or four-fold degenerate band-crossing points. We also find differences in the magnitude and spectral range of the quantized response for chiral multifold fermions compared to the original proposal. We show that these new features arise mainly from the selection rule imposed by the symmetry under the combination of particle-hole conjugation and spatial inversion.
Unlike the quantized circular photogalvanic effect, topological circular dichroism does not depend on the current relaxation time, which varies between materials. Instead, topological circular dichroism relies on linear dispersion. To test our model analysis, we perform first-principles calculations of the circular dichroism for CoSi, a chiral threefold semimetal with good linear dispersion per spin degree of freedom [19; 20; 21; 22]. The result agrees well with the model analysis, showing good quantization for photon energies below about 0.2 eV.
We note that the simultaneous presence of the topological circular photogalvanic effect and topological circular dichroism implies a topological mechanism for the photogalvanic effect under non-helical light. We propose a chiral photogalvanic device employing this effect.
_Spinless \(k\cdot p\) model.--_ We first consider the model of a spinless isotropic chiral pseudospin-\(j\) fermion in three dimensions [14; 20].
Figure 1: Topological circular dichroism by a chiral multifold semimetal hosting a pseudospin-\(j\) fermion near the Fermi level. \(I_{L/R}\)s are the transmitted intensity for the left (\(L\)) and right (\(R\)) handed light. \(N_{1/2}=0\), \(N_{1}=1\), and \(N_{3/2}=3\).
\[H_{0}(\mathbf{k})=-\mu+\chi\hbar v\,\mathbf{k}\cdot\mathbf{J}, \tag{1}\]
where \(\mathbf{k}\) is the wave vector, \(\mathbf{J}\) is the pseudospin-\(j\) operator. The sign \(\chi=\pm 1\) determines the chirality. The energy eigenvalues are
\[E_{n}(\mathbf{k})=-\mu+\hbar vkh_{n}, \tag{2}\]
where \(h_{n}\in\{-j,-j+1,\ldots,j\}\) is the helicity quantum number [Fig. 2]. The crossing point at \(\mathbf{k}=0\) has \((2j+1)\)-fold degeneracy. The band with helicity \(h\) carries the Chern number \(c_{h}=-2\chi h\) on a closed surface that encloses the node (i.e., the magnetic monopole charge in momentum space defined by the Berry curvature), which serves as a topological charge of the spin-\(j\) fermion. We have a Weyl fermion for \(j=1/2\) and a chiral multifold fermion for higher \(j\). In this model, optical transitions occur only between adjacent energy levels because of an optical selection rule imposed by isotropy [19; 23]: for \(m\neq n\), the transition dipole moment \(\langle\psi_{m\mathbf{k}}|e\dot{\mathbf{r}}|\psi_{n\mathbf{k}}\rangle\propto \langle u_{m\mathbf{k}}|\mathbf{J}|u_{n\mathbf{k}}\rangle=0\) if \(h_{m}\neq h_{n}\pm 1\).
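Both the helicity spectrum in Eq. (2) and the \(\Delta h=\pm 1\) selection rule can be verified directly by diagonalizing \(\mathbf{k}\cdot\mathbf{J}\). The short Python sketch below uses the standard ladder-operator construction of the spin-\(j\) matrices; the units \(\hbar=v=1\), the choice \(j=3/2\), and the sample wave vector are illustrative assumptions of ours, not parameters from the text.

```python
import numpy as np

def spin_matrices(j):
    """(2j+1)-dimensional spin-j matrices (hbar = 1), basis ordered m = j, ..., -j."""
    m = np.arange(j, -j - 1, -1)
    jp = np.zeros((len(m), len(m)), dtype=complex)     # raising operator J+
    for a in range(len(m) - 1):                        # <m+1| J+ |m> amplitudes
        jp[a, a + 1] = np.sqrt(j * (j + 1) - m[a + 1] * (m[a + 1] + 1))
    jm = jp.conj().T
    return (jp + jm) / 2, (jp - jm) / 2j, np.diag(m).astype(complex)

jval, mu, chi = 1.5, 0.0, +1                           # illustrative choices
Jx, Jy, Jz = spin_matrices(jval)
k = np.array([0.3, -0.4, 1.2])                         # arbitrary sample wave vector
H = -mu * np.eye(int(2 * jval + 1)) + chi * (k[0] * Jx + k[1] * Jy + k[2] * Jz)
E, U = np.linalg.eigh(H)
print(E / np.linalg.norm(k))                           # helicities -3/2, -1/2, 1/2, 3/2

# The velocity operator is proportional to J; in the helicity eigenbasis its
# matrix elements vanish unless h_m = h_n or h_m = h_n +/- 1 (tridiagonal form).
for J in (Jx, Jy, Jz):
    M = np.abs(U.conj().T @ J @ U)
    assert np.allclose(np.triu(M, 2), 0) and np.allclose(np.tril(M, -2), 0)
print("selection rule |Delta h| <= 1 verified")
```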
Our model has time reversal symmetry \(TH(\mathbf{k})T^{-1}=H(-\mathbf{k})\) under time reversal \(T\) that flips the pseudospin. Therefore, the anomalous Hall effect is forbidden. However, natural optical activity can arise from broken inversion symmetry.
_Topological circular dichroism from natural optical activity.--_ In crystalline solids, natural optical activity is described by the part of the optical conductivity that is linear in photon momentum \(\mathbf{q}\)[4]. Let us consider the expansion \(\sigma_{ij}(\omega,\mathbf{q})=\sigma_{ij}(\omega)+\sigma_{ijk}(\omega)q_{k}+ O(q^{2})\). In our model, the refractive indices for light with left (\(L\)) and right (\(R\)) helicity are
\[n_{L/R}=\sqrt{1+\chi_{xx}+(\mu_{0}c\sigma_{xyz}/2)^{2}}\pm\mu_{0}c\sigma_{xyz} /2, \tag{3}\]
where \(\chi_{ij}=\sigma_{ij}(-i\epsilon_{0}\omega)^{-1}\) is the electric susceptibility, and the light helicity is defined by the sign of \(\mathbf{q}\cdot i\mathbf{E}^{*}\times\mathbf{E}\). For \(\mathbf{q}=|\mathbf{q}|(0,0,1)\), the \(L\) and \(R\) polarization vectors are \(\tilde{L}=(1,-i,0)/\sqrt{2}\) and \(\tilde{R}=(1,i,0)/\sqrt{2}\), respectively. Because of the isotropy in our model, \(\chi_{xx}\) and \(\sigma_{xyz}\) are the only non-vanishing tensor components. The real and imaginary parts of the circular birefringence \(n_{L}-n_{R}=\mu_{0}c\sigma_{xyz}\) are responsible for optical rotation and circular dichroism, respectively.
Natural optical activity has two contributions from the Fermi sea and the Fermi surface, respectively [4; 5; 24]. The formula for the Fermi sea part is [4]
\[\sigma_{ijk}^{0}= \frac{e^{2}\omega}{\hbar}\sum_{n,m}\int_{\mathbf{k}}f_{nm}\bigg{[} \frac{r_{nm}^{i}B_{mn}^{jk}-r_{nm}^{j}B_{mn}^{ik}}{\omega_{mn}^{2}-\omega^{2}}\] \[-\frac{(3\omega_{mn}^{2}-\omega^{2})r_{nm}^{i}r_{mn}^{j}(v_{mm}^{k}+v_{nn}^{k})}{2(\omega_{mn}^{2}-\omega^{2})^{2}}\bigg{]}, \tag{4}\]
where \(\int_{\mathbf{k}}=\int_{\mathrm{BZ}}d^{3}k/(2\pi)^{3}\), \(f_{nm}=f_{n}-f_{m}\) and \(\hbar\omega_{mn}=\hbar\omega_{m}-\hbar\omega_{n}\) are the differences of the Fermi-Dirac distributions and energy eigenvalues, respectively, \(v_{mn}^{i}=\langle\psi_{m\mathbf{k}}|\hat{v}^{i}|\psi_{n\mathbf{k}}\rangle\) and \(r_{nm}^{j}=-iv_{nm}^{j}/\omega_{nm}\) are velocity and position matrix elements, \(B_{mn}^{ik}=B_{mn}^{\mathrm{orb},ik}+B_{mn}^{\mathrm{spin},ik}\), \(B_{mn}^{\mathrm{orb},ik}=-2^{-1}(\sum_{p\neq m}r_{mp}^{k}v_{pn}^{i}+\sum_{p\neq n}v_{mp}^{i}r_{pn}^{k})\), \(B_{mn}^{\mathrm{spin},ik}=e^{-1}\epsilon_{ikl}\,\langle\psi_{m\mathbf{k}}|\hat{M}^{\mathrm{spin},l}|\psi_{n\mathbf{k}}\rangle\), and \(\hat{M}^{\mathrm{spin}}\) is the spin magnetic moment operator. The spin magnetic moment does not contribute to the response in our spinless model, but we discuss its effect in spin-orbit coupled systems below.
\[\sigma_{ijk}^{G} =\frac{e^{2}}{\hbar}\sum_{n}\int_{\mathbf{k}}\bigg{[}\frac{1}{ \omega}(\partial_{i}f_{n}B_{nn}^{jk}-\partial_{j}f_{n}B_{nn}^{ik})\] \[-\partial_{k}f_{n}\sum_{m}\mathrm{Im}(r_{nm}^{i}r_{mn}^{j})\frac {\omega_{mn}\omega}{\omega_{mn}^{2}-\omega^{2}}\bigg{]}. \tag{5}\]
The effect of dissipation is included by the substitution \(\omega\rightarrow\omega+i\tau^{-1}\).
For the model in Eq. (1), we obtain quantized values within a given frequency range:
\[\sigma_{ijk}^{0}=i\epsilon_{ijk}s_{\mu}\chi\frac{e^{2}}{3h}N_{j}(\omega), \tag{6}\]
in the clean limit \(\omega\tau\rightarrow\infty\), where \(s_{\mu}=\mu/|\mu|\), \(N_{1/2}(\omega)=0\), \(N_{1}(\omega)=\Theta(\hbar\omega-|\mu|)\), and \(N_{3/2}(\omega)=3[\Theta(\hbar\omega-3|\mu|/2)-\Theta(\hbar\omega-2|\mu|)]\) [Fig. 3(a)]. The Chern-number origin of the quantization is manifest in the expression of the nonvanishing value \(\sigma_{xyz}^{0}=-is_{\mu}c_{2j+1}e^{2}(6h)^{-1}\left[(v_{2j+1}+v_{2j})/(v_{2j+1}-v_{2j})\right]\), where \(c_{2j+1}=-(2\pi)^{-1}\oint d\mathbf{S}\cdot\mathbf{F}_{2j+1}=-2\chi j\) is the outward Berry flux of the topmost band, i.e., band \(2j+1\) [26].
The isotropic linearly dispersing Weyl fermion does not show circular dichroism [27]. This is because of the constraint from \(CP\) symmetry that imposes \(B_{mn}^{ij}=0\) and \(v_{mm}^{i}+v_{nn}^{i}=0\) between \(CP\)-related states \(m\) and \(n\), where \(C\) is particle-hole conjugation, and \(P\) is spatial inversion [26]. The nontrivial circular dichroism of multifold fermions is due
Figure 2: Band structure of pseudospin-\(j\) fermions described by Eq. (1). (a) \(j=1/2\). (b) \(j=1\). (c) \(j=3/2\). The spectrum has the same shape along \(k_{i=x,y,z}\) because of isotropy. Arrows represent possible optical transition channels allowed by the selection rule due to isotropy [19; 23]. Optical transitions with red x marks are forbidden by the Pauli blocking with the chemical potential represented by the blue dashed line. \(CP\) symmetry further constrains that transitions between \(h=\pm 1/2\) bands (arrows with red triangles) do not contribute to natural optical activity.
to \(CP\)-asymmetric optical excitations, which generate a net change of the orbital magnetic moment. This favors the absorption of one particular circular polarization of light over the other.
Let us consider shining linearly polarized or unpolarized light. Then, the incident intensity is the same for \(L/R\) helicity on average. The transmitted light intensity after propagation of the distance \(d\) within the material is \(I_{L/R}=2^{-1}I_{0}|\exp(2\pi in_{L/R}d/\lambda)|^{2}=2^{-1}I_{0}\exp(-4\pi d \mathrm{Im}(n_{L/R})/\lambda)\) for \(L/R\) helicity, where \(I_{0}\) is the incident light intensity. The transmissive circular dichroism is defined by
\[\mathrm{CD}\equiv\frac{I_{L}-I_{R}}{I_{L}+I_{R}}=\tanh\left(\chi s_{\mu}\frac {4\pi\alpha N_{j}}{3}\frac{d}{\lambda}\right) \tag{7}\]
where \(\alpha=\mu_{0}ce^{2}/2h\) is the fine structure constant.
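For a sense of scale, Eq. (7) can be evaluated directly. In the following snippet, the film thickness and wavelength are illustrative values of our own choosing.

```python
import numpy as np

# Evaluate Eq. (7): CD = tanh(chi * s_mu * (4*pi*alpha*N_j/3) * d/lambda).
# The 100 nm thickness and 10 um wavelength are illustrative values only.
alpha = 7.2973525693e-3                    # fine-structure constant
d_over_lambda = 0.1 / 10.0                 # d = 100 nm, lambda = 10 um
for N_j, label in [(0, "j = 1/2"), (1, "j = 1"), (3, "j = 3/2")]:
    cd = np.tanh(4 * np.pi * alpha * N_j / 3 * d_over_lambda)
    print(f"{label}: CD = {cd:.2e}")       # 0, ~3.1e-4, ~9.2e-4
```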
In the clean limit, the Fermi surface part does not contribute to the circular dichroism because it is real-valued: \(\sigma^{G}_{ijk}=\epsilon_{ijk}\chi(e^{2}/h)(3\pi)^{-1}(\mu/\hbar\omega)[(\mu^{2}-3j^{2}(\hbar\omega)^{2}/2)/(\mu^{2}-j^{2}(\hbar\omega)^{2})+f_{j}]\), where \(f_{1/2}=f_{1}=0\) and \(f_{3/2}=7[\mu^{2}-3(\hbar\omega)^{2}/8]/[\mu^{2}-(\hbar\omega)^{2}/4]\). However, it does contribute to the circular dichroism when there is finite relaxation, with a contribution proportional to \(\tau^{-1}\). Figure 3(b) shows the case with \(\hbar\tau^{-1}=0.01\mu\).
_Effect of quadratic dispersion and spin-orbit coupling.--_ To see the effect of \(O(k^{2})\) terms, we consider \(H=H_{0}+H_{1}\) of a threefold fermion with an additional quadratic Hamiltonian allowed by octahedral symmetry:
\[H_{1}=\begin{pmatrix}Xk^{2}-2Ck_{z}^{2}&Bk_{y}k_{z}&Bk_{z}k_{x}\\ Bk_{y}k_{z}&Xk^{2}-2Ck_{x}^{2}&Bk_{x}k_{y}\\ Bk_{z}k_{x}&Bk_{x}k_{y}&Xk^{2}-2Ck_{y}^{2}\end{pmatrix}, \tag{8}\]
where \(X=A+2C/3\)[13].
Figure 3(c) shows the band structure with quadratic terms included. We take \(\mu=0.1\) eV and the model parameters for CoSi derived in Ref. [13], which are \(A=1.07a^{2}\) eV, \(B=-1.72a^{2}\) eV, \(C=3.26a^{2}\) eV, and \(\hbar v=1.79a\) eV, where \(a\) has the dimension of length (\(a=a_{0}/2\pi\), where \(a_{0}=4.45\) A is the lattice constant of CoSi).
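The effect of the quadratic terms on the band structure can be reproduced by diagonalizing \(H_{0}+H_{1}\) numerically. The Python sketch below uses the parameters quoted above (in units where \(a=1\), energies in eV) and assumes the adjoint-representation spin-1 generators \((J_{i})_{ab}=-i\epsilon_{iab}\), which is consistent with the displayed matrix form of \(H_{1}\); the \(k\)-path is our choice, and \(\mu\) is omitted since it merely shifts all bands.

```python
import numpy as np

# Threefold-fermion bands of H = H0 + H1, Eqs. (1) and (8), with the CoSi-like
# parameters quoted in the text (a = 1; energies in eV). The generator choice
# (J_i)_{ab} = -1j * eps_{iab} is an assumption matching the form of H1.
A, B, C, hv, chi = 1.07, -1.72, 3.26, 1.79, +1
X = A + 2 * C / 3
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[i, k, j] = 1.0, -1.0
J = -1j * eps                                        # J[i] is a 3x3 generator

def hamiltonian(kvec):
    kx, ky, kz = kvec
    H0 = chi * hv * sum(kvec[i] * J[i] for i in range(3))
    k2 = kvec @ kvec
    H1 = np.array([[X*k2 - 2*C*kz**2, B*ky*kz,          B*kz*kx],
                   [B*ky*kz,          X*k2 - 2*C*kx**2, B*kx*ky],
                   [B*kz*kx,          B*kx*ky,          X*k2 - 2*C*ky**2]],
                  dtype=complex)
    return H0 + H1

for kx in np.linspace(0.0, 0.2, 5):                  # short cut along the kx axis
    E = np.linalg.eigvalsh(hamiltonian(np.array([kx, 0.0, 0.0])))
    print(f"kx = {kx:.2f}: E =", np.round(E, 4))     # the k^2 terms bend the flat band
```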
When the quadratic terms are included, the value of \(\mathrm{Im}(\sigma^{0}_{xyz})\) deviates from the quantized plateau [Fig. 3(d)]. The deviation originates from the momentum dependence of the velocities of bands [26], and the effect of selection-rule-breaking transitions is negligible (less than 1 %). Therefore, an isotropic quadratic dispersion that preserves the selection rule can lead to a comparable deviation from the quantization [orange curve in Fig. 3(d)].
The spin-orbit coupling up to linear order in \(k\) is given by
\[H_{\mathrm{SOC}}=\mathbf{s}\cdot\left(w\mathbf{k}+\Delta\mathbf{J}\right), \tag{9}\]
Figure 4: Circular-dichroic photogalvanic effect. (a) Proposed device geometry. The first layer, which is left-chiral, absorbs left circularly polarized light more, generating a net DC photocurrent through a circular photogalvanic effect. The remaining circular polarized light generates the DC photocurrent at the second layer, which is right chiral. (b) Thickness dependence of absorption for a triple-point fermion semimetal. The blue and orange curves show the absorptive circular dichroism and total absorption, respectively. The absolute value of the ACD peaks at \(d=d_{*}\). (c) Absorptive circular dichroism of the heterochiral bilayer. (d) Total absorption of the heterochiral bilayer. We use our model in Eq. (1) with \(j=1\), and \(v=0.01c\), where \(c\) is the speed of light. \(\hbar\omega>|\mu|\) is assumed in (b-d). We take \(\chi_{t}>0\), \(\chi_{b}<0\), and \(\mu_{t}\mu_{b}>0\), and \(\omega\tau\rightarrow\infty\).
Figure 3: The imaginary part of \(\sigma_{xyz}\) of a pseudospin-\(j\) fermion. (a,b) Spinless linearly dispersing fermion. (a) Fermi sea contribution \(\sigma^{0}_{xyz}\) and (b) Fermi surface contribution \(\sigma^{G}_{xyz}\) of the linearly dispersing model in Eq. (1) without quadratic terms. While we take \(\omega\tau\rightarrow\infty\) in (a), we introduce a finite relaxation time in (b). We take \(\mu>0\) for all plots. (c,d) Band structure and \(\sigma^{0}_{xyz}\) with quadratic terms in Eq. (8). \(\mu=0.01\) eV, \(A=1.07a^{2}\) eV, \(B=-1.72a^{2}\) eV, \(C=3.26a^{2}\) eV, and \(\hbar v=1.79a\) eV, where \(a=a_{\mathrm{CoSi}}/(2\pi)\), \(a_{\mathrm{CoSi}}=4.45\) Å is the lattice constant of CoSi, \(\hbar\tau^{-1}=1\) meV. The transparent plane in (c) shows the Fermi level. The orange curve in (d) shows the isotropic case (\(B=C=0\) with other parameters kept unchanged) for comparison. (e,f) Band structure and \(\sigma^{0}_{xyz}\) with spin-orbit coupling in Eq. (9). \(w=30a\) meV, \(\Delta=30\) meV, and \(\hbar\tau^{-1}=10\) meV. In (f), the spin part is due to spin magnetic moment, and the orbital part refers to the other contributions.
where \(s_{i}\) (\(i=x,y,z\)) are the spin Pauli matrices. For a threefold (per spin) fermion, this splits the sixfold (including spin) degeneracy into fourfold and twofold degenerate points separated by \(\delta E_{\rm SOC}=3\Delta\) [Fig. 3(e)]. Figure 3(f) shows that the circular dichroism approaches the quantized value as the photon energy becomes larger than \(\delta E_{\rm SOC}\). The effect of the spin magnetic moment is negligible in the quantized regime.
_Circular-dichroic photogalvanics.--_ The imbalance in absorption for different helicities implies that it is possible to generate the circular photogalvanic effect without net helicity of incident light. While the circular photogalvanic effect leads to much larger photocurrent than the linear photogalvanic effect, it cannot be directly used for photodetection or energy conversion for linearly polarized or unpolarized light, because the photocurrents from two oppositely helical polarizations cancel each other. However, since chiral materials absorb light with one helicity more than the other, circular dichroism allows the circular photogalvanic effect with incident light having compensated helicity. Although the circular dichroism is a small effect of about 0.1 %, the resulting photogalvanic effect can be non-negligible because the circular photogalvanic effect is much larger than the linear photogalvanic effect when the relaxation time is long, by the factor of \(\omega\tau\).
We consider the device with two heterochiral materials in Fig. 4(a). A key point here is that the chiral materials should not be too thick. Let us first consider a single chiral crystal whose low-energy effective model is Eq. (1) with \(j=1\). Figure 4(b) shows the absorptive dichroism (ACD) for \(\chi>0\) (left-chiral) and \(\mu<0\), where we define \(\rm{ACD}\equiv(A_{L}-A_{R})/I_{0}\), and \(A_{L/R}=I_{0}-I_{L/R}^{t}\). \(\rm{ACD}>0\) in our case. Here, \(I_{0}\) is the incident light intensity minus the light intensity reflected at the top. We neglect the reflection at the bottom for the moment. The \(\rm{ACD}\) is maximal when the thickness of the sample is \(d=d_{*}=(\lambda/8\pi n_{1}^{i})\log(n_{L}^{i}/n_{R}^{i})=\lambda/(4\pi n_{0}^{i})+O[(n_{1}^{i})^{2}]\), where \(n_{L/R}=n_{0}\pm n_{1}\) and \(n_{a}^{i}={\rm Im}(n_{a})\), and it approaches zero in thick bulk samples because they perfectly absorb both helical lights. Using \(\chi_{xx}=i\alpha(3c/2v)\Theta(\hbar\omega-|\mu|)\), we have \(d_{*}=(2\pi)^{-1}(v/3c\alpha)^{1/2}\lambda\), and \(|\rm{ACD}|_{d=d_{*}}=e_{0}^{-1}(n_{1}^{i}/n_{0}^{i})\) to leading order in \(n_{1}\), where \(e_{0}=2.718\dots\) is Euler's number.
At the optical thickness \(d_{*}(\lambda)\), only a fraction \(1-e_{0}^{-1}+O(n_{1}^{2})\approx 0.632\) of \(I_{0}\) is absorbed. We can thus improve the device efficiency by adding another chiral material to exploit the transmitted light. Since the intensity of the right-helical (or left-helical if \(\rm{ACD}<0\)) light is stronger in transmission, we can put a second chiral material that absorbs right-helical light preferentially. For example, we take a chiral crystal described by Eq. (1) with \(j=1\), \(\chi<0\) (right-chiral), and \(\mu<0\). To minimize reflections between the top (\(t\)) and bottom (\(b\)) chiral crystals, we take \(v_{t}=v_{b}\) such that the refractive indices are almost identical. Figure 4(c) shows the dependence of the absolute value of \(\rm{ACD}\) on \(d_{t}\) and \(d_{b}\). It peaks at \(d_{t}=0.1364\) and \(d_{b}=0.2985\) in our model, where 93 % of \(I_{0}\) is absorbed.
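The single-layer optimum can be checked numerically from the transmission formula. In the sketch below, \(\chi_{xx}=i\alpha(3c/2v)\) and \(\mu_{0}c\sigma_{xyz}/2=i\alpha/3\) follow the clean-limit expressions quoted above for \(j=1\), \(N_{1}=1\), and \(v=0.01c\); we fix the overall sign of \(\sigma_{xyz}\) (it only exchanges the roles of \(L\) and \(R\)), and the scan grid is our own choice.

```python
import numpy as np

# Single-layer ACD(d) for the j = 1 model in the clean limit, with v = 0.01c.
# chi_xx and mu_0*c*sigma_xyz/2 follow the expressions quoted above; we fix the
# overall sign of sigma_xyz (it only exchanges the roles of L and R light).
alpha = 7.2973525693e-3
chi_xx = 1j * alpha * 3 / (2 * 0.01)             # i * alpha * (3c / 2v)
g = 1j * alpha / 3                               # mu_0 * c * sigma_xyz / 2
n0 = np.sqrt(1 + chi_xx + g**2)
nL, nR = n0 + g, n0 - g                          # Eq. (3)

d = np.linspace(1e-4, 1.0, 200001)               # thickness in units of lambda
acd = np.exp(-4 * np.pi * nR.imag * d) - np.exp(-4 * np.pi * nL.imag * d)
i_star = np.argmax(np.abs(acd))                  # A_L - A_R = I_R^t - I_L^t
n0i, n1i = n0.imag, g.imag
d_star = np.log(nL.imag / nR.imag) / (8 * np.pi * n1i)   # closed form from the text
print(f"scan   : d* = {d[i_star]:.4f} lambda, |ACD| = {abs(acd[i_star]):.3e}")
print(f"formula: d* = {d_star:.4f} lambda, |ACD| = {n1i / (np.e * n0i):.3e}")
```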
While the conditions for maximal \(\rm{ACD}\) and maximal circular photogalvanic current coincide in the above example, they are different in general. The circular photogalvanic current is generated because the group velocity of electronic quasiparticles changes during the optical excitation. Therefore, the optimization of the circular-dichroism-induced photogalvanic effect depends on the average velocity change as well as the circular dichroism. This complication goes away when we take \(v_{t}=v_{b}\) above for a simple demonstration.
_Chiral threefold semimetal CoSi.--_ We now turn the discussion toward material-specific DFT-based calculations to test our model analysis. We focus on the transition-metal monosilicide CoSi, which crystallizes in the B20 cubic structure [28; 29]. The crystal structure is chiral and belongs to the \(P2_{1}3\) space group (SG198); it lacks inversion, mirror, and roto-inversion symmetries. The structural chirality and the octahedral symmetries lead to various types of multifold fermions in these systems [28; 29; 30; 31]. Specifically, in the absence of spin-orbit interaction, CoSi hosts a threefold degenerate nodal point at the zone center and a double Weyl fermion state at the corner of the cubic BZ [Fig. 5(a)].
We compute \(\mathrm{Im}(\sigma^{0}_{xyz})\) for CoSi using a Wannier-function-based tight-binding model [see Supplemental Material for details]. The chemical potential is set to 10 meV above the threefold degenerate crossing at the \(\Gamma\)-point (indicated by the green dashed line). As shown in Fig. 5(c), the calculated \(\mathrm{Im}(\sigma^{0}_{xyz})\) results strongly support our low-energy model analysis. Specifically, we find that in CoSi, \(\mathrm{Im}(\sigma^{0}_{xyz})\) starts from a finite value at low photon energy and quickly approaches the quantized value \(e^{2}/3h\), developing a plateau-like region for \(50\lesssim\hbar\omega\lesssim 200\) meV. In this region, the optical transitions involving the threefold fermion around the \(\Gamma\)
Figure 5: Ab-initio calculations for threefold semimetal CoSi based on density functional theory. (a,b) Band structure. (a) without and (b) with spin-orbit coupling. The insets show the band structure near the \(\Gamma\) point. The horizontal green dashed line denotes the chemical potential used for computing \(\sigma^{0}_{xyz}\) and \(\sigma^{G}_{xyz}\). (c,d) The imaginary parts of Fermi sea (\(\sigma^{0}_{xyz}\)) and Fermi-surface (\(\sigma^{G}_{xyz}\)) contributions. (c) without and (d) with spin-orbit coupling.
point play an important role. The small deviation from the quantized value is attributed to the presence of quadratic band dispersion and partial occupancy of the flat band, which supports our model analysis. For \(\hbar\omega\gtrsim 200\) meV, the optical transitions involving the states around the R point become important; consequently, \(\mathrm{Im}(\sigma^{0}_{xyz})\) changes sign and strongly deviates from the quantized value. We also compute the Fermi surface contribution \(\mathrm{Im}(\sigma^{G}_{xyz})\) for CoSi. In general, \(\mathrm{Im}(\sigma^{G}_{xyz})\) is smaller than \(\mathrm{Im}(\sigma^{0}_{xyz})\), and its value depends strongly on the relaxation time; in the clean limit \(\omega\tau\gg 1\), this Fermi surface contribution should be negligible in the quantized region.
We further consider the effect of spin-orbit coupling in Fig. 5(b,d). Consistent with the model analysis, the quantization of \(\mathrm{Im}(\sigma^{0}_{xyz})\) still holds after including the effect of spin-orbit coupling, and the spin magnetic moment contributes negligibly compared to the orbital part in the plateau region.
We also explored other material candidates in this family, including RhSi and PtAl (see Supplemental Material [32; 33]). Our analysis suggests that, in the absence of spin-orbit coupling, the quantization of \(\mathrm{Im}(\sigma^{0}_{xyz})\) holds in both RhSi and PtAl. However, due to the presence of large spin-orbit coupling in these compounds, \(\mathrm{Im}(\sigma^{0}_{xyz})\) deviates from the quantized value. Interestingly, this deviation is still approximately within 10 % for RhSi and 20 % for PtAl, despite the spin-orbit coupling being significantly stronger than in CoSi.
_Discussion.--_ Our analysis establishes that topological circular dichroism is a unique feature of multifold fermions in the \(k\cdot p\) regime. Thin films will be ideal for observing this effect because the transmitted light intensity is exponentially suppressed in bulk samples. Topological circular dichroism is similar to the quantized absorption in graphene [34] because it requires linear dispersion. The quantization is expected to be robust as long as the photon energy is much larger than the thermal energy. However, disorder and interaction effects can give deviations from quantized optical responses [35; 36], in contrast to the quantum Hall effect.
Our photogalvanic mechanism is not restricted to topological mechanisms and is generally possible whenever transmissive circular dichroism occurs, in which case the circular photogalvanic effect also occurs, because both effects are constrained by symmetry in the same way. When combined with conventional photovoltaic mechanisms, our new mechanism will help maximize the efficiency of solar cells.
We appreciate Ashvin Vishwanath, Arun Bansil, Su-Yang Xu, and Yuan Ping for helpful discussions. We thank Ipsita Mandal for bringing their work [17; 18] to our attention while our manuscript was being finalized. J.A. was supported by the Center for Advancement of Topological Semimetals, an Energy Frontier Research Center funded by the U.S. Department of Energy Office of Science, Office of Basic Energy Sciences, through the Ames Laboratory under contract No. DE-AC02-07CH11358. B.G. was supported by the Air Force Office of Scientific Research under Award No. FA9550-20-1-0322 and benefited from the computational resources of Northeastern University's Advanced Scientific Computation Center (ASCC) and the Discovery Cluster.
|
2302.07469 | Robust Safety under Stochastic Uncertainty with Discrete-Time Control
Barrier Functions | Robots deployed in unstructured, real-world environments operate under
considerable uncertainty due to imperfect state estimates, model error, and
disturbances. Given this real-world context, the goal of this paper is to
develop controllers that are provably safe under uncertainties. To this end, we
leverage Control Barrier Functions (CBFs) which guarantee that a robot remains
in a ``safe set'' during its operation -- yet CBFs (and their associated
guarantees) are traditionally studied in the context of continuous-time,
deterministic systems with bounded uncertainties. In this work, we study the
safety properties of discrete-time CBFs (DTCBFs) for systems with discrete-time
dynamics and unbounded stochastic disturbances. Using tools from martingale
theory, we develop probabilistic bounds for the safety (over a finite time
horizon) of systems whose dynamics satisfy the discrete-time barrier function
condition in expectation, and analyze the effect of Jensen's inequality on
DTCBF-based controllers. Finally, we present several examples of our method
synthesizing safe control inputs for systems subject to significant process
noise, including an inverted pendulum, a double integrator, and a quadruped
locomoting on a narrow path. | Ryan K. Cosner, Preston Culbertson, Andrew J. Taylor, Aaron D. Ames | 2023-02-15T04:47:03Z | http://arxiv.org/abs/2302.07469v2 | # Robust Safety under Stochastic Uncertainty with Discrete-Time Control Barrier Functions
###### Abstract
Robots deployed in unstructured, real-world environments operate under considerable uncertainty due to imperfect state estimates, model error, and disturbances. Given this real-world context, the goal of this paper is to develop controllers that are provably safe under uncertainties. To this end, we leverage Control Barrier Functions (CBFs) which guarantee that a robot remains in a "safe set" during its operation--yet CBFs (and their associated guarantees) are traditionally studied in the context of continuous-time, deterministic systems with bounded uncertainties. In this work, we study the safety properties of discrete-time CBFs (DTCBFs) for systems with discrete-time dynamics and unbounded stochastic disturbances. Using tools from martingale theory, we develop probabilistic bounds for the safety (over a finite time horizon) of systems whose dynamics satisfy the discrete-time barrier function condition in expectation, and analyze the effect of Jensen's inequality on DTCBF-based controllers. Finally, we present several examples of our method synthesizing safe control inputs for systems subject to significant process noise, including an inverted pendulum, a double integrator, and a quadruped locomoting on a narrow path.
## I Introduction
Safety is critical for a multitude of modern robotic systems from autonomous vehicles, to medical and assistive robots, to aerospace systems. When deployed in the real world, these systems face sources of uncertainty such as imperfect perception, approximate models of the world and the system, and unexpected disturbances. In order to achieve the high degrees of safety necessary for these robots to be deployed at scale, it is essential that controllers can not only guarantee safe behavior, but also provide robustness to these uncertainties.
In the field of control theory, safety is often defined as the forward invariance of a "safe set" [6]. In this view, a closed-loop system is considered safe if all trajectories starting inside the safe set remain in this set for all time. Several tools exist for generating controllers which can guarantee this forward-invariance property, including Control Barrier Functions (CBFs) [7], reachability-based controllers [9], and state-constrained Model Predictive Control (MPC) approaches [19]. Considerable advancements have been made in guaranteeing safety or stability in the presence of bounded uncertainties [37, 11, 8, 29, 20, 5]. Yet less attention has been paid to the case of unbounded uncertainties, where the aforementioned methods generally do not apply.
Obtaining robust safety in the case of unbounded disturbances is particularly important when considering systems subject to stochastic disturbances, since these disturbances are often modeled as continuous random variables with unbounded support (e.g., zero-mean, additive Gaussian noise); for such systems, it is impossible to give an absolute bound on the disturbance magnitude. Existing methods for unbounded, random disturbances fall into two categories. The first is to impose step-wise chance constraints on a given safety criterion (e.g., a state constraint in MPC [19] or CBF-based controllers [4]), which in turn provide one-step safety guarantees. The other class of approaches [21, 26, 27, 17, 30] uses Lyapunov or barrier function techniques to provide bounds on the safety probabilities for trajectories over a fixed time horizon; existing approaches, however, often assume the presence of a stabilizing controller, or model the system in continuous time (i.e., assume the controller has, in effect, infinite bandwidth).
In order to best represent the uncertainty that might appear from sources such as discrete-time perception errors or sampled-data modeling errors, we focus our work on generating probabilistic safety bounds for discrete-time (DT) stochastic systems. While MPC state constraints are generally enforced in discrete time, CBFs, normally applied in continuous time, have discrete-time counterparts (DTCBFs), first introduced in [1], which have gained popularity due to
Fig. 1: Safety of a simulated quadrupedal robot locomoting on a narrow path for a variety of controllers. **(Top Left)** The safe region that the quadruped is allowed to traverse. **(Bottom Left)** A system diagram depicting the states of the quadruped \(\left[x,y,\theta\right]^{\top}\). **(Top Right)** 50 trajectories for 3 controllers: one without any knowledge of safety (\(\mathbf{K_{\text{nom}}}\)), one with a standard safety filter (DTCBF-OP), and finally our method which accounts for stochasticity (JED). **(Bottom Right)** Plots of \(h(\mathbf{x})\), a scalar value representing safety. The system is safe (i.e., in the green safe region) if \(h(\mathbf{x})\geq 0\).
their compatibility with planners based on MPC [36, 23, 35], reinforcement learning [15], and Markov decision processes [3]. In a stochastic setting, martingale-based techniques have been leveraged to establish safety guarantees [27, 30], yet these works have limited utility when analyzing the safety of discrete-time CBF-based controllers.
In particular, the "c-martingale" condition used in [30] does not admit a multiplicative scaling of the barrier function, and therefore, at best, provides a weak worst-case safety bound for CBF-based controllers that grows linearly in time. The work of [27] (which builds upon [21], as does this paper) is largely focused on offline control synthesis to achieve a desired safety bound (as opposed to the online, optimization-based control studied in this work). Also, the method proposed in [27] can only generate discrete-time controllers for affine barriers, which severely limits its applicability to general barrier functions. Both papers also depend on sum-of-squares (SoS) programming [25] for control synthesis/system verification, thereby requiring an offline step that scales poorly with the state dimension. The goal of this paper is to extend the results of [21] in a different direction, and thereby enable the synthesis of online controllers that can be realized on robotic systems.
The main contribution of this paper is to apply martingale-based probability bounds in the context of discrete-time CBFs to guarantee robust safety under stochastic uncertainty. To this end, we leverage the bounds originally presented in the seminal work by Kushner [21]. Our first key contribution is the translation of these results from a Lyapunov setting to a CBF one. To this end, we present a new proof of the results in [21] which we believe to be more complete and intuitive and which relates to the existing results of Input-to-State Safety (ISSf) for systems with bounded uncertainties [20]. Furthermore, we present a method (based on Jensen's inequality) to account for the effects of process noise on a DTCBF-based controller. Finally, we apply this method to a variety of systems in simulation to analyze the tightness of our bound and demonstrate its utility. These experiments range from simple examples that illustrate the core mathematics--a single and double integrator and a pendulum--to a high fidelity simulation of a quadrupedal robot locomoting along a narrow path with the uncertainty representing the gap between the simplified and full-order dynamics models.
## II Background
In this section we provide a review of safety for discrete-time nonlinear systems via control barrier functions (CBFs), and review tools from probability theory useful for studying systems with stochastic disturbances.
### _Safety of Discrete-time Systems_
Consider a discrete-time (DT) nonlinear system with dynamics given by:
\[\mathbf{x}_{k+1}=\mathbf{F}(\mathbf{x}_{k},\mathbf{u}_{k}),\quad\forall k\in \mathbb{N}, \tag{1}\]
with state \(\mathbf{x}_{k}\in\mathbb{R}^{n}\), input \(\mathbf{u}_{k}\in\mathbb{R}^{m}\), and continuous dynamics \(\mathbf{F}:\mathbb{R}^{n}\times\mathbb{R}^{m}\rightarrow\mathbb{R}^{n}\). A continuous state-feedback controller \(\mathbf{k}:\mathbb{R}^{n}\rightarrow\mathbb{R}^{m}\) yields the DT closed-loop system:
\[\mathbf{x}_{k+1}=\mathbf{F}(\mathbf{x}_{k},\mathbf{k}(\mathbf{x}_{k})),\quad \forall k\in\mathbb{N}. \tag{2}\]
We formalize the notion of safety for systems of this form using the concept of forward invariance:
**Definition 1** (Forward Invariance & Safety [11]).: _A set \(\mathcal{C}\subset\mathbb{R}^{n}\) is forward invariant for the system (2) if \(\mathbf{x}_{0}\in\mathcal{C}\) implies that \(\mathbf{x}_{k}\in\mathcal{C}\) for all \(k\in\mathbb{N}\). In this case, we call the system (2) safe with respect to the set \(\mathcal{C}\)._
Discrete-time barrier functions (DTBFs) are a tool for guaranteeing the safety of discrete-time systems. Consider a set \(\mathcal{C}\triangleq\{\mathbf{x}\in\mathbb{R}^{n}\mid h(\mathbf{x})\geq 0\}\) expressed as the \(0\)-superlevel set of a continuous function \(h:\mathbb{R}^{n}\rightarrow\mathbb{R}\). We refer to such a function \(h\) as a DTBF1 if it satisfies the following properties:
Footnote 1: The state constraint \(\mathbf{x}_{k}\in\mathcal{C}\), when expressed as \(h(\mathbf{x}_{k})\geq 0\), is the special case of a DTBF with \(\alpha=0\).
**Definition 2** (Discrete-Time Barrier Function (DTBF) [1]).: _Let \(\mathcal{C}\subset\mathbb{R}^{n}\) be the \(0\)-superlevel set of a continuous function \(h:\mathbb{R}^{n}\rightarrow\mathbb{R}\). The function \(h\) is a discrete-time barrier function (DTBF) for (2) on \(\mathcal{C}\) if there exists an \(\alpha\in[0,1]\) such that for all \(\mathbf{x}\in\mathbb{R}^{n}\), we have that:_
\[h(\mathbf{F}(\mathbf{x},\mathbf{k}(\mathbf{x})))\geq\alpha h(\mathbf{x}). \tag{3}\]
This inequality mimics that of discrete-time Lyapunov functions [12], and similarly regulates the evolution of \(h\) based on its previous value. DTBFs serve as a certificate of forward invariance as captured in the following theorem:
**Theorem 1** ([1]).: _Let \(\mathcal{C}\subset\mathbb{R}^{n}\) be the \(0\)-superlevel set of a continuous function \(h:\mathbb{R}^{n}\rightarrow\mathbb{R}\). If \(h\) is a DTBF for (2) on \(\mathcal{C}\), then the system (2) is safe with respect to the set \(\mathcal{C}\)._
Intuitively, the value of \(h(\mathbf{x}_{k})\) can only decay as fast as the geometric sequence \(\alpha^{k}h(\mathbf{x}_{0})\), which is lower-bounded by \(0\), thus ensuring the safety (i.e., forward invariance) of \(\mathcal{C}\).
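To make this geometric-decay intuition concrete, the following minimal Python sketch simulates a hypothetical scalar system \(x_{k+1}=x_{k}+0.3+u_{k}\) with \(h(x)=1-x^{2}\) (all constants here are our own illustrative choices, not from the paper) and numerically checks the lower bound \(h(\mathbf{x}_{k})\geq\alpha^{k}h(\mathbf{x}_{0})\):

```python
import numpy as np

# Minimal sketch (hypothetical scalar system): x_{k+1} = x_k + 0.3 + u_k,
# safe set C = {x : h(x) = 1 - x^2 >= 0}, DTBF condition (3) with alpha = 0.9.
alpha = 0.9
def h(x): return 1.0 - x**2

def k_cbf(x):
    # Choose the smallest-magnitude u with h(x + 0.3 + u) >= alpha * h(x):
    # clip the drifted state into the band |x_next| <= sqrt(1 - alpha*h(x)).
    target = alpha * h(x)              # required next-step barrier value
    bound = np.sqrt(1.0 - target)
    x_drift = x + 0.3
    return np.clip(x_drift, -bound, bound) - x_drift

x, hs = 0.0, []
for k in range(50):
    x = x + 0.3 + k_cbf(x)
    hs.append(h(x))

# Theorem 1's geometric lower bound h(x_k) >= alpha^k * h(x_0) holds:
assert all(hk >= alpha**(k + 1) * h(0.0) - 1e-9 for k, hk in enumerate(hs))
```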
Discrete-time control barrier functions (DTCBFs) provide a tool for constructively synthesizing controllers that yield closed-loop systems that possess a DTBF:
**Definition 3** (Discrete-Time Control Barrier Function (DTCBF) [1]).: _Let \(\mathcal{C}\subset\mathbb{R}^{n}\) be the \(0\)-superlevel set of a continuous function \(h:\mathbb{R}^{n}\rightarrow\mathbb{R}\). The function \(h\) is a discrete-time control barrier function (DTCBF) for (1) on \(\mathcal{C}\) if there exists an \(\alpha\in[0,1]\) such that for each \(\mathbf{x}\in\mathbb{R}^{n}\), there exists a \(\mathbf{u}\in\mathbb{R}^{m}\) such that:_
\[h(\mathbf{F}(\mathbf{x},\mathbf{u}))\geq\alpha h(\mathbf{x}). \tag{4}\]
Given a CBF \(h\) for (1) and a corresponding \(\alpha\in[0,1]\), we define the point-wise set of control values:
\[\mathscr{K}_{\mathrm{CBF}}(\mathbf{x})=\left\{\mathbf{u}\in\mathbb{R}^{m}\mid h (\mathbf{F}(\mathbf{x},\mathbf{u}))\geq\alpha h(\mathbf{x})\right\}. \tag{5}\]
This yields the following result:
**Theorem 2** ([2]).: _Let \(\mathcal{C}\subset\mathbb{R}^{n}\) be the \(0\)-superlevel set of a continuous function \(h:\mathbb{R}^{n}\rightarrow\mathbb{R}\). If \(h\) is a DTCBF for (1) on \(\mathcal{C}\)
_then the set \(\mathscr{K}_{\rm CBF}({\bf x})\) is non-empty for all \({\bf x}\in\mathbb{R}^{n}\), and for any continuous state-feedback controller \({\bf k}\) with \({\bf k}({\bf x})\in\mathscr{K}_{\rm CBF}({\bf x})\) for all \({\bf x}\in\mathbb{R}^{n}\), the function \(h\) is a DTBF for (2) on \(\mathcal{C}\)._
Given a continuous nominal controller \({\bf k}_{\rm nom}:\mathbb{R}^{n}\times\mathbb{N}\to\mathbb{R}^{m}\) and a DTCBF \(h\) for (1) on \(\mathcal{C}\), a controller \({\bf k}\) satisfying \({\bf k}({\bf x},k)\in\mathscr{K}_{\rm CBF}({\bf x})\) for all \({\bf x}\in\mathbb{R}^{n}\) and \(k\in\mathbb{N}\) can be specified via the following optimization problem:
\[\begin{split}{\bf k}({\bf x})=\operatorname*{argmin}_{{\bf u} \in\mathbb{R}^{m}}&\left\|{\bf u}-{\bf k}_{\rm nom}({\bf x},k) \right\|^{2}\qquad\text{(DTCBF-OP)}\\ \text{s.t.}& h({\bf F}({\bf x},{\bf u}))\geq\alpha h ({\bf x}).\end{split}\]
We note that unlike the affine inequality constraint that arises with continuous-time CBFs [7], the DTCBF inequality constraint (4) is not necessarily convex with respect to the input, preventing it from being integrated into a convex optimization-based controller. To address this issue, it is often assumed that the function \(h\circ{\bf F}:\mathbb{R}^{n}\times\mathbb{R}^{m}\to\mathbb{R}\) is concave with respect to its second argument [1, 3, 36]. This assumption was shown to be well motivated for concave \(h\) [31].
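As an illustration of this concavity assumption, the sketch below poses (DTCBF-OP) as a convex program for an assumed linear system and the concave barrier \(h(\mathbf{x})=1-\|\mathbf{x}\|^{2}\); the matrices, numbers, and the use of cvxpy are all our own illustrative choices:

```python
import cvxpy as cp
import numpy as np

# Sketch of (DTCBF-OP) when h(F(x, u)) is concave in u (assumed setup):
# F(x, u) = A x + B u (linear) and h(x) = 1 - ||x||^2 (concave), so the
# constraint h(F(x, u)) >= alpha * h(x) defines a convex feasible set.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.005], [0.1]])
alpha = 0.95
def h(x): return 1.0 - float(x @ x)

def dtcbf_op(x, u_nom):
    u = cp.Variable(1)
    x_next = A @ x + B @ u
    constraints = [1 - cp.sum_squares(x_next) >= alpha * h(x)]
    prob = cp.Problem(cp.Minimize(cp.sum_squares(u - u_nom)), constraints)
    prob.solve()
    return u.value

print(dtcbf_op(np.array([0.5, 0.5]), np.array([2.0])))
```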
### _Stochastic Preliminaries_
We now review tools from probability theory that will allow us to utilize information about the distribution of a stochastic disturbance signal in constructing a notion of stochastic safety and corresponding safety-critical controllers. We choose to provide this background material at the level necessary to understand our later constructions of stochastic safety and safety-critical controllers, but refer readers to [18] for a precise measure-theoretic presentation of the following concepts.
The key tool underlying our construction of a notion of stochastic safety is a nonnegative supermartingale, a specific type of expectation-governed random process:
**Definition 4**.: _Let \({\bf x}_{k}\) be a sequence of random variables that take values in \(\mathbb{R}^{n}\), \(W:\mathbb{R}^{n}\times\mathbb{N}\to\mathbb{R}\), and suppose that \(\mathbb{E}\big{[}|W({\bf x}_{k},k)|\big{]}<\infty\) for \(k\in\mathbb{N}\). The process \(W_{k}\triangleq W({\bf x}_{k},k)\) is a supermartingale if:_
\[\mathbb{E}[W_{k+1}\mid{\bf x}_{0:k}]\leq W_{k}\text{ almost surely for all }k\in\mathbb{N}, \tag{6}\]
_where \({\bf x}_{0:k}\) indicates the random variables \(\{{\bf x}_{0},{\bf x}_{1},\ldots,{\bf x}_{k}\}\). If, additionally, \(W_{k}\geq 0\) for all \(k\in\mathbb{N}\), \(W_{k}\) is a nonnegative supermartingale. If the process is non-decreasing in expectation, the process \(W_{k}\) is a submartingale. If the inequality (6) holds with equality, the process \(W_{k}\) is a martingale._
An important result from martingale theory that we will use to develop probabilistic safety guarantees is _Ville's inequality_, which allows us to bound the probability that a nonnegative supermartingale will rise above a certain value:
**Theorem 3** (Ville's Inequality [33]).: _Let \(W_{k}\) be a nonnegative supermartingale. Then for all \(\lambda\in\mathbb{R}_{>0}\),_
\[\mathbb{P}\left\{\sup_{k\in\mathbb{N}}W_{k}>\lambda\right\}\leq\frac{\mathbb{ E}[W_{0}]}{\lambda}. \tag{7}\]
Intuitively, Ville's inequality can be compared with Markov's inequality for nonnegative random variables; since the process \(W_{k}\) is nonincreasing in expectation, Ville's inequality allows us to control the probability the process instead moves upward above \(\lambda\).
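As a numerical sanity check, the following sketch builds a hypothetical nonnegative supermartingale from i.i.d. nonnegative factors with mean at most one and compares the empirical exceedance frequency against the Ville bound; since the maximum over a finite horizon is at most the supremum in (7), the bound still applies:

```python
import numpy as np

rng = np.random.default_rng(0)
lam, K, trials = 5.0, 100, 5000
exceed = 0
for _ in range(trials):
    W, sup_W = 1.0, 1.0                 # E[W_0] = 1 (known start)
    for _ in range(K):
        W *= rng.uniform(0.0, 2.0) * 0.95   # factor mean = 0.95 <= 1
        sup_W = max(sup_W, W)
    exceed += (sup_W > lam)
print(exceed / trials, "<=", 1.0 / lam)     # empirical vs. Ville bound
```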
Lastly, as we will see when synthesizing safety-critical controllers in the presence of stochastic disturbances, we will need to enforce conditions on the expectation of a DTCBF. In doing so, we will need to relate the expectation of the DTCBF \(h(\mathbf{x}_{k+1})\) to the expectation of the state \(\mathbf{x}_{k+1}\). This will be achieved using Jensen's inequality:
**Theorem 4** (Jensen's Inequality [22]).: _Consider a continuous function \(h:\mathbb{R}^{n}\to\mathbb{R}\) and a random variable \({\bf x}\) that takes values in \(\mathbb{R}^{n}\) with \(\mathbb{E}[\|{\bf x}\|]<\infty\). We have that:_
\[\begin{cases}\text{if $h$ is convex,}&\text{ then }\mathbb{E}[h({\bf x})]\geq h( \mathbb{E}[{\bf x}]),\\ \text{if $h$ is concave,}&\text{ then }\mathbb{E}[h({\bf x})]\leq h( \mathbb{E}[{\bf x}]).\end{cases} \tag{8}\]
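A quick numerical illustration of the concave case, with an arbitrary concave \(h\) and Gaussian \(\mathbf{x}\) of our own choosing:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(0.0, 0.3, size=100_000)
h = lambda z: 1.0 - z**2                  # a concave function
print(h(x).mean(), "<=", h(x.mean()))     # E[h(x)] <= h(E[x])
```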
## III Safety of Discrete-Time Stochastic Systems
In this section we provide one of our main results in the form of a bound on the probability that a system with stochastic disturbances will exit a given superlevel set of a DTBF over a finite time horizon.
Consider the following modification of the DT system (1):
\[{\bf x}_{k+1}={\bf F}({\bf x}_{k},{\bf u}_{k})+{\bf d}_{k},\quad\forall k\in \mathbb{N}, \tag{9}\]
with \({\bf d}_{k}\) taking values in \(\mathbb{R}^{n}\), and a closed-loop system:
\[{\bf x}_{k+1}={\bf F}({\bf x}_{k},{\bf k}({\bf x}_{k}))+{\bf d}_{k},\quad\forall k \in\mathbb{N}. \tag{10}\]
We assume that \({\bf x}_{0}\) is known and the disturbances \({\bf d}_{k}\) are a sequence of independent and identically distributed (with distribution \(\mathcal{D}\)) random variables2 with (potentially unbounded) support on \(\mathbb{R}^{n}\), generating the random process \({\bf x}_{1:k}\). To study the safety of this system, we will use the following definition:
Footnote 2: This implies the dynamics define a Markov process, i.e. \(\mathbb{E}[h({\bf F}({\bf x}_{k},{\bf u}_{k})+{\bf d}_{k})\mid{\bf x}_{0:k}]= \mathbb{E}[h({\bf F}({\bf x}_{k},{\bf u}_{k})+{\bf d}_{k})\mid{\bf x}_{k}]\), since the state \({\bf x}_{k+1}\) at time \(k+1\) only depends on the state \({\bf x}_{k}\), input \({\bf u}_{k}\), and disturbance \({\bf d}_{k}\) at time \(k\).
**Definition 5** (\(K\)-Step Exit Probability).: _Let \(h:\mathbb{R}^{n}\to\mathbb{R}\) be a continuous function. For any \(K\in\mathbb{N}\), \(\gamma\in\mathbb{R}_{\geq 0}\), and initial condition \({\bf x}_{0}\in\mathbb{R}^{n}\), the \(K\)-step exit probability of the closed-loop system (10) is given by:_
\[P_{u}(K,\gamma,{\bf x}_{0})=\mathbb{P}\left\{\min_{k\in\{0,\ldots,K\}}h({\bf x}_{ k})<-\gamma\right\}. \tag{11}\]
which describes the probability that the system will leave the \(-\gamma\) superlevel set of \(h\) within \(K\) steps. This probability is directly related to the robust safety concept of Input-to-State Safety (ISSf) [20], which reasons about the superlevel set of \(h\) that is rendered safe in the presence of bounded disturbances. For the remainder of this work, we will omit the dependence of \(P_{u}\) on \(K\), \(\gamma\), and \(\mathbf{x}_{0}\) for notational simplicity.
**Remark 1**.: The finite time aspect of \(K\)-step exit probabilities is critical since systems exposed to unbounded disturbances will exit a bounded set with probability \(P_{u}=1\) over an infinite horizon [30, 16]. Intuitively, this is because a sufficiently large sample will eventually be drawn from the tail of the distribution that forces the system out in a single step.
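In practice, \(P_{u}\) can be estimated by Monte Carlo. A minimal sketch for a hypothetical stable scalar closed loop with Gaussian process noise (all constants are illustrative and chosen by us):

```python
import numpy as np

rng = np.random.default_rng(2)
h = lambda x: 1.0 - x**2                 # barrier; we take gamma = 0 below
K, trials, sigma, exits = 100, 5000, 0.3, 0
for _ in range(trials):
    x = 0.0
    for _ in range(K):
        x = 0.9 * x + sigma * rng.standard_normal()   # stable closed loop
        if h(x) < 0.0:                   # left the 0-superlevel set
            exits += 1
            break
print("Monte Carlo estimate of P_u:", exits / trials)
```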
Given this definition, we now provide one of our main results relating DTBFs to \(K\)-step exit probabilities. We note that this result is a reframing of the stochastic invariance theorem in [21, 27]. Our reframing features three key components. First, we develop our results using the standard formulation of DTBFs covered in the background. Second, we produce a probability bound not only for \(\mathcal{C}\) (defined as the 0-superlevel set of \(h\), such that \(\gamma=0\)), but for all non-positive superlevel sets of \(h\) (\(\gamma\geq 0\)), a stochastic variant of ISSf [20]. Third, we present a complete proof of our result, with the goal of illuminating how to leverage tools from martingale theory to reason about the safety of discrete-time stochastic systems.
**Theorem 5**.: _Let \(h:\mathbb{R}^{n}\to\mathbb{R}\) be a continuous, upper-bounded function with upper bound \(M\in\mathbb{R}_{>0}\). Suppose there exists an \(\alpha\in(0,1)\) and a \(\delta\leq M(1-\alpha)\) such that the closed-loop system (10) satisfies:_
\[\mathbb{E}[\ h(\mathbf{F}(\mathbf{x},\mathbf{k}(\mathbf{x}))+\mathbf{d})\ |\ \mathbf{x}\ ]\geq\alpha h(\mathbf{x})+\delta, \tag{12}\]
_for all \(\mathbf{x}\in\mathbb{R}^{n}\), with \(\mathbf{d}\sim\mathcal{D}\). For any \(K\in\mathbb{N}\) and \(\gamma\in\mathbb{R}_{\geq 0}\), if \(\delta<-\gamma(1-\alpha)\), we have that:_
\[P_{u}\leq\left(\frac{M-h(\mathbf{x}_{0})}{M+\gamma}\right)\alpha^{K}+\frac{M( 1-\alpha)-\delta}{M+\gamma}\sum_{i=1}^{K}\alpha^{i-1}. \tag{13}\]
_Alternatively if \(\delta\geq-\gamma(1-\alpha)\), then:_
\[P_{u}\leq 1-\frac{h(\mathbf{x}_{0})+\gamma}{M+\gamma}\left(\frac{M\alpha+ \gamma+\delta}{M+\gamma}\right)^{K}. \tag{14}\]
**Remark 2**.: The upper bound \(\delta\leq M(1-\alpha)\) is relatively non-restrictive: not only is \(\delta\) typically negative, but this bound must hold so that, in expectation, \(h(\mathbf{x}_{k+1})\) cannot rise above the upper bound \(M\) on \(h\). The switching condition between (13) and (14) at \(\delta=-\gamma(1-\alpha)\) corresponds to whether, in expectation, the one-step evolution of the system remains in the set \(\mathcal{C}_{\gamma}=\{\mathbf{x}\in\mathbb{R}^{n}\ |\ h(\mathbf{x})\geq-\gamma\}\) when it begins on the boundary of \(\mathcal{C}_{\gamma}\).
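For reference, a small helper that evaluates the bounds (13) and (14), switching at \(\delta=-\gamma(1-\alpha)\) exactly as in the theorem statement (the sample numbers passed at the end are arbitrary):

```python
def exit_prob_bound(M, alpha, delta, gamma, K, h0):
    """Evaluate the K-step exit probability bounds (13)/(14) of Theorem 5."""
    if delta < -gamma * (1.0 - alpha):                      # bound (13)
        geom = sum(alpha ** (i - 1) for i in range(1, K + 1))
        return ((M - h0) / (M + gamma)) * alpha ** K \
            + (M * (1.0 - alpha) - delta) / (M + gamma) * geom
    # bound (14)
    ratio = (M * alpha + gamma + delta) / (M + gamma)
    return 1.0 - (h0 + gamma) / (M + gamma) * ratio ** K

print(exit_prob_bound(M=1.0, alpha=0.99, delta=-0.001, gamma=0.0, K=100, h0=1.0))
```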
To make our argument clear at a high level, we begin with a short proof sketch before proceeding in detail.
_Proof sketch:_ The key tool in proving Theorem 5 is Ville's inequality (7). Since \(h(\mathbf{x}_{k})\), in general, is not a super- or submartingale, we will first construct a nonnegative supermartingale, \(W_{k}\triangleq W(\mathbf{x}_{k},k)\), by scaling and shifting \(h(\mathbf{x}_{k})\). We can then apply Ville's inequality (7) to bound the probability of \(W_{k}\) going above any \(\lambda>0\). Next we find a particular value of \(\lambda\), denoted \(\lambda^{*}\), such that:
\[\max_{k\in\{0,\ldots,K\}}W_{k}\leq\lambda^{*}\implies\min_{k\in\{0,\ldots,K \}}h(\mathbf{x}_{k})\geq-\gamma. \tag{15}\]
Intuitively, this means that any sequence \(W_{k}\) that remains below \(\lambda^{*}\) ensures that the corresponding sequence \(h(\mathbf{x}_{k})\) remains (safe) above \(-\gamma\). This allows us to bound the \(K\)-step exit probability \(P_{u}\) of our original process \(h(\mathbf{x}_{k})\) with the probability that \(W_{k}\) will rise above \(\lambda^{*}\):
\[P_{u}\leq\mathbb{P}\left\{\max_{k\in\{0,\ldots,K\}}W_{k}>\lambda^{*}\right\} \leq\frac{\mathbb{E}[W_{0}]}{\lambda^{*}}=\frac{W_{0}}{\lambda^{*}}, \tag{16}\]
where the last equality will follow as it is assumed \(\mathbf{x}_{0}\) is known _a priori_. Particular choices of \(W\) and \(\lambda^{*}\) will yield the bounds stated in the theorem, completing the proof.
### _Proof: Constructing a Nonnegative Supermartingale_
We will begin by constructing a nonnegative supermartingale, allowing us to use Ville's inequality. To construct this supermartingale, we first note that by rearranging terms in the inequality in (12), we can see the process \(M-h(\mathbf{x}_{k})\) resembles a supermartingale:
\[\mathbb{E}[M-h(\mathbf{x}_{k+1})\ |\ \mathbf{x}_{k}] \leq\alpha(M-h(\mathbf{x}_{k}))+M(1-\alpha)-\delta,\] \[\triangleq\alpha(M-h(\mathbf{x}_{k}))+\varphi, \tag{17}\]
but with a scaling \(\alpha\) and additive term \(\varphi\triangleq M(1-\alpha)-\delta\) that makes \(\mathbb{E}\left[M-h(\mathbf{x}_{k+1})\ |\ \mathbf{x}_{k}\right]\nleq M-h(\mathbf{x}_{k})\) in general. To remove the effects of \(\alpha\) and \(\varphi\), consider the function \(W:\mathbb{R}^{n}\times\mathbb{N}\to\mathbb{R}\) defined as:
\[W(\mathbf{x}_{k},k)\triangleq\underbrace{(M-h(\mathbf{x}_{k}))\theta^{k}}_{ \text{negate and scale}}-\underbrace{\varphi\sum_{i=1}^{k}\theta^{i}}_{\text{ cancel}\ \varphi}+\underbrace{\varphi\sum_{i=1}^{K}\theta^{i}}_{\text{ensure}\ W\geq 0}, \tag{18}\]
where \(\theta\in[1,\infty)\) will be used to cancel the effect of \(\alpha\), but is left as a free variable that we will later use to tighten our bound on \(P_{u}\). Denoting \(W_{k}\triangleq W(\mathbf{x}_{k},k)\), we now verify \(W_{k}\) is a nonnegative supermartingale. We first show that \(W_{k}\geq 0\) for all \(k\in\{0,\ldots,K\}\). Combining the two sums in (18) yields:
\[W_{k}=(M-h(\mathbf{x}_{k}))\theta^{k}+\varphi\sum_{i=k+1}^{K}\theta^{i}, \tag{19}\]
which is nonnegative as \(h(\mathbf{x})\leq M\) for all \(\mathbf{x}\in\mathbb{R}^{n}\), \(\theta\geq 1\), and \(\varphi\geq 0\) since \(\delta\leq M(1-\alpha)\) by assumption. We now show that \(W_{k}\) satisfies the supermartingale inequality (6):
\[\mathbb{E}[W_{k+1}\ |\ \mathbf{x}_{0:k}]=\mathbb{E}[W_{k+1}\ |\ \mathbf{x}_{k}], \tag{20}\] \[=(M-\mathbb{E}[h(\mathbf{x}_{k+1})\ |\ \mathbf{x}_{k}])\theta^{k+1}+ \varphi\sum_{i=k+2}^{K}\theta^{i},\] (21) \[\leq(M-\alpha h(\mathbf{x}_{k})-\delta)\theta^{k+1}+\varphi\sum_{ i=k+2}^{K}\theta^{i},\] (22) \[=\alpha\theta(M-h(\mathbf{x}_{k}))\theta^{k}+\theta^{k+1}\underbrace {((1-\alpha)M-\delta)}_{=\varphi}+\varphi\sum_{i=k+2}^{K}\theta^{i},\] \[=\underbrace{\alpha\theta}_{\text{req},\leq 1}(M-h(\mathbf{x}_{k})) \theta^{k}+\varphi\sum_{i=k+1}^{K}\theta^{i}\leq W_{k}, \tag{23}\]
where (20) is due to the Markovian nature of system (10), (21) comes from using (19) to write \(W_{k+1}\), (22) follows from (12), and (23) follows from the preceding line using the definition of \(\varphi\) and assuming the further requirement that \(\theta\leq\frac{1}{\alpha}\). Thus, we have shown that \(W_{k}\) is a nonnegative supermartingale.
### _Proof: Bounding the Exit Probability via Ville's Inequality_
Since \(W_{k}\) is a nonnegative supermartingale, we can apply Ville's inequality to establish:
\[\mathbb{P}\left\{\max_{k\in\{0,\ldots,K\}}W_{k}>\lambda\right\}\leq\frac{ \mathbb{E}[W_{0}]}{\lambda}=\frac{W_{0}}{\lambda}. \tag{24}\]
for all \(\lambda\in\mathbb{R}_{>0}\). To relate this bound to the \(K\)-step exit probability \(P_{u}\), we seek a value of \(\lambda\), denoted \(\lambda^{*}\), such that:
\[\max_{k\in\{0,\ldots,K\}}W_{k}\leq\lambda^{*}\implies\min_{k\in\{0,\ldots,K \}}h(\mathbf{x}_{k})\geq-\gamma. \tag{25}\]
In short, we will choose a value of \(\lambda^{*}\) such that all trajectories of \(W_{k}\) that remain below \(\lambda^{*}\) must also have \(h_{k}\geq-\gamma\). To this end, we use the geometric series identity4\(\sum_{i=1}^{k}\theta^{i-1}=\frac{1-\theta^{k}}{1-\theta}\) to rewrite \(W_{k}\) as:
Footnote 4: At \(\theta=1\), the fraction \(\frac{1-\theta^{k}}{1-\theta}\) is not well defined. However, the proof can be carried out using the summation notation. In this case \(\lambda^{*}=M+\gamma\), and (24) yields \(P_{u}\leq 1-\frac{h(\mathbf{x}_{0})+\gamma-\varphi K}{M+\gamma}\).
\[W_{k}=(M-h(\mathbf{x}_{k}))\theta^{k}+\varphi\theta\frac{\theta^{K}-\theta^{ k}}{\theta-1}. \tag{26}\]
Let us define:
\[\lambda_{k}=\left(\gamma+M-\frac{\varphi\theta}{\theta-1}\right)\theta^{k}+ \frac{\varphi\theta}{\theta-1}\theta^{K}>0, \tag{27}\]
which, intuitively, applies the same time-varying scaling and shift to a constant, \(-\gamma\), that was applied to \(h(\mathbf{x}_{k})\) to yield \(W_{k}\) (26). Let us choose:
\[\lambda^{*}\triangleq\min_{k\in\{0,\ldots,K\}}\lambda_{k}. \tag{28}\]
Since we assume \(\max_{k\in\{0,\ldots,K\}}W_{k}\leq\lambda^{*},\) we can write, for all \(k\in\{0,\ldots,K\}\):
\[0\geq W_{k}-\lambda^{*}\geq W_{k}-\lambda_{k}=(-\gamma-h_{k})\theta^{k}. \tag{29}\]
Since \(\theta>1,\) this implies that \(-\gamma-h_{k}\leq 0\) for all \(k\in\{0,\ldots,K\}\), and thus \(\min_{k\in\{0,\ldots,K\}}h(\mathbf{x}_{k})\geq-\gamma,\) as needed.
### _Proof: Choosing \(\theta\) to Minimize the Ville's Bound_
Since our supermartingale \(W_{k}\) includes a free parameter \(\theta\in(1,\frac{1}{\alpha}]\), we will choose the value of \(\theta\) in this interval which provides the tightest bound on \(P_{u}\).
**Case 1:** Consider the first case where \(\delta<-\gamma(1-\alpha)\), implying \(\varphi>(M+\gamma)(1-\alpha)\). In this case \(\frac{1}{\alpha}<\frac{M+\gamma}{M+\gamma-\varphi}\), and thus all of the allowable choices of \(\theta\in(1,\frac{1}{\alpha}]\) satisfy \(\theta<\frac{M+\gamma}{M+\gamma-\varphi}\). Denoting \(k^{*}\) such that \(\lambda^{*}=\lambda_{k^{*}}\), we have that:
\[\lambda^{*}=\underbrace{\left(\gamma+M-\frac{\varphi\theta}{\theta-1}\right)}_ {\leq 0}\theta^{k^{*}}+\frac{\varphi\theta}{\theta-1}\theta^{K}. \tag{30}\]
Thus, we know \(\min_{k\in\{0,\ldots,K\}}\lambda_{k}\) occurs at \(k^{*}=K\) and so:
\[P_{u}\leq\frac{W_{0}}{\lambda^{*}}=\frac{M-h(\mathbf{x}_{0})+\frac{\varphi \theta}{\theta-1}\left(\theta^{K}-1\right)}{(M+\gamma)\theta^{K}}. \tag{31}\]
Since this bound is a decreasing function of \(\theta\) (as shown in Lemma 2 in Appendix A), we choose the largest allowable value \(\theta^{*}=\frac{1}{\alpha}\) to achieve the bound:
\[P_{u} \leq\frac{W_{0}}{\lambda^{*}}=\frac{M-h(\mathbf{x}_{0})+\frac{ \varphi}{1-\alpha}\left(\alpha^{-K}-1\right)}{(M+\gamma)\alpha^{-K}}, \tag{32}\] \[=\left(\frac{M-h(\mathbf{x}_{0})}{M+\gamma}\right)\alpha^{K}+ \frac{M(1-\alpha)-\delta}{M+\gamma}\sum_{i=1}^{K}\alpha^{i-1}, \tag{33}\]
where we again use the geometric series identity.
**Case 2:** Now consider the second case where \(\delta\geq-\gamma(1-\alpha)\), so \(\varphi\leq(M+\gamma)(1-\alpha)\), which implies that the set \([\frac{M+\gamma}{M+\gamma-\varphi},\frac{1}{\alpha}]\) is nonempty. Choosing a value of \(\theta\) in this set ensures that:
\[\lambda^{*}=\underbrace{\left(\gamma+M-\frac{\varphi\theta}{\theta-1}\right) \theta^{k^{*}}}_{\geq 0}+\frac{\varphi\theta}{\theta-1}\theta^{K}. \tag{34}\]
Thus \(\min_{k\in\{0,\ldots,K\}}\lambda_{k}\) occurs at \(k^{*}=0\) and:
\[P_{u} \leq\frac{W_{0}}{\lambda^{*}}=\frac{(M-h(\mathbf{x}_{0}))+\frac{ \varphi\theta}{\theta-1}\left(\theta^{K}-1\right)}{(M+\gamma)+\frac{\varphi \theta}{\theta-1}\left(\theta^{K}-1\right)}, \tag{35}\] \[=1-\frac{h(\mathbf{x}_{0})+\gamma}{M+\gamma+\frac{\varphi\theta} {\theta-1}\left(\theta^{K}-1\right)}. \tag{36}\]
Since this bound is increasing in \(\theta\) (as shown in Lemma 3 in Appendix A), we choose \(\theta^{*}=\frac{M+\gamma}{M+\gamma-\varphi}\) to achieve the bound:
\[P_{u}\leq 1-\left(\frac{h(\mathbf{x}_{0})+\gamma}{M+\gamma}\right)\left(\frac{M \alpha+\gamma+\delta}{M+\gamma}\right)^{K}. \tag{37}\]
If, alternatively, we choose \(\theta\in\left(1,\frac{M+\gamma}{M+\gamma-\varphi}\right]\), then the inequality in (30) holds, \(k^{*}=K\), and the bound is decreasing in \(\theta\) as in Case 1. Evaluating this bound for the minimizing value \(\theta^{*}=\frac{M+\gamma}{M+\gamma-\varphi}\) again yields:
\[P_{u} \leq\frac{M-h(\mathbf{x}_{0})+(M+\gamma)(\theta^{K}-1)}{(M+ \gamma)\theta^{K}}, \tag{38}\] \[=1-\left(\frac{h(\mathbf{x}_{0})+\gamma}{M+\gamma}\right)\left( \frac{M\alpha+\gamma+\delta}{M+\gamma}\right)^{K}. \tag{39}\]
## IV Practical Considerations for Enforcing Stochastic DTCBFs
Theorem 5 allows us to reason about the finite-time safety of systems governed by DTBFs. To utilize the results of this theorem in a control setting, we aim to use DTCBFs to develop control methods which enforce the expectation condition:
\[\mathbb{E}[h(\mathbf{F}(\mathbf{x}_{k},\mathbf{u}_{k})+\mathbf{d}_{k})\mid\mathbf{ x}_{k}]\geq\alpha h(\mathbf{x}_{k}). \tag{40}\]
Like the DTCBF-OP controller, we seek to enforce this constraint using an optimization-based controller that guarantees safety while achieving pointwise minimal deviation from a
nominal controller \(\mathbf{k}_{\text{nom}}\) in the form of an Expectation-based DTCBF (ED) Controller:
\[\mathbf{k}_{\text{ED}}(\mathbf{x}_{k})=\operatorname*{argmin}_{ \mathbf{u}\in\mathbb{R}^{m}} \|\mathbf{u}-\mathbf{k}_{\text{nom}}(\mathbf{x}_{k},k)\|^{2}\] (ED) s.t. \[\mathbb{E}[h(\mathbf{F}(\mathbf{x}_{k},\mathbf{u})+\mathbf{d}_{k} )\mid\mathbf{x}_{k}]\geq\alpha h(\mathbf{x}_{k}).\]
The expectation in (ED) adds complexity that is not generally considered in the application of deterministic DTCBFs. More commonly, CBF-based controllers solve "certainty-equivalent" optimization programs, such as the following Certainty-Equivalent DTCBF (CED) controller, which replaces the expected barrier value \(\mathbb{E}[h(\mathbf{x}_{k+1})\mid\mathbf{x}_{k}]\) with the barrier evaluated at the expected next state, \(h(\mathbb{E}[\mathbf{x}_{k+1}\mid\mathbf{x}_{k}])\):
\[\mathbf{k}_{\text{CED}}(\mathbf{x}_{k})=\operatorname*{argmin}_{ \mathbf{u}\in\mathbb{R}^{m}} \|\mathbf{u}-\mathbf{k}_{\text{nom}}(\mathbf{x}_{k},k)\|^{2}\] (CED) s.t. \[h(\mathbf{F}(\mathbf{x}_{k},\mathbf{u})+\mathbb{E}[\mathbf{d}_{k }])\geq\alpha h(\mathbf{x}_{k}).\]
where \(\mathbb{E}[\mathbf{F}(\mathbf{x}_{k},\mathbf{u}_{k})|\mathbf{x}_{k}]=\mathbf{ F}(\mathbf{x}_{k},\mathbf{u}_{k})\) and \(\mathbb{E}[\mathbf{d}_{k}|\mathbf{x}_{k}]=\mathbb{E}[\mathbf{d}_{k}]\). This constraint is often easier to evaluate than (40), since it allows control actions to be selected with respect to the expected disturbance \(\mathbb{E}[\mathbf{d}_{k}]\) without needing to model the disturbance distribution \(\mathcal{D}\). If the disturbance is zero-mean, then this form of the constraint is implicitly enforced by DTCBF controllers such as those presented in [1, 36]. However, when replacing (ED) with (CED), it is important to consider the effect of Jensen's inequality stated in Theorem 4.
If the "certainty-equivalent" constraint in CED is strictly concave5, then we can apply the results of Theorem 5 directly since Jensen's inequality tightens the constraint and ensures satisfaction of the expectation condition (12). Unfortunately, using such a controller is a non-convex optimization program which can be impractical to solve. If, instead, the constraint is convex, then CED is a convex program, but does not necessarily enforce the expectation condition (12) in Theorem (5) due to the gap introduced by Jensen's inequality.
Footnote 5: The constraint \(h(\mathbf{x}_{k}+\mathbf{u})\geq\alpha h(\mathbf{x}_{k})\) is concave in \(\mathbf{u}\) when \(h\) is convex and it is convex in \(\mathbf{u}\) when \(h\) is concave.
In order to apply the results of Theorem 5 to controllers of the form (CED) with convex constraints, we must first provide a bound on the gap introduced by Jensen's inequality. In particular, for any concave function \(h:\mathbb{R}^{n}\rightarrow\mathbb{R}\) and random variable \(\mathbf{d}\sim\mathcal{D}\), we seek to determine a value \(\psi\in\mathbb{R}_{\geq 0}\) such that, for all \(\mathbf{x}\in\mathbb{R}^{n}\) and \(\mathbf{u}\in\mathbb{R}^{m}\):
\[\mathbb{E}[h(\mathbf{F}(\mathbf{x},\mathbf{u})+\mathbf{d})\mid\mathbf{x}]\geq h (\mathbf{F}(\mathbf{x},\mathbf{u})+\mathbb{E}[\mathbf{d}])-\psi, \tag{41}\]
thus quantifying the gap introduced by Jensen's inequality.
A large body of work has studied methods for finding the smallest possible \(\psi\) that satisfies (41). Here we adapt a result in [10] to achieve a relatively loose, but straightforward bound:
**Lemma 1**.: _Consider a twice-continuously differentiable, concave function \(h:\mathbb{R}^{n}\rightarrow\mathbb{R}\) with \(\sup_{\mathbf{x}\in\mathbb{R}^{n}}\|\nabla^{2}h(\mathbf{x})\|_{2}\leq\lambda_ {\max}\) for some \(\lambda_{\max}\in\mathbb{R}_{\geq 0}\), and a random variable \(\mathbf{x}\) that takes values in \(\mathbb{R}^{n}\) with \(\mathbb{E}[\|\mathbf{x}\|]<\infty\) and \(\|\text{cov}(\mathbf{x})\|<\infty\). Then we have that:_
\[\mathbb{E}[h(\mathbf{x})]\geq h(\mathbb{E}[\mathbf{x}])-\frac{\lambda_{\max}}{ 2}\text{tr}(\text{cov}(\mathbf{x})). \tag{42}\]
The proof is included in Appendix B. We note that although this value of \(\psi=\frac{\lambda_{\max}}{2}\text{tr}(\text{cov}(\mathbf{x}))\) is easy to interpret, tighter bounds exist which have less restrictive assumptions than a globally bounded Hessian [22]. We also note that one could use sampling-based methods to approximately satisfy the constraint (41) by estimating \(\psi\) empirically.
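A numerical sanity check of Lemma 1 for a concave quadratic, where \(\lambda_{\max}=2\) and the bound is in fact tight (our choice of mean and covariance is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)
mu, cov = np.array([0.3, -0.1]), np.diag([0.05, 0.02])
x = rng.multivariate_normal(mu, cov, size=200_000)
h = lambda z: 1.0 - np.sum(z**2, axis=-1)     # concave, ||Hess h|| = 2
lhs = h(x).mean()
rhs = h(mu) - 0.5 * 2.0 * np.trace(cov)       # lambda_max = 2
print(lhs, ">=", rhs)   # for quadratic h the bound holds with equality
```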
Next we present a controller which combines the mean-based control of the "certainty equivalent" (CED) while also accounting for Jensen's inequality. This Jensen-Enhanced DTCBF Controller (JED) includes an additional control parameter \(c_{\text{I}}\geq 0\) to account for Jensen's inequality:
\[\mathbf{k}_{\text{JED}}(\mathbf{x}_{k})=\operatorname*{argmin}_{ \mathbf{u}\in\mathbb{R}^{m}} \|\mathbf{u}-\mathbf{k}_{\text{nom}}(\mathbf{x}_{k},k)\|^{2}\] (JED) s.t. \[h(\mathbf{F}(\mathbf{x}_{k},\mathbf{u})+\mathbb{E}[\mathbf{d }_{k}])-c_{\text{I}}\geq\alpha h(\mathbf{x}_{k}).\]
Given this controller and a method for bounding \(\psi\), we can now apply Theorem 5 while accounting for (or analyzing) the effects of Jensen's inequality on the (JED) controller:
**Theorem 6**.: _Consider the system (10) and let \(h:\mathbb{R}^{n}\rightarrow\mathbb{R}\) be a twice-continuously differentiable, concave function such that \(\sup_{\mathbf{x}\in\mathbb{R}^{n}}h(\mathbf{x})\leq M\) for \(M\in\mathbb{R}_{>0}\) and \(\sup_{\mathbf{x}\in\mathbb{R}^{n}}\|\nabla^{2}h(\mathbf{x})\|_{2}\leq\lambda_ {\max}\) for \(\lambda_{\max}\in\mathbb{R}_{\geq 0}\). Suppose there exists an \(\alpha\in(0,1)\) and a \(c_{\text{I}}\in[0,\frac{\lambda_{\max}}{2}\text{tr}(\text{cov}(\mathbf{d}))+ M(1-\alpha)]\) such that:_
\[h(\mathbf{F}(\mathbf{x},\mathbf{k}(\mathbf{x}))+\mathbb{E}[\mathbf{d}])-c_{ \text{I}}\geq\alpha h(\mathbf{x}), \tag{43}\]
_for all \(\mathbf{x}\in\mathbb{R}^{n}\) with \(\mathbf{d}\sim\mathcal{D}\). Then we have that:_
\[\mathbb{E}[\ h(\mathbf{F}(\mathbf{x},\mathbf{k}(\mathbf{x}))+\mathbf{d})\mid \mathbf{x}\ ]\geq\alpha h(\mathbf{x})+\delta, \tag{44}\]
_for all \(\mathbf{x}\in\mathbb{R}^{n}\) with \(\mathbf{d}\sim\mathcal{D}\) and \(\delta=c_{\text{I}}-\frac{\lambda_{\max}}{2}\text{tr}(\text{cov}(\mathbf{d}))\)._
Proof: Given \(\mathbf{x}\in\mathbb{R}^{n}\), Lemma 1 ensures that:
\[0 \leq h(\mathbf{F}(\mathbf{x},\mathbf{k}(\mathbf{x}))+\mathbb{E}[ \mathbf{d}])-c_{\text{I}}-\alpha h(\mathbf{x}) \tag{45}\] \[\leq\mathbb{E}[h(\mathbf{F}(\mathbf{x},\mathbf{k}(\mathbf{x}))+ \mathbf{d})\mid\mathbf{x}]+\psi-c_{\text{I}}-\alpha h(\mathbf{x}) \tag{46}\]
where \(\psi=\frac{\lambda_{\max}}{2}\text{tr}(\text{cov}(\mathbf{d}))\). Letting \(\delta=c_{\text{I}}-\frac{\lambda_{\max}}{2}\text{tr}(\text{cov}( \mathbf{d}))\) yields the desired result.
## V Practical Examples
In this section we consider a variety of simulation examples that highlight the key features of our approach.
Fig. 2: The dashed lines represent the theoretical probability bounds for the system as in Theorem 5. The solid lines represent the Monte Carlo (MC) estimated \(P_{u}\) across 500 experiments.
### _Linear 1D System_
Here we analyze our bounds by considering the case of unbounded i.i.d. disturbances \(d_{k}\sim\mathcal{N}(0,1)\) for the one-dimensional system (\(x,u\in\mathbb{R}\)) and safe set:
\[x_{k+1}=x_{k}+2+u_{k}+\sigma d_{k},\ \mathcal{C}=\{x\mid 1-x^{2}\geq 0\}. \tag{47}\]
The Jensen gap for this system and DTCBF is bounded by \(\psi=\sigma^{2}\). For simulation, we employ the JED controller with \(c_{\text{I}}=\sigma^{2}\), \(\alpha=1-\sigma^{2}\), and nominal controller \(\mathbf{k}_{\text{nom}}(\mathbf{x}_{k},k)=0\). Figure 2 shows the results of 500 one-second-long trials run for a range of \(\sigma\in[0,0.2]\), and also displays how the bound on \(P_{u}\) decreases as \(\gamma\) increases.
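For this system the JED constraint can be solved in closed form by clipping the drifted state, which gives a compact way to replicate the experiment. A sketch follows; we assume \(K=100\) steps per one-second trial, since the step size is not stated in the text:

```python
import numpy as np

# Closed-form JED for x_{k+1} = x_k + 2 + u + sigma*d: the constraint
# 1 - (x + 2 + u)^2 - c_I >= alpha*(1 - x^2) is solved by clipping.
rng = np.random.default_rng(4)
sigma = 0.1
alpha, c_I = 1.0 - sigma**2, sigma**2

def k_jed(x):
    rhs = 1.0 - c_I - alpha * (1.0 - x**2)   # allowed (x_next)^2 budget
    bound = np.sqrt(max(rhs, 0.0))
    return np.clip(x + 2.0, -bound, bound) - (x + 2.0)

unsafe = 0
for _ in range(500):
    x = 0.0
    for _ in range(100):                     # assumed K = 100
        x = x + 2.0 + k_jed(x) + sigma * rng.standard_normal()
        if 1.0 - x**2 < 0.0:
            unsafe += 1
            break
print("empirical P_u:", unsafe / 500)
```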
### _Simple Pendulum_
Next we consider an inverted pendulum about its upright equilibrium point with the DT dynamics:
\[\begin{bmatrix}\theta_{k+1}\\ \dot{\theta}_{k+1}\end{bmatrix}=\begin{bmatrix}\theta_{k}+\Delta t\dot{\theta }_{k}\\ \dot{\theta}_{k}+\Delta t\sin(\theta_{k})\end{bmatrix}+\begin{bmatrix}0\\ \Delta t\mathbf{u}\end{bmatrix}+\mathbf{d}_{k}, \tag{48}\]
with time step \(\Delta t=0.01\) sec, i.i.d. disturbances \(\mathbf{d}_{k}\sim\mathcal{N}(\mathbf{0}_{2},\text{Diag}([0.005^{2},0.025^{2}]))\), and safe set6:
Footnote 6: Diag: \(\mathbb{R}^{n}\to\mathbb{R}^{n\times n}\) generates a square diagonal matrix with its argument along the main diagonal.
\[\mathcal{C}=\left\{\mathbf{x}\in\mathbb{R}^{n}\ \bigg{|}\ \underbrace{1-\frac{6^{2}}{ \pi^{2}}\mathbf{x}^{\top}\begin{bmatrix}1&3^{-\frac{1}{2}}\\ 3^{-\frac{1}{2}}&1\end{bmatrix}\mathbf{x}}_{h_{\text{pend}}(\mathbf{x})}\geq 0\right\} \tag{49}\]
which is constructed using the continuous-time Lyapunov equation as in [31], and for which \(|\theta|\leq\pi/6\) for all \(\mathbf{x}\in\mathcal{C}\). Figure 3 shows the results of 500 one-second-long trials for each sampled \(\mathbf{x}_{0}\in\mathcal{C}\) using the JED controller with parameters \(\alpha=1-\psi\), \(c_{\text{I}}=\psi\), where \(\psi=\frac{\lambda_{\max}}{2}\text{tr}(\text{cov}(\mathbf{d}_{k}))\). This figure highlights the influence of \(\mathbf{x}_{0}\) and shows how the bound on \(P_{u}\) increases as \(h(\mathbf{x}_{0})\) decreases.
### _Double Integrator_
We also consider the problem of controlling a planar system with unit-mass double-integrator dynamics to remain inside a convex polytope (in particular, a unit square centered at the origin). Using Heun's method, the dynamics are given by:
\[\mathbf{x}_{k+1} =\left[\begin{array}{cc}\mathbf{I}_{2}&\Delta t\ \mathbf{I}_{2}\\ \mathbf{0}_{2}&\mathbf{I}_{2}\end{array}\right]\mathbf{x}_{k}+\left[\begin{array} []{c}\frac{\Delta t^{2}}{2}\mathbf{I}_{2}\\ \Delta t\mathbf{I}_{2}\end{array}\right]\mathbf{u}_{k}+\mathbf{d}_{k}, \tag{50}\] \[\triangleq\mathbf{A}\mathbf{x}_{k}+\mathbf{B}\mathbf{u}_{k}+ \mathbf{d}_{k}, \tag{51}\]
where \(\Delta t\) is the integration time step and \(\mathbf{d}_{k}\sim\mathcal{N}(\mathbf{0}_{4},\mathbf{Q})\) is a zero-mean Gaussian process noise added to the dynamics. Here we use \(\Delta t=0.05\) sec, and \(\mathbf{Q}=\mathbf{B}\mathbf{B}^{T}\), which corresponds to applying a disturbance force \(\mathbf{f}_{k}\sim\mathcal{N}(0,\mathbf{I}_{2})\) to the system at each timestep.
To keep the system inside a convex polytope, we seek to enforce the affine inequalities \(\mathbf{C}\mathbf{x}\leq\mathbf{w}\) for \(\mathbf{C}\in\mathbb{R}^{n_{e}\times n},\mathbf{w}\in\mathbb{R}^{n_{e}}\). Thus, we define our barrier \(h(\mathbf{x})=-\max(\mathbf{C}\mathbf{x}-\mathbf{w})\), where \(\max(\cdot)\) denotes the largest element of its vector argument, and \(h(\mathbf{x})\geq 0\) if and only if the constraint \(\mathbf{C}\mathbf{x}\leq\mathbf{w}\) holds. Implementing the ED controller for this system is non-trivial, since the expectation of \(h(\mathbf{x})\) for a Gaussian-distributed \(\mathbf{x}\) does not have a closed form. Similarly, implementing the JED controller to account for Jensen's inequality is non-trivial since \(h\) is not twice continuously differentiable. We instead choose to enforce a conservative approximation of the barrier condition (40) using the _log-sum-exp_ function. As we show in Appendix C, this approximation yields an analytic upper bound (derived using the moment-generating function of Gaussian r.v.s) on \(\mathbb{E}[h(\mathbf{x}_{k+1})]\) which can be imposed via a convex constraint.
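A sketch of evaluating this analytic upper bound for the setup above (matrices as defined in the text; the smoothing parameter \(t\) is a fixed constant here, whereas the paper's controller optimizes over it jointly with \(\mathbf{u}_{k}\)):

```python
import numpy as np

# Log-sum-exp upper bound on -E[h(x_{k+1})] from Appendix C, eq. (71);
# C, w encode the unit square |x|, |y| <= 0.5 and Q = B B^T as in the text.
dt = 0.05
A = np.block([[np.eye(2), dt * np.eye(2)], [np.zeros((2, 2)), np.eye(2)]])
B = np.vstack([dt**2 / 2 * np.eye(2), dt * np.eye(2)])
Q = B @ B.T
C = np.vstack([np.hstack([np.eye(2), np.zeros((2, 2))]),
               np.hstack([-np.eye(2), np.zeros((2, 2))])])
w = 0.5 * np.ones(4)

def neg_Eh_upper(x, u, t=20.0):
    mu = C @ (A @ x + B @ u) - w        # means of the r_i
    sig = np.diag(C @ Q @ C.T)          # variances of the r_i
    z = t * mu + t**2 / 2 * sig
    return np.log(np.exp(z).sum()) / t  # (1/t) * log-sum-exp

print(neg_Eh_upper(np.zeros(4), np.array([50.0, 0.0])))
```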
Figure 4 plots the results of 500 simulated trajectories for the double integrator system using the proposed ED controller,
Fig. 4: Simulation results for double integrator over \(500\) trials. **(Top left)**: Planar \((x,y)\) trajectories for the approximated ED controller, with the safe set (a unit square) plotted in green. **(Top right):** Planar \((x,y)\) trajectories for a CED controller. **(Bottom left):** The \(h(\mathbf{x}_{k})\) for both controllers, with the max and min values shaded. **(Bottom right):** Percent of trajectories that have remained safe over time. We also plot our (conservative) bound (14) on the unsafe probability \(P_{u}\).
Fig. 3: **(Top Left)** System diagram of the inverted pendulum. **(Top Right)** 500 one-second-long example trajectories starting at \(\mathbf{x}_{0}=0\). **(Bottom Left)** Monte Carlo estimates of \(P_{u}\) for \(\gamma=0\) using 500 one-second-long trials for each initial condition represented by a black dot. **(Bottom Right)** Our (conservative) theoretical bounds on \(P_{u}\) from Theorem 5.
and the certainty-equivalent CED controller that neglects the presence of process noise. Both controllers use the nominal controller \(\mathbf{k}_{\text{nom}}(\mathbf{x})=[50,0]\), which seeks to drive the system into the right wall. All trajectories start from the origin. We note that the proposed controller is indeed more conservative than the CED controller, yielding both fewer and smaller violations of the safe set. In the bottom right, we also plot our bound as a function of the time horizon, which we note is quite conservative compared to our Monte Carlo estimate of the safety probability, motivating future work.
### _Quadruped_
Finally, we consider the problem of controlling a simulated quadrupedal robot locomoting along a narrow path. The simulation is based on a Unitree A1 robot, as shown in Figure 1, which has 18 degrees of freedom and 12 actuators. An ID-QP controller designed using concepts in [14] and implemented at 1 kHz is used to track stable walking gaits with variable planar velocities and angular rate using the motion primitive framework presented in [32]. We simulate the entire quadruped's dynamics at 1 kHz, but follow a similar methodology to [24] and consider the following simplified discrete-time single-integrator system for DTCBF-based control:
\[\mathbf{x}_{k+1}=\mathbf{x}_{k}+\Delta t\begin{bmatrix}\cos\theta_{k}&-\sin \theta_{k}&0\\ \sin\theta_{k}&\cos\theta_{k}&0\\ 0&0&1\end{bmatrix}\begin{bmatrix}v_{k}^{x}\\ v_{k}^{y}\\ \dot{\theta}_{k}\end{bmatrix}+\mathbf{d}_{k}, \tag{52}\]
where \(\mathbf{x}_{k}=\begin{bmatrix}x,&y,&\theta\end{bmatrix}^{\top}\). In order to represent the error caused by uncertain terrain, zero-mean Gaussian disturbances are added to the quadruped's \((x,y)\) body position and velocity with variances of \(2.25\times 10^{-6}\) and \(0.01\), respectively. This random noise, along with the dynamics mismatch between the full-order quadrupedal dynamics and (52), is modeled as an i.i.d. random process \(\mathbf{d}_{k}\).
The quadruped is commanded to stand and then traverse a 7 meter path that is 1 meter wide, with the safe set \(\mathcal{C}=\{\mathbf{x}\in\mathbb{R}^{n}\mid 0.5^{2}-y^{2}\geq 0\}\). For this simulation, three controllers are compared: a simple nominal controller \(\mathbf{k}_{\text{nom}}(\mathbf{x})=\begin{bmatrix}0.2,&0,&-\theta\end{bmatrix}^ {\top}\) with no understanding of safety, the DTCBF-OP controller with \(\alpha=0.99\), and our proposed JED controller with \(\alpha=0.99\) and \(c_{\text{I}}=\psi\), using the mean and covariance estimates \(\mathbb{E}[\mathbf{d}_{k}]\approx\begin{bmatrix}-0.0132,&-0.0034,&-0.0002 \end{bmatrix}^{\top}\) and \(\text{tr}(\text{cov}(\mathbf{d}_{k}))\approx\psi=0.000548\), which were generated using 15 minutes of walking data controlled by \(\mathbf{k}_{\text{nom}}\).
The results of 50 trials for each controller can be seen in Figure 1. As expected, \(\mathbf{k}_{\text{nom}}\) generated the largest safety violations and JED the smallest and fewest safety violations.
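Because the barrier depends only on \(y\), a JED-style filter for (52) reduces to clipping the predicted lateral position. A minimal sketch under a small-heading-angle assumption (\(\cos\theta\approx 1\)); the planner step \(\Delta t=0.01\) s is our assumption, as the loop rate of the DTCBF filter is not stated:

```python
import numpy as np

# JED-style lateral filter for h(x) = 0.5^2 - y^2, assuming theta ~ 0 so
# that y_{k+1} ~ y + dt * v^y; alpha and c_I = psi follow the text.
dt, alpha, c_I = 0.01, 0.99, 0.000548

def filter_lateral(y, vy_nom):
    # enforce h(y + dt*v^y) - c_I >= alpha * h(y) with minimal deviation
    rhs = 0.25 - c_I - alpha * (0.25 - y**2)
    bound = np.sqrt(max(rhs, 0.0))
    y_pred = y + dt * vy_nom
    return (np.clip(y_pred, -bound, bound) - y) / dt

print(filter_lateral(0.45, 0.5))   # near the path edge: command pulled back
```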
## VI Conclusion
In this work, we developed a bound for the finite-time safety of stochastic discrete-time systems using discrete-time control barrier functions. Additionally, we presented a method for practically implementing convex optimization-based controllers which satisfy this bound by accounting for, or analyzing, the effect of Jensen's inequality. We presented several examples which demonstrate the efficacy of our bound and of our proposed ED and JED controllers.
This paper offers a large variety of directions for future work. In particular, in our practical examples, we find the safety bound presented here is often quite conservative in practice. One way forward would be to find other supermartingale transformations of the process \(h(\mathbf{x}_{k})\) (perhaps programmatically, as in [30]) that can yield tighter bounds than those in Theorem 5. Another potential avenue may consider alternative martingale inequalities to Ville's inequality used in this work. Another important open question is how to incorporate state uncertainty into our framework. This would allow us to reason about the safety of CBF-based controllers that operate in tandem with state estimators such as Kalman filters or SLAM pipelines. Similarly, our methods may have interesting applications in handling the dynamics errors introduced in sampled-data control, which can perhaps be modeled as a random variable or learned using a distribution-generating framework such as state-dependent Gaussian processes or Bayesian neural networks. Finally, we assume that the disturbance distribution \(\mathcal{D}\) is known exactly, _a priori_; it would be interesting to consider a "distributionally robust" variant of the stochastic barrier condition (40) that can provide safety guarantees for a class of disturbance distributions.
## Acknowledgments
The authors would like to thank Alexander De Capone and Victor Dorobantu for their invaluable discussion and Joel Tropp for his course on Probability. The authors would also like to thank Wyatt Ubellacker for generously providing his fantastic quadruped simulation environment.
### _Lemmas for Theorem 5_
The following lemmas are used to prove optimality of the bound in Theorem 5 Cases 1 and 2. These lemmas were originally stated without proof in [21].
**Lemma 2**.: _For \(M\in\mathbb{R}_{>0}\), \(\gamma,\varphi\in\mathbb{R}_{\geq 0}\), \(h(\mathbf{x}_{0})\in[-\gamma,M]\), and \(K\in\mathbb{N}_{\geq 1}\), the function \(\Psi_{1}:(1,\infty)\rightarrow\mathbb{R}\) defined as:_
\[\Psi_{1}(\theta)=\frac{M-h(\mathbf{x}_{0})+\frac{\varphi\theta}{\theta-1} \left(\theta^{K}-1\right)}{(M+\gamma)\theta^{K}}, \tag{53}\]
_is monotonically decreasing._
Proof: The geometric series identity yields:
\[\Psi_{1}(\theta) =\frac{M-h(\mathbf{x}_{0})}{M+\gamma}\theta^{-K}+\frac{\varphi}{(M+ \gamma)}\sum_{i=1}^{K}\theta^{i-K}, \tag{54}\] \[\frac{d\Psi_{1}}{d\theta} =-\frac{M-h(\mathbf{x}_{0})}{M+\gamma}K\theta^{-K-1}-\varphi\sum_ {i=1}^{K}\frac{(K-i)\theta^{i-K-1}}{M+\gamma},\] \[\leq 0, \tag{55}\]
for all \(\theta\in(1,\infty)\).
**Lemma 3**.: _For \(M\in\mathbb{R}_{>0},\gamma,\varphi\in\mathbb{R}_{\geq 0}\), \(h(\mathbf{x}_{0})\in[-\gamma,M]\), and \(K\in\mathbb{N}_{\geq 1}\), the function \(\Psi_{2}:(1,\infty)\rightarrow\mathbb{R}\) defined as:_
\[\Psi_{2}(\theta)=1-\frac{h(\mathbf{x}_{0})+\gamma}{M+\gamma+\frac{\varphi \theta}{\theta-1}\left(\theta^{K}-1\right)}, \tag{56}\]
_is monotonically increasing._
Proof: The geometric series identity yields:
\[\Psi_{2}(\theta) =1-\frac{h(\mathbf{x}_{0})+\gamma}{M+\gamma+\varphi\sum_{i=1}^{K }\theta^{i}}, \tag{57}\] \[\frac{d\Psi_{2}}{d\theta} =\frac{\left(h(\mathbf{x}_{0})+\gamma\right)\left(\varphi\sum_{ i=1}^{K}i\theta^{i-1}\right)}{\left(M+\gamma+\varphi\sum_{i=1}^{K}\theta^{i} \right)^{2}},\] (58) \[\geq 0, \tag{59}\]
for all \(\theta\in(1,\infty)\).
### _Lemma 1_
Here we present a proof of Lemma 1.
Proof: Consider the convex, twice-continuously differentiable function \(\eta:\mathbb{R}^{n}\rightarrow\mathbb{R}\) defined as \(\eta=-h\). Taylor's theorem with the mean-value form of the remainder implies that for all \(\mathbf{y},\mathbf{z}\in\mathbb{R}^{n}\), there exists an \(\omega\in[0,1]\) such that:
\[\eta(\mathbf{z})=\eta(\mathbf{y})+\nabla\eta(\mathbf{y})^{\top}\mathbf{e}+ \frac{1}{2}\mathbf{e}^{\top}\nabla^{2}\eta(\mathbf{c})\mathbf{e}, \tag{60}\]
where \(\mathbf{e}\triangleq\mathbf{z}-\mathbf{y}\), \(\mathbf{c}\triangleq\omega\mathbf{z}+(1-\omega)\mathbf{y}\), and \(\nabla^{2}\eta(\mathbf{c})\) is the Hessian of \(\eta\) evaluated at \(\mathbf{c}\). We then have that:
\[\eta(\mathbf{z}) =\eta(\mathbf{y})+\nabla\eta(\mathbf{y})^{\top}\mathbf{e}+\frac{ 1}{2}\text{tr}\left(\nabla^{2}\eta(\mathbf{c})\mathbf{e}\mathbf{e}^{\top} \right), \tag{61}\] \[\leq\eta(\mathbf{y})+\nabla\eta(\mathbf{y})^{\top}\mathbf{e}+ \frac{1}{2}\|\nabla^{2}\eta(\mathbf{c})\|_{2}\text{tr}\left(\mathbf{e}\mathbf{ e}^{\top}\right),\] (62) \[\leq\eta(\mathbf{y})+\nabla\eta(\mathbf{y})^{\top}\mathbf{e}+ \frac{\lambda_{\text{max}}}{2}\text{tr}\left(\mathbf{e}\mathbf{e}^{\top} \right), \tag{63}\]
where the first inequality is a property of the trace operator for positive semi-definite matrices [28] (and \(\nabla^{2}\eta(\mathbf{c})\) is positive semi-definite as \(\eta\) is convex), and the second inequality follows by our definition of \(\lambda_{\text{max}}\). Let \(\mathbf{x}\) be a random variable taking values in \(\mathbb{R}^{n}\) with probability density function \(p:\mathbb{R}^{n}\rightarrow\mathbb{R}_{\geq 0}\), and let \(\boldsymbol{\mu}\triangleq\mathbb{E}[\mathbf{x}]\). We then have that:
\[\mathbb{E}[\eta(\mathbf{x})]-\eta(\mathbb{E}[\mathbf{x}])=\int_{ \mathbb{R}^{n}}(\eta(\mathbf{x})-\eta(\boldsymbol{\mu}))p(\mathbf{x})d \mathbf{x}, \tag{64}\] \[\leq\int_{\mathbb{R}^{n}}\left(\nabla\eta(\boldsymbol{\mu})^{\top} \mathbf{e}+\frac{\lambda_{\text{max}}}{2}\text{tr}\left(\mathbf{e}\mathbf{e}^ {\top}\right)\right)p(\mathbf{x})d\mathbf{x},\] (65) \[=\frac{\lambda_{\text{max}}}{2}\text{tr}(\text{cov}(\mathbf{x})), \tag{66}\]
where \(\mathbf{e}=\mathbf{x}-\boldsymbol{\mu}\). Replacing \(\eta\) with \(-h\) yields:
\[\mathbb{E}[h(\mathbf{x})]\geq h(\mathbb{E}[\mathbf{x}])-\frac{\lambda_{\text {max}}}{2}\text{tr}(\text{cov}(\mathbf{x})). \tag{67}\]
### _Derivation of Convex Approximation for Polytopic Barrier_
Here we derive a conservative approximation of the constraint \(\mathbb{E}[h(\mathbf{x}_{k+1})]\geq\alpha h(\mathbf{x}_{k})\) for barriers of the form \(h(\mathbf{x})=-\max(\mathbf{C}\mathbf{x}-\mathbf{w})\) and systems with linear-Gaussian dynamics (51). The key idea is to use the _log-sum-exp_ function as a smooth, convex upper bound of the pointwise maximum in the barrier function, which yields a closed-form expression for Gaussian random variables.
In particular, if \(L\) is the _log-sum-exp_ function, for any \(t>0\), \(\max(\mathbf{x})\leq\frac{1}{t}L(t\mathbf{x})\triangleq\frac{1}{t}\log(\sum_{ i=1}^{n}\exp(tx_{i}))\)[13, Chapter 3]. We can use this to upper bound the expectation of \(-h\),
\[-\mathbb{E}[h(\mathbf{x}_{k+1})] =\mathbb{E}\left[\max(\mathbf{C}\mathbf{x}_{k+1}-\mathbf{w})\right] \tag{68}\] \[\leq\frac{1}{t}\mathbb{E}\left[L\Big{(}t(\mathbf{C}\mathbf{x}_{k +1}-\mathbf{w})\Big{)}\right]\] (69) \[\leq\frac{1}{t}\log\left(\sum_{i=1}^{n_{c}}\mathbb{E}\left[\exp(t \mathbf{r}_{i})\right]\right), \tag{70}\]
for \(\mathbf{r}_{i}\triangleq\mathbf{c}_{i}^{T}\mathbf{x}-w_{i}\), where \(\mathbf{c}_{i}\) is the \(i^{\text{th}}\) row of \(\mathbf{C}\), \(w_{i}\) is the \(i^{\text{th}}\) entry of \(\mathbf{w}\), and the last inequality follows from Jensen's inequality and the concavity of the natural logarithm. Further, since we have linear-Gaussian dynamics, it is easy to show that \(\mathbf{r}_{i}\sim\mathcal{N}(\mathbf{c}_{i}^{T}(\mathbf{A}\mathbf{x}_{k}+ \mathbf{B}\mathbf{u}_{k})-w_{i},\mathbf{c}_{i}^{T}\mathbf{Q}\mathbf{c}_{i})\). The expression \(\mathbb{E}[\exp(t\mathbf{X})]\) is the "moment-generating function" of a random variable \(\mathbf{X}\), and for a Gaussian r.v. \(\mathbf{X}\sim\mathcal{N}(\mu,\sigma^{2})\), it has a closed form, \(\mathbb{E}[\exp(t\mathbf{X})]=\exp(t\mu+\frac{t^{2}}{2}\sigma^{2})\)[34, Chapter 6].
Thus, for \(\boldsymbol{\mu}\triangleq\mathbf{C}(\mathbf{A}\mathbf{x}_{k}+\mathbf{B} \mathbf{u}_{k})-\mathbf{w}\) and \(\boldsymbol{\sigma}\triangleq\operatorname{diag}(\mathbf{C}\mathbf{Q}\mathbf{C}^{T})\), where \(\operatorname{diag}(\cdot)\) extracts the diagonal of a square matrix,
\[-\mathbb{E}[h(\mathbf{x}_{k+1})]\leq\frac{1}{t}L\left(t\boldsymbol{\mu}+\frac{t ^{2}}{2}\boldsymbol{\sigma}\right), \tag{71}\]
which implies that imposing the constraint \(\frac{1}{t}L(t\boldsymbol{\mu}+\frac{t^{2}}{2}\boldsymbol{\sigma})\leq-\alpha h (\mathbf{x}_{k})\) ensures that the stochastic barrier condition (40) is satisfied. Finally, recognizing that our constraint is a perspective transform of \(L(\boldsymbol{\mu}+\frac{t}{2}\boldsymbol{\sigma})\) by the scalar \(\frac{1}{t}\), which preserves convexity [13, Chapter 3], our constraint is indeed convex. Thus an optimization-based controller such as ED can be used online to select control actions, and can jointly optimize over \(\mathbf{u}_{k},t\) to obtain the tightest bound on the expectation possible.
|
2304.03581 | Deformation quantization and intrinsic noncommutative differential
geometry | We provide an intrinsic formulation of the noncommutative differential
geometry developed earlier by Chaichian, Tureanu, R. B. Zhang and the second
author. This yields geometric definitions of covariant derivatives of
noncommutative metrics and curvatures, as well as the noncommutative version of
the first and the second Bianchi identities. Moreover, if a noncommutative
metric and chiral coefficients satisfy certain conditions which hold
automatically for quantum fluctuations given by isometric embedding, we prove
that the two noncommutative Ricci curvatures are essentially equivalent. For
(pseudo-) Riemannian metrics given by certain type of spherically symmetric
isometric embedding, we compute their quantum fluctuations and curvatures. We
find that they have closed forms, which indicates that the quantization of
gravity is renormalizable in this case. Finally, we define quasi-connections
and their curvatures with respect to general associative star products
constructed by Kontsevich on Poisson manifolds. As these star products are not
compatible with the Leibniz rule, we can only prove the first Bianchi identity. | Haoyuan Gao, Xiao Zhang | 2023-04-07T10:47:16Z | http://arxiv.org/abs/2304.03581v2 | # Deformation quantization and intrinsic noncommutative differential geometry
###### Abstract.
We provide an intrinsic formulation of the noncommutative differential geometry developed earlier by Chaichian, Tureanu, R. B. Zhang and the second author. This yields geometric definitions of covariant derivatives of noncommutative metrics and curvatures, as well as the noncommutative version of the first and the second Bianchi identities. Moreover, if a noncommutative metric and chiral coefficients satisfy certain conditions which hold automatically for quantum fluctuations given by isometric embedding, we prove that the two noncommutative Ricci curvatures are essentially equivalent. Finally, we show that the quantum fluctuations and their curvatures have closed forms if (pseudo-) Riemannian metrics are given by a certain type of spherically symmetric isometric embedding. Hence the quantization of gravity is renormalizable in this case.
## 1. Introduction
Gravity is essentially a theory of spacetime geometry. When quantum effects of gravity are taken into account, the Heisenberg uncertainty relations would result in noncommutativity of spacetime variables at sufficiently small distances. In 1947, Snyder and C.N. Yang made the first attempts to quantize spacetimes [14, 18], which are referred to as Snyder's quantum spacetimes and Yang's quantum phase spaces [9, 10]. In their approach, spacetime variables were represented by Hermitian operators with discrete eigenvalues. This idea of encoding the geometry of a space by its algebras of functions was realized prominently by Connes, who established noncommutative geometry using spectral triples [6]. The main ingredients are the noncommutative analog of the Dirac operator acting on a representation space of the algebra, the spectrum of this generalized Dirac operator, and the cyclic (co)homology. They encode the noncommutative manifold structure, the noncommutative metric and the noncommutative curvature, respectively. An overview of its applications to physics can be found in [5].
However, metric and curvature information in an infinitesimal neighborhood of the manifold is still lacking, as it is not known what it means to take derivatives when coordinate variables are operators. Alternatively, deformation quantization deforms the commutative algebras of functions, based on pointwise commutative multiplication, into noncommutative algebras of functions based on certain noncommutative products such as the Moyal products, while keeping spacetime variables as ordinary functions, c.f. [3]. In recent years, there have been intensive research activities on noncommutative gravity in the framework of deformation quantization, c.f. [13, 1, 2] and references therein, where general relativity is adapted to the noncommutative setting in an intuitive way, as pointed out in [15].
In [4, 16, 17], a mathematically rigorous and complete theory of noncommutative differential geometry was developed on a coordinate chart \(U\) of a (pseudo-) Riemannian manifold. The idea is to embed \(U\) isometrically into a flat (pseudo-) Euclidean space and use the isometric embedding to construct the noncommutative analogues of the metric, connection and curvature. These yield the noncommutative Einstein field equations. It was found that the deformation quantization of the Schwarzschild metric does not depend on time and yields an unevaporated quantum black hole [16], and that the quantum fluctuation of the plane-fronted gravitational wave is an exact solution of the noncommutative vacuum Einstein field equations [17]. We refer to [11, 12] for a review of the general existence of isometric embeddings and their applications in physics.
The paper is organized as follows. In Section 2, we state the main theorem. In Section 3, we study the intrinsic formulation of covariant derivatives of noncommutative metrics and curvatures from the geometric point of view. This yields the noncommutative version of the first and the second Bianchi identities. In Section 4, we show that the two noncommutative Ricci curvatures are essentially equivalent if a noncommutative metric and chiral coefficients satisfy certain conditions. These conditions hold automatically for quantum fluctuations given by isometric embedding. In Section 5, we show that the quantum fluctuations and their curvatures have closed forms coming from Moyal products of trigonometric functions if (pseudo-) Riemannian metrics are given by a certain type of spherically symmetric isometric embeddings.
## 2. Main Theorem
In this section, we provide some basic background on noncommutative differential geometry and state the main theorem proved in the paper. Recall the intrinsic setting of noncommutative differential geometry proposed by the second author [19], which does not use an isometric embedding. Let \(M\) be an \(n\)-dimensional differentiable manifold and \(U\subset M\) be a coordinate chart equipped with natural coordinates \((x^{1},\cdots,x^{n})\). Let \(\hbar\) be the Planck constant viewed as an indeterminate. Denote by \(\mathbb{R}[[\hbar]]\) the ring of formal power series in \(\hbar\) with real coefficients, and by \(\mathcal{A}_{U}\) the set of formal power series in \(\hbar\) with coefficients being real smooth functions on \(U\)
\[\mathcal{A}_{U}=C^{\infty}(U)[[\hbar]]=\Big{\{}\sum_{k=0}^{\infty}f_{k}\hbar^ {k}\Big{|}f_{k}\in C^{\infty}(U)\Big{\}}.\]
\(\mathcal{A}_{U}\) is an \(\mathbb{R}[[\hbar]]\)-module.
Throughout the paper, all the indices \(i\), \(j\), \(k\), \(l\), \(\cdots\) range from \(1\) to \(n\), and \(q\in\mathbb{N}_{0}\). We also use the Einstein summation convention. Given two smooth functions \(u\), \(v\) on \(U\), we denote by \(uv\) their usual pointwise product. For any skew-symmetric \(n\times n\) real constant matrix \((\theta^{ij})\) on \(U\), the Moyal product of \(u\) and \(v\) with respect to \((\theta^{ij})\) is defined as
\[(u*v)(x)=\Bigl{[}\exp(\hbar\theta^{ij}\partial_{i}\partial_{j}^{\prime})u(x)v( x^{\prime})\Bigr{]}_{x=x^{\prime}}, \tag{2.1}\]
where \(x\) and \(x^{\prime}\) denote the same coordinate system and \(\partial_{i}=\frac{\partial}{\partial x^{i}}\), \(\partial_{i}^{\prime}=\frac{\partial}{\partial(x^{\prime})^{i}}\). It is clear that
\[u*v\in\mathcal{A}_{U}.\]
Extending by \(\mathbb{R}[[\hbar]]\)-bilinearity, the Moyal product provides an associative \(\mathbb{R}[[\hbar]]\)-bilinear product on \(\mathcal{A}_{U}\), c.f. [8]. The Moyal algebra is \(\mathcal{A}_{U}\) equipped with the Moyal product, which is a formal deformation of the algebra of real smooth functions on \(U\).
Extending \(\partial_{i}\) to \(\mathcal{A}_{U}\) by \(\mathbb{R}[[\hbar]]\)-linearity, the Moyal product satisfies
1. Noncommutativity: \([x^{i},x^{j}]=x^{i}*x^{j}-x^{j}*x^{i}=2\hbar\theta^{ij}\);
2. Leibniz rule: \(\partial_{i}(u*v)=(\partial_{i}u)*v+u*(\partial_{i}v)\), for \(u,v\in\mathcal{A}_{U}\).
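To make the definition concrete, here is a small symbolic sketch in Python/SymPy (our own illustration, not from the paper) that implements the Moyal product (2.1) truncated at a finite order in \(\hbar\) for \(n=2\), with the single nonzero entry \(\theta^{12}=\lambda\), and verifies the noncommutativity relation and the Leibniz rule on sample functions.

```python
import sympy as sp

hbar, lam = sp.symbols('hbar lambda')
x1, x2 = sp.symbols('x1 x2')
X = (x1, x2)
theta = ((0, lam), (-lam, 0))  # theta^{12} = lambda

def moyal(u, v, order):
    """Moyal product (2.1) truncated at hbar**order, for n = 2."""
    result = sp.Integer(0)
    for q in range(order + 1):
        # expand (theta^{ij} d_i d'_j)^q into all index strings of length q
        terms = [(sp.Integer(1), u, v)]
        for _ in range(q):
            terms = [(c * theta[i][j], sp.diff(a, X[i]), sp.diff(b, X[j]))
                     for c, a, b in terms
                     for i in range(2) for j in range(2) if theta[i][j] != 0]
        result += hbar**q / sp.factorial(q) * sum(c * a * b for c, a, b in terms)
    return sp.expand(result)

# Noncommutativity: [x^1, x^2] = 2*hbar*theta^{12}
print(sp.simplify(moyal(x1, x2, 2) - moyal(x2, x1, 2)))  # 2*hbar*lambda

# Leibniz rule for d_1, checked on sample functions up to order hbar^2
u, v = sp.sin(x1) * sp.sin(x2), sp.cos(x1) * sp.cos(x2)
lhs = sp.diff(moyal(u, v, 2), x1)
rhs = moyal(sp.diff(u, x1), v, 2) + moyal(u, sp.diff(v, x1), 2)
print(sp.simplify(lhs - rhs))  # 0
```

Since \(\theta^{ij}\) is constant, both properties hold order by order in \(\hbar\), which is why truncated checks like these succeed exactly.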
Denote \(E_{i}=\tilde{E}_{i}=\partial_{i}\), \(1\leq i\leq n\). The noncommutative left (resp. right) tangent bundle \(\mathcal{T}_{U}\) (resp. \(\tilde{\mathcal{T}}_{U}\)) on \(U\) is the free left (resp. right) \(\mathcal{A}_{U}\)-module with basis \(\{E_{1},\cdots,E_{n}\}\) (resp. \(\{\tilde{E}_{1},\cdots,\tilde{E}_{n}\}\)), i.e.,
\[\mathcal{T}_{U} =\Bigl{\{}a^{i}*E_{i}\,\Big{|}\,a^{i}\in\mathcal{A}_{U},\,a^{i}*E _{i}=0\Longleftrightarrow a^{i}=0.\Bigr{\}},\] \[\tilde{\mathcal{T}}_{U} =\Bigl{\{}\tilde{E}_{i}*a^{i}\,\Big{|}\,a^{i}\in\mathcal{A}_{U}, \,\tilde{E}_{i}*a^{i}=0\Longleftrightarrow a^{i}=0.\Bigr{\}}.\]
An element of \(\mathcal{T}_{U}\) (resp. \(\tilde{\mathcal{T}}_{U}\)) is called a left (resp. right) vector field.
A noncommutative metric \(g\) on \(U\) is a homomorphism of two-sided \(\mathcal{A}_{U}\)-modules
\[g:\mathcal{T}_{U}\otimes_{\mathbb{R}[[\hbar]]}\tilde{\mathcal{T}}_{U} \longrightarrow\mathcal{A}_{U},\]
such that the matrix
\[(g_{ij})\in\mathcal{A}_{U}^{n\times n},\quad g_{ij}=g(E_{i},\tilde{E}_{j})\]
is invertible, i.e., there exists a unique matrix \((g^{ij})\in\mathcal{A}_{U}^{n\times n}\) such that
\[g_{ik}*g^{kj}=g^{jk}*g_{ki}=\delta_{i}^{j}.\]
Let \((g_{l}^{ij})\) be the left inverse of \((g_{ij})\) and \((g_{r}^{ij})\) be the right inverse of \((g_{ij})\). Since the Moyal product is associative,
\[g_{l}^{ij}=g_{l}^{ip}*\delta_{p}^{j}=g_{l}^{ip}*g_{pk}*g_{r}^{kj}=\delta_{k}^{ i}*g_{r}^{kj}=g_{r}^{ij}.\]
Therefore the left inverse and the right inverse coincide.
A noncommutative left connection \(\nabla\) is a set of operators \(\{\nabla_{i}:=\nabla_{E_{i}}\}\) for \(1\leq i\leq n\), where each
\[\nabla_{i}:\mathcal{T}_{U}\longrightarrow\mathcal{T}_{U}\]
is called a noncommutative left covariant derivative and satisfies
1. \(\mathbb{R}[[\hbar]]\)-linearity: For \(a\), \(b\in\mathbb{R}[[\hbar]]\) and \(V,\)\(W\in\mathcal{T}_{U},\) \[\nabla_{i}(aV+bW)=a\nabla_{i}V+b\nabla_{i}W;\]
2. Leibniz rule: For \(f\in\mathcal{A}_{U}\) and \(V\in\mathcal{T}_{U},\) \[\nabla_{i}(f*V)=(\partial_{i}f)*V+f*\nabla_{i}V.\]
The noncommutative right connection \(\tilde{\nabla}=\{\tilde{\nabla}_{i}:=\tilde{\nabla}_{\tilde{E}_{i}}\}\) and the noncommutative right covariant derivatives \(\tilde{\nabla}_{i}\) can be defined in the same way. The left and right connections are uniquely determined by connection coefficients \(\Gamma_{ij}^{k}\) and \(\tilde{\Gamma}_{ij}^{k},\) which are elements of \(\mathcal{A}_{U}\)
\[\nabla_{i}E_{j}=\Gamma_{ij}^{k}*E_{k},\quad\tilde{\nabla}_{i}\tilde{E}_{j}= \tilde{E}_{k}*\tilde{\Gamma}_{ij}^{k}.\]
Denote
\[\Gamma_{ijk}=\Gamma_{ij}^{l}*g_{lk},\quad\tilde{\Gamma}_{ijk}=g_{kl}*\tilde{ \Gamma}_{ij}^{l}.\]
Inspired by the Levi-Civita connection of a (pseudo-) Riemannian metric, the second author introduced the canonical connection [19]. Given a noncommutative metric \(g\) and a set of elements \(\Upsilon_{ijk}\) of \(\mathcal{A}_{U}\) with
\[\Upsilon_{ijk}=\Upsilon_{jik},\]
which are referred to as the chiral coefficients. A noncommutative connection, which consists of a noncommutative left connection \(\nabla\) and a noncommutative right connection \(\tilde{\nabla}\), is canonical with respect to \(g\) and \(\Upsilon_{ijk}\) if it satisfies
1. Compatibility: \(\partial_{k}g_{ij}=g(\nabla_{k}E_{i},\tilde{E}_{j})+g(E_{i},\tilde{\nabla}_{k }\tilde{E}_{j})=\Gamma_{kij}+\tilde{\Gamma}_{kji};\)
2. Torsion free: \(\nabla_{i}E_{j}=\nabla_{j}E_{i},\)\(\tilde{\nabla}_{i}\tilde{E}_{j}=\tilde{\nabla}_{j}\tilde{E}_{i};\)
3. Chirality: \(\Gamma_{ijk}-\tilde{\Gamma}_{ijk}=\Upsilon_{ijk}.\)
The torsion free condition implies
\[\Gamma_{ij}^{k}=\Gamma_{ji}^{k},\quad\tilde{\Gamma}_{ij}^{k}=\tilde{\Gamma}_ {ji}^{k}.\]
It is straightforward that
\[\begin{split} 2\Gamma_{ijk}&=\partial_{i}g_{jk}+ \partial_{j}g_{ki}-\partial_{k}g_{ij}+\Upsilon_{ikj}+\Upsilon_{jik}-\Upsilon_{ kji}\\ &=\partial_{i}g_{jk}+\partial_{j}g_{ki}-\partial_{k}g_{ji}+ \Upsilon_{ijk}\\ &=\partial_{i}\Big{(}\frac{g_{jk}+g_{kj}}{2}\Big{)}+\partial_{j }\Big{(}\frac{g_{ki}+g_{ik}}{2}\Big{)}-\partial_{k}\Big{(}\frac{g_{ij}+g_{ji}} {2}\Big{)}+\Upsilon_{ijk},\end{split} \tag{2.2}\]
and
\[\begin{split} 2\tilde{\Gamma}_{ijk}&=\partial_{i}g_{jk}+ \partial_{j}g_{ki}-\partial_{k}g_{ij}+\Upsilon_{ikj}-\Upsilon_{jik}-\Upsilon_{ kji}\\ &=\partial_{i}g_{jk}+\partial_{j}g_{ki}-\partial_{k}g_{ji}- \Upsilon_{ijk}\\ &=\partial_{i}\Big{(}\frac{g_{jk}+g_{kj}}{2}\Big{)}+\partial_{j }\Big{(}\frac{g_{ki}+g_{ik}}{2}\Big{)}-\partial_{k}\Big{(}\frac{g_{ij}+g_{ji}} {2}\Big{)}-\Upsilon_{ijk}.\end{split} \tag{2.3}\]
In classical Riemannian geometry, the chiral coefficients vanish and \(\Gamma_{ijk}\) reduce to the Christoffel symbols.
For any \(f\in\mathcal{A}_{U}\), it is easy to verify
\[[E_{i},E_{j}]f=[\tilde{E}_{i},\tilde{E}_{j}]f=\partial_{i}\partial_{j}f- \partial_{j}\partial_{i}f=0.\]
Thus the left curvature operators \(\mathcal{R}_{E_{i}E_{j}}\) and the right curvature operators \(\tilde{\mathcal{R}}_{\tilde{E}_{i}\tilde{E}_{j}}\) can be defined as the following \(\mathcal{A}_{U}\)-linear operators
\[\mathcal{R}_{E_{i}E_{j}} =[\nabla_{i},\nabla_{j}]:\ \mathcal{T}_{U}\longrightarrow\mathcal{T}_{U},\] \[\tilde{\mathcal{R}}_{\tilde{E}_{i}\tilde{E}_{j}} =[\tilde{\nabla}_{i},\tilde{\nabla}_{j}]:\ \tilde{\mathcal{T}}_{U} \longrightarrow\tilde{\mathcal{T}}_{U}.\]
For the canonical connection, the left Riemannian curvatures \(R_{lkij}\) and right Riemannian curvatures \(\tilde{R}_{lkij}\) are defined as
\[R_{lkij}=g(\mathcal{R}_{E_{i}E_{j}}E_{k},\tilde{E}_{l}),\quad\tilde{R}_{lkij} =-g(E_{k},\tilde{R}_{\tilde{E}_{i}\tilde{E}_{j}}\tilde{E}_{l}).\]
They satisfy
\[R_{lkij}=-R_{lkji}=\tilde{R}_{lkij},\quad R_{lkij}\not\equiv-R_{klij}.\]
Therefore the left curvatures are sufficient for our purposes. There are two Ricci curvatures \(R_{kj}\) and \(\Theta_{il}\), obtained by contracting \(l\), \(i\) and \(k\), \(j\) in \(R_{lkij}\) respectively
\[R_{kj}= g(\mathcal{R}_{E_{i}E_{j}}E_{k},\tilde{E}_{l})*g^{li}=R_{lkij}*g^{li},\] \[\Theta_{il}= g^{jk}*g(\mathcal{R}_{E_{i}E_{j}}E_{k},\tilde{E}_{l})=g^{jk}*R_{ lkij}.\]
Raising the index at \(k\) and \(l\) respectively, we have Ricci curvatures
\[R_{j}^{p}= g^{pk}*g(\mathcal{R}_{E_{i}E_{j}}E_{k},\tilde{E}_{l})*g^{li}=g^{pk}* R_{lkij}*g^{li},\] \[\Theta_{i}^{p}= g^{jk}*g(\mathcal{R}_{E_{i}E_{j}}E_{k},\tilde{E}_{l})*g^{lp}=g^{jk}* R_{lkij}*g^{lp}.\]
The two Ricci curvatures \(R_{j}^{p}\) and \(\Theta_{i}^{p}\) are not equal to each other in the noncommutative case, but their traces coincide and yield the same scalar curvature
\[R=R_{j}^{j}=\Theta_{i}^{i}.\]
As elements of \(\mathcal{A}_{U}\), there are the following power series expansions
\[g_{ij} =\sum_{q=0}^{\infty}g_{ij}[q]\hbar^{q},\quad g_{ij}[q]\in C^{\infty} (U), \tag{2.4}\] \[\Upsilon_{ijk} =\sum_{q=0}^{\infty}\Upsilon_{ijk}[q]\hbar^{q},\quad\Upsilon_{ijk} [q]\in C^{\infty}(U),\] (2.5) \[R_{lkij} =\sum_{q=0}^{\infty}R_{lkij}[q]\hbar^{q},\quad R_{lkij}[q]\in C^{ \infty}(U),\] (2.6) \[R_{ij} =\sum_{q=0}^{\infty}R_{ij}[q]\hbar^{q},\quad R_{ij}[q]\in C^{ \infty}(U),\] (2.7) \[\Theta_{ij} =\sum_{q=0}^{\infty}\Theta_{ij}[q]\hbar^{q},\quad\Theta_{ij}[q] \in C^{\infty}(U),\] (2.8) \[R_{j}^{i} =\sum_{q=0}^{\infty}R_{j}^{i}[q]\hbar^{q},\quad R_{j}^{i}[q]\in C ^{\infty}(U),\] (2.9) \[\Theta_{j}^{i} =\sum_{q=0}^{\infty}\Theta_{j}^{i}[q]\hbar^{q},\quad\Theta_{j}^{ i}[q]\in C^{\infty}(U). \tag{2.10}\]
In this paper, we prove the following theorem.
**Theorem 2.1**.: _Let \(M\) be an \(n\)-dimensional smooth manifold and \(U\subset M\) a coordinate chart. Let \(\nabla\), \(\tilde{\nabla}\) be the canonical connection with respect to noncommutative metric \(g\) and chiral coefficients \(\Upsilon_{ijk}\) on \(U\). If \(g_{ij}\) satisfy_
\[g_{ij}[2q]=g_{ji}[2q],\quad g_{ij}[2q+1]=-g_{ji}[2q+1], \tag{2.11}\]
_and \(\Upsilon_{ijk}\) satisfy_
\[\Upsilon_{ijk}[2q]=0, \tag{2.12}\]
_then two Ricci curvatures are equivalent in the sense that_
\[R_{ij}[2q]=\Theta_{ji}[2q],\quad R_{ij}[2q+1]=-\Theta_{ji}[2q+1] \tag{2.13}\]
_and_
\[R_{j}^{i}[2q]=\Theta_{j}^{i}[2q],\quad R_{j}^{i}[2q+1]=-\Theta_{j}^{i}[2q+1]. \tag{2.14}\]
In particular, if noncommutative metric and chiral coefficients are given by an isometric embedding, then (2.11), (2.12) hold and the theorem follows.
Finally, we would like to point out that, in Poisson geometry, the Moyal product is a deformation quantization of the constant Poisson structure
\[\pi=\frac{1}{2}\theta^{ij}\partial_{i}\wedge\partial_{j}\]
for constant skew-symmetric matrix \((\theta^{ij})\). If \(\theta^{ij}\) are smooth functions, \(\pi\) still gives a Poisson structure if its Schouten-Nijenhuis bracket vanishes,
\[[\pi,\pi]_{S}=0.\]
However, the corresponding Moyal product is then not associative. In his pioneering work, Kontsevich proved that there always exists an associative noncommutative star product which provides a deformation quantization for any Poisson structure [7]. Unfortunately, this star product does not satisfy the Leibniz rule. This indicates that our theory of noncommutative differential geometry depends on the choice of coordinate systems in \(U\). As coordinate systems correspond to observers, this fits Bohr's view that evidence obtained under different experimental conditions cannot be comprehended within a single picture, but must be regarded as complementary in the sense that only the totality of the phenomena exhausts the possible information about the objects.
## 3. Curvature operators and Bianchi identities
In this section, we study the covariant derivatives of noncommutative metrics and curvatures from the geometric point of view. This yields the noncommutative version of the first and the second Bianchi identities.
**Proposition 3.1**.: _Let \(M\) be an \(n\)-dimensional differentiable manifold and \(U\subset M\) be a coordinate chart equipped with natural coordinates \((x^{1},\cdots,x^{n})\). Let \(g\) be a homomorphism of two-sided \(\mathcal{A}_{U}\)-modules given by (2.4) where \((g_{ij}[0])\) is not necessarily symmetric. If \((g_{ij}[0])\) is invertible on \(U\) with the inverse matrix \((g^{ij}[0])\), then (2.4) gives a noncommutative metric \(g\) on \(U\)._
_Proof:_ For any smooth functions \(u(x)\), \(v(x)\) over \(U\), denote
\[\mu_{q}(u,v)(x)=\frac{1}{q!}\Big{[}(\theta^{ij}\partial_{i}\partial^{\prime}_ {j})^{q}u(x)v(x^{\prime})\Big{]}_{x=x^{\prime}}. \tag{3.1}\]
Let \(g^{ij}\) have the power series expansions
\[g^{ij}=\sum_{q=0}^{\infty}g^{ij}[q]\hbar^{q}\in\mathcal{A}_{U},\quad g^{ij}[q ]\in C^{\infty}(U). \tag{3.2}\]
By viewing \(g^{ij}\) as the right inverse, we obtain the recursive formula for \(q\in\mathbb{N}\)
\[\begin{split} g^{ij}[q]=&-\sum_{r=1}^{q}g^{ik}[0]g_ {kl}[r]g^{lj}[q-r]\\ &-\sum_{r=1}^{q}\sum_{s=0}^{q-r}g^{ik}[0]\mu_{r}\Big{(}g_{kl}[s],g^{lj}[q-r-s]\Big{)}.\end{split} \tag{3.3}\]
On the other hand, by viewing \(g^{ij}\) as the left inverse, we obtain
\[\begin{split} g^{ij}[q]=&-\sum_{r=1}^{q}g^{ik}[q-r]g_{ kl}[r]g^{lj}[0]\\ &-\sum_{r=1}^{q}\sum_{s=0}^{q-r}\mu_{r}\Big{(}g^{ik}[q-r-s],g_{kl}[s] \Big{)}g^{lj}[0].\end{split} \tag{3.4}\]
Thus the matrix \((g_{ij})\) is invertible in \(\mathcal{A}_{U}^{n\times n}\) if and only if the matrix \((g_{ij}[0](x))\) is invertible in \(\mathbb{R}^{n\times n}\) for any \(x\in U\). Therefore the proof of the proposition is complete. Q.E.D.
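The recursion (3.3) is directly implementable. The following SymPy sketch (our own illustration, under an arbitrarily chosen sample metric \(g_{ij}[0]=\operatorname{diag}(1,1+x_1^2x_2^2)\) with no \(\hbar\)-corrections and \(\theta^{12}=\lambda\)) computes the star-inverse order by order from the right-inverse recursion and then confirms the statement proved earlier that it is simultaneously a left inverse.

```python
import sympy as sp

hbar, lam = sp.symbols('hbar lambda')
x1, x2 = sp.symbols('x1 x2')
X, theta = (x1, x2), ((0, lam), (-lam, 0))

def mu(u, v, r):
    """mu_r(u,v) = (1/r!)[(theta^{ij} d_i d'_j)^r u(x) v(x')]_{x=x'}, cf. (3.1)."""
    terms = [(sp.Integer(1), u, v)]
    for _ in range(r):
        terms = [(c * theta[i][j], sp.diff(a, X[i]), sp.diff(b, X[j]))
                 for c, a, b in terms
                 for i in range(2) for j in range(2) if theta[i][j] != 0]
    return sp.expand(sum(c * a * b for c, a, b in terms) / sp.factorial(r))

# Classical sample metric with no hbar-corrections: g[0] = diag(1, 1 + x1^2 x2^2)
Q = 2
g = {q: sp.zeros(2, 2) for q in range(Q + 1)}
g[0] = sp.Matrix([[1, 0], [0, 1 + x1**2 * x2**2]])
g0inv = g[0].inv()

# Right-inverse recursion (3.3), solved order by order
ginv = {0: g0inv}
for q in range(1, Q + 1):
    m = sp.zeros(2, 2)
    for r in range(1, q + 1):
        m -= g0inv * g[r] * ginv[q - r]
        for s in range(q - r + 1):
            m -= g0inv * sp.Matrix(2, 2, lambda k, j: sum(
                mu(g[s][k, l], ginv[q - r - s][l, j], r) for l in range(2)))
    ginv[q] = m.applyfunc(sp.simplify)

# Check that the right inverse is also a left inverse: (g^{-1} * g)[q] = delta
for q in range(Q + 1):
    acc = sp.zeros(2, 2)
    for r in range(q + 1):
        for s in range(q - r + 1):
            acc += sp.Matrix(2, 2, lambda i, j: sum(
                mu(ginv[s][i, k], g[q - r - s][k, j], r) for k in range(2)))
    print(q, acc.applyfunc(sp.simplify))  # identity at q = 0, zero matrices after
```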
**Corollary 3.1**.: _For any (pseudo-) Riemannian metric \(g_{ij}[0]\) on \(U\), (2.4) provides a noncommutative metric \(g\) on \(U\), which is referred to as a quantum fluctuation of \(g_{ij}[0]\)._
In classical differential geometry, the cotangent bundle is the dual of the tangent bundle. Inspired by this, we can define the noncommutative cotangent bundles as the dual modules of the noncommutative tangent bundles. As the dual of a left (resp. right) \(\mathcal{A}_{U}\)-module is a right (resp. left) \(\mathcal{A}_{U}\)-module and the dual of a free module is also free, we may use the noncommutative metric \(g\) to induce bases of the cotangent bundles dual to \(E_{i}\) and \(\tilde{E}_{j}\) respectively, i.e., letting \(E^{i}\), \(\tilde{E}^{j}\) be the dual bases of \(\tilde{E}_{j}\), \(E_{i}\) respectively, we have
\[g(E^{i},\tilde{E}_{j})=g(E_{j},\tilde{E}^{i})=\delta_{j}^{i}.\]
**Definition 3.1**.: _The noncommutative left (resp. right) cotangent bundle \(\mathcal{T}_{U}^{*}\) (resp. \(\tilde{\mathcal{T}}_{U}^{*}\)) on \(U\) with respect to the noncommutative metric \(g\) is the free left (resp. right) \(\mathcal{A}_{U}\)-module with basis \(\{E^{1},\cdots,E^{n}\}\) (resp. \(\{\tilde{E}^{1},\cdots,\tilde{E}^{n}\}\))_
\[\mathcal{T}_{U}^{*}=\Big{\{}a_{i}*E^{i}\,\Big{|}\,a_{i}\in\mathcal{A}_{U},\,a _{i}*E^{i}=0\Longleftrightarrow a_{i}=0.\Big{\}},\]
_and_
\[\tilde{\mathcal{T}}_{U}^{*}=\Big{\{}\tilde{E}^{i}*a_{i}\,\Big{|}\,a_{i}\in \mathcal{A}_{U},\,\tilde{E}^{i}*a_{i}=0\Longleftrightarrow a_{i}=0.\Big{\}}.\]
The left (resp. right) cotangent bundle is the dual of the right (resp. left) tangent bundle. Analogous to the classical situation, the noncommutative metric \(g\) acts as an element of \(\tilde{\mathcal{T}}_{U}^{*}\otimes_{\mathcal{A}_{U}}\mathcal{T}_{U}^{*}\),
\[\tilde{E}^{i}\otimes g_{ij}*E^{j}=\tilde{E}^{i}*g_{ij}\otimes E^{j}. \tag{3.5}\]
The inverse matrix \((g^{ij})\) can be viewed as a homomorphism of two-sided modules
\[g^{-1}:\mathcal{T}_{U}^{*}\otimes_{\mathbb{R}[[\hbar]]}\tilde{\mathcal{T}}_{U}^{*}\longrightarrow\mathcal{A}_{U}\]
such that
\[g^{-1}(E^{i},\tilde{E}^{j})=g^{ij}.\]
Similarly, \(g^{-1}\) acts as an element of \(\tilde{\mathcal{T}}_{U}\otimes_{\mathcal{A}_{U}}\mathcal{T}_{U}\),
\[\tilde{E}_{i}\otimes g^{ij}*E_{j}=\tilde{E}_{i}*g^{ij}\otimes E_{j}.\]
As in classical differential geometry, a noncommutative left connection \(\nabla\) on the left tangent bundle induces a unique noncommutative right connection \(\tilde{\nabla}\) on the right cotangent bundle \(\tilde{\mathcal{T}}_{U}^{*}\) in terms of the noncommutative metric \(g\)
\[\partial_{i}g(V,\tilde{W})=g(\nabla_{i}V,\tilde{W})+g(V,\tilde{\nabla}_{i} \tilde{W})\]
where \(V\in\mathcal{T}_{U}\), \(\tilde{W}\in\tilde{\mathcal{T}}_{U}^{*}\). It yields
\[\tilde{\nabla}_{i}\tilde{E}^{j}=-\tilde{E}^{k}*\Gamma_{ik}^{j}.\]
Moreover, a noncommutative right connection \(\tilde{\nabla}\) on the right tangent bundle also induces a noncommutative left connection \(\nabla\) on the left cotangent bundle which yields
\[\nabla_{i}E^{j}=-\tilde{\Gamma}_{ik}^{j}*E^{k}.\]
The noncommutative metric \(g\) and its inverse \(g^{-1}\) can be written as,
\[g= \tilde{E}^{i}\otimes g_{ij}*E^{j}=\tilde{E}^{i}*g_{ij}\otimes E^{j},\] \[g^{-1}= \tilde{E}_{i}\otimes g^{ij}*E_{j}=\tilde{E}_{i}*g^{ij}\otimes E_ {j}.\]
This allows us to define covariant derivatives of \(g\) and \(g^{-1}\) by
\[\nabla_{k}g= \tilde{E}^{i}\otimes\nabla_{k}g_{ij}*E^{j}=\tilde{E}^{i}*\nabla _{k}g_{ij}\otimes E^{j},\] \[\nabla_{k}g^{-1}= \tilde{E}_{i}\otimes\nabla_{k}g^{ij}*E_{j}=\tilde{E}_{i}*\nabla _{k}g^{ij}\otimes E_{j}.\]
**Proposition 3.2**.: _Let \(\nabla\) be the canonical connection with respect to the noncommutative metric \(g\). Then_
\[\nabla_{k}g=\nabla_{k}g^{-1}=0\Longleftrightarrow\nabla_{k}g_{ij}=\nabla_{k}g ^{ij}=0.\]
_Proof:_ It is straightforward that
\[\nabla_{k}g =\nabla_{k}(\tilde{E}^{i}\otimes g_{ij}*E^{j})\] \[=\tilde{\nabla}_{k}\tilde{E}^{i}\otimes g_{ij}*E^{j}+\tilde{E}^{ i}\otimes\partial_{k}g_{ij}*E^{j}+\tilde{E}^{i}\otimes g_{ij}*\nabla_{k}E^{j}\] \[=-\tilde{E}^{l}*\Gamma_{kl}^{i}\otimes g_{ij}*E^{j}+\tilde{E}^{ i}\otimes\partial_{k}g_{ij}*E^{j}-\tilde{E}^{i}\otimes g_{ij}*\tilde{\Gamma}_{ kl}^{j}*E^{l}\] \[=\tilde{E}^{i}\otimes(\partial_{k}g_{ij}-\Gamma_{kij}-\tilde{ \Gamma}_{kji})*E^{j}=0.\]
On the other hand, a direct computation yields
\[0 =\left[\partial_{k}(g^{il}*g_{lr})\right]*g^{rj}\] \[=\left[\partial_{k}g^{il}*g_{lr}+g^{il}*\partial_{k}g_{lr}\right]*g^{rj}\] \[=\partial_{k}g^{il}*g_{lr}*g^{rj}+g^{il}*(\Gamma_{klr}+\tilde{\Gamma}_{krl})*g^{rj}\] \[=\partial_{k}g^{ij}+g^{il}*\Gamma_{kl}^{s}*g_{sr}*g^{rj}+g^{il}*g_{ls}*\tilde{\Gamma}_{kr}^{s}*g^{rj}\] \[=\partial_{k}g^{ij}+g^{il}*\Gamma_{kl}^{j}+\tilde{\Gamma}_{kl}^{i}*g^{lj}.\]
Therefore,
\[\nabla_{k}g^{-1}= \nabla_{k}(\tilde{E}_{i}\otimes g^{ij}*E_{j})\] \[= \tilde{\nabla}_{k}\tilde{E}_{i}\otimes g^{ij}*E_{j}+\tilde{E}_{i} \otimes\partial_{k}g^{ij}*E_{j}+\tilde{E}_{i}\otimes g^{ij}*\nabla_{k}E_{j}\] \[= \tilde{E}_{i}\otimes\left(\partial_{k}g^{ij}+g^{il}*\Gamma^{j}_{ kl}+\tilde{\Gamma}^{i}_{kl}*g^{lj}\right)*E_{j}=0.\]
Q.E.D.
The noncommutative left (resp. right) covariant derivative along a left (resp. right) vector field \(V=a^{i}*E_{i}\) (resp. \(W=\tilde{E}^{i}*a^{i}\)) with \(a^{i}\in\mathcal{A}_{U}\) is defined as the \(\mathbb{R}[[\hbar]]\)-linear map
\[\nabla_{V}:\mathcal{T}_{U}\to\mathcal{T}_{U}\quad\text{(resp.}\quad\tilde{ \nabla}_{W}:\tilde{\mathcal{T}}_{U}\to\tilde{\mathcal{T}}_{U})\]
given by
\[\nabla_{V}X=a^{i}*(\nabla_{i}X)\quad\text{(resp.}\quad\tilde{\nabla}_{W}\,Y=( \tilde{\nabla}_{i}\,Y)*a^{i})\]
for \(X\in\mathcal{T}_{U}\) (resp. \(Y\in\tilde{\mathcal{T}}_{U}\)).
**Remark 3.1**.: _Noncommutative covariant derivatives along general vector fields are not compatible with the Leibniz rule. Otherwise, for any \(f\in\mathcal{A}_{U}\),_
\[\nabla_{V}(f*E_{j})= (Vf)*E_{j}+f*\nabla_{V}E_{j}\] \[= a^{i}*(\partial_{i}f)*E_{j}+f*a^{i}*\nabla_{i}E_{j}.\]
_On the other hand, by the definition,_
\[\nabla_{V}(f*E_{j})= a^{i}*\nabla_{i}(f*E_{j})\] \[= a^{i}*(\partial_{i}f)*E_{j}+a^{i}*f*\nabla_{i}E_{j}.\]
_They are not equal to each other unless_
\[a^{i}*f=f*a^{i}.\]
_This is generally impossible. As a consequence, it indicates that the noncommutative covariant derivatives are not well-defined with respect to an orthonormal basis._
For left and right tangent vectors
\[V=v^{i}*E_{i},\quad W=w^{j}*E_{j},\quad\tilde{V}=\tilde{E}_{i}*\tilde{v}^{i}, \quad\tilde{W}=\tilde{E}_{j}*\tilde{w}^{j},\]
where \(v^{i},w^{j},\tilde{v}^{i},\tilde{w}^{j}\in\mathcal{A}_{U}\), the noncommutative Lie brackets are defined as
\[[V,W]f= v^{i}*E_{i}\big{(}w^{j}*E_{j}(f)\big{)}-w^{j}*E_{j}\big{(}v^{i}*E_{i}(f)\big{)}\] \[= \big{(}v^{i}*E_{i}(w^{j})-w^{i}*E_{i}(v^{j})\big{)}*E_{j}(f)+[v^{i},w^{j}]*E_{i}E_{j}(f)\] \[= -[W,V]f,\] \[[\tilde{V},\tilde{W}]f= \tilde{E}_{i}*\tilde{v}^{i}\big{(}\tilde{E}_{j}(f)*\tilde{w}^{j}\big{)}-\tilde{E}_{j}*\tilde{w}^{j}\big{(}\tilde{E}_{i}(f)*\tilde{v}^{i}\big{)}\] \[= \tilde{E}_{j}(f)*\big{(}\tilde{E}_{i}(\tilde{w}^{j})*\tilde{v}^{i}-\tilde{E}_{i}(\tilde{v}^{j})*\tilde{w}^{i}\big{)}-\tilde{E}_{i}\tilde{E}_{j}(f)*[\tilde{v}^{i},\tilde{w}^{j}]\] \[= -[\tilde{W},\tilde{V}]f\]
for any \(f\in\mathcal{A}_{U}\). Analogous to the classical (pseudo-) Riemannian geometry, the noncommutative left and right curvature operators for left and right tangent vectors can be formally defined as
\[\mathcal{R}_{VW}= [\nabla_{V},\nabla_{W}]-\nabla_{[V,W]},\] \[\tilde{\mathcal{R}}_{\tilde{V}\tilde{W}}= [\tilde{\nabla}_{\tilde{V}},\tilde{\nabla}_{\tilde{W}}]-\tilde{ \nabla}_{[\tilde{V},\tilde{W}]}.\]
It is shown that \(\mathcal{R}_{E_{i}E_{j}}\), \(\tilde{\mathcal{R}}_{\tilde{E}_{i}\tilde{E}_{j}}\) are left and right \(\mathcal{A}_{U}\)-module endomorphisms over left and right tangent bundles respectively [4]. But \(\mathcal{R}_{VW}\), \(\tilde{\mathcal{R}}_{\tilde{V}\tilde{W}}\) do not make sense unless
\[[v^{i},w^{j}]=[\tilde{v}^{i},\tilde{w}^{j}]=0.\]
Thus, if \(V=E_{i}\) (resp. \(\tilde{V}=\tilde{E}_{i}\)) or \(W=E_{j}\) (resp. \(\tilde{W}=\tilde{E}_{j}\)), then \(\mathcal{R}_{VW}E_{k}\in\mathcal{T}_{U}\) (resp. \(\tilde{\mathcal{R}}_{\tilde{V}\tilde{W}}\tilde{E}_{k}\in\tilde{\mathcal{T}}_{U}\)) is well-defined. This suggests defining the covariant derivatives of noncommutative curvatures by adopting the ideas of classical (pseudo-) Riemannian geometry. We only consider the case of left curvatures.
**Definition 3.2**.: _The covariant derivatives of noncommutative curvature operators are defined as follows._
\[(\nabla_{k}\mathcal{R})_{E_{i}E_{j}}E_{p}= \nabla_{k}(\mathcal{R}_{E_{i}E_{j}}E_{p})-\mathcal{R}_{(\nabla_{ E_{k}}E_{i})E_{j}}E_{p}\] \[-\mathcal{R}_{E_{i}(\nabla_{E_{k}}E_{j})}E_{p}-\mathcal{R}_{E_{i }E_{j}}(\nabla_{E_{k}}E_{p}).\]
**Definition 3.3**.: _The covariant derivatives of noncommutative curvature tensors, noncommutative Ricci curvatures and noncommutative scalar curvature are defined as follows._
\[\nabla_{s}R_{lkij}= g\big{(}(\nabla_{s}\mathcal{R})_{E_{i}E_{j}}E_{k},\tilde{E}_{l} \big{)},\] \[\nabla_{s}R_{j}^{p}= g^{pk}*g\big{(}(\nabla_{s}\mathcal{R})_{E_{i}E_{j}}E_{k}, \tilde{E}_{l}\big{)}*g^{li}=g^{pk}*\nabla_{s}R_{lkij}*g^{li},\] \[\nabla_{s}\Theta_{i}^{p}= g^{jk}*g\big{(}(\nabla_{s}\mathcal{R})_{E_{i}E_{j}}E_{k}, \tilde{E}_{l}\big{)}*g^{lp}=g^{jk}*\nabla_{s}R_{lkij}*g^{lp},\] \[\nabla_{s}R= g^{jk}*g\big{(}(\nabla_{s}\mathcal{R})_{E_{i}E_{j}}E_{k}, \tilde{E}_{l}\big{)}*g^{li}.\]
**Remark 3.2**.: _If \(V=E_{i}\) or \(W=E_{j}\), then the operator \(\mathcal{R}_{VW}:\mathcal{T}_{U}\rightarrow\mathcal{T}_{U}\) is well-defined but generally not \(\mathcal{A}_{U}\)-linear. In fact assume \(V=E_{i}\) and
\(W=a^{j}*E_{j}\), then_
\[\mathcal{R}_{VW}(f*E_{k})= \nabla_{i}\Big{(}a^{j}*\nabla_{j}(f*E_{k})\Big{)}-a^{j}*\nabla_{j} \Big{(}\nabla_{i}(f*E_{k})\Big{)}\] \[-\nabla_{[E_{i},a^{j}*E_{j}]}(f*E_{k})\] \[= (\partial_{i}a^{j})*\nabla_{j}(f*E_{k})+a^{j}*\nabla_{i}\Big{(} \nabla_{j}(f*E_{k})\Big{)}\] \[-a^{j}*\nabla_{j}\Big{(}\nabla_{i}(f*E_{k})\Big{)}-\nabla_{( \partial_{i}a^{j})*E_{j}}(f*E_{k})\] \[= a^{j}*\mathcal{R}_{E_{i}E_{j}}(f*E_{k})+(\partial_{i}a^{j})* \nabla_{j}(f*E_{k})\] \[-(\partial_{i}a^{j})*\nabla_{j}(f*E_{k})\] \[= a^{j}*f*\mathcal{R}_{E_{i}E_{j}}E_{k}.\]
_The same computation yields that_
\[\mathcal{R}_{VW}E_{k}=a^{j}*\mathcal{R}_{E_{i}E_{j}}E_{k}.\]
_Hence \(\mathcal{R}_{VW}(f*E_{k})\) and \(f*\mathcal{R}_{VW}E_{k}\) are not equal unless \(a^{j}*f=f*a^{j}\). The above computation also yields_
\[\mathcal{R}_{E_{i}(a^{j}*E_{j})}=a^{j}*\mathcal{R}_{E_{i}E_{j}}.\]
_Similarly, we have_
\[\mathcal{R}_{(a^{i}*E_{i})E_{j}}=a^{i}*\mathcal{R}_{E_{i}E_{j}}.\]
_As an operator, \((\nabla_{k}\mathcal{R})_{E_{i}E_{j}}\) does not give rise to a left \(\mathcal{A}_{U}\)-module endomorphism over the left tangent bundle. This is because_
\[(\nabla_{k}\mathcal{R})_{E_{i}E_{j}}(f*E_{p})= \nabla_{k}\Big{(}\mathcal{R}_{E_{i}E_{j}}(f*E_{p})\Big{)}- \mathcal{R}_{(\nabla_{k}E_{i})E_{j}}(f*E_{p})\] \[-\mathcal{R}_{E_{i}(\nabla_{k}E_{j})}(f*E_{p})-\mathcal{R}_{E_{i} E_{j}}\Big{(}\nabla_{k}(f*E_{p})\Big{)}\] \[= \nabla_{k}(f*\mathcal{R}_{E_{i}E_{j}}E_{p})-\mathcal{R}_{(\nabla_ {k}E_{i})E_{j}}(f*E_{p})\] \[-\mathcal{R}_{E_{i}(\nabla_{k}E_{j})}(f*E_{p})\] \[-\mathcal{R}_{E_{i}E_{j}}\Big{(}(\partial_{k}f)*E_{p}+f*\nabla_{k }E_{p}\Big{)}\] \[= (\partial_{k}f)*\mathcal{R}_{E_{i}E_{j}}E_{p}+f*\nabla_{k}( \mathcal{R}_{E_{i}E_{j}}E_{p})\] \[-\mathcal{R}_{(\nabla_{k}E_{i})E_{j}}(f*E_{p})-\mathcal{R}_{E_{i} (\nabla_{k}E_{j})}(f*E_{p})\] \[-(\partial_{k}f)*\mathcal{R}_{E_{i}E_{j}}E_{p}-f*\mathcal{R}_{E_{ i}E_{j}}(\nabla_{k}E_{p})\] \[= f*\nabla_{k}(\mathcal{R}_{E_{i}E_{j}}E_{p})-\mathcal{R}_{(\nabla _{k}E_{i})E_{j}}(f*E_{p})\] \[-\mathcal{R}_{E_{i}(\nabla_{k}E_{j})}(f*E_{p})-f*\mathcal{R}_{E_{ i}E_{j}}(\nabla_{k}E_{p})\] \[\not\equiv f*(\nabla_{k}\mathcal{R})_{E_{i}E_{j}}E_{p}\]
_as \(\mathcal{R}_{(\nabla_{k}E_{i})E_{j}}\) and \(\mathcal{R}_{E_{i}(\nabla_{k}E_{j})}\) are not \(\mathcal{A}_{U}\)-module endomorphisms in general._
**Theorem 3.1**.: _The first (algebraic) Bianchi identity_
\[\mathcal{R}_{E_{i}E_{j}}E_{k}+\mathcal{R}_{E_{j}E_{k}}E_{i}+\mathcal{R}_{E_{k} E_{i}}E_{j}=0\]
_and the second (differential) Bianchi identity_
\[(\nabla_{i}\mathcal{R})_{E_{j}E_{k}}E_{p}+(\nabla_{j}\mathcal{R})_{E_{k}E_{i}}E_{ p}+(\nabla_{k}\mathcal{R})_{E_{i}E_{j}}E_{p}=0\]
_hold for \(1\leq i,j,k,p\leq n\)._
_Proof:_ Since the connection is torsion free, we have
\[\mathcal{R}_{E_{i}E_{j}} E_{k}+\mathcal{R}_{E_{j}E_{k}}E_{i}+\mathcal{R}_{E_{k}E_{i}}E_{j}\] \[= \nabla_{i}\nabla_{j}E_{k}-\nabla_{j}\nabla_{i}E_{k}+\nabla_{j} \nabla_{k}E_{i}-\nabla_{k}\nabla_{j}E_{i}\] \[+\nabla_{k}\nabla_{i}E_{j}-\nabla_{i}\nabla_{k}E_{j}\] \[= \nabla_{i}(\nabla_{j}E_{k}-\nabla_{k}E_{j})+\nabla_{j}(\nabla_{k} E_{i}-\nabla_{i}E_{k})\] \[+\nabla_{k}(\nabla_{i}E_{j}-\nabla_{j}E_{i})\] \[= 0.\]
Thus the first Bianchi identity holds. As
\[(\nabla_{i}\mathcal{R})_{E_{j}E_{k}}E_{p}= \nabla_{i}\nabla_{j}\nabla_{k}E_{p}-\nabla_{i}\nabla_{k}\nabla_{j }E_{p}-\mathcal{R}_{(\nabla_{i}E_{j})E_{k}}E_{p}\] \[-\mathcal{R}_{E_{j}(\nabla_{i}E_{k})}E_{p}-\nabla_{j}\nabla_{k} \nabla_{i}E_{p}+\nabla_{k}\nabla_{j}\nabla_{i}E_{p},\] \[(\nabla_{j}\mathcal{R})_{E_{k}E_{i}}E_{p}= \nabla_{j}\nabla_{k}\nabla_{i}E_{p}-\nabla_{j}\nabla_{i}\nabla_{k }E_{p}-\mathcal{R}_{(\nabla_{j}E_{k})E_{i}}E_{p}\] \[-\mathcal{R}_{E_{k}(\nabla_{j}E_{i})}E_{p}-\nabla_{k}\nabla_{i} \nabla_{j}E_{p}+\nabla_{i}\nabla_{k}\nabla_{j}E_{p},\] \[(\nabla_{k}\mathcal{R})_{E_{i}E_{j}}E_{p}= \nabla_{k}\nabla_{i}\nabla_{j}E_{p}-\nabla_{k}\nabla_{j}\nabla_{i }E_{p}-\mathcal{R}_{(\nabla_{k}E_{i})E_{j}}E_{p}\] \[-\mathcal{R}_{E_{i}(\nabla_{k}E_{j})}E_{p}-\nabla_{i}\nabla_{j} \nabla_{k}E_{p}+\nabla_{j}\nabla_{i}\nabla_{k}E_{p},\]
we obtain
\[(\nabla_{i}\mathcal{R})_{E_{j}E_{k}}E_{p}+(\nabla_{j}\mathcal{R}) _{E_{k}E_{i}}E_{p}+(\nabla_{k}\mathcal{R})_{E_{i}E_{j}}E_{p}\] \[=-\mathcal{R}_{(\nabla_{i}E_{j})E_{k}}E_{p}-\mathcal{R}_{E_{j}( \nabla_{i}E_{k})}E_{p}-\mathcal{R}_{(\nabla_{j}E_{k})E_{i}}E_{p}\] \[\quad-\mathcal{R}_{E_{k}(\nabla_{j}E_{i})}E_{p}-\mathcal{R}_{( \nabla_{k}E_{i})E_{j}}E_{p}-\mathcal{R}_{E_{i}(\nabla_{k}E_{j})}E_{p}.\]
The torsion free condition implies
\[\mathcal{R}_{(\nabla_{i}E_{j})E_{k}}E_{p}+\mathcal{R}_{E_{k}( \nabla_{j}E_{i})}E_{p} =0,\] \[\mathcal{R}_{E_{j}(\nabla_{i}E_{k})}E_{p}+\mathcal{R}_{(\nabla_{k} E_{i})E_{j}}E_{p} =0,\] \[\mathcal{R}_{(\nabla_{j}E_{k})E_{i}}E_{p}+\mathcal{R}_{E_{i}( \nabla_{k}E_{j})}E_{p} =0.\]
Therefore
\[(\nabla_{i}\mathcal{R})_{E_{j}E_{k}}E_{p}+(\nabla_{j}\mathcal{R})_{E_{k}E_{i}}E _{p}+(\nabla_{k}\mathcal{R})_{E_{i}E_{j}}E_{p}=0.\]
Thus the second Bianchi identity holds. Q.E.D.
**Remark 3.3**.: _Bianchi identities also hold for noncommutative right curvature tensors._
**Proposition 3.3**.: _The second Bianchi identity gives that_
\[\nabla_{i}R_{j}^{i}+\nabla_{i}\Theta_{j}^{i}-\delta_{j}^{i}\nabla_{i}R=0.\]
_Proof:_ By the second Bianchi identity, we have
\[\nabla_{i}R_{qpjk}+\nabla_{j}R_{qpki}+\nabla_{k}R_{qpij}=0.\]
Multiplying \(g^{ip}\) from the left side and \(g^{qk}\) from the right side, we obtain
\[g^{ip}*\nabla_{i}R_{qpjk}*g^{qk}+g^{ip}*\nabla_{j}R_{qpki}*g^{qk}+g^{ip}*\nabla _{k}R_{qpij}*g^{qk}=0.\]
Taking summation for \(i\), \(k\), \(p\) and \(q\), we obtain
\[-\nabla_{i}R^{i}_{j}+\nabla_{j}R-\nabla_{k}\Theta^{k}_{j}=0.\]
Therefore the proof of the proposition is complete. Q.E.D.
## 4. Equivalence of noncommutative Ricci curvatures
In this section, we show that the two Ricci curvatures \(R^{i}_{j}\) and \(\Theta^{i}_{j}\) are equivalent under certain conditions. In particular, these conditions are satisfied if the noncommutative metric and chiral coefficients are given by an isometric embedding.

Let the noncommutative metric \(g_{ij}\), its inverse \(g^{ij}\) and the chiral coefficients \(\Upsilon_{ijk}\) have the power series expansions (2.4), (3.2) and (2.5).
**Lemma 4.1**.: _If noncommutative metric \(g\) satisfies (2.11), then_
\[g^{ij}[2q]=g^{ji}[2q],\quad g^{ij}[2q+1]=-g^{ji}[2q+1]. \tag{4.1}\]
_Proof:_ Since \((g_{ij}[0])\) is symmetric and invertible on \(U\), the inverse matrix \((g^{ij}[0])\) is also symmetric, i.e.,
\[g^{ij}[0]=g^{ji}[0].\]
For \(u,v\in C^{\infty}(U)\), (3.1) indicates that
\[\mu_{2q}(u,v)=\mu_{2q}(v,u),\quad\mu_{2q+1}(u,v)=-\mu_{2q+1}(v,u). \tag{4.2}\]
By (4.2) and the recursive formulas (3.3), (3.4), we have
\[g^{ij}[1] =-g^{ik}[0]g_{kl}[1]g^{lj}[0]-g^{ik}[0]\mu_{1}\Big{(}g_{kl}[0],g^ {lj}[0]\Big{)}\] \[=g^{jl}[0]g_{lk}[1]g^{ki}[0]+\mu_{1}\Big{(}g^{jl}[0],g_{lk}[0] \Big{)}g^{ki}[0]\] \[=-g^{ji}[1].\]
Next, let \(q\in\mathbb{N}\), using the recursive formula (3.3), we have
\[g^{ij}[2q]= -\sum_{r=1}^{q}g^{ik}[0]g_{kl}[2r]g^{lj}[2q-2r]\] \[-\sum_{r=1}^{q}g^{ik}[0]g_{kl}[2r-1]g^{lj}[2q-2r+1]\] \[-\sum_{r=1}^{q}\sum_{s=0}^{q-r}g^{ik}[0]\mu_{2r}\Big{(}g_{kl}[2s],g ^{lj}[2q-2r-2s]\Big{)}\] \[-\sum_{r=1}^{q}\sum_{s=1}^{q-r}g^{ik}[0]\mu_{2r}\Big{(}g_{kl}[2s-1 ],g^{lj}[2q-2r-2s+1]\Big{)}\] \[-\sum_{r=1}^{q}\sum_{s=0}^{q-r}g^{ik}[0]\mu_{2r-1}\Big{(}g_{kl}[2 s],g^{lj}[2q-2r-2s+1]\Big{)}\] \[-\sum_{r=1}^{q}\sum_{s=0}^{q-r}g^{ik}[0]\mu_{2r-1}\Big{(}g_{kl}[2 s+1],g^{lj}[2q-2r-2s]\Big{)}.\]
Therefore, by induction and (4.2) and recursive formula (3.4), we obtain
\[g^{ij}[2q]= -\sum_{r=1}^{q}g^{jl}[2q-2r]g_{lk}[2r]g^{ki}[0]\] \[-\sum_{r=1}^{q}g^{jl}[2q-2r+1]g_{lk}[2r-1]g^{ki}[0]\] \[-\sum_{r=1}^{q}\sum_{s=0}^{q-r}\mu_{2r}\Big{(}g^{jl}[2q-2r-2s],g_ {lk}[2s]\Big{)}g^{ki}[0]\] \[-\sum_{r=1}^{q}\sum_{s=1}^{q-r}\mu_{2r}\Big{(}g^{jl}[2q-2r-2s+1],g_{lk}[2s-1]\Big{)}g^{ki}[0]\] \[-\sum_{r=1}^{q}\sum_{s=0}^{q-r}\mu_{2r-1}\Big{(}g^{jl}[2q-2r-2s+1 ],g_{lk}[2s]\Big{)}g^{ki}[0]\] \[-\sum_{r=1}^{q}\sum_{s=0}^{q-r}\mu_{2r-1}\Big{(}g^{jl}[2q-2r-2s],g_{lk}[2s+1]\Big{)}g^{ki}[0]\] \[= g^{ji}[2q].\]
Similarly, we can prove
\[g^{ij}[2q+1]=-g^{ji}[2q+1].\]

Q.E.D.
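The parity property (4.2), which drives the proof above, is easy to check symbolically. The following SymPy sketch (our own verification on arbitrary sample functions, not from the paper) confirms \(\mu_{q}(u,v)=(-1)^{q}\mu_{q}(v,u)\) for the first few orders, taking \(n=2\) with \(\theta^{12}=\lambda\).

```python
import sympy as sp

lam = sp.Symbol('lambda')
x1, x2 = sp.symbols('x1 x2')
X, theta = (x1, x2), ((0, lam), (-lam, 0))

def mu(u, v, r):
    """mu_r(u,v) from (3.1), for n = 2 with theta^{12} = lambda."""
    terms = [(sp.Integer(1), u, v)]
    for _ in range(r):
        terms = [(c * theta[i][j], sp.diff(a, X[i]), sp.diff(b, X[j]))
                 for c, a, b in terms
                 for i in range(2) for j in range(2) if theta[i][j] != 0]
    return sum(c * a * b for c, a, b in terms) / sp.factorial(r)

u = sp.exp(x1) * sp.sin(x2)   # arbitrary sample functions
v = x1**3 + x1 * x2**2
for q in range(5):
    print(q, sp.simplify(mu(u, v, q) - (-1)**q * mu(v, u, q)))  # all zero
```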
**Lemma 4.2**.: _If noncommutative metric \(g\) satisfies (2.11), then, for any \(f\in C^{\infty}(U)\),_
\[\big{(}g^{ij}*f*g^{kl}\big{)}[2q] =\big{(}g^{lk}*f*g^{ji}\big{)}[2q],\] \[\big{(}g^{ij}*f*g^{kl}\big{)}[2q+1] =-\big{(}g^{lk}*f*g^{ji}\big{)}[2q+1].\]
_Proof:_ For any \(u,v\in C^{\infty}(U)\), we have
\[\big{(}g^{ij}*u*v\big{)}[2q]= \sum_{r=0}^{q}\sum_{s=0}^{q-r}\mu_{2r}\Big{(}g^{ij}[2s],\mu_{2q-2r -2s}(u,v)\Big{)}\] \[+\sum_{r=0}^{q}\sum_{s=1}^{q-r}\mu_{2r}\Big{(}g^{ij}[2s-1],\mu_{2 q-2r-2s+1}(u,v)\Big{)}\] \[+\sum_{r=1}^{q}\sum_{s=0}^{q-r}\mu_{2r-1}\Big{(}g^{ij}[2s],\mu_{2 q-2r-2s+1}(u,v)\Big{)}\] \[+\sum_{r=1}^{q}\sum_{s=0}^{q-r}\mu_{2r-1}\Big{(}g^{ij}[2s+1],\mu_ {2q-2r-2s}(u,v)\Big{)}.\]
Using Lemma 4.1 and (4.2), we obtain
\[\big{(}g^{ij}*u*v\big{)}[2q]= \sum_{r=0}^{q}\sum_{s=0}^{q-r}\mu_{2r}\Big{(}\mu_{2q-2r-2s}(v,u), g^{ji}[2s]\Big{)}\] \[+\sum_{r=0}^{q}\sum_{s=1}^{q-r}\mu_{2r}\Big{(}\mu_{2q-2r-2s+1}(v, u),g^{ji}[2s-1]\Big{)}\] \[+\sum_{r=1}^{q}\sum_{s=0}^{q-r}\mu_{2r-1}\Big{(}\mu_{2q-2r-2s+1}( v,u),g^{ji}[2s]\Big{)}\] \[+\sum_{r=1}^{q}\sum_{s=0}^{q-r}\mu_{2r-1}\Big{(}\mu_{2q-2r-2s}(v, u),g^{ji}[2s+1]\Big{)}\] \[= \big{(}v*u*g^{ji}\big{)}[2q].\]
Similarly, we can show
\[\big{(}g^{ij}*u*v\big{)}[2q+1]=-\big{(}v*u*g^{ji}\big{)}[2q+1].\]
Using them, we obtain
\[\big{(}g^{ij}*f*g^{kl}\big{)}[2q]= \sum_{r=0}^{q}\Big{(}g^{ij}*f*(g^{kl}[2r])\Big{)}[2q-2r]\] \[+\sum_{r=1}^{q}\Big{(}g^{ij}*f*(g^{kl}[2r-1])\Big{)}[2q-2r+1]\] \[= \sum_{r=0}^{q}\Big{(}(g^{kl}[2r])*f*g^{ji}\Big{)}[2q-2r]\] \[-\sum_{r=1}^{q}\Big{(}(g^{kl}[2r-1])*f*g^{ji}\Big{)}[2q-2r+1]\] \[= \sum_{r=0}^{q}\Big{(}(g^{lk}[2r])*f*g^{ji}\Big{)}[2q-2r]\] \[+\sum_{r=1}^{q}\Big{(}(g^{lk}[2r-1])*f*g^{ji}\Big{)}[2q-2r+1]\] \[= \big{(}g^{lk}*f*g^{ji}\big{)}[2q],\]
and, similarly
\[\big{(}g^{ij}*f*g^{kl}\big{)}[2q+1]=-\big{(}g^{lk}*f*g^{ji}\big{)}[2q+1].\]
Q.E.D.
**Lemma 4.3**.: _If noncommutative metric \(g\) satisfies (2.11), then, for \(u,v\in C^{\infty}(U)\),_
\[\big{(}u*g^{ij}*v\big{)}[2q]= \big{(}v*g^{ji}*u\big{)}[2q],\] \[\big{(}u*g^{ij}*v\big{)}[2q+1]= -\big{(}v*g^{ji}*u\big{)}[2q+1].\]
_Proof:_ For any \(u\in C^{\infty}(U)\), we have
\[\big{(}u*g^{ij}\big{)}[2q]= \sum_{r=0}^{q}\mu_{2r}\Big{(}u,g^{ij}[2q-2r]\Big{)}\] \[+\sum_{r=1}^{q}\mu_{2r-1}\Big{(}u,g^{ij}[2q-2r+1]\Big{)}.\]
Using Lemma 4.1 and (4.2), we obtain
\[\big{(}u*g^{ij}\big{)}[2q]= \sum_{r=0}^{q}\mu_{2r}\Big{(}u,g^{ij}[2q-2r]\Big{)}\] \[+\sum_{r=1}^{q}\mu_{2r-1}\Big{(}u,g^{ij}[2q-2r+1]\Big{)}\] \[= \sum_{r=0}^{q}\mu_{2r}\Big{(}g^{ji}[2q-2r],u\Big{)}\] \[+\sum_{r=1}^{q}\mu_{2r-1}\Big{(}g^{ji}[2q-2r+1],u\Big{)}\] \[= \big{(}g^{ji}*u\big{)}[2q].\]
Similarly we have
\[\big{(}u*g^{ij}\big{)}[2q+1]=-\big{(}g^{ji}*u\big{)}[2q+1].\]
Then using Lemma 4.1 and (4.2) again, we have
\[\big{(}u*g^{ij}*v\big{)}[2q]= \sum_{r=0}^{q}\mu_{2r}\Big{(}\big{(}u*g^{ij}\big{)}[2q-2r],v\Big{)}\] \[+\sum_{r=1}^{q}\mu_{2r-1}\Big{(}\big{(}u*g^{ij}\big{)}[2q-2r+1],v \Big{)}\] \[= \sum_{r=0}^{q}\mu_{2r}\Big{(}v,\big{(}g^{ji}*u\big{)}[2q-2r] \Big{)}\] \[+\sum_{r=1}^{q}\mu_{2r-1}\Big{(}v,\big{(}g^{ji}*u\big{)}[2q-2r+1] \Big{)}\] \[= \big{(}v*g^{ji}*u\big{)}[2q].\]
Similarly we have
\[\big{(}u*g^{ij}*v\big{)}[2q+1]=-\big{(}v*g^{ji}*u\big{)}[2q+1].\]
Q.E.D.
**Lemma 4.4**.: _Let \(\nabla\), \(\tilde{\nabla}\) be the canonical connection with respect to the non-commutative metric \(g\) and chiral coefficients \(\Upsilon_{ijk}\) on \(U\). If (2.11), (2.12) hold, then_
\[\Gamma_{ijk}[2q]=\tilde{\Gamma}_{ijk}[2q],\ \ \ \Gamma_{ijk}[2q+1]=-\tilde{ \Gamma}_{ijk}[2q+1]. \tag{4.3}\]
_Proof:_ By chirality and (2.12), we have
\[\Gamma_{ijk}[2q]-\tilde{\Gamma}_{ijk}[2q]= \Upsilon_{ijk}[2q]=0.\]
By (2.2), (2.3) and (2.11), we have
\[\begin{split}\Gamma_{ijk}[2q+1]=&\frac{1}{2}\Upsilon_{ ijk}[2q+1]\\ =&-\tilde{\Gamma}_{ijk}[2q+1].\end{split}\]
Q.E.D.
**Proposition 4.1**.: _Let \(M\) be an \(n\)-dimensional smooth manifold and \(U\subset M\) a coordinate chart. Let \(\nabla\), \(\tilde{\nabla}\) be the canonical connection with respect to noncommutative metric \(g\) and chiral coefficients \(\Upsilon_{ijk}\) on \(U\). Let Riemannian curvatures have the power series expansions (2.6). If (2.11), (2.12) hold, then_
\[R_{lkij}[2q]=-R_{klij}[2q],\quad R_{lkij}[2q+1]=R_{klij}[2q+1]. \tag{4.4}\]
_Proof:_ In terms of connection coefficients, the Riemannian curvatures are
\[\begin{split} R_{lkij}=& g\big{(}(\nabla_{i} \nabla_{j}-\nabla_{j}\nabla_{i})E_{k},\tilde{E}_{l}\big{)}\\ =&\partial_{i}g\big{(}\nabla_{j}E_{k},\tilde{E}_{ l}\big{)}-g\big{(}\nabla_{j}E_{k},\tilde{\nabla}_{i}\tilde{E}_{l}\big{)}\\ &-\partial_{j}g\big{(}\nabla_{i}E_{k},\tilde{E}_{l}\big{)}+g \big{(}\nabla_{i}E_{k},\tilde{\nabla}_{j}\tilde{E}_{l}\big{)}\\ =&\partial_{i}\Gamma_{jkl}-\partial_{j}\Gamma_{ikl} +\Gamma_{ik}^{r}*g_{rs}*\tilde{\Gamma}_{jl}^{s}-\Gamma_{jk}^{r}*g_{rs}*\tilde {\Gamma}_{il}^{s}\\ =&\partial_{i}\Gamma_{jkl}-\partial_{j}\Gamma_{ikl} +\Gamma_{iks}*g^{sr}*\tilde{\Gamma}_{jlr}-\Gamma_{jks}*g^{sr}*\tilde{\Gamma} _{ilr}.\end{split}\]
Since
\[\begin{split}\partial_{i}\Gamma_{jkl}-\partial_{j}\Gamma_{ikl}=& \partial_{i}g\big{(}\nabla_{j}E_{k},\tilde{E}_{l}\big{)}-\partial_{j}g \big{(}\nabla_{i}E_{k},\tilde{E}_{l}\big{)}\\ =&\partial_{i}\big{(}\partial_{j}g(E_{k},\tilde{E} _{l})-g(E_{k},\tilde{\nabla}_{j}\tilde{E}_{l})\big{)}\\ &-\partial_{j}\big{(}\partial_{i}g(E_{k},\tilde{E}_{l})-g(E_{k}, \tilde{\nabla}_{i}\tilde{E}_{l})\big{)}\\ =&\partial_{i}\partial_{j}g_{kl}-\partial_{i}\tilde{ \Gamma}_{jlk}-\partial_{j}\partial_{i}g_{kl}+\partial_{j}\tilde{\Gamma}_{ilk} \\ =&\partial_{j}\tilde{\Gamma}_{ilk}-\partial_{i} \tilde{\Gamma}_{jlk},\end{split}\]
we obtain
\[R_{lkij}=\partial_{j}\tilde{\Gamma}_{ilk}-\partial_{i}\tilde{ \Gamma}_{jlk}+\Gamma_{iks}*g^{sr}*\tilde{\Gamma}_{jlr}-\Gamma_{jks}*g^{sr}* \tilde{\Gamma}_{ilr}.\]
Using Lemma 4.4, we have
\[\begin{split}\big{(}\partial_{i}\Gamma_{jkl}-\partial_{j}\Gamma_ {ikl}\big{)}[2q]=&-\big{(}\partial_{j}\tilde{\Gamma}_{ikl}- \partial_{i}\tilde{\Gamma}_{jkl}\big{)}[2q]\\ \big{(}\partial_{i}\Gamma_{jkl}-\partial_{j}\Gamma_{ikl}\big{)}[2 q+1]=&\big{(}\partial_{j}\tilde{\Gamma}_{ikl}-\partial_{i} \tilde{\Gamma}_{jkl}\big{)}[2q+1].\end{split}\]
Moreover, using Lemma 4.3 and Lemma 4.4, we obtain
\[\big{(}\Gamma_{iks}*g^{sr}*\tilde{\Gamma}_{jlr}\big{)}[2q]\] \[= \sum_{\alpha=0}^{q}\sum_{\beta=0}^{q-\alpha}\big{(}\Gamma_{iks}[2 \alpha]*g^{sr}*\tilde{\Gamma}_{jlr}[2\beta]\big{)}[2q-2\alpha-2\beta]\] \[+\sum_{\alpha=0}^{q}\sum_{\beta=1}^{q-\alpha}\big{(}\Gamma_{iks}[2 \alpha]*g^{sr}*\tilde{\Gamma}_{jlr}[2\beta-1]\big{)}[2q-2\alpha-2\beta+1]\] \[+\sum_{\alpha=1}^{q}\sum_{\beta=0}^{q-\alpha}\big{(}\Gamma_{iks}[2 \alpha-1]*g^{sr}*\tilde{\Gamma}_{jlr}[2\beta]\big{)}[2q-2\alpha-2\beta+1]\] \[+\sum_{\alpha=1}^{q}\sum_{\beta=0}^{q-\alpha}\big{(}\Gamma_{iks}[ 2\alpha-1]*g^{sr}*\tilde{\Gamma}_{jlr}[2\beta+1]\big{)}[2q-2\alpha-2\beta]\] \[= \sum_{\alpha=0}^{q}\sum_{\beta=0}^{q-\alpha}\big{(}\Gamma_{jlr}[ 2\beta]*g^{rs}*\tilde{\Gamma}_{iks}[2\alpha]\big{)}[2q-2\alpha-2\beta]\] \[+\sum_{\alpha=0}^{q}\sum_{\beta=1}^{q-\alpha}\big{(}\Gamma_{jlr}[ 2\beta-1]*g^{rs}*\tilde{\Gamma}_{iks}[2\alpha]\big{)}[2q-2\alpha-2\beta+1]\] \[+\sum_{\alpha=1}^{q}\sum_{\beta=0}^{q-\alpha}\big{(}\Gamma_{jlr}[ 2\beta]*g^{rs}*\tilde{\Gamma}_{iks}[2\alpha-1]\big{)}[2q-2\alpha-2\beta+1]\] \[+\sum_{\alpha=1}^{q}\sum_{\beta=0}^{q-\alpha}\big{(}\Gamma_{jlr} [2\beta+1]*g^{rs}*\tilde{\Gamma}_{iks}[2\alpha-1]\big{)}[2q-2\alpha-2\beta]\] \[= \big{(}\Gamma_{jls}*g^{sr}*\tilde{\Gamma}_{ikr}\big{)}[2q].\]
Similarly, we have
\[\big{(}\Gamma_{iks}*g^{sr}*\tilde{\Gamma}_{jlr}\big{)}[2q+1]= -\big{(}\Gamma_{jls}*g^{sr}*\tilde{\Gamma}_{ikr}\big{)}[2q+1],\] \[\big{(}\Gamma_{jks}*g^{sr}*\tilde{\Gamma}_{ilr}\big{)}[2q]= \big{(}\Gamma_{ils}*g^{sr}*\tilde{\Gamma}_{jkr}\big{)}[2q],\] \[\big{(}\Gamma_{jks}*g^{sr}*\tilde{\Gamma}_{ilr}\big{)}[2q+1]= -\big{(}\Gamma_{ils}*g^{sr}*\tilde{\Gamma}_{jkr}\big{)}[2q+1].\]
Therefore
\[R_{lkij}[2q]= \big{(}\partial_{i}\Gamma_{jkl}-\partial_{j}\Gamma_{ikl}+\Gamma_{iks}*g^{sr}*\tilde{\Gamma}_{jlr}-\Gamma_{jks}*g^{sr}*\tilde{\Gamma}_{ilr}\big{)}[2q]\] \[= -\big{(}\partial_{j}\tilde{\Gamma}_{ikl}-\partial_{i}\tilde{\Gamma}_{jkl}+\Gamma_{ils}*g^{sr}*\tilde{\Gamma}_{jkr}-\Gamma_{jls}*g^{sr}*\tilde{\Gamma}_{ikr}\big{)}[2q]\] \[= -R_{klij}[2q].\]
Similarly, we obtain
\[R_{lkij}[2q+1]=R_{klij}[2q+1].\]
Q.E.D.
**Theorem 4.1**.: _Let \(M\) be an \(n\)-dimensional smooth manifold and \(U\subset M\) a coordinate chart. Let \(\nabla\), \(\tilde{\nabla}\) be the canonical connection with respect to noncommutative metric \(g\) and chiral coefficients \(\Upsilon_{ijk}\) on \(U\). Let two Ricci curvatures have the power series expansions (2.7), (2.8), (2.9), (2.10). If (2.11), (2.12) hold, then_
\[R_{ij}[2q]=\Theta_{ji}[2q],\quad R_{ij}[2q+1]=-\Theta_{ji}[2q+1], \tag{4.5}\] \[R^{i}_{j}[2q]=\Theta^{i}_{j}[2q],\quad R^{i}_{j}[2q+1]=-\Theta^{i }_{j}[2q+1]. \tag{4.6}\]
_Proof:_ By definition
\[R_{ij}=R_{likj}*g^{lk},\quad\Theta_{ij}=g^{lk}*R_{jkil}.\]
Using Lemma 4.1, Proposition 4.1 and (4.2), we obtain
\[R_{ij}[2q]= \sum_{r=0}^{q}\sum_{s=0}^{q-r}\mu_{2r}\Big{(}R_{likj}[2s],g^{lk}[2q-2r-2s]\Big{)}\] \[+\sum_{r=0}^{q}\sum_{s=1}^{q-r}\mu_{2r}\Big{(}R_{likj}[2s-1],g^{lk}[2q-2r-2s+1]\Big{)}\] \[+\sum_{r=1}^{q}\sum_{s=0}^{q-r}\mu_{2r-1}\Big{(}R_{likj}[2s],g^{lk}[2q-2r-2s+1]\Big{)}\] \[+\sum_{r=1}^{q}\sum_{s=0}^{q-r}\mu_{2r-1}\Big{(}R_{likj}[2s+1],g^{lk}[2q-2r-2s]\Big{)}\] \[= -\sum_{r=0}^{q}\sum_{s=0}^{q-r}\mu_{2r}\Big{(}g^{kl}[2q-2r-2s],R_{ilkj}[2s]\Big{)}\] \[-\sum_{r=0}^{q}\sum_{s=1}^{q-r}\mu_{2r}\Big{(}g^{kl}[2q-2r-2s+1],R_{ilkj}[2s-1]\Big{)}\] \[-\sum_{r=1}^{q}\sum_{s=0}^{q-r}\mu_{2r-1}\Big{(}g^{kl}[2q-2r-2s+1],R_{ilkj}[2s]\Big{)}\] \[-\sum_{r=1}^{q}\sum_{s=0}^{q-r}\mu_{2r-1}\Big{(}g^{kl}[2q-2r-2s],R_{ilkj}[2s+1]\Big{)}\] \[= -\Big{(}g^{kl}*R_{ilkj}\Big{)}[2q]\] \[= \big{(}g^{kl}*R_{iljk}\big{)}[2q]\] \[= \Theta_{ji}[2q].\]
Similarly, we have
\[R_{ij}[2q+1]=-\Theta_{ji}[2q+1].\]
Then since
\[R^{i}_{j}=g^{ik}*R_{kj},\quad\Theta^{i}_{j}=\Theta_{jk}*g^{ki},\]
we obtain
\[R^{i}_{j}[2q]= \sum_{r=0}^{q}\sum_{s=0}^{q-r}\mu_{2r}\Big{(}g^{ik}[2s],R_{kj}[2q-2r-2s]\Big{)}\] \[+\sum_{r=0}^{q}\sum_{s=1}^{q-r}\mu_{2r}\Big{(}g^{ik}[2s-1],R_{kj}[2q-2r-2s+1]\Big{)}\] \[+\sum_{r=1}^{q}\sum_{s=0}^{q-r}\mu_{2r-1}\Big{(}g^{ik}[2s],R_{kj}[2q-2r-2s+1]\Big{)}\] \[+\sum_{r=1}^{q}\sum_{s=0}^{q-r}\mu_{2r-1}\Big{(}g^{ik}[2s+1],R_{kj}[2q-2r-2s]\Big{)}\] \[= \sum_{r=0}^{q}\sum_{s=0}^{q-r}\mu_{2r}\Big{(}\Theta_{jk}[2q-2r-2s],g^{ki}[2s]\Big{)}\] \[+\sum_{r=0}^{q}\sum_{s=1}^{q-r}\mu_{2r}\Big{(}\Theta_{jk}[2q-2r-2s+1],g^{ki}[2s-1]\Big{)}\] \[+\sum_{r=1}^{q}\sum_{s=0}^{q-r}\mu_{2r-1}\Big{(}\Theta_{jk}[2q-2r-2s+1],g^{ki}[2s]\Big{)}\] \[+\sum_{r=1}^{q}\sum_{s=0}^{q-r}\mu_{2r-1}\Big{(}\Theta_{jk}[2q-2r-2s],g^{ki}[2s+1]\Big{)}\] \[= \big{(}\Theta_{jk}*g^{ki}\big{)}[2q]\] \[= \Theta^{i}_{j}[2q].\]
Similarly, we have
\[R^{i}_{j}[2q+1]=-\Theta^{i}_{j}[2q+1].\]
Q.E.D.
Now we provide the quantum fluctuation of a pseudo-Riemannian metric \(g[0]\) on \(U\) in terms of an isometric embedding [4, 16, 17]. Recall that \((U,g[0])\) can always be isometrically embedded into a pseudo-Euclidean space, c.f. [11], i.e., there exists a differentiable map
\[X:U\longrightarrow\mathbb{R}^{p,m-p}\]
such that
\[g_{ij}[0]=\sum_{\alpha=1}^{m}\eta_{\alpha\alpha}\partial_{i}X^{\alpha}\cdot \partial_{j}X^{\alpha},\]
where \(\eta=\operatorname{diag}(-1,\cdots,-1,1,\cdots,1)\) is the flat metric of \(\mathbb{R}^{p,m-p}\). The quantum fluctuation of \(g[0]\) is
\[g\big{(}E_{i},\tilde{E}_{j}\big{)}=\sum_{\alpha=1}^{m}\eta_{\alpha\alpha} \partial_{i}X^{\alpha}*\partial_{j}X^{\alpha}, \tag{4.7}\]
where \(E_{i}=\tilde{E}_{i}=\partial_{i}\). It yields a canonical connection with the connection and chiral coefficients
\[\Gamma_{ijk}= \sum_{\alpha=1}^{m}\eta_{\alpha\alpha}\partial_{i}\partial_{j}X^{ \alpha}*\partial_{k}X^{\alpha}, \tag{4.8}\] \[\tilde{\Gamma}_{ijk}= \sum_{\alpha=1}^{m}\eta_{\alpha\alpha}\partial_{k}X^{\alpha}* \partial_{i}\partial_{j}X^{\alpha},\] (4.9) \[\Upsilon_{ijk}= \sum_{\alpha=1}^{m}\eta_{\alpha\alpha}\big{(}\partial_{i} \partial_{j}X^{\alpha}*\partial_{k}X^{\alpha}-\partial_{k}X^{\alpha}*\partial _{i}\partial_{j}X^{\alpha}\big{)}. \tag{4.10}\]
The noncommutative metric (4.7) also induces a noncommutative scalar product on the higher-order partial derivatives of the isometric embedding \(X\). For multi-indices \(\gamma\), \(\delta\),

\[g\big{(}\partial^{|\gamma|}X,\partial^{|\delta|}X\big{)}=\sum_{\alpha=1}^{m}\eta_{\alpha\alpha}\partial^{|\gamma|}X^{\alpha}*\partial^{|\delta|}X^{\alpha}. \tag{4.11}\]
**Corollary 4.1**.: _Let \(g\) be given by (4.7) for isometric embedding_
\[X=(X^{1},\cdots,X^{m})\in C^{\infty}(U,\mathbb{R}^{m}),\]
_where \(X^{\alpha}\in C^{\infty}(U)\), \(1\leq\alpha\leq m\). Let \(R^{i}_{j}\) and \(\Theta^{i}_{j}\) be the two Ricci curvatures of the canonical connection induced by \(X\). Then (4.5) and (4.6) hold._
_Proof:_ We only need to check that (4.7) and (4.10) satisfy (2.11) and (2.12). Indeed, this is a direct consequence of (4.2). Q.E.D.
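As an illustration of the corollary, the following SymPy sketch (our own check, using the round sphere \(S^{2}\subset\mathbb{R}^{3}\) as a sample isometric embedding, with \(n=2\) and \(\theta^{12}=\lambda\)) verifies conditions (2.11) and (2.12) at the lowest orders for the quantum fluctuation (4.7) and the chiral coefficients (4.10).

```python
import sympy as sp

lam = sp.Symbol('lambda')
t1, t2 = sp.symbols('theta1 theta2')
X, theta = (t1, t2), ((0, lam), (-lam, 0))

def mu(u, v, r):
    """mu_r(u,v) from (3.1), for n = 2 with theta^{12} = lambda."""
    terms = [(sp.Integer(1), u, v)]
    for _ in range(r):
        terms = [(c * theta[i][j], sp.diff(a, X[i]), sp.diff(b, X[j]))
                 for c, a, b in terms
                 for i in range(2) for j in range(2) if theta[i][j] != 0]
    return sum(c * a * b for c, a, b in terms) / sp.factorial(r)

# Sample isometric embedding: the round sphere S^2 in Euclidean R^3
emb = (sp.sin(t2) * sp.sin(t1), sp.sin(t2) * sp.cos(t1), sp.cos(t2))

def g_coeff(i, j, q):
    """Order-q coefficient g_ij[q] of the quantum fluctuation (4.7)."""
    return sum(mu(sp.diff(a, X[i]), sp.diff(a, X[j]), q) for a in emb)

def ups_coeff(i, j, k, q):
    """Order-q coefficient of the chiral coefficients (4.10)."""
    return sum(mu(sp.diff(a, X[i], X[j]), sp.diff(a, X[k]), q)
               - mu(sp.diff(a, X[k]), sp.diff(a, X[i], X[j]), q) for a in emb)

checks = []
for i in range(2):
    for j in range(2):
        checks.append(sp.simplify(g_coeff(i, j, 0) - g_coeff(j, i, 0)) == 0)  # g[0] symmetric
        checks.append(sp.simplify(g_coeff(i, j, 1) + g_coeff(j, i, 1)) == 0)  # g[1] antisymmetric
        for k in range(2):
            checks.append(sp.simplify(ups_coeff(i, j, k, 0)) == 0)  # Upsilon[0] = 0
            checks.append(sp.simplify(ups_coeff(i, j, k, 2)) == 0)  # Upsilon[2] = 0
print(all(checks))  # True: (2.11) and (2.12) hold at the checked orders
```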
**Remark 4.1**.: _The following noncommutative Einstein field equations were proposed in [4]_
\[R^{i}_{j}+\Theta^{i}_{j}-\delta^{i}_{j}R=T^{i}_{j}.\]
_As they may not capture all the information of noncommutative metrics, the second author gave a stronger version in [19]_
\[R^{i}_{j}-\frac{1}{2}\delta^{i}_{j}R=T^{i}_{j},\quad\Theta^{i}_{j}-\frac{1}{2 }\delta^{i}_{j}R=\tilde{T}^{i}_{j}.\]
_Theorem 4.1 and Corollary 4.1 indicate that only the first one is sufficient and the noncommutative Einstein field equations should be_
\[R^{i}_{j}-\frac{1}{2}\delta^{i}_{j}R=T^{i}_{j}\]
_if (2.11), (2.12) hold, in particular, if noncommutative metrics are given by isometric embedding._
## 5. Spherically symmetric isometric embeddings
In this section, we show that the quantum fluctuations and their curvatures have closed forms coming from Moyal products of trigonometric functions if (pseudo-) Riemannian metrics are given by a certain type of spherically symmetric isometric embeddings. This indicates that the quantization of gravity is renormalizable in this case.
**Theorem 5.1**.: _Let \(U\) be the open set_
\[U=(0,\infty)\times(0,2\pi)\times(0,\pi)\times\cdots\times(0,\pi)\subset\mathbb{ R}^{n},\]
_which is equipped with coordinates \((x^{1},x^{2},\cdots,x^{n})=(\rho,\theta_{1},\cdots,\theta_{n-1})\). Let \(g[0]\) be a (pseudo-) Riemannian metric on \(U\) given by a spherically symmetric isometric embedding_
\[X:U\longrightarrow\mathbb{R}^{p,m-p}\]
_with_
\[X^{1} =f^{1}(\rho),\] \[\cdots\cdots\] \[X^{m-n} =f^{m-n}(\rho),\] \[X^{m-n+1} =f^{m-n+1}(\rho)\sin\theta_{n-1}\sin\theta_{n-2}\cdots\sin\theta_ {2}\sin\theta_{1},\] \[X^{m-n+2} =f^{m-n+2}(\rho)\sin\theta_{n-1}\sin\theta_{n-2}\cdots\sin\theta_ {2}\cos\theta_{1},\] \[\cdots\cdots\] \[X^{m-2} =f^{m-2}(\rho)\sin\theta_{n-1}\sin\theta_{n-2}\cos\theta_{n-3},\] \[X^{m-1} =f^{m-1}(\rho)\sin\theta_{n-1}\cos\theta_{n-2},\] \[X^{m} =f^{m}(\rho)\cos\theta_{n-1},\]
_where \(f^{1}(\rho)\), \(\cdots\), \(f^{m}(\rho)\) are smooth functions of \(\rho\), \(m-n+1>p\) and_
\[f^{m-n+1}(\rho)=f^{m-n+2}(\rho)=f(\rho).\]
_Fix an integer \(l\in[3,n]\) and define the Moyal product in terms of the skew-symmetric matrix \((\theta^{ij})\) with nonzero elements_
\[\theta^{2l}=-\theta^{l2}=\lambda\neq 0.\]
_Then the quantum fluctuation of \(g[0]\) and its curvatures have closed forms coming from absolutely convergent power series expansions on \(U\)._
_Proof:_ Note that only the term
\[\partial_{i}X^{m-n+1}*\partial_{j}X^{m-n+1}+\partial_{i}X^{m-n+2}*\partial_{j} X^{m-n+2}\]
cannot be reduced to the usual commutative product in the noncommutative metric (4.7). Denote
\[g^{(\alpha,\beta)}_{ij}=\partial_{i}X^{\alpha}*\partial_{j}X^{\beta}.\]
Denote \(a_{0}=m-n+1\) for short. Using the formulas in the appendix, we obtain, for \(2<i<j\leq n\), \(2<k\leq n\) and \(i,j,k\neq l\),
\[g_{11}^{(a_{0},a_{0})}+g_{11}^{(a_{0}+1,a_{0}+1)}= \big{(}f^{\prime}\big{)}^{2}\sin^{2}\theta_{n-1}\cdots\sin^{2} \theta_{l}\sin^{2}\theta_{l-2}\cdots\sin^{2}\theta_{2}\] \[\Big{(}\sin^{2}\theta_{l-1}\cosh^{2}(\lambda\hbar)-\cos^{2} \theta_{l-1}\sinh^{2}(\lambda\hbar)\Big{)},\] \[g_{12}^{(a_{0},a_{0})}+g_{12}^{(a_{0}+1,a_{0}+1)}= 2ff^{\prime}\sin^{2}\theta_{n-1}\cdots\sin^{2}\theta_{l}\sin^{2} \theta_{l-2}\cdots\sin^{2}\theta_{2}\] \[\sin\theta_{l-1}\cos\theta_{l-1}\cosh(\lambda\hbar)\sinh(\lambda \hbar),\] \[g_{1l}^{(a_{0},a_{0})}+g_{1l}^{(a_{0}+1,a_{0}+1)}= ff^{\prime}\sin^{2}\theta_{n-1}\cdots\sin^{2}\theta_{l}\sin^{2}\theta_{l-2} \cdots\sin^{2}\theta_{2}\] \[\sin\theta_{l-1}\cos\theta_{l-1}\Big{(}1+2\sinh^{2}(\lambda\hbar )\Big{)},\] \[g_{22}^{(a_{0},a_{0})}+g_{22}^{(a_{0}+1,a_{0}+1)}= f^{2}\sin^{2}\theta_{n-1}\cdots\sin^{2}\theta_{l}\sin^{2}\theta_{l-2} \cdots\sin^{2}\theta_{2}\] \[\Big{(}\sin^{2}\theta_{l-1}\cosh^{2}(\lambda\hbar)-\cos^{2} \theta_{l-1}\sinh^{2}(\lambda\hbar)\Big{)},\] \[g_{2l}^{(a_{0},a_{0})}+g_{2l}^{(a_{0}+1,a_{0}+1)}= f^{2}\sin^{2}\theta_{n-1}\cdots\sin^{2}\theta_{l}\sin^{2}\theta_{l-2} \cdots\sin^{2}\theta_{2}\] \[\Big{(}\sin^{2}\theta_{l-1}-\cos^{2}\theta_{l-1}\Big{)}\cosh( \lambda\hbar)\sinh(\lambda\hbar),\] \[g_{ll}^{(a_{0},a_{0})}+g_{ll}^{(a_{0}+1,a_{0}+1)}= f^{2}\sin^{2}\theta_{n-1}\cdots\sin^{2}\theta_{l}\sin^{2}\theta_{l-2} \cdots\sin^{2}\theta_{2}\] \[\Big{(}\cos^{2}\theta_{l-1}\cosh^{2}(\lambda\hbar)-\sin^{2}\theta _{l-1}\sinh^{2}(\lambda\hbar)\Big{)},\] \[g_{1k}^{(a_{0},a_{0})}+g_{1k}^{(a_{0}+1,a_{0}+1)}= ff^{\prime}\sin^{2}\theta_{n-1}\cdots\sin^{2}\theta_{k}\sin\theta_{k-1} \cos\theta_{k-1}\sin^{2}\theta_{k-2}\cdots\] \[\Big{(}\sin^{2}\theta_{l-1}\cosh^{2}(\lambda\hbar)-\cos^{2}\theta _{l-1}\sinh^{2}(\lambda\hbar)\Big{)},\] \[g_{2k}^{(a_{0},a_{0})}+g_{2k}^{(a_{0}+1,a_{0}+1)}= -2f^{2}\sin^{2}\theta_{n-1}\cdots\sin^{2}\theta_{k}\sin\theta_{k -1}\cos\theta_{k-1}\sin^{2}\theta_{k-2}\cdots\] \[\sin\theta_{l-1}\cos\theta_{l-1}\Big{(}1+2\sinh^{2}(\lambda\hbar) \Big{)},\] \[g_{kk}^{(a_{0},a_{0})}+g_{kk}^{(a_{0}+1,a_{0}+1)}= f^{2}\sin^{2}\theta_{n-1}\cdots\sin^{2}\theta_{k}\cos^{2}\theta_{k -1}\sin^{2}\theta_{k-2}\cdots\] \[\Big{(}\sin^{2}\theta_{l-1}\cosh^{2}(\lambda\hbar)-\cos^{2}\theta _{l-1}\sinh^{2}(\lambda\hbar)\Big{)},\] \[g_{ij}^{(a_{0},a_{0})}+g_{ij}^{(a_{0}+1,a_{0}+1)}= f^{2}\sin^{2}\theta_{n-1}\cdots\sin^{2}\theta_{j}\sin\theta_{j-1} \cos\theta_{j-1}\sin^{2}\theta_{j-2}\cdots\] \[\sin^{2}\theta_{i}\sin\theta_{i-1}\cos\theta_{i-1}\sin^{2}\theta_{i -2}\cdots\] \[\Big{(}\sin^{2}\theta_{l-1}\cosh^{2}(\lambda\hbar)-\cos^{2}\theta _{l-1}\sinh^{2}(\lambda\hbar)\Big{)}.\]
These indicate that
\[g_{2k}=-g_{k2},\quad k\neq 2\]
but the other metric components are symmetric, and the entries of the quantum fluctuation \(g=(g_{ij})\) of \(g[0]\) have closed forms which are smooth functions not depending on \(\theta_{1}\). Therefore the Moyal product involving \((g_{ij})\) reduces to the usual commutative product. This means that the inverse matrix \((g^{ij})\) coincides with the inverse matrix in the sense of the usual commutative product, and its elements do not depend on \(\theta_{1}\) either. By (4.8), (4.9), a similar calculation yields that the connection coefficients \(\Gamma_{ijk}\), \(\tilde{\Gamma}_{ijk}\) also have closed forms which are smooth functions not depending on \(\theta_{1}\).
As all quantities relating to the quantum fluctuation and the connection coefficients do not depend on \(\theta_{1}\), the Moyal products arising in deriving the curvatures become usual commutative products. Therefore the curvatures have closed forms, which depend only on \(\rho\), \(\theta_{2},\cdots,\theta_{n-1}\) and \(\hbar\). Q.E.D.
## Appendix A Moyal Products of Trigonometric Functions
Let \(U\subset\mathbb{R}^{2}\) be an open subset with coordinates \((x^{1},x^{2})=(\theta_{1},\theta_{2})\). Define the Moyal product \(*\) on \(\mathcal{A}_{U}=C^{\infty}(U)[[\hbar]]\) by the matrix
\[\begin{pmatrix}0&\lambda\\ -\lambda&0\end{pmatrix},\]
for some constant \(\lambda\neq 0\). Moyal products of trigonometric functions are given as follows.
A.1.

\[(\sin\theta_{1}\sin\theta_{2})*(\sin\theta_{1}\sin\theta_{2})\] \[\quad=\sin^{2}\theta_{1}\sin^{2}\theta_{2}\cosh^{2}(\lambda\hbar)-\cos^{2}\theta_{1}\cos^{2}\theta_{2}\sinh^{2}(\lambda\hbar),\] \[(\sin\theta_{1}\sin\theta_{2})*(\sin\theta_{1}\cos\theta_{2})\] \[\quad=\sin\theta_{2}\cos\theta_{2}\big{(}\sin^{2}\theta_{1}+\sinh^{2}(\lambda\hbar)\big{)}-\sin\theta_{1}\cos\theta_{1}\cosh(\lambda\hbar)\sinh(\lambda\hbar),\] \[(\sin\theta_{1}\sin\theta_{2})*(\cos\theta_{1}\sin\theta_{2})\] \[\quad=\sin\theta_{1}\cos\theta_{1}\big{(}\sin^{2}\theta_{2}+\sinh^{2}(\lambda\hbar)\big{)}+\sin\theta_{2}\cos\theta_{2}\cosh(\lambda\hbar)\sinh(\lambda\hbar),\] \[(\sin\theta_{1}\sin\theta_{2})*(\cos\theta_{1}\cos\theta_{2})\] \[\quad=\sin\theta_{1}\cos\theta_{1}\sin\theta_{2}\cos\theta_{2}+(\sin^{2}\theta_{1}-\sin^{2}\theta_{2})\cosh(\lambda\hbar)\sinh(\lambda\hbar),\]
A.2.
\[(\sin\theta_{1}\cos\theta_{2})*(\sin\theta_{1}\sin\theta_{2})\\ =\sin\theta_{2}\cos\theta_{2}\big{(}\sin^{2}\theta_{1}+\sinh^{2}( \lambda\hbar)\big{)}+\sin\theta_{1}\cos\theta_{1}\cosh(\lambda\hbar)\sinh( \lambda\hbar),\\ (\sin\theta_{1}\cos\theta_{2})*(\sin\theta_{1}\cos\theta_{2})\\ =\sin^{2}\theta_{1}\cos^{2}\theta_{2}\cosh^{2}(\lambda\hbar)-\cos^{2} \theta_{1}\sin^{2}\theta_{2}\sinh^{2}(\lambda\hbar),\\ (\sin\theta_{1}\cos\theta_{2})*(\cos\theta_{1}\sin\theta_{2})\\ =\sin\theta_{1}\cos\theta_{1}\sin\theta_{2}\cos\theta_{2}+(\cos^{2} \theta_{1}-\sin^{2}\theta_{2})\cosh(\lambda\hbar)\sinh(\lambda\hbar),\\ (\sin\theta_{1}\cos\theta_{2})*(\cos\theta_{1}\cos\theta_{2})\\ =\sin\theta_{1}\cos\theta_{1}\big{(}\cos^{2}\theta_{2}+\sinh^{2}( \lambda\hbar)\big{)}-\sin\theta_{2}\cos\theta_{2}\cosh(\lambda\hbar)\sinh( \lambda\hbar),\]
A.3.
\[(\cos\theta_{1}\sin\theta_{2})*(\sin\theta_{1}\sin\theta_{2})\\ =\sin\theta_{1}\cos\theta_{1}\big{(}\sin^{2}\theta_{2}+\sinh^{2}( \lambda\hbar)\big{)}-\sin\theta_{2}\cos\theta_{2}\cosh(\lambda\hbar)\sinh( \lambda\hbar),\\ (\cos\theta_{1}\sin\theta_{2})*(\sin\theta_{1}\cos\theta_{2})\\ =\sin\theta_{1}\cos\theta_{1}\sin\theta_{2}\cos\theta_{2}+(\sin^{2} \theta_{1}-\cos^{2}\theta_{2})\cosh(\lambda\hbar)\sinh(\lambda\hbar),\\ (\cos\theta_{1}\sin\theta_{2})*(\cos\theta_{1}\sin\theta_{2})\\ =\cos^{2}\theta_{1}\sin^{2}\theta_{2}\cosh^{2}(\lambda\hbar)-\sin^{2} \theta_{1}\cos^{2}\theta_{2}\sinh^{2}(\lambda\hbar),\\ (\cos\theta_{1}\sin\theta_{2})*(\cos\theta_{1}\cos\theta_{2})\\ =\sin\theta_{2}\cos\theta_{2}\big{(}\cos^{2}\theta_{1}+\sinh^{2}( \lambda\hbar)\big{)}+\sin\theta_{1}\cos\theta_{1}\cosh(\lambda\hbar)\sinh( \lambda\hbar),\]
A.4.
\[(\cos\theta_{1}\cos\theta_{2})*(\sin\theta_{1}\sin\theta_{2})\\ =\sin\theta_{1}\cos\theta_{1}\sin\theta_{2}\cos\theta_{2}+(\cos^{2} \theta_{1}-\cos^{2}\theta_{2})\cosh(\lambda\hbar)\sinh(\lambda\hbar),\\ (\cos\theta_{1}\cos\theta_{2})*(\sin\theta_{1}\cos\theta_{2})\\ =\sin\theta_{1}\cos\theta_{1}\big{(}\cos^{2}\theta_{2}+\sinh^{2}( \lambda\hbar)\big{)}+\sin\theta_{2}\cos\theta_{2}\cosh(\lambda\hbar)\sinh( \lambda\hbar),\\ (\cos\theta_{1}\cos\theta_{2})*(\cos\theta_{1}\sin\theta_{2})\\ =\sin\theta_{2}\cos\theta_{2}\big{(}\cos^{2}\theta_{1}+\sinh^{2}( \lambda\hbar)\big{)}-\sin\theta_{1}\cos\theta_{1}\cosh(\lambda\hbar)\sinh( \lambda\hbar),\\ (\cos\theta_{1}\cos\theta_{2})*(\cos\theta_{1}\cos\theta_{2})\\ =\cos^{2}\theta_{1}\cos^{2}\theta_{2}\cosh^{2}(\lambda\hbar)-\sin^{2} \theta_{1}\sin^{2}\theta_{2}\sinh^{2}(\lambda\hbar).\]
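These identities can be checked mechanically. The following sympy sketch verifies the first identity of A.1 order by order in \(\hbar\); it assumes the standard real Moyal series \(f*g=\sum_{n}\frac{\hbar^{n}}{n!}\Lambda^{i_{1}j_{1}}\cdots\Lambda^{i_{n}j_{n}}(\partial_{i_{1}}\cdots\partial_{i_{n}}f)(\partial_{j_{1}}\cdots\partial_{j_{n}}g)\) induced by the matrix above, and the truncation order is an illustrative choice.

```python
# A sympy sketch verifying the first identity of A.1 up to O(hbar^4),
# assuming the standard real Moyal series induced by Lam = [[0, l], [-l, 0]].
import sympy as sp

t1, t2, lam, hb = sp.symbols('theta1 theta2 lam hbar', real=True)

def d(f, i, j):
    # apply d/dtheta1 i times and d/dtheta2 j times
    for _ in range(i):
        f = sp.diff(f, t1)
    for _ in range(j):
        f = sp.diff(f, t2)
    return f

def moyal(f, g, order):
    # Moyal product truncated at hbar**order
    total = sp.Integer(0)
    for n in range(order + 1):
        term = sum(sp.binomial(n, k) * lam**k * (-lam)**(n - k)
                   * d(f, k, n - k) * d(g, n - k, k)
                   for k in range(n + 1))
        total += hb**n / sp.factorial(n) * term
    return total

f = sp.sin(t1) * sp.sin(t2)
closed = (sp.sin(t1)**2 * sp.sin(t2)**2 * sp.cosh(lam * hb)**2
          - sp.cos(t1)**2 * sp.cos(t2)**2 * sp.sinh(lam * hb)**2)
delta = moyal(f, f, 4) - sp.series(closed, hb, 0, 5).removeO()
print(sp.simplify(sp.expand_trig(sp.expand(delta))))  # expected: 0
```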
_Acknowledgement. The work is supported by the Chinese NSF grant 11731001 and the special foundation for Junwu and Guangxi Ba Gui Scholars._ |
2303.08473 | Unsupervised Traffic Scene Generation with Synthetic 3D Scene Graphs | Image synthesis driven by computer graphics has recently achieved remarkable
realism, yet synthetic image data generated this way reveals a significant
domain gap with respect to real-world data. This is especially true in
autonomous driving scenarios, where overcoming this gap is critical for
utilizing synthetic data for training neural networks. We propose a method
based on domain-invariant scene representation to directly synthesize traffic
scene imagery without rendering. Specifically, we rely on synthetic scene
graphs as our internal representation and introduce an unsupervised neural
network architecture for realistic traffic scene synthesis. We enhance
synthetic scene graphs with spatial information about the scene and demonstrate
the effectiveness of our approach through scene manipulation. | Artem Savkin, Rachid Ellouze, Nassir Navab, Federico Tombari | 2023-03-15T09:26:29Z | http://arxiv.org/abs/2303.08473v1 | # Unsupervised Traffic Scene Generation with Synthetic 3D Scene Graphs
###### Abstract
Image synthesis driven by computer graphics has recently achieved remarkable realism, yet synthetic image data generated this way reveals a significant domain gap with respect to real-world data. This is especially true in autonomous driving scenarios, where overcoming this gap is critical for utilizing synthetic data to train neural networks. We propose a method based on domain-invariant scene representation to directly synthesize traffic scene imagery without rendering. Specifically, we rely on synthetic scene graphs as our internal representation and introduce an unsupervised neural network architecture for realistic traffic scene synthesis. We enhance synthetic scene graphs with spatial information about the scene and demonstrate the effectiveness of our approach through scene manipulation.
## I Introduction
A broad variety of real-world scenarios require autonomous navigation systems to rely on machine learning-based perception algorithms. Such algorithms are known to be data dependent, yet data acquisition and labeling is a costly and tedious process. It is associated with manual labor, must handle rare _long tail_ corner-case events, and can be hard constrained by ethical aspects, e.g., in the case of near-accident scenarios.
One of the common alternatives to real data acquisition and annotation is represented by simulation and synthetic data. Simulation has a long history in driver assistance systems, but with the renaissance of neural networks, the research community has strengthened efforts in this direction, and many synthetic datasets [28, 27, 34] and simulation systems [9, 30] have appeared.
Despite being a powerful research tool, synthetic data typically reveals a significant domain gap with respect to target real data. The underlying phenomenon, where marginal distributions in both domains differ, has been defined by [32] as a _covariate shift_. This means that the _i.i.d._ assumption does not hold for a synthetic-real setup. Simply stated, synthetic and real domains differ both in content and appearance. This problem is normally a subject for domain adaptation methods such as _sim2real_ domain transfer.
In this work, we propose overcoming the domain gap between both domains by eliminating the rendering part from the pipeline. The intuition behind this concept is that rendered images introduce a bias towards the underlying domain, which is then commonly leveled out by, e.g., style transfer methods. Our main idea is to replace the rendering with an abstract scene representation and directly synthesize realistic images of traffic scenes from it. We utilize scene graphs as such abstract representations. Scene graphs encode objects in the scene as nodes and relations between them as edges [18]; in addition, they can also integrate certain characteristics of the objects as attributes [2].
In this regard, scene graphs are domain agnostic as they can be generated in simulation and be applied to real-world data without a strong domain gap. In fact, scene graphs are fairly simple to simulate and manipulate, which allows for domain randomization and data generation of potentially arbitrary size and variance.
Derived from a simulation, synthetic scene graphs incorporate relevant objects such as _car, person_, etc., and relations
Fig. 1: BDD and Cityscapes traffic scenes generated from a synthetic scene graph.
like _left of_, _right of_. We additionally extend this setup with necessary traffic scene classes like _road, sidewalk, building, vegetation_. It is important to note that simulation provides 3D information about the scene, which can be introduced into the scene graph in the form of spatial attributes and spatial relations between objects. Moreover, synthetic annotations provided by simulation are pixel precise and can be used in a downstream task.
In this work, we propose synthetic 3D scene graphs with spatial components. We also provide the aforementioned graph representations for the existing synthetic urban traffic datasets PfB [27] and Synscapes [34]. To enable realistic traffic scene generation from synthetic scene graphs, we propose a neural network architecture that supports unsupervised training. We demonstrate the benefit of our approach through traffic scene generation and manipulation, provided as an online demo.
## II Related Work
_Rendering._ Historically, traffic scene synthesis was commonly achieved through computer graphics. Multiple works focused on physically realistic rendering of urban traffic environments: SYNTHIA [28], Virtual KITTI [11], PfB [27], VKITTI2 [4]. The most recent and visually realistic one is arguably Synscapes [34]. Simulation can be applied not only in outdoor environments, but also in indoor scene synthesis for autonomous agents [21, 26]. Although rendered data reveals a high level of realism and a great deal of variation, it is still affected by a significant domain gap with respect to real data when used for training machine learning approaches, in particular neural networks.
_Domain Transfer._ To mitigate the limitation introduced by the _sim2real_ domain shift, research has focused on data synthesis using deep neural networks. The vast majority of such models [23, 13, 3, 19] utilize adversarial frameworks (GANs) [12] for image generation. Many of them condition the generation process on visual artifacts or descriptions derived from available labels (semantics, edges, etc.), such as Pix2Pix [16], CRN [5], Pix2PixHD [33] and SPADE [25]. There are also several works which employed unsupervised domain adaptation from synthetic to real images, such as CycleGAN [39, 1], DIRT-T [31], MUNIT [15].
_Scene Graphs._ Another relevant research direction is represented by the use of more domain invariant scene descriptions such as text [38] or scene graphs. Scene graphs are abstract data structures used for describing scenes by encoding objects as nodes and relations between them as edges. Thus, the whole scene can be represented as a directed graph. Scene graphs have been used as an alternative to natural language description for image retrieval [18] and image description [22]. Most recent works on scene graphs for image generation include [17, 2] and [8]. [2] proposed dual layout and appearance embedding for better matching of generated scenes and underlying scene graphs. [8] utilizes scene graphs as an intermediate representation for image manipulation without direct supervision. A lot of works in the area of scene graphs rely on the Visual Genome dataset [20], which provides image samples annotated with scene graphs. To our knowledge, there is no such dataset available for urban traffic environments, so we provide scene graph annotations for commonly used public datasets, in particular, PfB [27] and Synscapes [34].
## III Proposed Approach
Our approach is focused on synthetic scene graphs and unsupervised traffic scene generation, where pairs of synthetic and real scenes are not available. We derive a synthetic scene graph from a procedurally generated scene that does not provide visual characteristics such as textures or materials. For the synthesized scene graph, we then produce an image of the corresponding traffic scene, which resembles the content of the underlying synthetic scene and the realistic appearance of the target data. Figure 2 provides an overview of our method. It highlights the two main parts of the approach: synthetic scene graph generation and realistic image generation.
Fig. 2: Overview of the network with graph processor \(P\), traffic scene generator \(G\), and loss functions \(\mathcal{L}_{SG}\) and \(\mathcal{L}_{TS}\).
### _Synthetic 3D Scene Graphs_
We adopt the setup from [17] for traffic scene data, where every scene graph is represented by nodes \(n_{i}\) associated with each object \(i\) in the scene, and edges \(e_{ij}\) encode relations between particular nodes \(n_{i}\) and \(n_{j}\). Our graph processor is built upon [2] but extends it in several ways, allowing 3D scene graph construction. In addition to the simplified relations (_e.g., left of, above_) [2], we integrate 3D information about the scene in the form of spatial relations (_in front of, behind_) between objects and spatial attributes (depth component _z_) of particular objects. Importantly, this information is available in simulation at no cost. To generate a comprehensive traffic scene, we extend the objects list and integrate background classes such as _sky, building, vegetation_.
Therefore, every node in our graph is represented as \(n_{i}=[o_{i},l_{i},z_{i}],o_{i}\in\mathbb{R},l_{i}\in\{0,1\}^{L\times L},z_{i} \in\{0,1\}^{Z}\), where \(o_{i}\) takes one of the indexes of \(C\) possible object classes, \(L\) is the size of the grid used to denote the object's location \(l_{i}\), and \(Z\) is the scene depth used to denote the object's position \(z_{i}\) along the z-axis. Analogously, every edge is represented as \(e_{i,j}\in\mathbb{R}\), taking one of the values from a predefined dictionary of relation types.
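As an illustration, the node and edge encoding described above can be sketched as follows; the class list, relation vocabulary, grid size and depth resolution are our own illustrative assumptions, and the one-hot vectors \(l_{i}\), \(z_{i}\) are stored as plain indices for brevity.

```python
# Minimal sketch of the synthetic 3D scene graph encoding; class names,
# relation vocabulary and the L, Z values are illustrative assumptions.
from dataclasses import dataclass

CLASSES = ["car", "person", "road", "sidewalk", "building", "vegetation", "sky"]
RELATIONS = ["left of", "right of", "above", "below", "in front of", "behind"]
L = 8    # l_i is one-hot over an L x L location grid (stored as a cell index)
Z = 16   # z_i is one-hot over Z depth bins (stored as a bin index)

@dataclass
class Node:
    o: int  # object class index into CLASSES
    l: int  # flattened cell index on the L x L location grid
    z: int  # depth bin index along the z-axis

@dataclass
class Edge:
    src: int  # index of node n_i
    dst: int  # index of node n_j
    rel: int  # relation index into RELATIONS

# Example: a car behind a person, plus road and sky background nodes.
nodes = [Node(0, 27, 9), Node(1, 29, 4), Node(2, 52, 0), Node(6, 3, 0)]
edges = [Edge(0, 1, RELATIONS.index("behind"))]
```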
To train the graph processor, we also obtain the mask \(m_{i}\) and the bounding box \(b_{i}\in\mathbb{R}^{4}\) for each object \(i\) from the simulation. Hence, our graph processor \(P\) is trained in a supervised manner to produce the full scene layout \(t\in\mathbb{R}^{H\times\ W\times C}\). Similar to [17], our graph processor \(P\) is composed of 3 networks: a _graph network_, a _mask regression network_ and a _box regression network_. The graph network is constructed of graph convolutional layers which extract features from scene graphs and encode a per-object layout embedding, the mask network consists of several deconvolution layers, and the box network is a Multi-Layer Perceptron (MLP).
Fig. 4: Examples of synthetic scene graphs with corresponding BDD generated traffic scenes and semantic maps.
Fig. 3: Examples of synthetic scene graphs and corresponding generated traffic scenes from Cityscapes.
The last two networks are dedicated to predicting masks and bounding boxes of the objects present in the scene.
As an output of the first step, we obtain a synthetic scene graph alongside the corresponding layout \(t\) produced by the graph processor \(P\). Both scene graphs and layouts are domain agnostic; therefore, we omit visual information from the synthetic domain (rendering) and merely encode content.
### _Traffic Scenes_
In the next step, we apply the image generator \(G\) to layout \(t\) to produce a realistic traffic scene image \(x\), which visually resembles the target data \(X=\{\hat{x}\in\mathcal{X}\}\) while at the same time preserving the content of the original scene graph. Since our approach focuses on the realistic image generation from the synthetic scene graphs, it requires unsupervised training, as synthetic-real pairs are not available. To enable unsupervised training we employ adversarial [12] and contrastive [14] losses. Hence, our final objective is as follows:
\[\mathcal{L}=\mathcal{L}_{SG}+\mathcal{L}_{TS} \tag{1}\] \[=\mathcal{L}_{MSE}(b_{i},\hat{b}_{i})+\mathcal{L}_{GAN}(m_{i}, \hat{m}_{i})+\mathcal{L}_{FM}(m_{i},\hat{m}_{i})\] \[+\mathcal{L}_{GAN}(x,\hat{x})+\mathcal{L}_{NCE}(x,\hat{x})\]
where \(\mathcal{L}_{FM}\) is a _feature matching loss_ [29], \(\mathcal{L}_{NCE}\) is a _multilayer, patch-wise contrastive loss_ [24, 6] and \(\mathcal{L}_{GAN}\) is an _adversarial loss_ applied both to masks \(m_{i}\) and images \(x\). With \(\hat{\cdot}\) we denote either a synthetic ground truth (for masks) or a target dataset (for images):
\[\begin{split}\mathcal{L}_{GAN}(m_{i},\hat{m}_{i})&=\mathbb{E}\log D_{m}(\hat{m}_{i})+\mathbb{E}\log(1-D_{m}(m_{i}))\\ \mathcal{L}_{FM}(m_{i},\hat{m}_{i})&=\|\mathbb{E}f(m_{i})-\mathbb{E}f(\hat{m}_{i})\|_{2}^{2}\\ \mathcal{L}_{GAN}(x,\hat{x})&=\mathbb{E}_{\hat{x}\sim X}\log D(\hat{x})+\mathbb{E}\log(1-D(x))\\ \mathcal{L}_{NCE}(x,\hat{x})&=\mathbb{E}\sum_{l}\sum_{s}\ell(\hat{x}_{l}^{s},x_{l}^{s},x_{l}^{S\setminus s})\end{split} \tag{2}\]
where \(f\) denotes the activations of an intermediate layer of the discriminator, \(\ell\) is a cross-entropy loss function for a positive pair of examples, and \(x_{l}^{s}\) are the features of the generator from the \(l\)-th layer at the \(s\)-th location [6].
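A schematic PyTorch-style assembly of the objective in (1) is sketched below; the discriminators \(D_{m}\) and \(D\), the feature extractor \(f\), and the PatchNCE helper are placeholder modules standing in for the authors' implementation.

```python
# Schematic assembly of Eq. (1); D_m, D, f and patch_nce are placeholders.
import torch

def adversarial_loss(disc, fake, real, eps=1e-8):
    # E log D(real) + E log(1 - D(fake))
    return (torch.log(disc(real) + eps).mean()
            + torch.log(1.0 - disc(fake) + eps).mean())

def feature_matching_loss(f, fake, real):
    # || E f(fake) - E f(real) ||_2^2 on an intermediate discriminator layer
    return (f(fake).mean(dim=0) - f(real).mean(dim=0)).pow(2).sum()

def total_loss(b, b_gt, m, m_gt, x, x_real, D_m, D, f, patch_nce):
    l_sg = (torch.nn.functional.mse_loss(b, b_gt)     # L_MSE on boxes
            + adversarial_loss(D_m, m, m_gt)          # L_GAN on masks
            + feature_matching_loss(f, m, m_gt))      # L_FM on masks
    l_ts = (adversarial_loss(D, x, x_real)            # L_GAN on images
            + patch_nce(x, x_real))                   # L_NCE on images
    return l_sg + l_ts
```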
## IV Experiments
To properly evaluate our approach, we design the synthetic graph generator to operate on bounding boxes and semantics as input. This allows producing synthetic scene graphs for any existing synthetic image dataset which provides 3D bounding boxes and semantic segmentation labels. In particular, we use two synthetic traffic datasets for scene graph generation: PFB [27] and Synscapes [34]. Both are large-scale datasets that provide about 25000 urban traffic environment images at \(1914\times 1052\) and \(1440\times 720\), alongside 2D/3D bounding boxes, semantic and instance maps.
In the image generation part, we utilize two real datasets, Cityscapes [7] and Berkeley Deep Drive [36], as providers of appearance characteristics. The first one comprises about 3000 images of traffic scenes from several German cities with finely annotated semantic maps. The second one includes about 100000 images of the streets of US American cities, of which we only use 10000.
### _Synthesis_
We first produce a set of synthetic scene graphs that includes 5k samples. We rely on this set to train our scene graph generator. For both experiments with Cityscapes and BDD, our setup is similar. We conduct all experiments at a resolution of \(256\times 512\) pixels. The training continues for 150 epochs with a learning rate of \(1e-3\) and momentum \(0.99\). Figures 3 and 4 show the results for the training on Cityscapes and BDD, respectively. Additionally, in Figure 9 we compare our method with state-of-the-art unsupervised image translation techniques, e.g., CycleGAN [39] and MUNIT [15]. This is also reflected in the FID score: CycleGAN achieves 103.05, MUNIT 75.98, and our method 47.26.
It is evident from the aforementioned figures that the traffic scene generator picks up the appearance of both datasets nicely, producing generally realistic images. Although slightly inaccurate in fine details (especially those of cars), it is nevertheless very consistent at the class and image level. The generator produces a very congruous appearance for objects of the following classes: _road, sidewalk, building, vegetation_. We also want to highlight the conditioning of the generation
Fig. 5: BDD and Cityscapes traffic scenes generated from the same synthetic scene graph.
process illustrated in Figures 3 and 4 in sections IV-C and IV-B. There we demonstrate traffic scene manipulation with regard to newly introduced classes, spatial attributes and spatial relations to ensure that our improvements take effect.
### _Spatial Attributes and Relations_
Synthetic scene graphs let us leverage spatial information about the traffic scene. Figure 6 visualizes 3 traffic scenes generated from the same scene graph by varying the z-component of a particular object in the graph. In the first image, no car is present in the scene. In the following images, we put a car in the scene and change its position by manipulating the value of the \(z\)-attribute.
The 3D information for the scene not only enables changing the z-attribute of the objects, but also enables spatial relations between them. Thus, Figure 7 shows a traffic scene produced from the scene graph by swapping the relation between two cars, which appear in the scene with approximately the same size when not bound by any spatial relation. With the _in front of_ relation, the right car gets pulled toward the ego vehicle, while the left car appears farther away. When the relation changes to _behind_, the
Fig. 8: An example of traffic scene manipulation by changing classes.
Fig. 6: An example of traffic scene manipulation by changing spatial attribute of the car object.
Fig. 7: An example of traffic scene manipulation by changing spatial relation of 2 car objects.
left car is rendered significantly bigger than before and bigger than the right car.
### _Traffic Scene Classes_
Additionally to objects (_car, bus_) we introduce multiple classes which are characteristic for traffic scene scenarios. Such classes include _road, sidewalk, vegetation, building, sky_. To verify that introducing background classes is effective, we perform several experiments by manipulating the classes in the traffic scenes. Figure 8 shows the effects of such manipulation: changing classes of the particular nodes in the scene graph results in the respective adjustment of the semantic layout of the generated scene.
In addition to the qualitative evaluation of the approach, we conduct several experiments on the downstream task of semantic segmentation. This provides a quantitative assessment of the data generated by our method.
### _Image Segmentation_
To assess the proposed method quantitatively, we train a state-of-the-art semantic segmentation method on the data produced by our method as well as on the underlying synthetic data. Here we focus on the scene graphs produced from the Cityscapes simulation, i.e., we randomly generate two sets of scene graphs with 5000 and 10000 samples. For these scene graphs, we generate corresponding images alongside semantic maps. They are then used to train DRN [37].
We train the segmentation network for 200 epochs on 8 classes of interest and evaluate on the real Cityscapes _val_ dataset. We provide the per-class IoU score and meanIoU, also referred to as the _Jaccard Index_ [10], in Table I. The table demonstrates that DRN trained on the generated 5k dataset lies 5% behind DRN trained on the original simulated PFB 25k dataset. Doubling the number of scene graphs and corresponding images reduces the gap to 2 points. Additionally, we report in Table I the results of the experiment where scene graphs follow the statistics of a real dataset. Knowing the target data class ratio allows us to sub-sample the data and keep those synthetic scene graphs whose layouts follow the target data class ratio. This makes it possible to reduce the scene graph number to 2000 samples and improve segmentation performance by 5% compared to the original 25k dataset.
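For reference, the per-class IoU and meanIoU (Jaccard index) reported in Table I can be computed from a confusion matrix, e.g., with the following numpy sketch.

```python
# Per-class IoU and meanIoU (Jaccard index) from label maps; C = #classes.
import numpy as np

def iou_scores(pred, gt, C):
    conf = np.zeros((C, C), dtype=np.int64)
    np.add.at(conf, (gt.ravel(), pred.ravel()), 1)    # confusion matrix
    tp = np.diag(conf).astype(np.float64)
    union = conf.sum(axis=0) + conf.sum(axis=1) - tp  # TP + FP + FN
    iou = tp / np.maximum(union, 1)
    return iou, iou.mean()  # per-class IoU, meanIoU
```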
## V Conclusion
In this work, we propose a method to ease data generation for realistic traffic scenes from a domain-agnostic scene representation, called scene graphs, instead of using photo-realistic rendering. We utilize synthetic scene graphs enhanced by spatial attributes (_z_) and spatial relations (e.g., _behind_). Furthermore, we introduce an unsupervised approach for realistic image generation from synthetic scene graphs. The approach shows convincing generation results as demonstrated in the proposed qualitative evaluation. We
Fig. 9: Comparison of the generated images (top \(\rightarrow\) bottom): CycleGAN [39], DualGAN [35], MUNIT [15], DRIT [31], ours
also show the effectiveness of our method through traffic scene manipulation and validation on a downstream task.
## VI Acknowledgment
The research leading to these results is funded by the German Federal Ministry for Economic Affairs and Energy within the project "KI Absicherung - Safe AI for Automated Driving". The authors would like to thank the consortium for the successful cooperation.
|
2307.04966 | Wasserstein Distributionally Robust Regret-Optimal Control under Partial
Observability | This paper presents a framework for Wasserstein distributionally robust (DR)
regret-optimal (RO) control in the context of partially observable systems.
DR-RO control considers the regret in LQR cost between a causal and non-causal
controller and aims to minimize the worst-case regret over all disturbances
whose probability distribution is within a certain Wasserstein-2 ball of a
nominal distribution. Our work builds upon the full-information DR-RO problem
that was introduced and solved in Yan et al., 2023, and extends it to handle
partial observability and measurement-feedback (MF). We solve the finite
horizon partially observable DR-RO and show that it reduces to a tractable
semi-definite program whose size is proportional to the time horizon. Through
simulations, the effectiveness and performance of the framework are
demonstrated, showcasing its practical relevance to real-world control systems.
The proposed approach enables robust control decisions, enhances system
performance in uncertain and partially observable environments, and provides
resilience against measurement noise and model discrepancies. | Joudi Hajar, Taylan Kargin, Babak Hassibi | 2023-07-11T01:58:27Z | http://arxiv.org/abs/2307.04966v1 | # Wasserstein Distributionally Robust Regret-Optimal Control under Partial Observability
###### Abstract
This paper presents a framework for Wasserstein distributionally robust (DR) regret-optimal (RO) control in the context of partially observable systems. DR-RO control considers the regret in LQR cost between a causal and non-causal controller and aims to minimize the worst-case regret over all disturbances whose probability distribution is within a certain Wasserstein-2 ball of a nominal distribution. Our work builds upon the full-information DR-RO problem that was introduced and solved in Yan et al., 2023 [1], and extends it to handle partial observability and measurement-feedback (MF). We solve the finite horizon partially observable DR-RO and show that it reduces to a tractable semi-definite program whose size is proportional to the time horizon. Through simulations, the effectiveness and performance of the framework are demonstrated, showcasing its practical relevance to real-world control systems. The proposed approach enables robust control decisions, enhances system performance in uncertain and partially observable environments, and provides resilience against measurement noise and model discrepancies.
regret-optimal control, Wasserstein distance, partial observability, distributionally robust control
## I Introduction
Regret-optimal control [1, 2, 3, 4, 5] is a new approach in control theory that focuses on minimizing the regret associated with control actions in uncertain systems. The regret measures the cumulative difference between the performance achieved by a causal control policy and the performance achieved by an optimal policy that could have been chosen in hindsight. In regret-optimal control, the worst-case regret over all \(\ell_{2}\)-norm-bounded disturbance sequences is minimized.
Distributionally robust control [1, 6, 7, 8], on the other hand, addresses uncertainty in system dynamics and disturbances by considering a set of plausible probability distributions rather than relying on a single distribution as in LQG control, or on a worst-case disturbance, such as in \(H_{\infty}\) or RO control. This approach seeks to find control policies that perform well across all possible distributions within the uncertainty set, thereby providing robustness against model uncertainties and ensuring system performance in various scenarios. The size of the uncertainty set allows one to control the amount of desired robustness so that, unlike \(H_{\infty}\) controllers, say, the controller is not overly conservative. The uncertainty set is most often taken to be the set of disturbances whose distributions are within a given Wasserstein-2 distance of the nominal disturbance distribution. The reason is that, for quadratic costs, the supremum of the expected cost over a Wasserstein ball reduces to a tractable semi-definite program (SDP).
The current paper considers and extends the framework introduced in [1] that applied distributionally robust (DR) control to the regret-optimal (RO) setting. In the full-information finite-horizon setting, the authors of [1] reduce the DR-RO problem to a tractable SDP. In this paper, we extend the results of [1] to partially observable systems where, unlike the full-information setting, the controller does not have access to the system state. Instead, it only has access to partial information obtained through noisy measurements. This is often called the measurement feedback (MF) problem. Of course, the solution to the measurement feedback problem in LQG and \(H_{\infty}\) control is classical. The measurement-feedback setting for DR control has been studied in [7, 9], and for RO control in [10].
In the finite-horizon case, we reduce the DR-RO control problem with measurement feedback to an SDP similar to the full-information case studied in [1]. Furthermore, we validate the effectiveness and performance of our approach through simulations, showcasing its applicability in real-world control systems.
The organization of the paper is as follows. In section II, we review the LQG and regret optimal control formulation in the measurement-feedback setting. In section III, we present the distributionally robust regret-optimal with measurement feedback (DR-RO-MF) problem formulation, in section IV we reformulate the problem as a tractable SDP, and in section V we show numerical results for controlling the flight of a Boeing 747 [11].
## II Preliminaries
### _Notations_
\(\mathbb{R}\) denotes the set of real numbers, \(\mathbb{N}\) is the set of natural numbers, \(\|\cdot\|\) is the 2-norm, \(\mathbb{E}_{(\cdot)}\) is the expectation over \((\cdot)\), \(\mathcal{M}(\cdot)\) is the set of probability distributions over \((\cdot)\) and \(\mathrm{Tr}\) denotes the trace.
### _A Linear Dynamical System_
We consider the following state-space model of a discrete-time, linear time-invariant (LTI) dynamical system:
\[\begin{split} x_{t+1}&=Ax_{t}+Bu_{t}+w_{t},\\ y_{t}&=Cx_{t}+v_{t}.\end{split} \tag{1}\]
Here, \(x_{t}\in\mathbb{R}^{n}\) represents the state of the system, \(u_{t}\in\mathbb{R}^{m}\) is the control input, \(w_{t}\in\mathbb{R}^{n}\) is the process noise, while \(y_{t}\in\mathbb{R}^{p}\) represents the noisy state measurements that the controller has access to, and \(v_{t}\in\mathbb{R}^{p}\) is the measurement noise. The sequences \(\{w_{i}\}\) and \(\{v_{i}\}\) are considered to be randomly distributed according to an unknown joint probability measure \(P\) which lies in a specified compact ambiguity set, \(\mathcal{P}\). For simplicity, we take \(x_{0}\) to be zero.
In the rest of this paper, we adopt an operator form representation of the system dynamics (1). To this end, assume a horizon of \(N\in\mathbb{N}\), and let us define
\[x\coloneqq\begin{bmatrix}x_{0}\\ x_{1}\\ \vdots\\ x_{N-1}\end{bmatrix}\in\mathbb{R}^{Nn},\quad u\coloneqq\begin{bmatrix}u_{0}\\ u_{1}\\ \vdots\\ u_{N-1}\end{bmatrix}\in\mathbb{R}^{Nm}\]
and similarly for \(y\in\mathbb{R}^{Np}\), \(w\in\mathbb{R}^{Nn}\), and \(v\in\mathbb{R}^{Np}\). Using these definitions, we can represent the system dynamics (1) equivalently in operator form as
\[x =Fu+Gw, \tag{2}\] \[y =Ju+Lw+v,\]
where \(F\in\mathbb{R}^{Nn\times Nm}\), \(G\in\mathbb{R}^{Nn\times Nn}\), \(J\in\mathbb{R}^{Np\times Nm}\), and \(L\in\mathbb{R}^{Np\times Nn}\) are strictly causal time-invariant operators (i.e., strictly lower triangular block Toeplitz matrices) corresponding to the dynamics (1).
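For concreteness, these lifted operators can be assembled from \((A,B,C)\) as in the following numpy sketch; this is our own illustration of (2), not code from the paper.

```python
# Build the strictly causal block-Toeplitz operators F, G, J, L of Eq. (2)
# from the state-space matrices (A, B, C) over a horizon N, with x_0 = 0.
import numpy as np

def lift_operators(A, B, C, N):
    n, m = B.shape
    F = np.zeros((N * n, N * m))   # maps u -> x
    G = np.zeros((N * n, N * n))   # maps w -> x
    for t in range(N):
        for s in range(t):         # x_t depends on inputs up to time t-1
            Apow = np.linalg.matrix_power(A, t - 1 - s)
            F[t*n:(t+1)*n, s*m:(s+1)*m] = Apow @ B
            G[t*n:(t+1)*n, s*n:(s+1)*n] = Apow
    Cblk = np.kron(np.eye(N), C)   # block-diagonal measurement operator
    return F, G, Cblk @ F, Cblk @ G  # F, G, J = CF, L = CG
```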
We consider the Linear-Quadratic Gaussian (LQG) cost given as
\[J(u,w,v)\coloneqq x^{T}Qx+u^{T}Ru \tag{3}\]
where \(Q,R\succ 0\) are positive definite matrices of the appropriate dimensions. In order to simplify the notation, we redefine \(x\) and \(u\) as \(x\gets Q^{\frac{1}{2}}x\), and \(u\gets R^{\frac{1}{2}}u\), so that (3) becomes
\[J(u,w,v)=\|x\|^{2}+\|u\|^{2}. \tag{4}\]
### _Controller Design_
We consider a linear controller that has only access to the measurements:
\[u=Ky,\quad K\in\mathcal{K}, \tag{5}\]
where \(\mathcal{K}\subseteq\mathbb{R}^{Nm\times Np}\) is the space of causal (i.e., lower triangular) matrices. Then, the closed-loop state measurement becomes
\[y=(I-JK)^{-1}(Lw+v). \tag{6}\]
As in [10], let
\[E=K(I-JK)^{-1}, \tag{7}\]
be the Youla parametrization, so that
\[K=(I+EJ)^{-1}E. \tag{8}\]
The closed-loop LQG cost (4) can then be written as:
\[J(K,w,v)=\begin{bmatrix}w^{T}&v^{T}\end{bmatrix}T_{K}^{T}T_{K}\begin{bmatrix}w\\ v\end{bmatrix}, \tag{9}\]
where \(T_{K}\) is the transfer operator associated with \(K\) that maps the disturbance sequences \(\begin{bmatrix}w\\ v\end{bmatrix}\) to the state and control sequences \(\begin{bmatrix}x\\ u\end{bmatrix}\):
\[T_{K}\coloneqq\begin{bmatrix}FEL+G&FE\\ EL&E\end{bmatrix}. \tag{10}\]
### _Regret-Optimal Control with Measurement-Feedback_
Given a noncausal benchmark controller \(K_{0}\), we define the regret as:
\[R(K,w,v) \coloneqq J(K,w,v)-J(K_{0},w,v), \tag{11}\] \[=\begin{bmatrix}w^{T}&v^{T}\end{bmatrix}(T_{K}^{T}T_{K}-T_{K_{0}} ^{T}T_{K_{0}})\begin{bmatrix}w\\ v\end{bmatrix}, \tag{12}\]
which measures the excess cost that a causal controller suffers by not knowing the future. In other terms, regret is the difference between the cost accumulated by a causal controller and the cost accumulated by a benchmark noncausal controller that knows the complete disturbance trajectory. The problem of minimizing regret in the measurement-feedback setting is referred to as (RO-MF) and is formulated as:
\[\inf_{K\in\mathcal{K}}\sup_{w,v}\frac{R(K,w,v)}{\|w\|^{2}+\|v\|^{2}}, \tag{13}\]
which is solved suboptimally by reducing it to a level-1 suboptimal Nehari problem [10].
## III Distributionally Robust Regret-Optimal Control
In this section, we introduce the **distributionally robust regret-optimal** (DR-RO) control problem **with measurement feedback**, which we refer to as **DR-RO-MF**.
In this setting, the objective is to find a controller \(K\!\in\!\mathcal{K}\) that minimizes the maximum expected regret among all joint probability distributions of the disturbances in an ambiguity set \(\mathcal{P}\). This can be formulated formally as
\[\inf_{K\in\mathcal{K}}\sup_{P\in\mathcal{P}}\mathbb{E}_{P}[R(K,w,v)], \tag{14}\]
where the disturbances \(\begin{bmatrix}w\\ v\end{bmatrix}\) are distributed according to \(P\!\in\!\mathcal{P}\).
To solve this problem, we first need to characterize the ambiguity set \(\mathcal{P}\) and explicitly determine a benchmark noncausal controller \(K_{0}\). As in [1], we choose \(\mathcal{P}\) to be the set of probability distributions that are at a distance of at most \(r>0\) to a nominal probability distribution, \(P_{0}\!\in\!\mathcal{M}(\mathbb{R}^{N(n+p)})\). Here, the distance is chosen to be the type-2 Wasserstein distance defined as [12]:
\[W_{2}^{2}(P_{1},P_{2}):=\inf_{\pi\in\Pi(P_{1},P_{2})}\ \int_{\mathbb{R}^{n} \times\mathbb{R}^{n}}\|z_{1}\!-\!z_{2}\|^{2}\,\pi(dz_{1},dz_{2}),\]
where the set \(\Pi(P_{1},P_{2})\) comprises all joint distributions that have marginal distributions \(P_{1}\) and \(P_{2}\). Then, \(\mathcal{P}\) can be written as:
\[\mathcal{P}:=\{P\in\mathcal{M}(\mathbb{R}^{N(n+p)})\,|\,W_{2}(P_{0},P)\leq r\}. \tag{15}\]
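Although (15) is defined over arbitrary distributions, for zero-mean Gaussians the type-2 Wasserstein distance admits the well-known closed form \(W_{2}^{2}(\mathcal{N}(0,\Sigma_{1}),\mathcal{N}(0,\Sigma_{2}))=\operatorname{Tr}\big{(}\Sigma_{1}+\Sigma_{2}-2(\Sigma_{2}^{1/2}\Sigma_{1}\Sigma_{2}^{1/2})^{1/2}\big{)}\), which the following sketch uses to test whether a Gaussian candidate lies in \(\mathcal{P}\).

```python
# Closed-form type-2 Wasserstein distance between zero-mean Gaussians,
# handy for checking membership of a Gaussian candidate in the ball P.
import numpy as np
from scipy.linalg import sqrtm

def w2_gaussian(S1, S2):
    root = sqrtm(S2)
    cross = np.real(sqrtm(root @ S1 @ root))
    return np.sqrt(max(np.trace(S1 + S2 - 2.0 * cross), 0.0))

def in_ambiguity_set(S_candidate, S_nominal, r):
    return w2_gaussian(S_nominal, S_candidate) <= r
```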
Unlike the full-information case, we know from Theorem 1 in [10] that in the measurement feedback case, there is no optimal noncausal controller that dominates every other controller for every disturbance. Therefore, we will choose \(K_{0}\) as the optimal noncausal controller that minimizes the Frobenius norm of \(T_{K}\). Theorem 3 in [10] shows that such a controller can be found as:
\[K_{0}=(I+E_{0}J)^{-1}E_{0}, \tag{16}\]
where the associated operator, \(T_{K_{0}}\) is:
\[T_{K_{0}}=\begin{bmatrix}FE_{0}L+G&FE_{0}\\ E_{0}L&E_{0}\end{bmatrix}, \tag{17}\]
with
\[E_{0} \coloneqq-T^{-1}F^{T}GL^{T}U^{-1}, \tag{18}\] \[T \coloneqq I+F^{T}F,\] (19) \[U \coloneqq I+LL^{T}. \tag{20}\]
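Given lifted operators from (2), the Youla parameter \(E_{0}\) of the benchmark controller is a few lines of linear algebra; the sketch below is an illustrative implementation of (18)-(20).

```python
# Youla parameter E_0 of the benchmark noncausal controller, Eqs. (18)-(20).
import numpy as np

def benchmark_E0(F, G, L):
    T = np.eye(F.shape[1]) + F.T @ F                # (19)
    U = np.eye(L.shape[0]) + L @ L.T                # (20)
    return -np.linalg.solve(T, F.T @ G @ L.T) @ np.linalg.inv(U)  # (18)
```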
## IV Tractable Formulation
In this section, we introduce a tractable reformulation of the DR-RO-MF control problem (14).
### _DR-RO-MF Control Problem_
Defining
\[\mathcal{C}_{K}\coloneqq T_{K}^{T}T_{K}-T_{K_{0}}^{T}T_{K_{0}}, \tag{21}\]
we can rewrite the DR-RO-MF control problem (14) as
\[\inf_{K\in\mathcal{K}}\sup_{P\in\mathcal{P}}\mathbb{E}_{P}\left[ \begin{bmatrix}w^{T}&v^{T}\end{bmatrix}\mathcal{C}_{\mathcal{K}}\begin{bmatrix} w\\ v\end{bmatrix}\right]. \tag{22}\]
The following theorem gives the dual problem of inner maximization and characterizes the worst-case distribution.
**Theorem IV.1**.: _[adapted from Theorems 2 and 3 in [1]]. Suppose \(P_{0}\) is absolutely continuous with respect to the Lebesgue measure on \(\mathbb{R}^{N(n+p)}\) and \(\begin{bmatrix}w_{0}\\ v_{0}\end{bmatrix}\sim P_{0}\). The optimization problem:_
\[\sup_{P\in\mathcal{P}}\mathbb{E}_{P}\left[\begin{bmatrix}w^{T}&v^{T}\end{bmatrix} \mathcal{C}_{\mathcal{K}}\begin{bmatrix}w\\ v\end{bmatrix}\right] \tag{23}\]
_where \(\begin{bmatrix}w\\ v\end{bmatrix}\sim P\) and \(\mathcal{C}_{K}\in\mathbb{S}^{N(n+p)}\), with \(\lambda_{max}(\mathcal{C}_{K})\neq 0\), has a finite solution and is equivalent to the convex optimization problem:_
\[\inf_{\begin{subarray}{c}\gamma\geq 0,\\ \gamma I\succ\mathcal{C}_{K}\end{subarray}}\gamma(r^{2}-\operatorname{Tr}(M_{0}))+\gamma^{2}\operatorname{Tr}(M_{0}(\gamma I-\mathcal{C}_{K})^{-1}), \tag{24}\]
_where \(M_{0}:=\mathbb{E}_{P_{0}}\left[\begin{bmatrix}w\\ v\end{bmatrix}\begin{bmatrix}w^{T}&v^{T}\end{bmatrix}\right]\). Furthermore, the disturbance that achieves the worst-case regret is \(\begin{bmatrix}w^{*}\\ v^{*}\end{bmatrix}\sim P^{*}\), where \(\begin{bmatrix}w^{*}\\ v^{*}\end{bmatrix}=\gamma^{*}(\gamma^{*}I-\mathcal{C}_{K})^{-1}\begin{bmatrix} w_{0}\\ v_{0}\end{bmatrix}\), and \(\gamma^{*}\) is the optimal solution of (24), which also satisfies the algebraic equation:_
\[\operatorname{Tr}((\gamma(\gamma I-\mathcal{C}_{K})^{-1}-I)^{2}M_{0})=r^{2} \tag{25}\]
Proof.: The proof follows from Theorems 2 and 3 in [1] and is omitted for brevity here.
We highlight two remarks pertaining to the presented theorem.
**Remark 1**: _Notice that the supremum of the quadratic cost depends on \(P_{0}\) only though its covariance matrix \(M_{0}\). Note further that as \(r\to\infty\), the optimal \(\gamma\) reaches its smallest possible value (since \(r^{2}\) multiplies \(\gamma\) in (24)). The smallest possible value that \(\gamma\) can take is simply the operator norm of \(\mathcal{C}_{K}\), which means that the DR-RO-MF controller approaches the regret-optimal controller as \(r\to\infty\)._
**Remark 2**: _Notice that the worst-case disturbance takes on a Gaussian distribution when the nominal disturbance is Gaussian. This is not immediately evident as the ambiguity set \(\mathcal{P}\) contains non-Gaussian distributions. Note further that the worst-case disturbance is correlated even if the nominal distribution has white noise._
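For a fixed controller, i.e., a fixed \(\mathcal{C}_{K}\), the scalar dual (24) and the worst-case disturbance map of Theorem IV.1 can be evaluated numerically, as in the following sketch; the bounded search interval for \(\gamma\) is an illustrative choice.

```python
# Evaluate the dual (24) for a fixed C_K and recover the worst-case
# disturbance map of Theorem IV.1; the gamma bracket is illustrative.
import numpy as np
from scipy.optimize import minimize_scalar

def worst_case_regret(C, M0, r):
    d = C.shape[0]
    lam_max = np.linalg.eigvalsh(C).max()

    def dual(gamma):
        R = np.linalg.inv(gamma * np.eye(d) - C)
        return gamma * (r**2 - np.trace(M0)) + gamma**2 * np.trace(M0 @ R)

    res = minimize_scalar(dual, bounds=(lam_max + 1e-6, lam_max + 1e6),
                          method='bounded')
    gamma = res.x
    # [w*; v*] = gamma* (gamma* I - C_K)^{-1} [w0; v0]
    worst_map = gamma * np.linalg.inv(gamma * np.eye(d) - C)
    return res.fun, gamma, worst_map
```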
Assuming the covariance of the nominal distribution to be
\[M_{0}=\mathbb{E}_{P_{0}}\left[\begin{bmatrix}w\\ v\end{bmatrix}\begin{bmatrix}w^{T}&v^{T}\end{bmatrix}\right]=I. \tag{26}\]
so that \(\operatorname{Tr}(M_{0})=N(n+p)\), the optimization problem (22) can be cast equivalently using Theorem IV.1 as
\[\inf_{K\in\mathcal{K}}\inf_{\gamma\geq 0}\gamma(r^{2}-N(n+p))+ \gamma^{2}\operatorname{Tr}((\gamma I-\mathcal{C}_{K})^{-1})\] \[\text{s.t.}\begin{cases}&\gamma I\succ\mathcal{C}_{K}\\ &\mathcal{C}_{K}=T_{K}^{T}T_{K}-T_{K_{0}}^{T}T_{K_{0}}\end{cases} \tag{27}\]
As in [10], define the unitary matrices \(\Psi\) and \(\Theta\):
\[\Theta =\begin{bmatrix}S^{-\frac{1}{2}}&0\\ 0&T^{-\frac{T}{2}}\end{bmatrix}\begin{bmatrix}I&-F\\ F^{T}&I\end{bmatrix} \tag{28}\] \[\Psi =\begin{bmatrix}I&L^{T}\\ -L&I\end{bmatrix}\begin{bmatrix}V^{-\frac{1}{2}}&0\\ 0&U^{-\frac{T}{2}}\end{bmatrix} \tag{29}\]
where \(T\) and \(U\) are as in (19) and (20), and
\[S =I+FF^{T} \tag{30}\] \[V =I+L^{T}L. \tag{31}\]
and \(S^{\frac{1}{2}}\), \(T^{\frac{1}{2}}\), \(U^{\frac{1}{2}}\), and \(V^{\frac{1}{2}}\) are (block) lower triangular matrices, such that \(S=S^{\frac{1}{2}}S^{\frac{T}{2}}\), \(T=T^{\frac{T}{2}}T^{\frac{1}{2}}\), \(U=U^{\frac{1}{2}}U^{\frac{T}{2}}\), \(V=V^{\frac{T}{2}}V^{\frac{1}{2}}\). Then, the optimization problem (27) is equivalent to:
\[\inf_{\begin{subarray}{c}K\in\mathcal{K},\\ \gamma\geq 0\end{subarray}}\gamma(r^{2}-N(n+p))+\gamma^{2}\operatorname{Tr}(( \gamma I-\bar{\mathcal{C}}_{K})^{-1})\] \[\text{s.t.}\begin{cases}&\bar{\mathcal{C}}_{K}=(\Theta T_{K}\Psi)^ {T}\Theta T_{K}\Psi-(\Theta T_{K_{0}}\Psi)^{T}\Theta T_{K_{0}}\Psi\end{cases} \tag{32}\]
which holds true since the trace is invariant under the unitary matrices \(\Theta\) and \(\Psi\). By introducing an auxiliary variable \(X\succeq\gamma^{2}(\gamma I-\bar{\mathcal{C}}_{K})^{-1}\) and leveraging the Schur complement theorem as in [1], the problem (32) can be recast as
\[\inf_{\begin{subarray}{c}K\in\mathcal{K},\\ \gamma\geq 0,\\ X\succeq 0\end{subarray}} \gamma(r^{2}-N(n+p))+\mathrm{Tr}(X)\] (33) s.t. \[\left\{\begin{array}{cc}\begin{bmatrix}X&\gamma I\\ \gamma I&\gamma I-\bar{\mathcal{C}}_{K}\end{bmatrix}\succeq 0\\ \gamma I-\bar{\mathcal{C}}_{K}\succ 0\\ \bar{\mathcal{C}}_{K}\!=\!(\Theta T_{K}\Psi)^{T}\Theta T_{K}\Psi-(\Theta T_{K _{0}}\Psi)^{T}\Theta T_{K_{0}}\Psi\end{array}\right.\]
In the following lemma, we establish some of the important identities that are utilized to convert problem (33) to a tractable convex program.
**Lemma IV.2**.: _[adapted from [10]]. The following statements hold:_
1. \[\gamma I-\bar{\mathcal{C}}_{K}=\begin{bmatrix}\gamma I&-PZ\\ -Z^{T}P^{T}&\gamma I-Z^{T}Z\end{bmatrix}\] (34) _where_ \[Z=T^{\frac{1}{2}}EU^{\frac{1}{2}}-W\] (35) \[W=-T^{-\frac{T}{2}}F^{T}GL^{T}U^{-\frac{T}{2}}\] (36) \[P=V^{-\frac{T}{2}}G^{T}FT^{-\frac{1}{2}}\] (37) _and E, T, U and V are as defined in 7, 19, 20 and 31 respectively._
2. \[\gamma I-\bar{\mathcal{C}}_{K}\succ 0\Leftrightarrow\|Y-W_{-,\gamma}\|_{2}\leq 1\] (38) _where_ \[\gamma^{-1}I+\gamma^{-2}P^{T}P=M_{\gamma}^{T}M_{\gamma}\] (39) \[M_{\gamma}=\left(\gamma^{-1}I+\gamma^{-2}P^{T}P\right)^{\frac{1 }{2}}\] (40) \[W_{\gamma}=M_{\gamma}W\] (41) \[Y=M_{\gamma}T^{\frac{1}{2}}EU^{\frac{1}{2}}-W_{+,\gamma}\] (42) _and_ \(W_{+,\gamma}\) _and_ \(W_{-,\gamma}\) _are the causal and strictly anti-causal parts of_ \(W_{\gamma}\)_. Here,_ \(M_{\gamma}\) _is lower triangular, and positive-definite._
3. \(Y\) _is causal iff_ \(E\) _is causal, where_ \(E\) _can be found as follows:_ \[E=T^{-\frac{1}{2}}M_{\gamma}^{-1}(Y+W_{+,\gamma})U^{-\frac{1}{2}}\] (43)
4. _The condition in (_38_) is recognized as a level-1 suboptimal Nehari problem that approximates a strictly anti-causal matrix_ \(W_{-,\gamma}\) _by a causal matrix_ \(Y\)_._
Proof.: The proof follows from Theorem 4 in [10] and is omitted for brevity here.
Using Lemma IV.2, problem (33) can be reformulated as a tractable optimization program:
\[\inf_{\begin{subarray}{c}Z,Y\in\mathcal{K},\\ \gamma\geq 0,\\ X\succeq 0\end{subarray}} \gamma(r^{2}-N(n+p))+\mathrm{Tr}(X)\quad\text{s.t.}\left\{\begin{array}{l}\begin{bmatrix}X_{11}&X_{12}&\gamma I&0\\ X_{12}^{T}&X_{22}&0&\gamma I\\ \gamma I&0&\gamma I&-PZ\\ 0&\gamma I&-Z^{T}P^{T}&\gamma I-Z^{T}Z\end{bmatrix}\succeq 0\\ \|Y-W_{-,\gamma}\|_{2}\leq 1\end{array}\right. \tag{44}\]

\[=\inf_{\begin{subarray}{c}Z,Y\in\mathcal{K},\\ \gamma\geq 0,\\ X\succeq 0\end{subarray}} \gamma(r^{2}-N(n+p))+\mathrm{Tr}(X)\quad\text{s.t.}\left\{\begin{array}{l}\begin{bmatrix}X_{11}&X_{12}&\gamma I&0&0\\ X_{12}^{T}&X_{22}&0&\gamma I&0\\ \gamma I&0&\gamma I&-PZ&0\\ 0&\gamma I&-Z^{T}P^{T}&\gamma I&Z^{T}\\ 0&0&0&Z&I\end{bmatrix}\succeq 0\\ \|Y-W_{-,\gamma}\|_{2}\leq 1\end{array}\right. \tag{45}\]
where the last step follows from the Schur complement. Using (35), (43), and
\[H_{\gamma}=M_{\gamma}^{-1}W_{+,\gamma}-W \tag{46}\]
we establish our main theorem.
**Theorem IV.3** (Tractable Formulation of DR-RO-MF).: _The distributionally robust regret-optimal control problem in the measurement-feedback setting (14) reads:_

\[\inf_{\begin{subarray}{c}Y\in\mathcal{K},\\ \gamma\geq 0,\\ X\succeq 0\end{subarray}} \gamma(r^{2}-N(n+p))+\mathrm{Tr}(X)\quad\text{s.t.}\left\{\begin{array}{l}\begin{bmatrix}X_{11}&X_{12}&\gamma I&0&0\\ X_{12}^{T}&X_{22}&0&\gamma I&0\\ \gamma I&0&\gamma I&-P(*)&0\\ 0&\gamma I&-(*)^{T}P^{T}&\gamma I&(*)^{T}\\ 0&0&0&(*)&I\end{bmatrix}\succeq 0\\ (*)=M_{\gamma}^{-1}Y+H_{\gamma}\\ \begin{bmatrix}I&(Y-W_{-,\gamma})^{T}\\ Y-W_{-,\gamma}&I\end{bmatrix}\succ 0\end{array}\right. \tag{47}\]
### _Sub-Optimal Problem_
For a given value of \(\gamma\), problem (47) can be simplified into a tractable SDP. In practical implementations, we can solve problem (47) by optimizing the objective function with respect to the variables \(Y\) and \(X\) while fixing \(\gamma\), thus transforming the problem into an SDP, which can be solved using standard convex optimization packages. We then iteratively refine the value of \(\gamma\) until it converges to the optimal solution \(\gamma^{*}\). This iterative process ensures that we obtain the best possible value for \(\gamma\) that minimizes the objective function in problem (47).
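A cvxpy sketch of this fixed-\(\gamma\) step is given below; the matrices \(P\), \(M_{\gamma}^{-1}\), \(H_{\gamma}\) and \(W_{-,\gamma}\) are assumed precomputed from Lemma IV.2, \(Y\) is assumed square (which holds when \(m=p\), as in Section V) so that causality can be imposed as entrywise lower-triangularity, and the whole routine is an illustrative reduction rather than the authors' implementation.

```python
# Fixed-gamma SDP of Theorem IV.3 in cvxpy; P, Minv = M_gamma^{-1},
# Hg = H_gamma and Wm = W_{-,gamma} are assumed precomputed for this gamma.
import cvxpy as cp
import numpy as np

def solve_fixed_gamma(gamma, r, Ntot, P, Minv, Hg, Wm):
    a, b = P.shape
    c = Wm.shape[1]
    Y = cp.Variable((b, c))
    X = cp.Variable((a + c, a + c), symmetric=True)
    star = Minv @ Y + Hg                      # (*) = M_gamma^{-1} Y + H_gamma
    Ia, Ib, Ic = np.eye(a), np.eye(b), np.eye(c)
    O = np.zeros
    M = cp.bmat([
        [X[:a, :a], X[:a, a:], gamma*Ia,      O((a, c)), O((a, b))],
        [X[a:, :a], X[a:, a:], O((c, a)),     gamma*Ic,  O((c, b))],
        [gamma*Ia,  O((a, c)), gamma*Ia,      -P @ star, O((a, b))],
        [O((c, a)), gamma*Ic,  -(P @ star).T, gamma*Ic,  star.T],
        [O((b, a)), O((b, c)), O((b, a)),     star,      Ib],
    ])
    cons = [M >> 0,
            cp.norm(Y - Wm, 2) <= 1,          # Nehari (level-1) constraint
            cp.upper_tri(Y) == 0]             # causality (Y square here)
    prob = cp.Problem(cp.Minimize(gamma * (r**2 - Ntot) + cp.trace(X)), cons)
    prob.solve()
    return prob.value, Y.value
```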
### _LQG and RO-MF Control Problems as Special Cases_
Interestingly, LQG and RO control in the measurement-feedback setting can be recovered from DR-RO-MF control by varying the radius \(r\), which represents the extent of uncertainty regarding the accuracy of the nominal distribution in the ambiguity set. When \(r\to 0\), the ambiguity set reduces to a singleton comprising solely the nominal distribution. Consequently, the problem simplifies into a stochastic optimal control problem under partial observability:
\[\inf_{K\in\mathcal{K}}\mathbb{E}_{P_{0}}[J(K,w,v)] \tag{48}\]
As \(r\to\infty\), the ambiguity set expands to include arbitrary adversarially generated disturbances, and the optimal \(\gamma\) reaches its smallest possible value, which is the operator norm of \(\mathcal{C}_{K}\). This means that the problem reduces to the RO-MF control problem which we discussed in Section II-D.
## V Simulations
### _Flight Control_
We focus on the problem of controlling the longitudinal flight of a Boeing 747, using the linearized dynamics of the aircraft as presented in [11]. The linear dynamical system provided describes the aircraft's dynamics during level flight at an altitude of 7.57 miles and a speed of 593 miles per hour, with a discretization interval of 0.1 seconds. The state variables of the system encompass the aircraft's velocity along the body axis, velocity perpendicular to the body axis, angle between the body axis and the horizontal plane, and angular velocity. The inputs to the system are the elevator angle and thrust. The process noise accounts for variations caused by external wind conditions. The discrete-time state space model is:
\[A =\begin{bmatrix}0.9801&0.0003&-0.0980&0.0038\\ -0.3868&0.9071&0.0471&-0.0008\\ 0.1591&-0.0015&0.9691&0.0003\\ -0.0198&0.0958&0.0021&1.000\\ \end{bmatrix}\] \[B =\begin{bmatrix}-0.0001&0.0058\\ 0.0296&0.0153\\ 0.0012&-0.0908\\ 0.0015&0.0008\\ \end{bmatrix},C=\begin{bmatrix}1&0&0&0\\ 0&0&0&1\\ \end{bmatrix}.\]
We conduct all experiments using MATLAB, on a PC with an Intel Core i7-1065G7 processor and 16 GB of RAM. The optimization problems are solved using the CVX package [13].
We limit the horizon to \(N=10\). We take the nominal distribution \(P_{0}\) to be Gaussian with mean \(\mu_{0}=0\) and covariance \(\Sigma_{0}=I\), and we investigate various values for the radius \(r\), specifically:
\[r\in\{0,0.2,0.4,0.6,0.8,1,1.5,2,4,8,16,32,126\}.\]
For each value of \(r\), we solve the sub-optimal problem described in section IV-B, iterating over \(\gamma\) until convergence to \(\gamma^{*}\).
To assess the performance of the controller, we compute the worst-case disturbance, which lies at a Wasserstein distance \(r\) from \(P_{0}\), as discussed in theorem IV.1. Finally, we compare the regret cost of the DR-RO-MF controller with that of the LQG, \(H_{\infty}\)[14], and RO-MF [10] controllers while considering the worst-case disturbance corresponding to the DR-RO-MF controller. The results are shown in Figures 1 and 2.
The DR-RO-MF controller achieves the minimum cost under worst-case disturbance conditions for any given value of \(r\). When \(r\) is sufficiently small (less than 0.2), the cost of the DR-RO-MF controller closely approximates that of the LQG controller (figure 1). Conversely, for sufficiently large values
Fig. 1: Controller costs for \(r\in\{0,0.2,0.4,0.6,0.8,1,1.5,2,4\}\). At \(r=0\), the top-performing controllers are DR-RO-MF and LQG, exhibiting regrets of 5.34. They are followed by \(\text{H}_{\infty}\) with a regret cost of 5.47, and finally RO-MF with a regret cost of 13.8. The ranking of the controllers based on regret costs is: **DR-RO-MF=LQG=5.34 \(<\) H\({}_{\infty}\)=5.47 \(<\) RO-MF=13.8**.
As \(r\) increases to 4, DR-RO-MF remains the best-performing controller with a regret of 141. It is followed by RO-MF with a regret of 144, \(\text{H}_{\infty}\) with a regret of 154, and finally LQG with a regret of 156. The ranking of the controllers at \(r=4\) based on regret costs is: **DR-RO-MF=141 \(<\) RO-MF=144 \(<\) H\({}_{\infty}\)=154 \(<\) LQG=156**.
Fig. 2: Controller costs for \(r\in\{4,8,16,32,126\}\). At \(r=8\), the best-performing controller is DR-RO-MF with a regret of 437, closely comparable to the RO-MF controller's regret of 438. They are followed by \(\text{H}_{\infty}\) with a regret of 499, and finally LQG with a regret of 505. The ranking of controllers based on regret costs is as follows: **DR-RO-MF=437 \(\lesssim\) RO-MF=438 \(<\) H\({}_{\infty}\)=499 \(<\) LQG=505**.
When \(r\) increases to 126, which approximates the behavior of \(r\) approaching infinity, the order of the best-performing controllers remains unchanged: **DR-RO-MF=RO-MF=\(8.33\times 10^{4}<\) H\({}_{\infty}\)=\(9.50\times 10^{4}<\) LQG=\(9.57\times 10^{4}\)**. DR-RO-MF and RO-MF controllers exhibit similar performance in this regime.
of \(r\) (greater than 8), the cost of the DR-RO-MF controller closely matches that of the RO-MF controller (figure 2). These observations align with theoretical findings as elaborated in section IV-C.
Furthermore, it is worth noting that for large values of \(r\) (figure 2), the LQG controller yields the poorest results. Conversely, for small values of \(r\) (figure 1), the LQG controller performs on par with the DR-RO-MF controller, emerging as the best choice, as mentioned earlier. This discrepancy is expected since LQG control accounts only for disturbances drawn from the nominal distribution, assuming uncorrelated noise. On the other hand, RO-MF exhibits inferior performance when \(r\) is small (figure 1), but gradually becomes the top-performing controller alongside DR-RO-MF as \(r\) increases. This behavior arises from the fact that RO-MF is specifically designed for sufficiently large \(r\). Lastly, note that the \(H_{\infty}\) cost lies between the costs of the other controllers, interpolating their respective costs.
### _Performance Under Adversarially Chosen Distribution_
For any given causal controller \(K_{c}\), an adversary can choose the worst-case distribution of disturbances for a fixed \(r\) as
\[\arg\max_{P\in\mathcal{P}}\mathbb{E}_{P}R(K_{c},w,v)=:P_{c}, \tag{49}\]
where \(R\) is the regret as in (11). Denoting by \(K_{\text{DR-RO-MF}}\) the optimal DR-RO-MF controller and by \(P_{\text{DR-RO-MF}}\) the worst-case (adversarial) distribution corresponding to \(K_{\text{DR-RO-MF}}\), we have that
\[\mathbb{E}_{P_{c}}R(K_{c},w,v) =\max_{P\in\mathcal{P}}\mathbb{E}_{P}R(K_{c},w,v), \tag{50}\] \[\geq\min_{K\in\mathcal{K}}\max_{P\in\mathcal{P}}\mathbb{E}_{P}R(K,w,v),\] (51) \[=\mathbb{E}_{P_{\text{DR-RO-MF}}}R(K_{\text{DR-RO-MF}},w,v),\] (52) \[\geq\mathbb{E}_{P_{c}}R(K_{\text{DR-RO-MF}},w,v), \tag{53}\]
where the first equality follows from (49) and the last inequality is due to the fact that \(P_{\text{DR-RO-MF}}\) is the worst-case distribution for \(K_{\text{DR-RO-MF}}\). In other words, DR-RO-MF controller is robust to adversarial changes in distribution as it yields smaller expected regret compared to any other causal controller \(K_{c}\) when the disturbances are sampled from the worst-case distribution \(P_{c}\) corresponding to \(K_{c}\).
The simulation results presented in Subsection V-A show that DR-RO-MF outperforms the RO-MF, \(H_{\infty}\), and LQG (designed assuming disturbances are sampled from \(P_{0}\)) controllers under the worst-case distribution of the DR-RO-MF controller \(P_{\text{DR-RO-MF}}\), i.e.,
\[\mathbb{E}_{P_{\text{DR-RO-MF}}}R(K_{c},w,v)\geq\mathbb{E}_{P_{\text{DR-RO- MF}}}R(K_{\text{DR-RO-MF}},w,v). \tag{54}\]
This directly implies that the theoretically expected inequality
\[\mathbb{E}_{P_{c}}R(K_{c},w,v)\geq\mathbb{E}_{P_{c}}R(K_{\text{DR-RO-MF}},w,v) \tag{55}\]
is validated and indeed strengthened by combining inequalities (53), (54), and
\[\mathbb{E}_{P_{c}}R(K_{c},w,v)\geq\mathbb{E}_{P_{\text{DR-RO-MF}}}R(K_{c},w,v). \tag{56}\]
To further support our claims, we assess the performance of the LQG and RO-MF controllers by measuring the relative reduction in expected regret when the DR-RO-MF controller is utilized under the worst-case distributions corresponding to the LQG and RO-MF controllers, respectively:
\[\frac{\mathbb{E}_{P_{c}}R(K_{c},w,v)-\mathbb{E}_{P_{c}}R(K_{\text{DR-RO-MF}},w, v)}{\mathbb{E}_{P_{c}}R(K_{c},w,v)}\times 100, \tag{57}\]
where \(K_{c}\) is either LQG or RO-MF controller and \(P_{c}\) is the corresponding worst-case distribution. The results are shown in Table I for \(r\in\{0.2,1,2,4,16,32\}\).
### _Limitations_
In our scenario with a relatively short planning horizon of \(N=10\), the cost reduction achieved by employing DR-RO-MF control, in comparison to traditional controllers such as LQG and \(H_{\infty}\), is moderate. However, it is anticipated that this reduction would become more pronounced with a longer planning horizon. Unfortunately, in our experimental setup, we were restricted to \(N=10\) due to computational limitations: solving semi-definite programs involving large matrices is computationally inefficient, necessitating this constraint. In practice, this limitation can be overcome by implementing the controller in a receding horizon fashion, where the controller is updated every few time steps.
## VI Conclusion
In conclusion, this paper extended the distributionally robust approach to regret-optimal control by incorporating the Wasserstein-2 distance [1] to handle cases of limited observability. The proposed DR-RO-MF controller demonstrated superior performance compared to classical controllers such as LQG and \(H_{\infty}\), as well as the RO-MF controller, in simulations of flight control scenarios. The controller exhibits a unique interpolation behavior between LQG and RO-MF, determined by the radius \(r\) that quantifies the uncertainty in the accuracy of the nominal distribution. As the time horizon increases, solving the tractable SDP to which the solution reduces becomes more challenging, highlighting the practical need for a model predictive control approach. Overall, the extended distributionally robust approach presented in this paper holds promise for robust and effective control in systems with limited observability.
|
2310.07678 | Explainable Image Similarity: Integrating Siamese Networks and Grad-CAM | With the proliferation of image-based applications in various domains, the
need for accurate and interpretable image similarity measures has become
increasingly critical. Existing image similarity models often lack
transparency, making it challenging to understand the reasons why two images
are considered similar. In this paper, we propose the concept of explainable
image similarity, where the goal is the development of an approach, which is
capable of providing similarity scores along with visual factual and
counterfactual explanations. Along this line, we present a new framework, which
integrates Siamese Networks and Grad-CAM for providing explainable image
similarity and discuss the potential benefits and challenges of adopting this
approach. In addition, we provide a comprehensive discussion about factual and
counterfactual explanations provided by the proposed framework for assisting
decision making. The proposed approach has the potential to enhance the
interpretability, trustworthiness and user acceptance of image-based systems in
real-world image similarity applications. The implementation code can be found
in https://github.com/ioannislivieris/Grad_CAM_Siamese.git. | Ioannis E. Livieris, Emmanuel Pintelas, Niki Kiriakidou, Panagiotis Pintelas | 2023-10-11T17:21:48Z | http://arxiv.org/abs/2310.07678v2 | # Explainable Image Similarity: Integrating Siamese Networks and Grad-CAM
###### Abstract
With the proliferation of image-based applications in various domains, the need for accurate and interpretable image similarity measures has become increasingly critical. Existing image similarity models often lack transparency, making it challenging to understand the reasons why two images are considered similar. In this paper, we propose the concept of explainable image similarity, where the goal is the development of an approach, which is capable of providing similarity scores along with visual factual and counterfactual explanations. Along this line, we present a new framework, which integrates Siamese Networks and Grad-CAM for providing explainable image similarity and discuss the potential benefits and challenges of adopting this approach. In addition, we provide a comprehensive discussion about factual and counterfactual explanations provided by the proposed framework for assisting decision making. The proposed approach has the potential to enhance the interpretability, trustworthiness and user acceptance of image-based systems in real-world image similarity applications. The implementation code can be found in [https://github.com/ioannislivieris/Grad_CAM_Siamese.git](https://github.com/ioannislivieris/Grad_CAM_Siamese.git).
This paper has been accepted for publication at _Journal of Imaging_. Cite: Livieris, I. E., Pintelas, E., Kiriakidou, N., & Pintelas, P. (2023). Explainable Image Similarity: Integrating Siamese Networks and Grad-CAM. _Journal of Imaging_, 9(10):224.
_Keywords:_ Explainability; Siamese networks; Grad-CAM; recommendations
## 1 Introduction
In many real-world scenarios, the ability to measure image similarity is crucial for decision-making processes, intelligent systems, as well as user interactions; therefore, image similarity models play a vital role in various computer vision tasks [1, 2, 3, 4, 5]. For example, in image retrieval systems, users often search for similar images based on a reference image or specific visual features [2]. Image similarity models allow these systems to find relevant images quickly and accurately. In content-based image analysis, large image databases are categorized and organized using image similarity models, hence enabling the efficient automatic identification of similar images [1]. In copyright infringement detection or multimedia management, image similarity assists in identifying duplicate or visually similar
images [3]. Furthermore, in medical imaging, comparing and matching medical images can aid in the diagnosis and identification of diseases or abnormalities [4]. Finally, image similarity can also assist in visual search engines, where users are able to visually find similar images without relying on text-based queries [5].
Siamese neural networks [6] probably constitute the most efficient and widely utilized class of image similarity models. During the last decade, they have been successfully applied to image similarity tasks by quantifying the similarity between images through numerical values [7, 8, 9]. The backbone of this class of neural networks is the convolutional layer, which is characterized by its remarkable abilities for image processing. Nevertheless, due to their architectural design, Siamese networks are not able to provide users with human-understandable explanations about why two images are deemed similar. As the adoption of image-based technologies continues to grow in diverse applications like medical imaging, e-commerce, social media and security, the need for explainability in image similarity becomes paramount [10]. Explainability is a critical aspect of Deep Learning (DL), especially when dealing with complex models composed of convolutional layers. Although convolutional neural network models, such as Siamese networks, are highly effective in several image processing tasks, they lack transparency and explainability; thus, they are considered "black boxes" [11]. Notice that many traditional machine learning models, such as decision trees and linear models, often have the advantage of being interpretable, since their decision-making process is based on understandable features; in contrast, Siamese networks learn intricate and abstract features through layers of convolutions, making it challenging to directly interpret their decisions.
Explainability techniques aim to shed light on how and why a convolutional-based model makes certain predictions, by revealing the features and patterns which the model learns from the data. These techniques not only enhance our understanding of the model's decision process but also play a vital role in building trust and accountability in artificial intelligence systems. More specifically, they enable us to verify the reasoning behind the predictions [12], identify potential biases, errors, or misinterpretations in model predictions and provide a means to improve their performance [13]. Also, in some domains, there are strict regulations that require models to be interpretable. For instance, the General Data Protection Regulation in Europe includes the "_right to explanation_", which mandates that individuals should be provided with an explanation for automated decisions [14]. Finally, in certain contexts, there may be legal or ethical requirements to explain model predictions to end-users or stakeholders, making interpretability a crucial aspect of deployment [15, 10].
In the literature, several research directions have focused on enhancing the interpretability of deep learning models, particularly in the field of computer vision [16, 17]. Explainable artificial intelligence (XAI) techniques, such as attention mechanisms [18] and the Gradient-weighted Class Activation Mapping (Grad-CAM) technique [19], have been successfully applied to image classification, object detection and semantic segmentation tasks. However, the application of XAI to image similarity remains underexplored. In light of the increasing adoption of image-based technologies across various domains, the demand for explainable image similarity is crucial. Users and decision-makers seek transparency in understanding why certain images are considered similar, especially in critical applications like medical diagnosis or security surveillance. Therefore, exploring the integration of new or existing XAI techniques [20] with image similarity models [21] can provide insights into the underlying similarities between images. Moreover, exploring the notion of similarity from a human-centric perspective may lead to novel contributions in image understanding and user-friendly applications.
In this work, we propose a new concept, named "_explainable image similarity_". Our primary aim is to bridge the gap between numerical similarity scores and human-understandable explanations. Along this line, we propose a new algorithmic framework, which integrates Siamese Networks and Grad-CAM for providing explainability in image similarity tasks. The former are utilized for calculating the similarity between two input images, while the latter is used for visualizing and interpreting the decisions made by the convolutional-based Siamese network. An attractive advantage of the proposed framework is that it is able to provide an image similarity score along with intuitive visual explanations for its decisions (factual explanations), together with explanations of "what if" scenarios (counterfactual explanations). Finally, we provide a comprehensive discussion about factual and counterfactual explanations as well as the valuable insights and recommendations which can be drawn from the application of the proposed framework on three real-world use case scenarios.
At this point, it is worth mentioning that although the Grad-CAM technique has been widely used and studied in a variety of domains, to the best of our knowledge it has never been utilized for image similarity tasks.
Summarizing, the main contributions of this work are described as follows:
* We propose the concept of _"explainable image similarity"_, highlighting the need for providing human-understandable explanations for image similarity tasks.
* We propose a new conceptual framework for explainable image similarity, which integrates Siamese networks with the Grad-CAM technique and is able to provide reliable, transparent and interpretable decisions on image similarity tasks.
* The proposed framework produces factual and counterfactual explanations, which are able to provide valuable insights and be used for making useful recommendations.
The rest of this paper is organized as follows: Section 2 presents the state-of-the-art works related to the Grad-CAM technique and image similarity applications. Section 3 presents the concept of _"explainable image similarity"_ as well as a detailed discussion about the proposed framework, while Section 4 presents three use case scenarios from its application. Finally, Section 5 discusses the proposed research, summarizes its conclusions and provides some interesting ideas for future work.
## 2 Related work
Convolutional-based Neural Networks (CNNs) revolutionized modern computer vision and are widely regarded as the cornerstone choice for addressing image processing tasks [22, 23, 5, 4]. The core element of CNNs is the convolutional layer, which exploits a set of learnable filters (kernels) for generating feature maps. The aim is to highlight distinct attributes like edges, textures and shapes, allowing subsequent layers to recognize higher-level representations.
Nowadays, explainability and interpretability play a significant role in bridging the gap between the advanced capabilities of DL models and the need for transparency and accountability in their decision-making processes. However, as CNNs become deeper and more complex, understanding how and why they make particular predictions becomes challenging. Grad-CAM [19] is a novel technique which enhances the interpretability of CNNs by highlighting the regions of an input image that significantly contribute to a specific prediction; thus, it has been adopted in various applications. Hsiao et al. [24] exploited the flexibility of the Grad-CAM technique towards accurate visualization and interpretable explanation of CNNs. In particular, the authors utilized Grad-CAM to provide reliable and accurate analysis results for fingerprint recognition. Generally, fingerprints are difficult to analyze manually; hence, this study contributed to assisting criminal investigation cases. In similar research, Sang-Ho et al. [25] provided another application of the Grad-CAM technique, focusing on a trading strategy for simultaneously achieving higher returns compared to benchmark strategies. Along this line, the authors used the Grad-CAM technique in conjunction with a CNN model, aiming to develop a trustworthy method that meets both explainability and profitability requirements in finance, thereby fulfilling investors' challenging needs.
In computer vision, the concept of image similarity constitutes a fundamental building block for various real-world applications, ranging from image retrieval [26] and pattern recognition [25] to anomaly detection [27]. Siamese networks [6] have been established as state-of-the-art models for tackling image similarity tasks, especially where the available labeled data are limited. Their special architectural design enables them to learn and capture intricate relationships between pairs of images, allowing for the precise quantification of similarity and/or dissimilarity.
Appalaraju and Chaoji [7] proposed a new approach for identifying similar images using a deep Siamese network, named SimNet. In more detail, SimNet is trained on pairs of positive and negative images using a novel online pair mining strategy (OPMS). OPMS is inspired by curriculum learning, a methodology for training DL models, and aims to ensure a consistently increasing difficulty of input image pairs during the training process. Another characteristic of SimNet is that it is composed of a multi-scale CNN, which is able to learn a joint image embedding of top and lower layers. For evaluating the model's performance, the authors utilized the widely used computer-vision object recognition dataset CIFAR10. The experimental analysis and use case examples showed that the proposed SimNet model is able to better capture fine-grained similarities between images, compared to traditional CNNs. Additionally, the authors stated that the adopted curriculum learning strategy led to faster model training.
Melekhov et al. [8] proposed a novel methodology for exploiting Siamese networks to deal with image similarity and classification problems. For detecting matching and non-matching image pairs, the authors suggested representing them as feature vectors and measuring the similarity between the input images using the Euclidean distance between the computed feature vectors. In particular, these feature vectors are obtained through convolutional layers, while the model training was based on a contrastive loss. In their research, the authors used a large set of images from five different landmarks to evaluate the performance of the proposed Siamese model for image matching against widely used models such as AlexNet, HybridNet and sHybridNet. Based on their experimental analysis, the authors concluded that the proposed model achieved promising performance on image similarity and classification tasks and, in contrast to traditional models, is able to efficiently handle datasets with imperfect ground-truth labels.
Rossi et al. [9] introduced a novel supervised Siamese deep learning architecture, a new Content-Based Image Retrieval (CBIR) system for assisting the process of interpreting a prostate radiological Magnetic Resonance Image (MRI). The rationale behind the architecture of the proposed approach is to integrate all available information in multi-parametric medical imaging tasks for predicting diagnostically similar images. Additionally, for handling multi-modal and multi-view MRIs, the authors considered the diagnostic severity of the lesion, assessed by the PI-RADS score [28], as the similarity criterion. It is worth mentioning that, despite its initial purpose of development, this approach can be utilized for several diagnostic medical imaging retrieval tasks due to its general design. As regards the experimental analysis, the authors showed that the performance of Siamese-based CBIRs was superior to that of the most widely used autoencoder-based CBIRs, for both diagnostic and information retrieval metrics.
In this research, we introduce the concept of explainable image similarity for providing useful, interpretable and transparent insights into the underlying factors driving image relationships and comparisons. In addition, we propose a new framework which integrates Siamese networks with the Grad-CAM technique. The former are used for calculating the similarity between input images, while the latter is used for visualizing and interpreting the decisions made by convolutional-based neural networks. In contrast to the previously presented state-of-the-art approaches, the proposed framework is able to provide an image similarity score along with intuitive visual explanations for its decisions. The presented use case scenarios demonstrate the applicability of the proposed framework as well as a path for deriving insights and useful recommendations from factual and counterfactual explanations.
## 3 Explainable image similarity
In this section, we present the proposed framework, which is able to provide similarity scores along with transparent and understandable visual explanations for its decisions. We recall that our primary goal is to propose the concept of explainable image similarity for bridging the gap between numerical similarity scores and human-understandable explanations. By offering interpretable explanations, explainable image similarity not only enhances the usability of similarity-based applications but also empowers users to comprehend the reasoning behind the model's decisions, ultimately fostering informed and confident decision-making.
In the following, we briefly present the main components of the proposed framework, which is based on the integration of the Grad-CAM technique into Siamese networks, followed by a detailed description paying special attention to its capabilities and advantages.
### Background
_Siamese neural networks_ [6] constitute a special class of deep learning architectures, which are used in tasks involving similarity comparison, such as image or text matching [7, 9, 29]. These networks are characterized by their robustness to data which exhibit variations, distortions or noise, as well as by requiring significantly less labeled training data compared to conventional neural networks; therefore, they are well-established in real-world scenarios [26, 25, 27, 30, 31]. A traditional Siamese network is composed of two identical sub-networks with shared weights (the backbone network), which encode input pairs into fixed-size feature vectors (embeddings). Then, the similarity of the input images (similarity score) is obtained by computing the distance between the calculated embeddings.
_Gradient-weighted Class Activation Mapping_ (Grad-CAM) [19] is a powerful and model-agnostic technique in the field of computer vision, which enhances the interpretability of deep neural networks. Grad-CAM provides a way to visualize and localize the regions of an input image which contribute most to the model's decision. For obtaining the class-discriminative localization map, denoted by \(L_{Grad-CAM}\), we initially calculate the neuron importance weights \(\alpha_{k}\) using the gradients of the model's output \(y\) with respect to the \(k\)-th feature-map activations \(A^{k}\) of a selected convolutional layer; these gradients are flowed back and global-average-pooled over the width (index \(i\)) and height (index \(j\)) dimensions, that is
\[\alpha_{k}=\frac{1}{Z}\sum_{i}\sum_{j}\frac{\partial y}{\partial A^{k}_{ij}}, \tag{1}\]
where \(Z\) is the total number of spatial locations in the feature maps. Then, we perform a weighted combination of the forward activation maps, followed by a ReLU activation function, for calculating \(L_{Grad-CAM}\), namely
\[L_{Grad-CAM}=\text{ReLU}\left(\sum_{k}a_{k}A^{k}\right). \tag{2}\]
By utilizing the gradients with respect to the model's internal feature maps, Grad-CAM generates an activation map, which highlights the discriminative regions responsible for the model's decision.
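To make Eqs. (1) and (2) concrete, a minimal PyTorch sketch is given below. It is an illustrative implementation rather than the released code: the helper name `grad_cam`, the hook mechanism and the `score_fn` argument are our own assumptions, and the convolutional layer must be chosen by the caller.

```python
import torch
import torch.nn.functional as F

def grad_cam(model, conv_layer, image, score_fn):
    """Compute the localization map of Eqs. (1)-(2) for one input.

    model      -- a CNN-based network in eval mode
    conv_layer -- the convolutional module whose activations A^k are used
    image      -- input tensor of shape (1, C, H, W)
    score_fn   -- maps the model output to the scalar y being explained
    """
    store = {}

    def fwd_hook(_, __, output):
        store["A"] = output                      # A^k, shape (1, K, h, w)

    def bwd_hook(_, __, grad_output):
        store["dA"] = grad_output[0]             # dy/dA^k, same shape

    h1 = conv_layer.register_forward_hook(fwd_hook)
    h2 = conv_layer.register_full_backward_hook(bwd_hook)
    try:
        y = score_fn(model(image))
        model.zero_grad()
        y.backward()                             # back-propagate y
    finally:
        h1.remove()
        h2.remove()

    # Eq. (1): global-average-pool the gradients over width and height
    alpha = store["dA"].mean(dim=(2, 3), keepdim=True)     # (1, K, 1, 1)
    # Eq. (2): weighted combination of activation maps, then ReLU
    cam = F.relu((alpha * store["A"]).sum(dim=1))          # (1, h, w)
    return cam / (cam.max() + 1e-8)                        # normalize
```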
### Proposed framework
Next, we provide a detailed description of the proposed framework, while a high-level presentation of its architecture is given in Figure 1. Initially, two images are provided as input to a Siamese network and processed by the backbone network, which encodes them into fixed-size feature vectors (embeddings). Then, the image embeddings are used for discerning similarities and differences between the input images and ultimately calculating their similarity score. Independently, the Grad-CAM technique is applied to the last convolutional layer of the backbone network for developing the Grad-CAM heatmaps and visualizing the features which significantly impact the Siamese model's decisions (factual explanations).
In addition, the proposed framework is able to provide counterfactual explanations. Actually, a counterfactual explanation provides a description of "_what would have not happened when a certain decision was taken_" [19]. This transparency not only enhances the model's interpretability but also empowers stakeholders to identify potential biases, assess model fairness, and build trust in AI-driven systems, leading to more accountable and reliable artificial intelligence solutions. The counterfactual explanations can be easily developed by a slight modification to the Grad-CAM technique, namely, by simply replacing \(y\) with \(1-y\) in Eq. (1).
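Assuming the illustrative `grad_cam` helper sketched in the Background subsection (where `model`, `conv_layer` and `image` are placeholders from that sketch), this modification amounts to negating the explained score:

```python
# Factual map: explain the model's output y itself.
factual = grad_cam(model, conv_layer, image, score_fn=lambda out: out)

# Counterfactual map: replace y with 1 - y in Eq. (1), i.e. highlight the
# regions that would push the model toward the opposite decision.
counterfactual = grad_cam(model, conv_layer, image,
                          score_fn=lambda out: 1.0 - out)
```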
Figure 1: Architecture of the proposed framework
Summarizing, the advantages of the proposed framework are:
* _Counterfactual explanations:_ The identification of regions which would make the network change its prediction could highlight concepts that confuse the model. Therefore, by removing those concepts, the model's decisions may become more accurate or more confident.
* _Bias evaluation of model's decisions:_ In case the Siamese model performs well on both training and testing data (unbiased model), Grad-CAM heatmaps may be used to visualize the features which significantly impact the model's decisions. In contrast, in case the Siamese model performs well on the training data but is not able to generalize (biased model), Grad-CAM heatmaps can be efficiently used to identify unwanted features on which the model focuses.
## 4 Application of proposed framework and use case scenarios
Next, we provide some use case scenarios from the application of the proposed framework to three (3) well-known datasets from different real-world application domains:
* _Flowers_. This dataset contains 4242 images (320x240) of flowers, which were categorized into five classes: "chamomile", "tulip", "rose", "sunflower" and "dandelion".
* _Skin cancer_. This dataset contains 1400 images (224x224) of malignant and 1400 of benign oncological diseases.
* _AirBnB_. This few-shot dataset is composed of 864 interior and exterior house pictures (600x400) scraped from AirBnB over three cities, which were classified in 12 classes: "backyard", "basement", "bathroom", "bedroom", "decor", "dining-room", "entrance", "house-exterior", "kitchen", "living-room", "outdoor", "staircase" and "tv-room".
The presented use cases focus on highlighting how the proposed framework could be used for image similarity tasks, what useful conclusions could be drawn from factual and counterfactual explanations and, finally, what useful recommendations could be provided.
For training the Siamese networks, 80% of each dataset's images were used for training and the remaining 20% for testing, preserving the class distribution in each set. In addition, 10% of the training images were used as a validation set for optimizing the network's performance. The implementation code along with the datasets can be found at [https://github.com/ioannislivieris/Grad_CAM_Siamese.git](https://github.com/ioannislivieris/Grad_CAM_Siamese.git).
Based on the images of each training dataset, we created the training pairs as follows: for each image, two images were randomly selected, one from the same class and another from a different class. The first pair, containing the images from the same class, is assigned the label zero (0), while the second pair, containing the images from different classes, is assigned the label one (1). Along this line, the similarity between two random input images is defined by \(1-d\), where \(d\) is the Siamese model's output. Notice that this methodology was initially proposed by Melekhov et al. [8].
At this point, it is worth mentioning that the model's prediction can be exploited to determine whether two images belong to the same class. More specifically, if the prediction of the Siamese network for a pair of images is less than a pre-defined \(threshold\), then the images are considered similar (belonging to the same class); otherwise, they are considered dissimilar (belonging to different classes). Notice that in our experiments, \(threshold\) was set to 0.5.
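The pair construction and the threshold rule can be summarized in the short Python sketch below; the function name and data layout are illustrative assumptions and not part of the released implementation.

```python
import random

def build_pairs(images, labels):
    """For each anchor image, draw one same-class partner (label 0) and
    one different-class partner (label 1), following Melekhov et al. [8].
    Assumes at least two distinct classes in `labels`."""
    by_class = {}
    for img, lab in zip(images, labels):
        by_class.setdefault(lab, []).append(img)

    pairs = []
    classes = list(by_class)
    for img, lab in zip(images, labels):
        pos = random.choice(by_class[lab])                    # same class
        neg_lab = random.choice([c for c in classes if c != lab])
        neg = random.choice(by_class[neg_lab])                # other class
        pairs.append((img, pos, 0))
        pairs.append((img, neg, 1))
    return pairs

# At inference, with d the Siamese model's output for a pair:
#   similarity = 1 - d, and the images are considered to belong to the
#   same class when d < threshold (threshold = 0.5 in our experiments).
```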
As regards the Siamese network architecture, ResNet50 [32] was used as the backbone network, followed by an average pooling layer of size \((1,1)\) and a dense layer of 256 neurons with ReLU activations for calculating each input image's embedding. Next, the \(L_{2}\)-distance between the embeddings is calculated, followed by an output layer of one neuron with a Sigmoid activation function. The utilized architecture and hyperparameter selection provided very good and reliable performance on all three benchmarks. It is worth highlighting that the scope of this research was neither to address a specific class of benchmarks (i.e., few-shot learning benchmarks, one-shot learning benchmarks, etc.) nor to provide a new advanced model architecture, but to provide human-meaningful explanations on similarity tasks through the proposed framework. Finally, the Siamese model was trained using the ADAM algorithm [33] with the contrastive loss function [34], which is defined by
\[\mathcal{L}=\frac{1}{2}\left[(1-y)\,D_{w}^{2}+y\left\{\max(0,m-D_{w})\right\}^{2}\right],\]
where \(D_{w}\) is the model's output and \(m\) is the margin value, which was set to 2.
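A minimal PyTorch sketch of this architecture and loss follows. It reflects one plausible reading of the description above (the scalar \(L_{2}\)-distance feeding a single sigmoid output neuron); the exact wiring in the released code may differ.

```python
import torch
import torch.nn as nn
import torchvision

class SiameseNet(nn.Module):
    """ResNet50 backbone, (1, 1) average pooling, a 256-unit ReLU
    embedding layer, L2 distance between embeddings, and a single
    sigmoid output neuron producing D_w."""

    def __init__(self):
        super().__init__()
        backbone = torchvision.models.resnet50(weights=None)
        self.features = nn.Sequential(*list(backbone.children())[:-2])
        self.pool = nn.AdaptiveAvgPool2d((1, 1))
        self.embed = nn.Sequential(nn.Flatten(),
                                   nn.Linear(2048, 256), nn.ReLU())
        self.head = nn.Linear(1, 1)      # one output neuron on the distance

    def embed_one(self, x):
        return self.embed(self.pool(self.features(x)))

    def forward(self, x1, x2):
        e1, e2 = self.embed_one(x1), self.embed_one(x2)
        d = torch.norm(e1 - e2, p=2, dim=1, keepdim=True)   # L2 distance
        return torch.sigmoid(self.head(d)).squeeze(1)       # D_w in (0, 1)

def contrastive_loss(d_w, y, margin=2.0):
    """Contrastive loss with margin m = 2; y = 0 for similar pairs."""
    return 0.5 * ((1 - y) * d_w.pow(2)
                  + y * torch.clamp(margin - d_w, min=0).pow(2)).mean()
```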
### Flowers dataset
Next, we present an example from the application of the proposed framework on two random images (Figures 2(a) and 2(d)) from the Flowers dataset, which belong to the same class ("rose"). The Siamese model's prediction was 0.24, which implies that the model predicts that the similarity between the input images is 76%. In addition, since the similarity score is greater than the pre-defined \(threshold=0.5\), the model suggests that the input images belong to the same class. Figures 2(b) and 2(e) present the factual explanations provided by Grad-CAM in order to identify the features which impact the model's decisions. In more detail, the model's decision was based on the flowers' blossoms, in which it found common characteristics. As regards the counterfactual explanations, presented in Figures 2(c) and 2(f), they highlight that the model would have relied on the stems of both flowers to predict that the images are not similar.
Taking into consideration that similar conclusions can be drawn by randomly selecting any pair of images in the Flowers dataset, a possible recommendation for improving the model's performance follows from the observation that the model bases its prediction on identifying the blossoms in the input images; thus, removing other characteristics such as stems, background, etc., may improve the model's performance.
### Skin cancer dataset
Figure 3 presents the results from the application of the proposed framework on two random images from the skin cancer dataset, which belong to different classes, i.e., the first image belongs to the "Benign" class, while the second belongs to the "Malignant" class. The Siamese model's prediction was 0.781, which implies that the model predicts a similarity score of 21.9% and that the input images belong to different classes. Figures 3(b) and 3(e) present the factual explanations provided by Grad-CAM in order to identify the features which impact the model's decisions. The interpretation of Figure 3(b) suggests that the model focused on a small region of the skin, while the interpretation of Figure 3(e) reveals that the model focused on the tumor area. This implies that the model focused on regions with dissimilar visual characteristics for predicting that the similarity score between the input images is considerably
Figure 2: Application of the proposed framework on flowers dataset (a) Original input image\({}_{1}\) (b) Factual explanations on image\({}_{1}\) (c) Counterfactual explanations on image\({}_{1}\) (d) Original input image\({}_{2}\) (e) Factual explanations on image\({}_{2}\) (f) Counterfactual explanations on image\({}_{2}\)
low. Furthermore, Figures 3(c) and 3(f) present the counterfactual explanations, which demonstrate the region of each image on which the model would have based a prediction that the images are similar. Clearly, the highlighted areas in both images possess no similar visual characteristics.
Notice that although the input images look similar to a non-expert human, tumor characteristics such as texture, color and size are considered vital for separating benign from malignant cases. Therefore, a possible recommendation from this use case could be to use data augmentation based on transformation techniques (rotation, flip, crop, zoom, changes in brightness, contrast and saturation, etc.) in order to improve the model's performance.
### AirBnB dataset
Figure 4 presents the results from the application of the proposed framework on two random images from the AirBnB dataset, which belong to different classes, i.e., the first image belongs to the "bedroom" class, while the second belongs to the "living-room" class. The Siamese model's prediction was 0.516, i.e., a similarity score of 48.4%, which suggests that the model predicts that the input images marginally belong to different classes. Figures 4(b) and 4(e) present the factual explanations provided by Grad-CAM, which suggest that the model focused on the chairs in the first image and on several items in the second image (such as lamps, the fireplace and the clock) to predict that the images are marginally dissimilar.
Since the model's prediction is not very confident, it is wise to study the counterfactual explanations to explore why the model was nearly confused. Figures 4(c) and 4(f) present the counterfactual explanations of both images, which suggest that the model focused on the bed and sofa located in the first and second images, respectively, as well as the tables present in both images. This implies that the model was nearly confused since both images possess a common item (table) as well as two items which are visually similar (bed and sofa).
A possible recommendation for improving the model's performance could be to use advanced image processing techniques for item identification in order to assist the model in correlating the items and/or furniture which belong to each room.
Figure 3: Application of the proposed framework on skin cancer dataset (a) Original input image\({}_{1}\) (b) Factual explanations on image\({}_{1}\) (c) Counterfactual explanations on image\({}_{1}\) (d) Original input image\({}_{2}\) (e) Factual explanations on image\({}_{2}\) (f) Counterfactual explanations on image\({}_{2}\)
### Improving the Siamese model's performance
In the rest of this section, we present an example of improving the performance of the Siamese model through the conclusions and recommendations provided by the application of the proposed framework.
Firstly, we recall that in the use case scenario performed on the Flowers dataset, we observed that for any randomly selected pair of images belonging to the same class, the Siamese model focused on the blossoms for making its decision (Figure 2). Hence, since the model bases its prediction on identifying the blossoms in the input images, removing characteristics such as stems, background, etc., may improve the model's performance.
To examine the effectiveness of this approach, we created a new dataset in which each image is replaced with a bounding box containing the flower's blossom. For calculating the bounding boxes, for each image in the training data (anchor image), another image from the same class was randomly selected and their similarity was calculated. In case their predicted similarity was \(>\) 80%, we calculated the anchor image's Grad-CAM heatmap. Based on the calculated heatmap, we utilized the methodology and implementation of Cui et al. [35] for obtaining a bounding box which contains the area on which the Siamese model mostly focused for making its decision (i.e., the flower's
blossom). Along this line, in the newly created dataset, each image was replaced with the calculated bounding box. Figure 5 presents an example of this technique, i.e., the original image, the Grad-CAM bounding box and the cropped image.
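For illustration, a simple way to turn a Grad-CAM heatmap into such a crop is to threshold the map at a fraction of its maximum and take the tightest enclosing box; this is a stand-in sketch, and the actual procedure of Cui et al. [35] may differ.

```python
import numpy as np

def heatmap_to_crop(cam, image, rel_threshold=0.5):
    """Crop the image to the tightest box around heatmap values above a
    fraction of the maximum. A simple stand-in for the bounding-box
    extraction of Cui et al. [35]; the original method may differ.

    cam   -- 2D numpy array, resized to the image's height x width
    image -- numpy array of shape (H, W, 3)
    """
    mask = cam >= rel_threshold * cam.max()
    ys, xs = np.where(mask)
    if len(ys) == 0:                      # empty heatmap: keep full image
        return image
    y0, y1, x0, x1 = ys.min(), ys.max(), xs.min(), xs.max()
    return image[y0:y1 + 1, x0:x1 + 1]
```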
Table 1 presents the performance of the Siamese model in identifying similar and dissimilar pairs of instances when trained with (a) the original Flowers dataset and (b) the "cropped" Flowers dataset, in which each image has been replaced with the bounding box identified using the technique of Cui et al. [35]. The evaluation was performed on 432 pairs of similar and 432 pairs of dissimilar unseen images using accuracy, area under curve (AUC), precision and recall as performance metrics [36, 37]. Clearly, we are able to conclude that the performance of the Siamese model was considerably increased with respect to all performance metrics. In addition, the Siamese model achieved its top performance in fewer training epochs when trained with the "cropped" dataset.
Figure 6 presents two pairs of (similar) images from the same class (Daisy). The first pair contains images from the original Flowers dataset, while the second pair contains the corresponding "cropped" images. For the first pair, the Siamese model predicted a score equal to 18%, while for the second pair the model predicted a score equal to 11%.
Summarizing the previous discussion, we are able to conclude that the recommendation of removing characteristics such as stems and background and focusing on the blossoms considerably improved the quality of the dataset.
Table 1: Siamese model's performance trained with the original and the "cropped" dataset

| Dataset   | Accuracy | AUC   | Precision | Recall |
| --------- | -------- | ----- | --------- | ------ |
| Original  | 87.15%   | 0.872 | 0.890     | 0.872  |
| "Cropped" | 88.31%   | 0.883 | 0.900     | 0.880  |
Figure 5: (a) Original image (b) Bounding box (c) Cropped image
## 5 Discussion & conclusions
The motivation for this research was to introduce the concept of explainable image similarity for providing useful, interpretable and transparent insights into the underlying factors driving image relationships and comparisons.
In the modern deep learning era, models are becoming more and more complex; they can exhibit high accuracy, but their lack of transparency makes explainability crucial for building trust and understanding. The concept of explainable image similarity aims to bridge the gap between the black-box nature of sophisticated similarity models and human interpretability. The main goal is not only the development of models which are able to provide accurate similarity scores between pairs of images, but also to offer insights into the specific features, patterns, or attributes that contribute to the computed similarity. By offering interpretable explanations, explainable image similarity not only enhances the usability of similarity-based applications, such as image retrieval and recommendation systems, but also empowers users to comprehend the reasoning behind the models' decisions, ultimately fostering informed and confident decision-making.
To achieve this goal, we proposed a new framework integrating Siamese networks with the Grad-CAM technique. The former are used for calculating the similarity between input images, while the latter is used for visualizing and interpreting the decisions made by convolutional-based Siamese neural networks. An attractive advantage of the proposed framework is that it is able to provide an image similarity score along with intuitive visual explanations for its decisions. In addition, the proposed framework is able to evaluate bias in the model's decisions as well as provide counterfactual explanations, highlighting the ability to answer "what if" questions about the model's decisions. The presented use case scenarios included the application of the proposed framework on three similarity tasks from different application domains (two classification datasets and a few-shot learning dataset). Notice that the scope of this research was not to address a specific class of benchmarks (i.e., few-shot learning benchmarks, one-shot learning benchmarks, etc.) but to provide human-meaningful explanations on similarity tasks. Clearly, the proposed framework can be easily applied to any image similarity task as well as to few-shot/one-shot image classification tasks, providing similarity scores along with visual explanations of its decisions. The use case scenarios, along with the comprehensive discussion, highlighted the need for explainable image similarity and the useful conclusions and recommendations which can be provided by its application. Furthermore, we presented an example of improving the performance of the Siamese model for the Flowers use case scenario through the conclusions and recommendations provided by the application of the proposed framework. In more detail, the provided recommendations increased the model's accuracy by 1.2% and improved its ability to identify similar images. For the Skin cancer and AirBnB use case scenarios, the recommendations for improving the models' performance were, respectively, to use data augmentation based on transformation techniques (rotation, flip, crop, etc.) and image processing techniques for item identification in order to correlate the items and/or furniture which belong to each room. Nevertheless, the former resulted in only a minor improvement of the model's performance, while the latter requires expert image processing and object identification techniques; hence, we decided to omit them.
It is worth mentioning that the proposed framework is based on the original Grad-CAM for providing visual explanations. Clearly, other state-of-the-art techniques, such as Grad-CAM++ [38], XGrad-CAM [39] and Score-Grad [40], can be easily adopted and incorporated. This can be considered a limitation of this work; nevertheless, extending the framework in this direction was not within its scope. Another limitation is the fact that
Figure 6: (a) Pair of images for the same class obtained from the original dataset (b) corresponding cropped images
the proposed framework uses a Siamese network with two input images. A possible extension could include the utilization of recent state-of-the-art models [41] with more advanced and complex architectures, as well as the use of different saliency algorithms for heatmap calculation. Some interesting works by RichardWebster et al. [42] and Hu et al. [43] used and proposed several algorithms for calculating saliency maps. An adaptation of the proposed approach to their frameworks could provide useful conclusions from the factual and counterfactual explanations.
Our future work is concentrated on the application of the proposed framework to real-world image similarity benchmarks and on its usage in conjunction with non post-hoc explainable techniques [11, 16]. Since the conclusions from the presented use case scenarios are quite encouraging, we intend to proceed with studying the accuracy impact on similarity tasks through the adoption of the proposed framework and the utilization of advanced image processing techniques. Finally, another interesting idea could be the usage of advanced large language models for providing automated recommendations from the factual and/or counterfactual explanations [44, 45]. Our expectation is that this research could be used as a reference for explainability frameworks, assisting decision-making by providing useful visual insights and offering customized assistance and recommendations on image similarity-related tasks.
|
2303.07740 | Efficient Image-Text Retrieval via Keyword-Guided Pre-Screening | Despite the flourishing development in performance, current image-text
retrieval methods suffer from $N$-related time complexity, which hinders their
application in practice. Targeting efficiency improvement, this paper
presents a simple and effective keyword-guided pre-screening framework for the
image-text retrieval. Specifically, we convert the image and text data into the
keywords and perform the keyword matching across modalities to exclude a large
number of irrelevant gallery samples prior to the retrieval network. For the
keyword prediction, we transfer it into a multi-label classification problem
and propose a multi-task learning scheme by appending the multi-label
classifiers to the image-text retrieval network to achieve a lightweight and
high-performance keyword prediction. For the keyword matching, we introduce the
inverted index in the search engine and create a win-win situation on both time
and space complexities for the pre-screening. Extensive experiments on two
widely-used datasets, i.e., Flickr30K and MS-COCO, verify the effectiveness of
the proposed framework. The proposed framework equipped with only two embedding
layers achieves $O(1)$ querying time complexity, while improving the retrieval
efficiency and keeping its performance, when applied prior to the common
image-text retrieval methods. Our code will be released. | Min Cao, Yang Bai, Jingyao Wang, Ziqiang Cao, Liqiang Nie, Min Zhang | 2023-03-14T09:36:42Z | http://arxiv.org/abs/2303.07740v1 | # Efficient Image-Text Retrieval via Keyword-Guided Pre-Screening
###### Abstract
Under the flourishing development in performance, current image-text retrieval methods suffer from \(N\)-related time complexity, which hinders their application in practice. Targeting at efficiency improvement, this paper presents a simple and effective keyword-guided pre-screening framework for the image-text retrieval. Specifically, we convert the image and text data into the keywords and perform the keyword matching across modalities to exclude a large number of irrelevant gallery samples prior to the retrieval network. For the keyword prediction, we transfer it into a multi-label classification problem and propose a multi-task learning scheme by appending the multi-label classifiers to the image-text retrieval network to achieve a lightweight and high-performance keyword prediction. For the keyword matching, we introduce the inverted index in the search engine and create a win-win situation on both time and space complexities for the pre-screening. Extensive experiments on two widely-used datasets, _i.e._, Flickr30K and MS-COCO, verify the effectiveness of the proposed framework. The proposed framework equipped with only two embedding layers achieves \(O(1)\) querying time complexity, while improving the retrieval efficiency and keeping its performance, when applied prior to the common image-text retrieval methods. Our code will be released.
Image-Text Retrieval, Efficient Search, Multi-Label Classification, Inverted Index.
## I Introduction
Recent years have witnessed that cross-modal image-text retrieval has been gradually becoming one of the mainstream research topics in the fields of multimedia computing and information retrieval [1, 2, 3, 4]. It aims to retrieve the gallery samples in one modality from a large-scale repository with a given query sample in another, whereby the cross-modal alignment is well-established to estimate the pairwise similarity between the query and each gallery sample. Specifically, taking a text as the query to retrieve its corresponding images is called text-to-image retrieval, and vice versa. Image-text retrieval is challenging due to the heterogeneity and semantic gap between these two modalities.
Broadly speaking, the studies on image-text retrieval are in two variants: late and early fusion. In particular, the former [3, 6, 7, 8, 9, 10] emphasizes the image and text feature encodings separately, and then utilizes a simple inner product between the image and text features to estimate the similarities. Its antithesis is the early fusion methods [5, 11, 12, 13], paying more attention to designing complex interaction modules for deeply fusing the image and text features. Thanks to the deep fusion of cross-modal information, the early fusion methods have been the leading paradigm in boosting the search performance. It, nevertheless, suffers from a huge gap between the laboratory experiments and the real-world applications since we need to online compute the feature representations in the interaction module quadratically by traversing through all image-text pairs, leading to the prohibitive reference cost. Taking the text-to-image retrieval task as an example, we show a simple flowchart of the early fusion method with its online running time in minutes on the MS-COCO dataset [14] in Fig. 1 (a). To address the low efficiency problem, some researchers [2, 15, 16, 17] propose to first screen the gallery samples by a fast late fusion technique and then retrieve the remaining samples by a slow early fusion technique (Late2Early for short), thus yielding a win-win on accuracy and efficiency, as shown in Fig. 1 (b). However, an online search operation with \(O(N)\) time complexity for the gallery screening is needed in the Late2Early method, still restricting
Fig. 1: Illustration of various image-text retrieval methods. Rank-1 denotes the expectation of a correct match at the first position in the ranking list; Time represents the online running time (in minutes); Acc is the abbreviation for acceleration; Para denotes the number of model parameters (in millions). These results are computed based on the early fusion network VILT [5] and the late fusion one ALBEF\({}_{0}\)[2] on MS-COCO. In the proposed framework, we adopt the classification by multi-task learning on the ALBEF. It is worth noting that the running time varies across platforms, and here we lay stress on the comparison among the running times.
its application in reality where there are usually millions of galleries with a tremendous number \(N\).
In this paper, we present a novel keyword-guided pre-screening framework to solve the low efficiency problem of image-text retrieval. We argue that a large number of gallery samples semantically irrelevant to the query can be screened out prior to the image-text retrieval algorithm, thus speeding up the following retrieval computation. Taking the text-to-image retrieval as an example, as shown in Fig. 2, given a textual query about the dog, it is unnecessary to compute the similarities with images of boys or men, and we can pre-screen such image galleries. Towards this end, we predict the keywords of the texts and images and compare them to enable a coarse-grained gallery pre-screening. To accomplish this, we transfer the keyword prediction into a multi-label classification problem and develop multi-label classifiers to predict the keywords. Correspondingly, two critical issues should be considered. Firstly, since the obtained keywords can be viewed as a semantic expression of the sample on the discrete space and used for screening, the quality of the keywords largely impacts the screening accuracy and the subsequent retrieval result. Therefore, **the accuracy of keyword prediction is a critical issue**. Secondly, the purpose of the pre-screening process is to improve the retrieval efficiency, yet the process itself can bring extra computational overhead, offsetting the efficiency to some extent. Therefore, **the efficiency of the pre-screening process is another critical issue**. Regarding the first issue, we aim to improve the classifiers' quality by employing the booming image-text retrieval technique. Specifically, we propose a multi-task learning scheme by appending the multi-label classifiers to the image-text retrieval network. As a result, only two embedding layers for the multi-label classification are introduced in the proposed pre-screening framework. Combined with the inverted index in the search engine [18], the lightweight pre-screening is fulfilled with only \(O(1)\) querying time complexity, and hence the second issue is solved accordingly. The proposed pre-screening framework can be applied prior to the common image-text retrieval methods to improve retrieval efficiency. Beyond that, the keyword prediction, which is considered as a multi-label classification task, can be readily achieved by other state-of-the-art classification techniques, thereby further improving the proposed framework. Fig. 1 (c) shows the simple flowchart of the proposed framework applied prior to the Late2Early method. We achieve efficiency improvement (\(\sim 4.1\times\) speed up) with little extra cost (\(\sim+1.8\) million model parameters, \(\sim+2.5\)e-6 minutes of online running), even while enhancing the accuracy performance (\(+0.1\%\) Rank-1).
The main contributions of this paper are four folds. (1) We focus on the relatively undervalued low efficiency problem of image-text retrieval and propose a simple yet effective keyword-guided pre-screening framework for improving retrieval efficiency correspondingly. (2) We convert the keyword prediction into a multi-label classification task and further propose a multi-task learning scheme for lightweight and high-performance keyword prediction. (3) We incorporate the inverted index in the search engine into the keyword matching for improving pre-screening efficiency. (4) The proposed framework has robust compatibility. The experimental results on two public benchmarks show that it can be readily applied to almost all image-text retrieval methods to boost efficiency with only a minimal cost.
## II Related Work
We categorize the existing cross-modal image-text retrieval methods into three lines: late fusion, early fusion and efficiency-focused methods, and we briefly review them as follows.
**Late fusion methods** usually focus on developing various image and text processing techniques to extract feature representations of each modality separately, after which the two modalities interact with each other only with a loss function for training [3, 19, 20, 21, 22, 23]. For instance, Zheng _et al._[19] constructed an end-to-end dual-path convolutional neural network to learn the image and text global representations and proposed an instance loss for better weight initialization; Park _et al._[21] explored the important components in the image and text independently by a multi-head self-attention network for local correspondence; Radford _et al._[3] recently proposed a contrastive dual-path pre-training network trained on a large-scale dataset of 400 million image-text pairs and achieved satisfactory performance on various downstream tasks, including the image-text retrieval task. The late fusion methods yield the pairwise similarities by a simple inner product interaction during inference, promising retrieval efficiency. Nevertheless, due to the lack of information fusion across modalities when learning the feature representations, the performance of this kind of method is generally limited.
**Early fusion methods** pay more attention to information fusion between the visual and textual domains, and the image and text data are pre-processed separately [11, 22, 23, 24, 25, 26, 27, 28, 29]. Researchers concentrated on the local correspondence across modalities in the early fusion methods for maximizing performance. For example, the cross-attention mechanism [30] is used for the message interaction between the regions of the image and the words of the text [24, 25]; the graph neural network [31] is adopted to explore higher-order relational information between locals across modalities [26, 11, 27]. In recent years, with the success of Transformer architecture [30], the large-scale pre-training methods [12, 13] have
Fig. 2: An exposition of necessity of pre-screening prior to image-text retrieval. As an example of the text-to-image retrieval task, the images with the dotted arrow are irrelevant to the textual query at the semantic level and are pre-screened out in advance.
gradually become a central paradigm in vision-and-language (V+L) research and achieved many state-of-the-art results on V+L tasks, including the image-text retrieval task. The cross-modal information is usually fused early and freely by a single-stream transformer-like network in most pre-training methods, classified as the early fusion methods. The information fusion across modalities benefits the semantic alignment between modalities, thus the early fusion method has an advantage on performance compared with the late fusion method. However, we need to quadratically compute the feature representation online in the early fusion method. It is, therefore, not practical to apply it in real-world scenarios.
**Efficiency-focused methods** improve the image-text retrieval efficiency mainly from two perspectives. (1) Researchers made the model architecture smaller and lighter. Yang _et al._[32, 33, 34] proposed to learn the hash codes of the feature representations with the help of the cross-modal hashing technique for saving memory and improving efficiency; Gan _et al._[35] noticed that the over-parameterization issue results in the large memory and computation of the cross-modal pre-training models for the image-text retrieval, and proposed to search their sparse subnetwork with the aid of the lottery ticket hypothesis [36]; similarly, Wang _et al._[37] borrowed the MiniLM [38] structure to reduce the computation cost of the transformer module in the cross-modal pre-training models. And (2) some studies make efforts to yield a powerful late fusion model for narrowing down to a list of relevant galleries, within which the early fusion model is used to achieve the retrieval. For instance, Miech _et al._[15] distilled the knowledge of the early fusion model into the late fusion model, and adopted the distilled late fusion model for the preliminary screening of the gallery samples; Geigle _et al._[39] proposed to jointly train the early fusion model and the late fusion model with shared parameters for obtaining a high-quality late fusion model; Li _et al._[2] introduced a contrastive loss on the feature representations from the image and text processing branches, and the similarity between the representations is used to select the relevant galleries for the following interaction module. Unfortunately, the first group, even with a lightweight architecture, still needs a long inference time due to the quadratic executions, and the second group has to perform an online search operation with \(O(N)\) time complexity. Towards this end, there is further room for improving retrieval efficiency. Unlike the above two groups, we propose a general low-cost pre-screening framework, which can be employed prior to them to further accelerate the retrieval.
## III Methodology
For an image-text retrieval system, given a query sample in one modality, we need to retrieve a rank list from \(N\) gallery samples in another based on the similarity with the query sample. The current state-of-the-art retrieval methods suffer from \(N\)-related complexity and come at prohibitive computational costs in reality. Regarding this problem, we propose a keyword-guided pre-screening framework with minor computational overhead to narrow down to a list of \(N_{r}\) relevant galleries (\(N_{r}\ll N\)) prior to the common image-text retrieval methods. The proposed framework involves two modules: keyword prediction and pre-screening. First, we present the multi-label classification technique for keyword prediction in the visual and textual modalities (Section III-A). Thereafter, we introduce a keyword matching strategy based on the inverted index for the gallery screening (Section III-B).
The block diagram of the proposed framework is illustrated in Fig. 3. The proposed framework can be embedded into almost all common image-text retrieval networks. Combining the proposed coarse-grained pre-screening and the state-of-the-art fine-grained retrieval brings a superior tradeoff between accuracy and efficiency.
Fig. 3: A summary of the proposed framework, taking text-to-image retrieval as an example. The ‘Image-Text Retrieval Network’ module can be replaced with any common image-text retrieval method.
### _Keyword Prediction_
**Visual Modality.** We train a multi-label image classifier for predicting the image's keywords. Any common multi-label classification technique [40, 41] can be adopted. However, we cannot directly adopt a classifier trained on multi-label classification benchmarks, such as COCO-80 and VGG-500 [41]. In doing so, the image keywords obtained from the class annotations in these benchmarks may be misaligned with the actual semantic content in the image due to the data gap between the image-text retrieval and the multi-label classification, thereby causing imprecise keyword-guided screening. To this end, we train the classifier based on the image-text retrieval benchmarks. Specifically, the image's ground-truth annotations are generated from the paired texts. Referring to the setting in the classification task, where the class annotations are generally nouns, we use the nouns in the paired texts as the image annotations. Alternatively, we also experimentally use the nouns, verbs and adjectives from the paired texts as the ground-truth annotations. Better results are obtained by using only the nouns as annotations.
Beyond adopting the existing common classification technique for image keyword prediction, we further propose an advanced classification to enhance the classifier's performance, and thereby the follow-up screening accuracy and retrieval result. The multi-label image classification task can essentially be viewed as an image-to-label retrieval task, sharing certain characteristics with the image-text retrieval task. We believe that the booming image-text retrieval technique can positively contribute to the performance of multi-label image classification. Thereby, we propose a multi-task learning scheme by appending the multi-label classifier to the image-text retrieval network. Specifically, after the image processing branch in the retrieval network, we add an extra label embedding layer with a multi-label classification loss, thus enabling the multi-task learning of the image-text retrieval and the multi-label classification. As an accessory, the advanced classification introduces minimal computational overhead into the proposed pre-screening framework compared to the common classification.
For the classification loss, rather than adopting a frequently-used binary cross-entropy loss [41], we adopt a state-of-the-art asymmetric loss (ASL) [40]. Compared to the binary cross-entropy loss, the ASL loss operates dynamically on positive and negative samples during training and considers the positive-negative imbalance problem in the classification task. Specifically,
\[L=\sum_{i=1}^{l}-y_{i}L_{+}-\left(1-y_{i}\right)L_{-}, \tag{1}\]
where \(y_{i}=1\) represents that the \(i\)-th class annotation is the ground-truth of the image \(x\) and vice versa, and
\[\begin{cases}L_{+}=\left(1-p_{i}\right)^{\alpha_{+}}\log\left(p_{i}\right)\\ L_{-}=\left(\tilde{p}_{i}\right)^{\alpha_{-}}\log\left(1-\tilde{p}_{i}\right),\end{cases} \tag{2}\]
where \(\alpha_{+}\) and \(\alpha_{-}\) are the positive and negative focusing parameters, respectively. With a high \(\alpha_{+}\) (resp. \(\alpha_{-}\)), the contribution from easy positives with \(p_{i}\gg 0.5\) (resp. easy negatives with \(p_{i}\ll 0.5\)) to the loss is weakened, leading to more focus on more challenging samples in training. \(\tilde{p}_{i}=\max\left(p_{i}-\Delta,0\right)\) is a shifted label probability, and the negative sample is fully discarded when \(p_{i}\leq\Delta\).
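An illustrative PyTorch implementation of Eqs. (1) and (2) is given below; the focusing-parameter and shift values are placeholders rather than the settings used in our experiments.

```python
import torch

def asymmetric_loss(logits, targets, alpha_pos=0.0, alpha_neg=4.0, delta=0.05):
    """Asymmetric loss of Eqs. (1)-(2) [40].

    logits  -- raw classifier outputs, shape (batch, num_classes)
    targets -- binary ground-truth labels, same shape
    The alpha_pos / alpha_neg / delta values are illustrative defaults.
    """
    p = torch.sigmoid(logits)
    p_shift = torch.clamp(p - delta, min=0)            # shifted probability

    # L_+ and L_- of Eq. (2)
    loss_pos = (1 - p).pow(alpha_pos) * torch.log(p.clamp(min=1e-8))
    loss_neg = p_shift.pow(alpha_neg) * torch.log((1 - p_shift).clamp(min=1e-8))

    # Eq. (1): L = sum_i [ -y_i * L_+ - (1 - y_i) * L_- ]
    return -(targets * loss_pos + (1 - targets) * loss_neg).sum(dim=1).mean()
```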
**Textual Modality.** Intuitively, we can extract words directly from the text using the natural language toolkit [42] as its keywords. As a result, the inference text's keywords are derived from the inference data, yet the image's keywords are still drawn from the class annotations of the training data. The gap between the training and inference data can produce situations with no overlap between the keywords of text and image in inference, leading to the failure of pre-screening. Alternatively, we can introduce a multi-label classification into the keyword prediction in the textual modality, as in the visual one. The text's ground-truth annotations for training are identical to those of the paired images, ensuring the overlap of keywords across modalities in inference and avoiding the pre-screening's failure. Yet at the same time, the classifier is likely to predict wrong keywords for the text.
Fig. 4: Illustration of the proposed advanced classifiers appended to the ALBEF [2] for keyword prediction.
Fig. 5: Illustration of the pre-screening with the inverted index. We take text-to-image retrieval as an example.
From the above analysis, merging the predicted labels and the extracted nouns into the keywords is a complementary solution. We experimentally study the performance of the above models for generating text keywords. The multi-label classification results in better retrieval performance and is used in the proposed framework. Beyond using the common classification technique, we append a multi-label classifier after the text processing branch in the retrieval network for the advanced classifier.
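As a hedged illustration of the extraction- and merge-based text keywords discussed above, the sketch below uses NLTK's tokenizer and POS tagger (assuming the 'punkt' and POS-tagger resources are downloaded); the function names are ours, not the paper's.

```python
import nltk  # requires: nltk.download('punkt'); nltk.download('averaged_perceptron_tagger')

def extract_nouns(text):
    tagged = nltk.pos_tag(nltk.word_tokenize(text.lower()))
    return {word for word, tag in tagged if tag.startswith("NN")}  # NN, NNS, ...

def merged_text_keywords(text, predicted_labels):
    # 'Ours(Merge)': union of classifier-predicted labels and extracted nouns.
    return set(predicted_labels) | extract_nouns(text)

print(merged_text_keywords("Two dogs play on the grass.", ["dog", "grass"]))
```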
We take the image-text retrieval method ALBEF [2] as an example for the two advanced classifiers, as illustrated in Fig. 4. A label embedding layer together with the ASL is appended to the end of the image encoder and of the text encoder, respectively. In inference, the class annotations with the top-\(R_{I}\) and top-\(R_{T}\) highest probabilities are used as the image and text keywords, respectively.
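A minimal sketch of such a classifier head is given below: a single label embedding (linear) layer over encoder features, trained with the ASL above, with top-\(R\) selection at inference. The feature dimension and label count are placeholders (539 labels matches the Flickr30K setting reported later), not ALBEF internals.

```python
import torch
import torch.nn as nn

class KeywordHead(nn.Module):
    def __init__(self, feat_dim=768, num_labels=539):
        super().__init__()
        self.label_embed = nn.Linear(feat_dim, num_labels)  # label embedding layer

    def forward(self, features):                  # features: (B, feat_dim) from an encoder
        return self.label_embed(features)         # logits trained with asymmetric_loss

    @torch.no_grad()
    def keywords(self, features, top_r=15):       # R_I = 15 for images, R_T = 3 for texts
        probs = torch.sigmoid(self.forward(features))
        return probs.topk(top_r, dim=-1).indices  # indices of top-R class annotations
```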
### _Keyword Matching for Pre-screening_
The sample information is abstracted as keywords expressed in discrete form and used as guidance for gallery pre-screening prior to the image-text retrieval network. Intuitively, we can compare the keywords of the query and each gallery sample and screen out the galleries with no keyword overlap with the query. However, this has high computational complexity in practice due to traversing a gallery set of tremendous size \(N\). For this, we achieve highly efficient pre-screening with the inverted index technique, inspired by search engines [18]. In particular, the pre-screening involves an index step that builds the mapping between keywords and gallery samples, and a search step that picks out the gallery samples sharing keywords with the query sample and discards the rest of the gallery samples.
**Index.** After the keyword prediction, we have obtained a mapping from gallery samples to keywords. This implies that a naive forward database index has been implicitly built, in which the gallery sample ID is the key and its keywords are the value, as shown in Fig. 5. Pre-screening based on this forward index would be time-consuming. Consequently, we construct an inverted database index, in which the keyword is the key and its paired gallery samples are the value, enabling fast pre-screening as follows.
**Search.** As shown in Fig. 5, with the inverted index created, the pre-screening can be resolved in two steps: a query step that quickly jumps to the keys matching the query's keywords, and a merge step that merges the associated values into the retained gallery samples.
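The sketch below illustrates the index and search steps on toy data; the names and the dictionary-based index are our simplification of the scheme in Fig. 5.

```python
from collections import defaultdict

def build_inverted_index(gallery_keywords):
    """gallery_keywords: dict mapping gallery ID -> set of predicted keywords."""
    inv = defaultdict(set)                # keyword -> set of gallery IDs
    for gid, kws in gallery_keywords.items():
        for kw in kws:
            inv[kw].add(gid)
    return inv

def pre_screen(query_keywords, inv):
    kept = set()
    for kw in query_keywords:             # query step: O(1) jump per keyword
        kept |= inv.get(kw, set())        # merge step: union of posting lists
    return kept                           # galleries sharing >= 1 keyword with the query

inv = build_inverted_index({0: {"dog"}, 1: {"cat"}, 2: {"dog", "grass"}})
print(pre_screen({"dog"}, inv))           # {0, 2}
```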
## IV Complexity Analysis and Discussion
While achieving efficiency improvement, the proposed framework itself can bring extra resource overhead. We analyze its time and space complexities and compare it with the Late2Early method, which is closely related to the proposed framework.
The online computation is composed of query processing and gallery screening. In the proposed framework, there is no extra time spent on query processing (i.e., query keyword prediction) thanks to the proposed multi-task learning scheme; during the gallery screening, with an \(O(1)\) querying time complexity, the subsequent merging step, with complexity far less than \(O(N)\), dominates the time complexity of the screening. In the Late2Early method, given a real-time query, its feature is extracted with high complexity and the \(k\)-nearest neighbors among the gallery samples are selected with \(O(N)\) time complexity.
For the space complexity, we only need to add two label embedding layers and store the inverted index, which contains only ID numbers; this gives a clear advantage over the Late2Early method, which usually carries two heavy visual and textual encoding networks and stores the gallery samples' features for gallery screening.
In addition, the proposed framework can be applied prior to almost all image-text retrieval methods to improve retrieval efficiency. When applied prior to the Late2Early method, the proposed pre-screening selects \(N_{r}\) gallery samples (\(N_{r}\ll N\)) for the follow-up Late2Early method, bringing \(O(N_{r})\) time complexity instead of the original \(O(N)\) in the Late2Early method. When applied prior to other methods, the proposed pre-screening plays a similar role as the late fusion network in the Late2Early method, yet offers a significant advantage in resource overhead compared with the Late2Early methods.
## V Experiments
### _Experiment Settings_
**Datasets.** We conduct experiments on two widely-used image-text retrieval benchmark datasets: Flickr30K [43] and MS-COCO [14], containing \(31,014\) and \(123,287\) images, respectively. Both datasets have five associated textual descriptions per image. Following the common settings [5, 8], we split Flickr30K into \(29,000\) images for training, \(1,014\) for validation and \(1,000\) for inference; we use \(113,287\) images for training, \(5,000\) for validation and \(5,000\) for inference in MS-COCO. We report the results for both image-to-text retrieval (TR) and text-to-image retrieval (IR) in the experiments.
**Evaluation metrics.** We adopt the widely used metric Rank-k (R@k) for evaluation. R@k represents the expectation of a correct match appearing within the top \(k\) of the ranking list, and we report R@1, R@5 and R@sum (_i.e._, the sum of R@1 and R@5). In addition, in order to reveal the method's effectiveness, we report the number of model parameters (Para. for short, in millions), the online running time1 (in minutes) of the model for all queries on a dataset, and the speedup ratio for retrieval. For the running time and the speedup ratio, we report the mean average over TR and IR. It is worth noting that the number of model parameters in the proposed framework differs across datasets, since there are only two label embedding layers in the proposed framework and their parameter count depends on the number of labels2. We report the average of the values on the Flickr30K and MS-COCO datasets.
Footnote 1: The running time is measured without acceleration operation for a fair comparison.
Footnote 2: There are \(0.8\) and \(1.8\) million parameters for the proposed framework on Flickr30K and MS-COCO, respectively.
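As a concrete reference for R@k, the following small helper (our own sketch, not the paper's evaluation code) computes the metric from per-query rankings.

```python
import numpy as np

def recall_at_k(rank_lists, ground_truth, k):
    """rank_lists: (Q, N) gallery IDs sorted by similarity; ground_truth: (Q,) correct IDs."""
    hits = [gt in ranks[:k] for ranks, gt in zip(rank_lists, ground_truth)]
    return float(np.mean(hits))

ranks = np.array([[2, 0, 1], [1, 2, 0]])
print(recall_at_k(ranks, np.array([0, 0]), k=1))  # 0.0
print(recall_at_k(ranks, np.array([0, 0]), k=2))  # 0.5
```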
**Implementation details.** The experiments are conducted on eight GeForce RTX 3090 GPUs with 24 GB of memory. For
the advanced classification, we append the classifiers to the image-text retrieval network ALBEF [2] in the experiments. Based on the ALBEF trained on Flickr30K (resp. MS-COCO), we continue to train the classifier for 10 (resp. 5) epochs with a batch size of 128. To train the classifier better, we remove the class annotations paired with fewer than \(100\) images. As a result, there are \(539\) and \(1,122\) class annotations in Flickr30K and MS-COCO, respectively. We set \(\alpha_{+}=0\), \(\alpha_{-}=3\) and \(\Delta=0.05\) for the ASL loss, referring to the settings in [40], and set \(R_{I}=15\) and \(R_{T}=3\) in the image and text classifications, respectively.
**Baselines.** The proposed framework can be applied prior to any common image-text retrieval method, _e.g._, the early fusion, late fusion or Late2Early methods. Several image-text retrieval methods are involved as baselines in the experiments: the early fusion methods ALBEF\({}_{all}\) and ViLT-B/32a [5]; the late fusion methods ALBEF\({}_{0}\), LightDOT [16] and CLIP [3]; and the Late2Early methods from the free combination of the above late fusion and early fusion baselines3. Referring to the setting in [2], we select \(128\) gallery samples by the late fusion network and feed them into the early fusion network in the Late2Early. Specifically, ALBEF\({}_{all}\) and ALBEF\({}_{0}\) are evolved from ALBEF [2]. In ALBEF, given one query, all gallery samples first pass through the image/text processing branch to compute the similarities with the query, and then the top-\(128\) gallery samples together with the query are sent to the following interaction module for retrieval. ALBEF\({}_{all}\) refers to all galleries instead of the top-\(k\) ones being sent to the interaction module and is essentially an early fusion method, and ALBEF\({}_{0}\) refers to sending no galleries to the interaction module and is essentially a late fusion method. We report the baselines' results by running the published code in the experiments.
Footnote 3: Due to unpublished code, the pure Late2Early methods [15, 16, 17] cannot be used in the experiments.
### _Results and Discussion_
We report the results of the proposed pre-screening framework applied prior to the Late2Early, late fusion or early fusion methods in Table I. No matter what type of method it is applied to, the proposed framework achieves acceleration while maintaining performance, and sometimes offers a remarkable improvement. For example, a \(2.0\%\) increase at R@sum is achieved by the proposed framework applied prior to the ALBEF\({}_{0}\)+ViLT-B/32a method on Flickr30K (TR), and a \(2.8\%\) increase is achieved when applied prior to the ViLT-B/32a method on MS-COCO (IR). The price for this is only \(4.2\)e-3 (resp. \(6.2\)e-2) minutes of online running time on Flickr30K (resp. MS-COCO) and an average of \(1.3\) million model parameters. It is worth noting that the proposed pre-screening framework working on the early fusion method plays the same role as the late fusion network in the Late2Early method; that is, both aim at improving the retrieval efficiency of the early fusion method. Targeting the acceleration of the same early fusion method (_i.e._, ViLT-B/32a), the cost of the proposed framework is far less than that of the late fusion network (_i.e._, ALBEF\({}_{0}\) and LightDOT). Specifically, the proposed framework is on average \(2,072\) (resp. \(685\)) times faster than the late fusion network on Flickr30K (resp. MS-COCO) in running time
and uses \(145\) times fewer model parameters than the late fusion network.
### _Further Analysis_
**Comparison with ANN-based Late2Early method.** Though the Late2Early method suffers from an \(O(N)\) screening time complexity, Approximate Nearest Neighbor search (ANN) [44] can be applied to speed it up. Two commonly used ANN algorithms from the Facebook AI Similarity Search (FAISS) library, _i.e._, Product Quantization (PQ) and Product Quantization with InVerted File (IVFPQ), are applied in the Late2Early for comparison with the proposed framework. The comparison results are shown in Table II. The Late2Early method is sped up by using ANN during the screening process. The proposed framework also achieves screening acceleration over the Late2Early method, and is superior to ANN in most cases. More importantly, the proposed framework performs nearly the same as the Late2Early method, while ANN seriously hurts the performance of the Late2Early.
**Comparison with common classification.** The performance of the multi-label classification for keyword prediction directly influences the result of the proposed framework. Aiming at a high-performance classification, we propose the advanced classification. Alternatively, the classification can be replaced with any existing common one. For comparison, we use ViT-B/16 [45] and BERT [46] together with the ASL as the image and text classifiers, respectively. In terms of performance, as shown in Table III, the proposed framework with the advanced classification (Ours) has a consistent advantage over that with the common classification (Ours(Com.)) across different baselines, which verifies the superiority of the proposed advanced classification. In terms of classification-related metrics, we show the number of model parameters, the recall rate of ground-truth gallery samples and the mAP values of the image and text classifiers in Table IV. The advanced classification model surpasses the common one on all metrics. In particular, the number of parameters for the proposed framework with the advanced classification is far less than that with the common classification, which shows the superiority of the proposed framework in resource consumption.
**Analysis of the advanced classification.** We analyze the proposed advanced classification from three aspects. (1) We adopt the common binary cross-entropy loss (BCE) to replace the ASL as the classification loss. The comparison results (Ours vs. Ours(BCE)) in Table III show that 'Ours(BCE)' has a slight drop at R@sum compared to 'Ours' on Flickr30K (TR) when applied in Baseline#1 and Baseline#2 and on MS-COCO when applied in Baseline#2, and has a slight increase or remains unchanged in other cases. In general, 'Ours' and 'Ours(BCE)' are comparable in performance. By adopting other state-of-the-art classification losses, the proposed framework, with its high flexibility, is expected to improve further. (2) We develop a multi-label text classifier for text keyword prediction. Instead, we can also extract the nouns directly from the text using the natural language toolkit (Ours(Ext.)), or use both the predicted labels and extracted nouns as the keywords (Ours(Merge)). As shown in Table III, 'Ours' performs better than 'Ours(Ext.)' in all cases except for MS-COCO (TR). In 'Ours(Ext.)', the text keywords are derived from the text itself. In contrast, the keywords come from all ground-truth annotations in 'Ours', enabling the exploration of keyword synonyms and positively affecting performance. However, the classifier is likely to predict wrong keywords in 'Ours', thus hurting retrieval performance. Beyond that, as mentioned in Section III-A, there are pre-screening failures in 'Ours(Ext.)', yet not in 'Ours'. Specifically, \(37\) and \(9\) texts cannot be incorporated into the pre-screening process by 'Ours(Ext.)' in Flickr30K and MS-COCO, respectively. In view of the above, merging the predicted labels and extracted nouns into the keywords is a complementary solution. It can be seen from Table III that 'Ours(Merge)' surpasses 'Ours' by a small margin in most cases. At the same time, though, 'Ours(Merge)' suffers from a lower screening rate due to
the increased number of text keywords from the merge operation. As a result, 'Ours(Merge)' is weaker than 'Ours' in retrieval efficiency. Specifically, \(2.0\times\) and \(3.9\times\) speedups are reached by 'Ours(Merge)' on Flickr30K and MS-COCO, respectively; by contrast, 'Ours' achieves \(2.1\times\) and \(4.1\times\) speedups. (3) We use the nouns in the text as the image and text ground-truth annotations for training classifiers. One alternative is to use the nouns, verbs and adjectives (Ours(NVA)). The results in Table III show that 'Ours(NVA)' is inferior to 'Ours' in some cases and performs better in others. In general, 'Ours' and 'Ours(NVA)' are comparable in performance. However, it is notable that more keywords are predicted in 'Ours(NVA)' due to the more diverse ground-truth annotations, as a result of which the screening rate is reduced and retrieval efficiency is hurt.
**The upper bound of the proposed framework.** Excluding the harmful effects of wrong classification results on performance, we directly use the ground-truth annotations as the keywords to explore the upper bound of the proposed framework on performance (Ours(GT)). As shown in Table III, 'Ours(GT)' performs better than 'Ours' and also consistently achieves remarkable improvement over the baselines. Owing to its high flexibility, the proposed framework holds promise to realize a powerful win-win for accuracy and efficiency by adopting a better classification technique.
**Parameter analysis.** There are two parameters in the proposed framework, _i.e._, \(R_{I}\) in image keyword prediction and \(R_{T}\) in text keyword prediction. We show the results of the proposed framework with different parameters based on ALBEF\({}_{0}\)+ALBEF\({}_{all}\) in Table V. With an increasing value of \(R_{I}\) or \(R_{T}\), more ground-truth gallery samples are kept in the pre-screening stage and enter the follow-up retrieval network (_i.e._, a higher Recall value), resulting in better accuracy (_i.e._, a higher R@1), yet at the same time causing a decline in efficiency (_i.e._, a lower Speedup). To achieve a trade-off between accuracy and efficiency, we set \(R_{I}=15\) and \(R_{T}=3\) in the experiments.
**Efficiency comparison on larger-scale data.** To verify
the superiority of the proposed framework in practice, we make an efficiency comparison with the Late2Early method ALBEF\({}_{0}\)+ALBEF\({}_{all}\) on larger-scale data, which is constructed by merging all data in Flickr30K and MS-COCO, resulting in \(154,301\) images and \(771,837\) texts. We adopt the classifiers trained on MS-COCO for the proposed framework in these experiments. Fig. 6 shows the running time4 of one query on the new benchmark with different numbers of gallery samples. The proposed framework shows significant superiority over the Late2Early method in efficiency. Moreover, with an increasing number of gallery samples, the growth of the proposed framework's running time gradually slows down; by contrast, the Late2Early method's running time increases linearly. These results indicate the potential of the proposed framework for practical application.
Footnote 4: The running time is composed of the query processing time and the gallery screening time; the change in the number of gallery samples only impacts the screening time, and thus we only report the screening time in Fig. 6. The query processing time is 3.0e-2 in TR and 3.1e-4 in IR for the Late2Early method, and is 0 for the proposed framework thanks to the proposed multi-task learning scheme.
**Simulation of practical application scenario.** In a practical application scenario, the proposed framework usually deals with millions of wide-open inference data with more diverse content. As a result, the keywords predefined from the training data may not adequately cover the semantic space of the inference data, which tends to lead to false keyword predictions and affects the framework's performance. To investigate the performance of the proposed framework in such a scenario, we run the proposed framework on Flickr30K-all, which is constructed by merging the training, validation and inference data in Flickr30K, resulting in \(31,014\) images and \(155,070\) texts, and we adopt the proposed framework trained on MS-COCO. There is a relatively big gap between the training and inference data. Table VI presents the results on Flickr30K-all. It can be seen that decent accuracy and speedup can still be obtained by the proposed framework. The proposed framework works as long as there is at least one correctly predicted keyword among the resulting keywords, so the negative impact of the data gap on performance is alleviated to some extent.
**Visualization and limitation.** We visualize the predicted keywords and the screening results5 from the proposed framework in Fig. 7. It can be seen that the proposed framework yields correct and reasonable keywords at the semantic level and brings decent screening results, _i.e.,_ screening out the gallery samples semantically irrelevant to the query. Notably, the results on Flickr30K-all are obtained by the framework trained on MS-COCO, and the gap between the training and inference data does not have much effect on performance.
Fig. 6: Running time (in minutes) of one query on large gallery data (in thousands); '\(\sim\)750' and '\(\sim\)150' specifically refer to 771,837 texts and 154,301 images, respectively.
Fig. 7: Visualization of the predicted keywords and the screening results of the proposed framework.
Taking a step further, we discuss the limitations of the proposed framework. The proposed framework relies on the quality of the predicted keywords. With that in mind, we propose a multi-task learning scheme for improving classification accuracy and keyword prediction, and present the detailed results of the classifier with various settings in Table III and an in-depth analysis in Section V-C. Nevertheless, there remains room for further development of the proposed framework with a more powerful classification technique. We could consider labels from object detection as a powerful supplement to the classification results, which could also apply well to collections without extensive textual annotation.
## VI Conclusion
This paper focuses on the low-efficiency problem in the image-text retrieval task. Admittedly, the existing image-text retrieval methods suffer from at least \(O(N)\) time complexity and may not be economically practical in many real cases. To this end, we present a simple and effective keyword-guided pre-screening framework, in which the image and text samples are projected into keywords, and then a fast keyword matching across modalities is executed to screen out the gallery samples irrelevant to the query sample. The remaining gallery samples, whose number is much smaller than that of the original gallery set, are fed into the common image-text retrieval network, thus realizing retrieval acceleration. The proposed framework is characterized by low consumption and excellent compatibility. We experimentally verify the effectiveness of the proposed framework.
|
2302.10115 | Non-Hermitian strongly interacting Dirac fermions: a quantum Monte-Carlo
study | Exotic quantum phases and phase transitions in strongly interacting Dirac
systems have attracted tremendous interest. On the other hand, non-Hermitian
physics, usually associated with dissipation arising from the coupling to the
environment, emerges as a frontier of modern physics in recent years. In this
letter, we investigate the interplay between non-Hermitian physics and strong
correlation in Dirac-fermion systems. We develop a sign-problem-free projector
quantum Monte-Carlo (QMC) algorithm for the non-Hermitian interacting fermionic
systems. Employing state-of-the-art projector QMC simulation, we decipher the
ground-state phase diagram of the Honeycomb Hubbard model in the presence
non-Hermitian asymmetric spin resolved hopping processes. Intriguingly, the
antiferromagnetic ordering induced by Hubbard interaction is enhanced by the
non-Hermitian asymmetric hopping. More remarkably, our study reveals that
critical properties of the quantum phase transition between Dirac semi-metal
and AF ordered phases are consistent with the XY universality class in
the Hermitian system, implying Hermiticity is emergent at the quantum critical
point. The numerically-exact QMC approach utilized in this study is easily
applied to other non-Hermitian interacting fermionic models, hence paving a new
avenue to investigating quantum many-body physics in non-Hermitian systems. | Xue-Jia Yu, Zhiming Pan, Limei Xu, Zi-Xiang Li | 2023-02-20T17:22:01Z | http://arxiv.org/abs/2302.10115v1 | # Non-Hermitian strongly interacting Dirac fermions: a quantum Monte-Carlo study
###### Abstract
Exotic quantum phases and phase transitions in strongly interacting Dirac systems have attracted tremendous interest. On the other hand, non-Hermitian physics, usually associated with dissipation arising from the coupling to the environment, has emerged as a frontier of modern physics in recent years. In this letter, we investigate the interplay between non-Hermitian physics and strong correlation in Dirac-fermion systems. We develop a sign-problem-free projector quantum Monte-Carlo (QMC) algorithm for non-Hermitian interacting fermionic systems. Employing state-of-the-art projector QMC simulation, we decipher the ground-state phase diagram of the Honeycomb Hubbard model in the presence of non-Hermitian asymmetric spin-resolved hopping processes. Intriguingly, the antiferromagnetic ordering induced by the Hubbard interaction is enhanced by the non-Hermitian asymmetric hopping. More remarkably, our study reveals that the critical properties of the quantum phase transition between the Dirac semi-metal and AF ordered phases are consistent with the XY universality class in the Hermitian system, implying that Hermiticity is emergent at the quantum critical point. The numerically-exact QMC approach utilized in this study is easily applied to other non-Hermitian interacting fermionic models, hence paving a new avenue to investigating quantum many-body physics in non-Hermitian systems.
_Introduction._--Fathoming the various exotic quantum phases and phase transitions triggered by strong correlation between electrons is one of the central issues in modern condensed matter physics [1]. In particular, inspired by the experimental realization of graphene[2] and topological phases[3; 4], interaction-driven spontaneous symmetry breaking (SSB) phases and the associated quantum phase transitions in Dirac fermions attract growing interest. Since most strongly correlated systems are theoretically intractable in more than one dimension, numerical approaches play a vital role in understanding Dirac-fermion systems in the presence of strong electronic interaction. Extensive numerical studies on interacting Dirac systems reveal an abundance of intriguing phenomena arising from the interplay between Dirac physics and strong electronic interaction, including interaction-driven topological and other exotic phases of matter[5; 6; 7; 8; 9; 10; 11; 12; 13], Gross-Neveu quantum criticality[14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29] and continuous phase transitions beyond Landau's paradigm [30; 31; 32; 33].
In recent years, understanding non-Hermitian (NH) physics in quantum systems has emerged as a frontier attracting considerable attention [34; 35; 36]. Non-Hermitian physics arises when a system is coupled to the environment in the presence of dissipation or measurement [37; 38; 39; 40]. Additionally, non-Hermitian band Hamiltonians provide a conceptually effective description of quasi-particles with finite life-time resulting from electron-electron/electron-phonon interaction or disorder scattering[41; 42]. Previous studies demonstrate that plentiful exotic phenomena arise from non-Hermiticity, for instance the skin effect[43; 44; 45; 46; 47; 48; 49; 50; 51; 52], exotic topological phases [53; 54; 55; 56; 57; 58; 59; 60; 61; 62; 63; 64; 65; 66; 67; 68; 69; 70] and novel quantum critical behaviours [71; 72; 73; 74; 75; 76; 77; 78; 79; 80; 81; 82; 83; 84; 85], which have not been established in Hermitian systems. Most early studies on non-Hermitian physics focus on single-particle systems, whereas very recently, increasing attention has been paid to quantum many-body effects in non-Hermitian systems [86; 87; 88; 89; 90; 91; 92; 93; 94; 95; 96; 97; 98; 99; 100; 101; 102; 103; 104; 105; 106; 107; 108; 109; 110; 111; 112; 113; 114].
Nevertheless, the effects of the interplay between strong correlation and Dirac fermions in non-Hermitian systems have remained unexplored hitherto. Moreover, an intrinsically unbiased numerical algorithm applicable to non-Hermitian quantum many-body models in more than one dimension is lacking. To address these crucial issues, we systematically study the
Figure 1: Schematic of the asymmetric hopping (a) and the ground-state phase diagram (b) of the non-Hermitian interacting model on the honeycomb lattice. In (a), the red (blue) circles represent the sites in the A (B) sublattice. In (b), NHEAFM denotes the non-Hermitian enhanced antiferromagnetically ordered phase and DSM denotes the Dirac semi-metal phase. The red line is the phase boundary between the DSM and NHEAFM phases, and the red star points are numerical results obtained from projector QMC simulations.
ground-state properties of a non-Hermitian model featuring Dirac fermions in the presence of Hubbard interaction. For the first time, we develop a non-Hermitian version of projector QMC, applied to investigating the ground-state properties of non-Hermitian interacting fermionic systems. Remarkably, in the framework of the developed algorithm, we construct a non-Hermitian Hubbard model free from the notorious sign problem by virtue of time-reversal symmetry[115; 116; 117; 118; 119; 120], enabling large-scale QMC simulation without numerical approximation. Employing the numerically exact QMC simulation, we access the ground-state phase diagram of the model. The main results of the state-of-the-art QMC simulation are summarized as follows: 1. The interaction-induced AFM order in the Honeycomb Hubbard model is robust in the presence of non-Hermiticity. Intriguingly, the AFM order is enhanced by the asymmetric non-Hermitian process. 2. The numerical results reveal a continuous quantum phase transition from the DSM to the AFM ordered phase. Surprisingly, despite the presence of non-Hermitian terms in the Hamiltonian, the quantum phase transition between the Dirac semimetal (DSM) and antiferromagnetic (AFM) phases belongs to the Hermitian version of the chiral-XY universality class[121; 30; 122]. Hence, the QMC results reveal the emergence of Hermiticity at the quantum phase transition point. Additionally, we present a systematic renormalization-group (RG) field theory analysis of the effective low-energy theory describing the non-Hermitian DSM-AFM transition, providing an understanding of the numerical results unveiled by the QMC simulation. Notably, the QMC approach developed for non-Hermitian systems is a general algorithm to tackle non-Hermitian interacting fermionic models. We believe that our study paves a new route to investigating quantum many-body physics in non-Hermitian systems.
_Model and methods._--In this paper, we study the non-Hermitian extension of an interacting model on the honeycomb lattice[1]. Specifically, as shown in Fig.1 (a), we introduce the asymmetric hopping to construct a non-Hermitian version of the Hubbard model[123]:
\[\begin{split}& H=H_{0}+H_{1}=\\ &-t\sum_{\langle i,j\rangle,\sigma}c_{i,\sigma}^{\dagger}c_{j,\sigma}+\mathrm{h.c.}+\sum_{i}U(c_{i\uparrow}^{\dagger}c_{i\uparrow}-\frac{1}{2})(c_{i\downarrow}^{\dagger}c_{i\downarrow}-\frac{1}{2})\\ &-\delta\sum_{\langle i,j\rangle}(c_{i,\uparrow}^{\dagger}c_{j,\uparrow}-c_{j,\uparrow}^{\dagger}c_{i,\uparrow}-c_{i,\downarrow}^{\dagger}c_{j,\downarrow}+c_{j,\downarrow}^{\dagger}c_{i,\downarrow}),\end{split} \tag{1}\]
where \(H_{1}\) and \(H_{0}\) are the non-Hermitian and Hermitian (Hubbard model) parts of the Hamiltonian, respectively. \(\langle ij\rangle\) refers to the nearest-neighboring (NN) sites \(i\) and \(j\). \(c_{i,\sigma}^{\dagger}\) creates a fermion on site \(i\) with spin \(\sigma=\uparrow,\downarrow\). \(t,\delta\) are the symmetric and asymmetric hopping amplitudes, respectively. For \(\delta=0\), Eq. (1) reduces to the Hermitian Hubbard model on the honeycomb lattice, which has been extensively studied[14; 22]. We set the energy unit \(t=1\) throughout this article. \(U\) is the conventional on-site Hubbard interaction. The Hermiticity of the model is broken when the asymmetric hopping is introduced. The model is \(\mathcal{PT}\) symmetric[124], rendering it possible for the Hamiltonian to exhibit real eigenvalues despite its non-Hermiticity. For the Hamiltonian Eq. (1), in the non-interacting limit \(U=0\) the eigenstates preserve \(\mathcal{PT}\)-symmetry if \(\delta\) is smaller than the exceptional point (EP) \(\delta_{c}=1\), at which the eigenstates of the Hamiltonian coalesce. In the presence of Hubbard interaction, exact diagonalization calculations on small clusters suggest the EP \(\delta_{c}=1\) is stable under interaction; namely, when \(\delta<\delta_{c}\) the eigenvalues of the interacting Hamiltonian Eq. (1) are real (see Sec.IV in the Supplementary Materials (SM) for details).
To investigate the ground-state properties of the non-Hermitian model, we generalize the PQMC algorithm. To our knowledge, it is the first approximation-free quantum Monte-Carlo algorithm for studying the ground-state properties of quantum many-body non-Hermitian models. Intriguingly, even in the presence of non-Hermiticity, the model at half filling is free from the notorious sign problem within the framework of our algorithm, with the details introduced in SM I. Thus, the model offers a promising platform to investigate the interplay between non-Hermiticity and quantum many-body effects. In this work, we perform state-of-the-art QMC simulations[125; 126; 120] to study the ground-state phase diagram of the model. We focus on the model at half filling, and explore the quantum phases and the critical properties of the interaction-driven quantum phase transition in
Figure 2: Finite-size scaling of the XY-AFM static structure factor \(S_{AF}^{XY}\) at different non-Hermitian parameters \(\delta\) and Hubbard interactions \(U\) close to the DSM-AFM transition. (a) \(\delta=0.2\), \(U_{c}\approx 3.8\). (b) \(\delta=0.4\), \(U_{c}\lesssim 3.6\). (c) \(\delta=0.6\), \(U_{c}\lesssim 3.2\). The largest system size in the simulation is \(L=24\). More accurate results for \(U_{c}\) are obtained from the RG-invariant ratio, as shown in Fig. 3. The results for \(U_{c}\) obtained by different approaches are approximately consistent with each other. (d) XY-AFM structure factor \(S_{AF}^{XY}\) as a function of \(\delta\) for \(U/t=3.6\). The results are shown for finite sizes \(L=12,15,18,21\), and extrapolated to the thermodynamic limit \(L\rightarrow\infty\) by the polynomial fitting of \(S_{AF}^{XY}\) versus \(1/L\) as \(S_{AF}^{XY}=a+b/L^{c}\) for \(U/t=3.6\).
the non-Hermitian system.
_Quantum phase diagram._--Before presenting the details of the PQMC results, we briefly summarize the most salient features of the ground-state phase diagram of the model Eq. (1). The schematic phase diagram with varying asymmetric hopping amplitude \(\delta\) and on-site Hubbard interaction \(U\) is shown in Fig.1(b). For \(\delta=0\), a quantum phase transition from the DSM to the AFM phase is driven by the Hubbard interaction. We obtain the critical point \(U_{c}=3.87\), which is consistent with previous studies in the literature[14]. Our large-scale QMC simulations convincingly show that the AFM order is robust and, more appealingly, strongly enhanced in the presence of non-Hermiticity. The transition point \(U_{c}\) from the DSM to the non-Hermitian enhanced antiferromagnetic (NHEAFM) phase is shifted to a smaller value with increasing asymmetric hopping amplitude. The enhancement of AFM ordering by non-Hermitian asymmetric hopping is beyond the conventional expectation that non-Hermiticity destroys long-range ordering, and constitutes a main discovery of our numerical simulation. Furthermore, we employ the standard finite-size scaling (FSS) procedure to extract the critical exponents and determine the universality class of the DSM-NHEAFM transition. Even though the lattice model is non-Hermitian, the results reveal that the transition between the DSM and NHEAFM phases belongs to the _Hermitian_ chiral XY universality class [121; 122; 30], implying Hermiticity is emergent at the DSM-NHEAFM transition point in the low-energy limit.
_Non-Hermitian enhanced antiferromagnetism._--To investigate the effect of non-Hermiticity on the AFM long-range ordering triggered by the Hubbard interaction, we compute the structure factor of the AFM order, the definition of which is introduced in Section I of the SM. Notice that the non-Hermitian asymmetric hopping reduces the spin-rotational \(SU(2)\) symmetry down to a \(U(1)\) symmetry corresponding to spin rotation in the \(x\)-\(y\) plane, with the consequence that the AFM structure factor in the \(z\)-direction \(S_{AF}^{z}\) is not equivalent to that in the \(xy\)-plane \(S_{AF}^{xy}\). The numerical results unequivocally show that the AFM order in the \(xy\)-plane is dominant over the ordering in the \(z\)-direction in the presence of non-Hermitian asymmetric hopping, as shown in Section IV of the SM. Hence, in the following we present the numerical results for the AFM ordering in the \(xy\)-plane.
For the Hermitian limit \(\delta=0\), the standard FSS procedure yields the critical point of the DSM-AFM transition \(U_{c}=3.87\), in agreement with extensive studies in the previous literature. In the presence of non-Hermitian asymmetric hopping, the FSS analysis also establishes the existence of AFM long-range ordering triggered by strong Hubbard interaction. Fig. 2(a)-(c) depicts the AFM structure factors \(S_{AF}^{xy}\) for various interaction parameters and linear system sizes \(L\), at several non-Hermitian asymmetric hopping parameters \(\delta=0.2,0.4,0.6\), clearly indicating that a quantum phase transition occurs from the DSM to the NHEAFM ordered phase with increasing Hubbard interaction strength. More remarkably, the critical value of Hubbard \(U\) for the DSM-NHEAFM transition decreases upon increasing the non-Hermitian hopping parameter \(\delta\), suggesting the enhancement of AFM ordering by non-Hermiticity.
To explicitly observe the enhancement of AFM long-range order in the thermodynamic limit, we present the AFM structure factor as a function of \(\delta\) for different linear system sizes \(L\) together with the results extrapolated to \(L\rightarrow\infty\) (Fig. 2(d)). The Hubbard interaction parameter is fixed at \(U=3.6\). Fig. 2(d) clearly indicates that the AFM structure factor monotonically increases with \(\delta\). For \(U=3.6\), the ground state is a DSM at non-Hermitian hopping \(\delta=0\), which is verified by the extrapolated result of the AFM structure factor \(S_{AF}^{xy}\left(L\rightarrow\infty\right)=0\). With increasing \(\delta\), the AFM ordering is enhanced, resulting in a quantum phase transition from the DSM to the AFM ordered phase driven by non-Hermitian hopping, occurring at \(\delta_{c}\approx 0.4\). Consequently, the numerical results unambiguously demonstrate the enhancement of AFM ordering by non-Hermiticity in model Eq. (1).
To determine the accurate transition point between the DSM and AFM ordered phases, we compute the RG-invariant correlation-length ratio for the AFM order, defined as:
\[R(L)_{AF}^{XY}=\frac{S_{AF}^{XY}(\vec{Q},L)}{S_{AF}^{XY}(\vec{Q}-\delta\vec{q},L)}-1, \tag{2}\]
where \(\vec{Q}=(0,0)\) labels the momentum at which the structure factor of the AFM order is maximal, and \(\delta\vec{q}=(\frac{2\pi}{L},\frac{2\pi}{L})\) is a minimal momentum shift from \(\vec{Q}\) on the lattice. In the long-range ordered phase, the correlation-length ratio increases with system size, while if the AFM order is short-ranged, the trend is opposite. At the critical point, the RG-invariant correlation-length ratio is independent of system size owing to scaling invariance, therefore offering a powerful theoretical approach to identify the critical point of the phase transition. In Fig. 3, we present the results of the correlation-length ratios as a function of Hubbard \(U\) for several values of \(\delta\), giving rise to the DSM-NHEAFM quantum critical points, which are approximately consistent with the results given by the finite-size scaling of the AFM structure factors. Consequently, we obtain the ground-state phase diagram of the Hubbard model on the Honeycomb lattice in the presence of non-Hermitian asymmetric hopping, explicitly revealing that the AFM ordering associated with electronic interaction is strongly enhanced by the non-Hermitian asymmetric hopping in Eq. (1).
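As an illustrative aid, the following numpy sketch evaluates the structure factor and the correlation-length ratio of Eq. (2) from measured real-space spin correlations; the array layout and function names are our assumptions, not the authors' QMC code.

```python
import numpy as np

def structure_factor(corr, positions, q):
    """corr[i, j]: measured in-plane spin correlation; positions: (N, 2) site coordinates."""
    phases = np.exp(1j * positions @ q)
    return np.real(np.conj(phases) @ corr @ phases) / len(positions)

def correlation_ratio(corr, positions, L):
    Q = np.array([0.0, 0.0])                       # ordering momentum
    dq = np.array([2 * np.pi / L, 2 * np.pi / L])  # minimal momentum shift on an L x L lattice
    return structure_factor(corr, positions, Q) / structure_factor(corr, positions, Q - dq) - 1.0
```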
_Emergent chiral XY universality class._--Having accessed the ground-state phase diagram of the non-Hermitian model Eq. (1), we investigate the critical properties of the quantum phase transition between the DSM and NHEAFM phases. In the absence of non-Hermitian hopping, the DSM-AFM transition featured in the Hubbard model on the Honeycomb lattice has been extensively investigated, and belongs to the chiral-Heisenberg universality class [127]. As aforementioned, the non-Hermitian hopping introduced in Eq. (1) breaks the spin \(SU(2)\) symmetry down to \(U(1)\), and is thus expected to change the universality class of the phase transition. In Hermitian systems, the continuous quantum phase transition between the DSM and the in-plane AFM ordered phase breaking \(U(1)\) symmetry is described by the Gross-Neveu-Yukawa model, and belongs to the chiral-XY universality class, as revealed by field-theory analysis and numerical simulations of lattice models. In this section, we decipher the critical properties of the DSM-AFM transition in the presence of the non-Hermitian asymmetric hopping process.
In the regime sufficiently close to the critical point \(U_{c}\), the AFM structure factor obeys the scaling relation:
\[S_{AF}^{XY}=L^{-(1+\eta)}F_{1}[(U-U_{c})L^{1/\nu}], \tag{3}\]
where \(F_{1}\) is an unknown scaling function, and \(\eta\) and \(\nu\) are critical exponents determining the critical properties of the phase transition: \(\eta\) is the anomalous dimension of the AFM order parameter and \(\nu\) is the correlation-length exponent. Other critical exponents can be deduced from \(\eta\) and \(\nu\) by hyper-scaling relations. To extract the anomalous dimension \(\eta\), we perform a log-log plot of the AFM structure factor at the quantum critical point versus system size, and the anomalous dimension of the AFM order parameter is given by the slope: \(S_{AF}^{XY}(L,U_{c})=aL^{-1-\eta}\). We present the result of this extrapolation at fixed \(\delta=0.4\) in Fig. 4(a), yielding the anomalous dimension \(\eta=0.603\pm 0.03\). Then we perform the standard data-collapse procedure to obtain the critical exponent \(\nu\). There exists an appropriate value of \(\nu\) such that the points \(((U-U_{c})L^{1/\nu},S_{AF}^{XY}L^{1+\eta})\) at various \(U\) collapse onto a single curve for different linear system sizes \(L\). We present the results of the data collapse for \(\delta=0.4\) in Fig. 4(b), which renders the correlation-length exponent \(\nu=1.07\pm 0.02\). The results of the critical exponents \(\eta=0.603\pm 0.03\) and \(\nu=1.07\pm 0.02\) are consistent with the numerical results for the Gross-Neveu transition belonging to the Hermitian chiral-XY universality class on Hermitian interacting lattice models[121; 122; 30]. To further verify the numerical results of the critical exponents, we perform the data collapse of the RG-invariant correlation-length ratios satisfying the scaling relation \(R_{AF}^{XY}=F_{2}[(U-U_{c})L^{1/\nu}]\), using the result \(\nu=1.07\). The results of \(R_{AF}^{XY}\) for different system sizes exhibit excellent collapse, ensuring our analysis yields convincing results for the critical exponents. Furthermore, choosing a different value of \(\delta=0.6\), we implement the same procedure and extract the critical exponents for the DSM-NHEAFM transition. The results, included in Section V of the SM, agree with those for \(\delta=0.4\).
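For illustration, the sketch below quantifies the quality of the data collapse in Eq. (3) by the spread of the rescaled data after sorting by the scaling variable; the cost function and optimizer choice are our assumptions rather than the authors' analysis code.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def collapse_cost(nu, data, U_c, eta):
    """data: list of (L, U_values, S_values) triples for several system sizes."""
    x = np.concatenate([(U - U_c) * L ** (1.0 / nu) for L, U, S in data])
    y = np.concatenate([S * L ** (1.0 + eta) for L, U, S in data])
    order = np.argsort(x)
    return np.sum(np.diff(y[order]) ** 2)   # small when all curves collapse onto one

# Example usage (with QMC data loaded into `data`):
# best = minimize_scalar(lambda nu: collapse_cost(nu, data, U_c=3.6, eta=0.603),
#                        bounds=(0.5, 2.0), method="bounded")
```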
To conclude, the non-Hermitian asymmetric hopping introduced in model Eq. (1) breaks the \(SU(2)\) spin-rotational symmetry down to \(U(1)\), favoring AFM ordering in the \(xy\)-plane. Intriguingly, despite the presence of non-Hermiticity, the DSM-NHEAFM transition realized in the model belongs to the same universality class as the Hermitian version of the phase transition, namely the chiral-XY universality class.
_Low-energy effective theory._--We derive the effective Hamiltonian describing the low-energy Dirac fermions in the non-Hermitian model Eq. (1) on honeycomb lattice (see Sec.VI
Figure 4: Finite-size scaling analysis for the QCP between the DSM and NHEAFM phases. (a) Log-log plot of the XY-AFM structure factor versus system size at the QCP. The slope of the fitted line yields the anomalous dimension \(\eta=0.603\pm 0.03\). (b) The data-collapse analysis for the rescaled XY-AFM structure factor as a function of \((U/U_{c}-1)L^{1/\nu}\) at \(\delta=0.4\) for system sizes \(L=15,18,21,24\). (c) The data-collapse analysis for the RG-invariant ratio of the XY-AFM structure factors as a function of \((U/U_{c}-1)L^{1/\nu}\) at \(\delta=0.4\) for system sizes \(L=15,18,21,24\). The results of the critical exponents \(\eta=0.603\pm 0.03\) and \(\nu=1.07\pm 0.02\) are consistent with the chiral-XY universality class within error bars.
in the SM for details). The result reads:
\[h(\vec{q})=-v_{F}(q_{x}s_{0}\otimes\sigma_{1}+q_{y}s_{0}\otimes\sigma_{2}-\frac{iq_{y}\delta}{t}s_{3}\otimes\sigma_{1}+\frac{iq_{x}\delta}{t}s_{3}\otimes\sigma_{2}), \tag{4}\]
where \(s,\sigma\) are Pauli matrices acting on the spin and sublattice spaces, respectively, and \(v_{F}=\frac{3t}{2}\) is the Fermi velocity in the absence of non-Hermitian hopping, namely \(\delta=0\). Direct diagonalization of the Hamiltonian yields the energy dispersion (see Sec.VI in the SM), which is purely real and corresponds to a Dirac cone with renormalized Fermi velocity \(\tilde{v}_{F}=v_{F}\sqrt{1-(\delta/t)^{2}}\). Since the density of states (DOS) near the energy of the Dirac point is proportional to the inverse of the Fermi velocity, the reduction of the Fermi velocity results in an increase of the DOS near the Dirac point, which offers a plausible explanation for the enhancement of the AFM order revealed by our numerical simulation. More crucially, to understand the emergence of Hermiticity at the XY-AFM QCP as unambiguously revealed in our numerical results, we perform a one-loop perturbative renormalization-group calculation on the Gross-Neveu low-energy effective theory. The beta function of the non-Hermitian parameter \(\delta\) is \(\frac{d\delta}{dt}=-2\delta^{2}\delta\) (see the SM VII for details), demonstrating the irrelevance of the non-Hermitian parameter in the low-energy limit. Therefore, the Hermitian chiral XY universality class is emergent at the QCP. We leave a more systematic field-theory study as a future work.
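The following numpy sketch evaluates Eq. (4) (with the sign convention used above) and numerically checks that the spectrum is real below the exceptional point, with renormalized Fermi velocity \(\tilde{v}_{F}=v_{F}\sqrt{1-(\delta/t)^{2}}\); it is an illustrative check, not the QMC code.

```python
import numpy as np

s0, s3 = np.eye(2), np.diag([1.0, -1.0])            # spin-space matrices s_0, s_3
sig1 = np.array([[0., 1.], [1., 0.]])               # sublattice sigma_1
sig2 = np.array([[0., -1j], [1j, 0.]])              # sublattice sigma_2

def h(qx, qy, t=1.0, delta=0.4):
    vF = 1.5 * t
    return -vF * (qx * np.kron(s0, sig1) + qy * np.kron(s0, sig2)
                  - 1j * qy * (delta / t) * np.kron(s3, sig1)
                  + 1j * qx * (delta / t) * np.kron(s3, sig2))

E = np.linalg.eigvals(h(0.1, 0.07))
print(np.allclose(E.imag, 0.0, atol=1e-12))         # real spectrum for delta < t
print(np.max(np.abs(E.real)) / np.hypot(0.1, 0.07)) # ~ v_F * sqrt(1 - (delta/t)^2)
```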
_Conclusion and discussion._--In summary, we propose an innovative QMC algorithm to investigate the ground-state properties of quantum many-body models in the presence of non-Hermiticity. We construct a non-Hermitian interacting fermionic model, more explicitly, a Hubbard model with non-Hermitian asymmetric hopping. Remarkably, the model is free from the notorious sign problem, such that the ground-state properties of the model at large system sizes are accessible. Employing state-of-the-art QMC simulation, we systematically investigate the quantum phases and phase transitions emerging in this model. The results unambiguously reveal that the AFM ordering triggered by strong Hubbard interaction is enhanced by the non-Hermitian asymmetric hopping. The interaction-driven quantum phase transition from the DSM to the NHEAFM phase belongs to the chiral-XY universality class as previously unveiled in Hermitian systems, although non-Hermiticity is present in the model.
From the perspective of experimental realization, the non-Hermitian hopping process is induced by single-particle loss or gain on the bond associated with coupling to the environment, corresponding to the Lindblad operator \(L_{\langle ij\rangle}=c_{i}+c_{j}\) in the quantum master equation. When the quantum-jump terms are negligible, the resulting effective Hamiltonian involves the non-Hermitian hopping terms discussed in our study. More remarkably, since the principles that guarantee sign-problem-free QMC simulation in Hermitian models are straightforwardly generalized to the non-Hermitian algorithm, it is promising to design more non-Hermitian interacting fermionic models featuring fascinating physics based on our approach. For example, designing sign-problem-free models and investigating the interaction effects on non-Hermitian topological phases is particularly intriguing. Hence, our study paves a new avenue towards understanding the interplay between quantum many-body and non-Hermitian physics in a theoretically controlled approach.
###### Acknowledgements.
We thank Zijian Wang and Shang Liu for helpful discussions. We acknowledge the computational resources provided by the TianHe-1A supercomputer and the High Performance Computing Platform of Peking University, China. X.-J.Y. and L.X. are supported by the National Natural Science Foundation of China under Grant No.11935002, and the National 973 project under Grant No. 2021YF1400501. Z.X.L acknowledges support from the start-up grant of IOP-CAS.
|
2308.05225 | IoT Security: On-Chip Secure Deletion Scheme using ECC Modulation in IoT
Appliances | NAND flash memory-based IoT devices inherently suffer from data retention
issues. In IoT security, these retention issues are significant and require a
robust solution for secure deletion. Secure deletion methods can be categorized
into off-chip and on-chip schemes. Off-chip secure deletion schemes, based on
block-level erasure operations, are unable to perform real-time trim
operations. Consequently, they are vulnerable to hacking threats. On the other
hand, on-chip secure deletion schemes enable real-time trim operations by
performing deletion on a page-by-page basis. However, the on-chip scheme
introduces a challenge of program disturbance for neighboring page data. The
proposed on-chip deletion scheme tackles this problem by utilizing ECC code
modulation through a partial program operation. This approach significantly
reduces the program disturbance issue associated with neighboring page data.
Moreover, the proposed code modulation secure deletion scheme allows for
real-time verification of the deletion of original data. | Na Young Ahn, Dong Hoon Lee | 2023-08-09T21:07:55Z | http://arxiv.org/abs/2308.05225v1 | IoT Security: On-Chip Secure Deletion Scheme using ECC Modulation in IoT Appliances
###### Abstract
NAND flash memory-based IoT devices inherently suffer from data retention issues. In IoT security, these retention issues are significant and require a robust solution for secure deletion. Secure deletion methods can be categorized into off-chip and on-chip schemes. Off-chip secure deletion schemes, based on block-level erasure operations, are unable to perform real-time trim operations. Consequently, they are vulnerable to hacking threats. On the other hand, on-chip secure deletion schemes enable real-time trim operations by performing deletion on a page-by-page basis. However, the on-chip scheme introduces a challenge of program disturbance for neighboring page data. The proposed on-chip deletion scheme tackles this problem by utilizing ECC code modulation through a partial program operation. This approach significantly reduces the program disturbance issue associated with neighboring page data. Moreover, the proposed code modulation secure deletion scheme allows for real-time verification of the deletion of original data.
Deletion, NAND Flash Memory, ECC Modulation, Verification, Partial Program, Program Disturbance
## 1 Introduction
In the future, the legal enforcement of the duty to delete data on NAND flash memory is highly likely [1, 2, 3]. In particular, anti-forensics techniques for NAND flash memory play a crucial role in safeguarding personal information stored not only in offline electronic devices but also in numerous online-connected devices. Many electronic devices utilize storage devices based on NAND flash memory, which can either be embedded or externally connected. These electronic devices are interconnected through various networks. The emerging possibility of connecting such devices to high-performance computers, such as commercialized quantum computers, poses additional risks. Malicious hackers could potentially access the personal information of multiple users at any given time. Even if the collected personal information is encrypted, hackers with quantum computing capabilities would be able to decrypt the encrypted data with ease [4, 5].
Generally, NAND flash memory is inherently vulnerable to digital forensics [6, 7]. The Write Amplification Factor (WAF), one of the performance indicators of NAND flash memory, is larger than 1 [8-11]. Many studies have been conducted to lower the WAF, for example through efficient garbage collection and temperature-aware separation of data storage spaces, but they have limitations. Fundamentally, NAND flash memory holds original data not only in spaces visible to legitimate users, but also in spaces that legitimate users cannot access [12]. Spaces inaccessible to legitimate users, for example, over-provisioning (OP) areas, may give rise to forensics issues caused by malicious users. Secure deletion (or data sanitization) using TRIM techniques and encryption techniques has been introduced [13-16]. However, these techniques are still exposed to forensics threats because, for performance reasons, they are not real-time secure deletion techniques [17-19]. In fact, studies of forensic results on used phones and used storage devices have been presented by several researchers. Recently, anti-forensics techniques have been introduced to address these forensic issues[12,20-27]. In particular, techniques for performing secure deletion in real time without performance degradation have been introduced. Real-time secure deletion techniques are emerging as a major anti-forensics technology. At the same time, it is necessary to introduce a verification technique for such real-time secure deletion. This paper introduces a verification technique for real-time secure deletion.
Recently, there has been a debate regarding the possibility of malicious code injection attacks on the invalidation area, particularly the OP (Over-Provisioning) area [26]. These problems arise due to the incomplete nature of the existing trim operation: the trim operation does not perform real-time trimming. Although the host sends a trim command, the storage device only simulates the trim operation instead of actually executing it, resulting in the retention of the original data in the storage device. The page-unit secure deletion technique offers the advantage of performing real-time trim operations.
In Section 2, we will review existing secure deletion schemes, such as off-chip secure deletion schemes. In Section 3, we discuss the limitations of current secure deletion technologies. In Section 4, we propose the ECC (Error Correction Code) modulation secure deletion scheme. Section 5 presents a performance comparison between existing on-chip secure deletion
schemes and the proposed on-chip secure deletion scheme. This paper aims to propose an on-chip secure deletion technique that is cost-effective, reasonable, and easily implementable for real-time trim operations. Further research and studies must be conducted before applying this technique to actual products, and it is expected that these studies will be actively pursued in the near future.
## 2 Related Works
When the operating system deletes a file, it doesn't actually delete the file itself, but rather removes the metadata associated with the file. File systems manage both the actual files and metadata, which contains essential information for file management. This metadata includes details such as the file's storage location, size, name, and access permissions [28]. Conversely, if the metadata for a specific file is lost, the file is considered non-existent within the storage device. In other words, deleting the metadata also implies deleting the file itself. Secure deletion involves the complete removal of both the file's metadata and the actual file.
### 2.1 TRIM command
The TRIM command refers to the actual deletion of a file from the storage device after its metadata has been removed [29]. TRIM is performed to completely delete invalidated data within the storage device and free up necessary space. Enabling TRIM can enhance the write speed of the storage device, optimize space utilization, and extend its lifespan. The auto-TRIM function is supported in the Windows system, and the storage device internally supports TRIM. Typically, the TRIM command is sent from the host to the storage device. The TRIM command is an important command used in Solid State Drives (SSDs). When a file system deletes a file, the operating system simply marks that space as 'available', rather than removing the actual data. However, because SSDs write data in page units but erase it in block units, they need to completely erase the previous data before writing new data to the 'available' space. This process can lead to slowdowns. The TRIM command alleviates this problem: the operating system informs the SSD that a file has actually been deleted, allowing the SSD to clean up that space in advance. This significantly reduces slowdown when writing new data.
### Secure deletion schemes
One of the secure deletion technologies performed in SSDs involves the TRIM command. The TRIM command notifies the SSD that a file has been deleted, and through this, the SSD cleans up the space and prepares to write new data. However, this alone does not completely delete the data but only makes the space writable, so data recovery remains possible. To compensate for this, the 'Secure Erase' function is used. Secure Erase is a command that completely initializes all cells of the SSD, making data recovery impossible. This function is especially important when permanently deleting sensitive information and is one of the most effective ways to completely erase data from an SSD. The combination of these two functions is crucial for securely managing data on SSDs.
#### 2.2.1 Erase operation
The sizes of read/program and erase operations in NAND flash memory are determined by its physical structure [20]. NAND flash memory consists of multiple memory blocks, each containing several pages, and each page contains multiple memory cells connected to a wordline. Generally, read/program operations are performed at page granularity, while erase operations are performed at block granularity. To manage the lifespan of NAND flash memory, erase operations are kept relatively infrequent. This implies that there may be a time difference between when the user intends to delete a file and when the file is completely erased from the storage device. Erase operations are performed block by block, removing charges in the charge trap layer. In contrast, program/read operations are conducted on pages, which are significantly smaller than blocks. However, the existing TRIM operation is based on erase operations; thus, while TRIM is crucial for security purposes, it can impact the performance and lifespan management of the storage device. The TRIM operation typically involves garbage collection before the erase operation [30]. Garbage collection refers to the process of collecting valid pages from at least two or more blocks and moving them to a new block. Only the valid pages from multiple blocks are copied to a new memory block during garbage collection. As a result, the new block is composed of valid pages accessible by the host, while the original blocks are regenerated as reusable memory blocks through an erase operation. Since garbage collection can negatively affect storage device performance, it is usually performed as a background operation or during idle states.
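To make the garbage-collection step concrete, the following is a minimal Python sketch of a simplified flash translation layer, assuming illustrative block and page structures (the `Block` class, `PAGES_PER_BLOCK`, and `garbage_collect` are hypothetical names, not taken from any real controller):

```python
from dataclasses import dataclass, field

PAGES_PER_BLOCK = 64  # illustrative block geometry

@dataclass
class Block:
    pages: list = field(default_factory=lambda: [None] * PAGES_PER_BLOCK)
    valid: list = field(default_factory=lambda: [False] * PAGES_PER_BLOCK)

def garbage_collect(victims, free_block):
    """Copy valid pages from victim blocks into a free block, then erase victims."""
    write_ptr = 0
    for blk in victims:
        for i in range(PAGES_PER_BLOCK):
            if blk.valid[i]:
                # Valid data survives: it is rewritten into the new block.
                free_block.pages[write_ptr] = blk.pages[i]
                free_block.valid[write_ptr] = True
                write_ptr += 1
    for blk in victims:
        # Block-level erase. Note that until this point the invalidated
        # ("deleted") pages in the victims still held their original data,
        # which is exactly the forensic exposure discussed above.
        blk.pages = [None] * PAGES_PER_BLOCK
        blk.valid = [False] * PAGES_PER_BLOCK
    return free_block
```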
#### 2.2.2 Encryption key deletion
Data is encrypted and stored using an encryption key, and when data is deleted, only the encryption key is removed [30]. This encryption scheme allows for fast and secure deletion. However, deleting all keys in a file can introduce significant overhead. As an alternative, a proposed scheme suggests storing all keys in one memory block and deleting them together. However, this encryption key deletion scheme may not be easily applicable to a real-time invalidation scheme used in databases.
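As a rough illustration of this crypto-erase idea, the sketch below (assuming the third-party Python `cryptography` package; the scenario and variable names are ours, not from the cited schemes) shows that once the key is destroyed, the ciphertext left on the medium can no longer be decrypted:

```python
from cryptography.fernet import Fernet, InvalidToken

key = Fernet.generate_key()
ciphertext = Fernet(key).encrypt(b"sensitive user data")  # what stays on flash

# "Secure deletion" here destroys only the key; the ciphertext remains stored.
key = None

try:
    Fernet(Fernet.generate_key()).decrypt(ciphertext)  # attempt with another key
except InvalidToken:
    print("ciphertext is unrecoverable without the original key")
```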
#### 2.2.3 Scrubbing
To achieve undetectable secure deletion, scrubbing involves removing deleted data, making it inaccessible to the host, and hiding the deletion history to prevent knowledge of past deletions. The full scrubbing scheme ensures that all page data is overwritten with zero pages [10, 20]. The partial scrubbing scheme involves making a portion of page data zero. One example of partial scrubbing is NAND Flash Partial Scrubbing (NFPS). This scheme makes it challenging to detect the existence of deleted files, and partial scrubbing may include partial page reprogramming.
#### 2.2.4 Deletion pulse application
The deletion pulse application scheme is similar to the page-unit reprogramming scheme [12, 23]. However, while the reprogramming scheme programs data with specific values, such as zero bits, the deletion pulse application scheme applies multiple deletion pulses to the corresponding wordline to partially overwrite the cells, without necessarily programming them to the most significant bit. The level and number of deletion pulses can be predetermined, for example such that the data is altered beyond what the error correction code can recover. Through the deletion pulse application scheme, the original data is altered to arbitrary data. This scheme is particularly useful and easily applicable in the case of multi-bit memory cells.
## 3 Incomplete TRIM
The trim operation essentially refers to the complete deletion of a file from a storage device at the host's request. When a trim-related command is sent to the storage device, the host assumes that the operation is completed upon receiving a completion message indicating the trim operation's execution. However, in reality, this completion is only handled internally by the storage device and goes unrecognized by the host. Consequently, it is not uncommon for only the associated metadata to be deleted without actually removing the original data. Unfortunately, the majority of current secure deletion schemes suffer from this limitation, making forensic analysis of storage devices relatively easy. This is primarily because the original data remains intact while only the related metadata is deleted, leading the host to believe that secure deletion has been accomplished. Hackers are actively seeking ways to recover metadata, while storage device administrators are exploring strategies to make metadata recovery impossible. From the customer's point of view, a full trim operation is required. However, in the existing secure deletion schemes, performing a complete trim operation requires an erase operation on valid or invalid blocks to delete the original data. Since these schemes are unreasonable in terms of the lifespan and cost of the storage device, manufacturers proceed with secure deletion at a level that partially compromises the host's request. In the end, the customer requests a full trim operation, but the storage device does not fully honor this request and instead performs an incomplete trim operation as a compromise.
This problem primarily arises from performing erase operations at the block level and program operations at the page level, a well-known fact among experts in the field. However, despite their awareness of the solution involving deletion operations at the page level, experts have not pursued it extensively. Some researchers have suggested this solution as early as 2016 [13-15]. For instance, a scrubbing scheme was presented in 2016, followed by an arbitrary data overwriting scheme and a deletion pulse application scheme in 2017 [12]. In 2018, a down-level program scheme was proposed to modulate a multi-level program into a single-level program [21]. We refer to these page-level secure deletion schemes as on-chip secure deletion schemes, while other schemes are commonly known as off-chip security schemes.
### Limitations of existing schemes
Off-chip secure deletion schemes have inherent limitations. They report trim completion to the host without actually performing the deletion. While this may seem to have the same effect as actual deletion under a loose definition of trim, it is incorrect, and experts in the field understand that this claim is flawed. For instance, if the original data is encrypted and the encryption key is deleted, it is argued that the original data is unlikely to be recovered without the encryption key. However, technology is advancing rapidly, with the imminent commercialization of quantum computing. This means that even hackers can utilize quantum computing technology, making the possibility of recovering encrypted data undeniable. If the cost of recovering encrypted data is lower than the cost of using quantum computing, hackers will undoubtedly attempt to recover the encrypted original data. The management of storage devices is further complicated by the WAF (Write Amplification Factor). Because the erase unit and program unit differ within a storage device, the WAF increases over time. In general, storage devices contain invalidation blocks and validation blocks, and as the WAF increases, these blocks occur more frequently. Invalidation blocks contain only invalidated pages, whereas validation blocks may contain both invalidated and valid pages. This situation makes it challenging to manage the original data: one or more instances of original data may exist in validation blocks, and even in invalidation blocks. When the host issues a command to delete the original data in the storage device, a full trim operation must delete all original data in invalid and valid blocks throughout the device. Performing a full trim operation therefore requires a comprehensive analysis of the history of garbage collection and deletion, which places a significant burden on the management of storage devices.
To address these inherent challenges, the on-chip secure deletion scheme should be applied in real-time. Regardless of trim requests, the storage device can apply the on-chip secure deletion scheme in real-time when performing data updates or deletions. This prevents the existence of original data in invalidated pages, which may occur naturally. Moreover, it eliminates the fundamental existence of original data in the invalidation block. By implementing the on-chip secure deletion scheme, the original data exists only in the validated pages of the validated block, significantly simplifying the management of original data.
### Performance comparison of Off-chip/On-chip schemes
The comparison between the off-chip secure deletion scheme and the on-chip secure deletion scheme is as follows. The off-chip secure deletion scheme primarily consists of an encryption scheme and a trim scheme that includes an erase operation. On the other hand, the on-chip secure deletion scheme encompasses a scrubbing scheme [21], a partial overwriting scheme [12], a down-bit programming scheme [22, 23], a deletion pulse application scheme [23, 25-27], and the proposed code modulation secure deletion scheme. The performance of the off-chip secure deletion scheme and the on-chip secure deletion scheme is compared in Table I as follows:
In the off-chip secure deletion scheme, the erase operation is performed in units of blocks. Typically, NAND flash memory devices do not execute block-level erase operations frequently, due to lifespan-management concerns. It is commonly understood that NAND flash memory manufacturers guarantee on the order of 1000 erase operations per block; if a block undergoes more erase operations than this, the usability of the storage device becomes compromised. Consequently, the off-chip secure deletion scheme cannot achieve a real-time trim operation, and the absence of real-time trim results in block-management costs. On the other hand, the on-chip secure deletion scheme enables deletion to be performed in units of pages.
### Traditional On-chip secure deletion schemes
Existing on-chip secure deletion schemes primarily modulate the original data using overwriting technology. The scrubbing scheme shifts the threshold voltage of memory cells to the most significant bit, while the partial overwriting scheme generates and reprograms random but programmable data. The deletion pulse application scheme applies pulses to modulate the original data to the point where error correction by the ECC becomes impossible. The scrubbing scheme requires a significant amount of time to program memory cells up to the most significant bit, and excessive high-level state programming can physically destroy the page. While the partial overwriting scheme reduces the possibility of physical destruction compared to the scrubbing scheme, it adds the time required to generate and program the random data. In comparison, the deletion pulse application scheme can efficiently and simply delete original data at the page level while reducing the risk of physical destruction. However, there is a concern that data on an adjacent valid wordline may be modified, causing erase pulse disturbance.
## 4 Proposed On-Chip Secure Deletion Scheme
For a complete trim operation by the host, on-chip secure deletion becomes an essential technology that must be applied. Depending on the host or customer's requirements, on-chip secure deletion is likely to develop into an essential technology for storage devices.
### Basic ECC operation
Data stored in NAND flash memory is prone to leakage current and data deformation due to program/read disturbance. To ensure data reliability, various technologies have been developed, including the application of error correction schemes for data recovery. Parity, necessary for recovering the original programmed data, is generated, and during a write operation, the original data and parity are simultaneously stored. During a read operation, the original data and parity are read, and errors in the original data are corrected using the parity. Referring to Fig. 1, NAND flash memory devices consist of multiple planes, each containing blocks connected to wordlines and bitlines. Pages corresponding to wordlines are present within each block, and a spare area is available for error management functions.
\begin{table}
\begin{tabular}{l|l|l|l} \hline Approach & Deletion Size & Real-time Response & Overhead \\ \hline Off-chip Secure Deletion & Block Unit & Impossible & Management Cost \\ \hline On-chip Secure Deletion & Page Unit & Possible & Program Disturbance \\ \hline \end{tabular}
\end{table} TABLE I: Performance Differences in Off-chip/On-chip Secure Deletion
The ECC engine includes a parity generator, as shown in Fig. 2. This generator produces parity data using the input data of a target page in the NAND flash memory device. Parity data can be generated by segments of the input data (1 KB out of 4 KB) or by utilizing the entire input data. The input data is programmed into the main area of the target page, while the parity data is programmed into the spare area of the target page.
The ECC engine consists of a syndrome generator, Berlekamp block, Chien block, and data corrector, as depicted in Fig. 3. The syndrome generator calculates or generates syndromes using the output data and parity data read from the NAND flash memory device to determine if an error exists. Syndromes are then fed into an error locator polynomial and a Berlekamp block, which determines the number of errors. The Chien block finds the square roots of the polynomial in the error locator polynomial output by the Berlekamp block. Finally, if there are any errors based on the output of the Chien block, the data corrector corrects the output data, resulting in corrected data.
Figure 1: Physical page configuration of a general NAND Flash Memory. The physical page includes a main area for storing user data and a spare area for storing meta data. Here, the metadata includes ECC of user data.
Figure 3: ECC Decoding Process.
Figure 2: ECC encoding process. ECC engine generates parity corresponding to user data.
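To make the encode/decode pipeline concrete, the following toy Python sketch implements a systematic Hamming(7,4) code extended with an overall parity bit (SECDED). Real NAND controllers use far stronger BCH or LDPC codes with the Berlekamp and Chien stages described above, so this is purely illustrative; it also previews the effect exploited by the proposed scheme, where tampering with parity alone drives the decoder into an uncorrectable-error (read fail) state:

```python
import numpy as np

# Systematic Hamming(7,4): G = [I4 | P], H = [P^T | I3]; an overall parity bit
# is appended (SECDED) so that double-bit errors are flagged as uncorrectable.
P = np.array([[1, 1, 0], [1, 0, 1], [0, 1, 1], [1, 1, 1]])
G = np.hstack([np.eye(4, dtype=int), P])
H = np.hstack([P.T, np.eye(3, dtype=int)])

def encode(data4):
    cw7 = data4 @ G % 2                        # 4 data bits + 3 parity bits
    return np.append(cw7, cw7.sum() % 2)       # + overall parity (8th bit)

def decode(cw8):
    syndrome = H @ cw8[:7] % 2
    overall = cw8.sum() % 2
    if not syndrome.any():
        return cw8[:4]                         # clean (or error only in 8th bit)
    if overall == 1:                           # single-bit error: correctable
        err = np.where((H.T == syndrome).all(axis=1))[0][0]
        cw8[err] ^= 1
        return cw8[:4]
    raise IOError("uncorrectable error -> read fail reported to host")

cw = encode(np.array([1, 0, 1, 1]))
cw[4] ^= 1
cw[5] ^= 1                                     # corrupt two parity bits only
try:
    decode(cw)
except IOError as exc:
    print(exc)                                 # the user data itself is untouched
```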
### Parity modulation-based secure deletion
We propose a novel on-chip secure deletion scheme that primarily utilizes parity modulation. Modulating the relatively small parity, instead of the relatively large original data, is likely to reduce the time and cost required for secure deletion. Therefore, we suggest a parity modulation-based secure deletion scheme for performing secure deletion at the host's request, as shown in Fig. 4. For this purpose, a partial program operation may be performed on the spare-area data connected to the wordline of the original data. The partial program operation is generally limited so as not to exceed four operations per wordline.
### Partial programming for spare area
The proposed on-chip secure deletion scheme includes a partial program operation on the spare area where parity is stored, as depicted in Fig. 5. In the partial program operation, the program can be performed with zero bits or a predetermined program pulse can be applied to the part where the parity is stored. It is important for NAND flash memory to support "small data move" in its basic functionality.
### Error count increase and read-fail output
The ECC engine has a specific number of error-correctable bits, which gradually increases with technological advancements. When error correction is impossible, a NAND flash memory device typically executes a recovery code, which performs the page read operation in various ways. If error correction is still impossible after the recovery code, the NAND flash memory device eventually transmits a read fail signal to the host. In the proposed method, once the parity data has been modulated, error correction is highly likely to remain impossible even when the defense code is applied. Consequently, a read fail signal is output to the host, as shown in Fig. 6. Unlike existing data modulation schemes, the proposed scheme does not require additional verification operations to prove the deletion of original data: the impossibility of reading the affected invalid/valid pages is confirmed immediately through the on-chip secure deletion operation.
Figure 4: On-Chip Secure Deletion Target.
Figure 5: Partial programming for spare area.
Figure 6: Real-time read failure output.
### Secure storage device and operating method
The storage device we propose can be referred to as a secure storage device because it performs secure deletion. The secure storage device, as shown in Fig. 7, includes a NAND flash memory device and a controller (CTRL) that controls it. The controller (CTRL) receives a TRIM command from an external host device and performs a Secure Deletion operation accordingly. This secure deletion operation is performed in conjunction with an ECC circuit. When the TRIM command is received, the controller starts the secure deletion operation in real time.
The ECC circuit can generate ECC data (ECC_SD) to be stored in the physical space where secure deletion is required, for example, in the spare area. Here, the ECC data (ECC_SD) is generated as a value that renders the user data uncorrectable. For this, a read of the page to be deleted may precede this step. Alternatively, the ECC data (ECC_SD) can be set to a fixed value in response to a secure deletion request; this fixed value can be determined experimentally.
At the same time, the controller can generate the user data stored in the main area. The user data used in the secure deletion operation can be composed of data in the lowest state (for example, the erase state), because programming the lowest state effectively corresponds to a program inhibit. In other words, the proposed secure deletion operation can be expected to minimize program disturbance of neighboring pages by tampering with the error correction codes while programming lowest-state data.
Generally, the memory cells of a NAND flash memory device have multiple states. Among these states, the lowest state (for instance, the erase state) basically becomes a program pass without applying a program pulse due to its lowest threshold voltage. Therefore, when programming with the lowest state data, it can be understood to have a practical program prohibition effect. Also, if partial programming operation is possible, only the part including the ECC area can be processed for secure deletion operation with the new error correction code (ECC_SD).
Figure 7: Secure Solid State Device. The secure storage device executes the secure deletion module in response to the trim command. The secure deletion module generates user data configured in the lowest state and a modified ECC value, and performs a program operation on the page to be deleted with the generated page data.
When the host transfers a secure deletion request for a file stored on the storage device SSD, as illustrated in Fig. 8, the controller of the storage device translates logical addresses associated with the corresponding file into physical addresses in response to the host's secure deletion request. The controller generates ECC parity for secure deletion and transmits a command for performing a partial program operation on the ECC area to the NAND flash memory device through wordlines corresponding to physical addresses. The NAND flash memory device performs a partial program operation using ECC parity generated for the spare area in response to the partial program command. The non-volatile memory device then transmits a completion message for the partial program operation to the controller. Subsequently, the controller sends a read command to the non-volatile memory device through the corresponding wordline, and the non-volatile memory device responds by transmitting the read data to the controller. The controller determines whether error correction for the read data is possible. If error correction is unachievable, the controller sends a secure deletion completion message to the host. The described secure deletion operation can be performed in conjunction with the defense code (or read retry) of the storage device. Alternatively, the storage device may delete the separate ECC parity generated for secure deletion and apply a predetermined number of erase pulses to the spare area storing ECC.
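The flow of Fig. 8 can be summarized as follows in a minimal Python sketch, assuming a simplified controller/NAND interface; every class and method name here is illustrative rather than a real device API:

```python
class SecureSSDController:
    """Illustrative controller implementing the Fig. 8 secure-deletion flow."""

    def __init__(self, nand, ecc):
        self.nand, self.ecc = nand, ecc
        self.l2p = {}                          # logical -> physical address map

    def secure_delete(self, logical_addrs):
        for la in logical_addrs:
            pa = self.l2p[la]                  # address translation
            ecc_sd = self.ecc.make_uncorrectable_parity(pa)   # ECC_SD value
            # Partial program: only the spare (parity) area of the page is
            # overwritten; main-area data is programmed to the lowest (erase)
            # state, which is effectively a program inhibit.
            self.nand.partial_program_spare(pa, ecc_sd)
            raw = self.nand.read_page(pa)      # read back for verification
            if self.ecc.is_correctable(raw):   # the read must now fail
                raise RuntimeError(f"secure deletion failed for page {pa}")
        return "SECURE_DELETION_COMPLETE"      # completion message to the host
```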
The proposed secure deletion technique allows for real-time deletion and verification. Furthermore, since the proposed secure deletion technique does not involve program operations on data, it significantly reduces disturbances on valid data compared to existing on-chip secure deletion techniques. It is worth exploring the implementation of the ECC engine in an on-chip structure within NAND flash memory, as this concept holds promise for future research on secure deletion.
Figure 8: Total process for proposed secure deletion using ECC parity. The SSD includes a controller and a NAND flash memory. The controller performs ECC modulation using partial programming in response to the secure erase request, and transmits a secure erase complete message to the host when error correction is impossible.
## 5 Results and Analysis
The scrubbing scheme, partial overwriting scheme, down-bit programming scheme, deletion pulse application scheme, and proposed code modulation scheme can all delete original data at the wordline level. Since these schemes operate at the page level, real-time trim operations are possible. However, page-level schemes apply a predetermined voltage level to the wordline, which can adversely affect adjacent valid pages; this disadvantage manifests as program disturbance. In the following, we compare the performance of on-chip secure deletion schemes, as indicated in Table II.
Basically, on-chip deletion schemes rely on overwriting, which requires the generation of random data. In the scrubbing scheme [10, 20], the most significant bit, specifically the zero bit, is generated. The partial overwriting scheme [12] involves generating arbitrary data in a programmable state relative to the programmed data in multi-level cells. The down-bit programming scheme [21-23] requires data conversion from multi-level cells to single-level cells. In this case, separate management is necessary to ensure that the storage capacity remains unchanged. The deletion pulse application scheme [23, 25-27] applies multiple deletion pulses instead of programming standardized data. This scheme eliminates the need for data creation time compared to other on-chip secure deletion schemes. The code modulation secure deletion scheme may also generate arbitrary data in certain cases. It can be implemented by applying multiple pulses similar to the deletion pulse application scheme.
Next, the durability-related indices are compared as follows. The scrubbing scheme, because it programs cells to the highest state, significantly increases the wear rate of the cell; in terms of durability, it may even be more favorable to perform block-level erase operations than to use the scrubbing scheme frequently. The partial overwriting, down-bit programming, deletion pulse application, and code modulation secure deletion schemes do not have durability issues. However, since on-chip secure deletion schemes are primarily based on overwriting, they inevitably cause program disturbance. Program disturbance generally refers to the degree of data destruction in memory cells connected to wordlines adjacent to the wordline receiving the program pulse [37]. Therefore, if the number and duration of program pulses applied during secure deletion are relatively high, the program disturbance is higher as well. We have roughly categorized the degree of program disturbance into high, medium, low, and very small, reflecting the approximate number of program pulses that must be applied until the secure deletion operation completes. The scrubbing scheme, which writes data in the highest state, must proceed to a higher program level, resulting in relatively high program disturbance, and data management of adjacent valid pages is likely to be required. The partial overwriting scheme may similarly cause program disturbance, albeit to a lesser extent since it does not involve the highest-state data. The down-bit programming and deletion pulse application schemes cause relatively low program disturbance compared to the scrubbing and partial overwriting schemes. The proposed ECC modulation technique programs the user data to the lowest state, which is effectively a program inhibit, as it mostly passes through the program path in a single program pulse. The proposed technique requires only the ECC modulation program to pass, so the number of program pulses applied to the selected wordline until the program succeeds is lower than in other secure deletion techniques. Therefore, the program disturbance due to secure deletion is very small compared to that of the other techniques.
Finally, in the scrubbing, partial overwriting, down-bit programming, and deletion pulse application schemes, a separate operation is required to verify whether the original data has been completely deleted from the NAND flash memory device. However, in the proposed code-modulated secure deletion scheme, a read failure is produced during a read operation, eliminating the need for a separate verification operation, because the ECC values have been altered by the secure deletion technique. Even if the original data still exists in the user area, attempting a read operation based on the associated physical address will result in a read failure.
\begin{table}
\begin{tabular}{|p{91pt}|p{91pt}|p{91pt}|p{91pt}|p{91pt}|} \hline On-chip Secure Deletion Scheme & Data Generation & Endurance & Program Disturbance & Verification Reporting \\ \hline \hline Scrubbing [10, 20] & Zero Bit & Increase in cell wear & High & None \\ \hline Partial Overwriting [12] & Possible Random Bit & None & Medium & None \\ \hline Down-Bit Programming [21],[22],[23] & SLC data bit & None & Low & None \\ \hline Deletion Pulse [23],[25],[26],[27] & None & None & Low & None \\ \hline Code Modulated Secure Deletion (Proposed Scheme) & In some cases & None & Very Small & Read Failure \\ \hline \end{tabular}
\end{table} TABLE II: Performance Differences of On-chip Secure Deletion Schemes
The storage device can only output information stating that it is unable to read the corresponding page of the NAND flash memory. This serves as a real-time indication of the completion of secure deletion. Such a technique can be implemented cost-effectively and facilitates secure deletion without altering the basic operations of existing NAND flash memory.
## 6 Conclusion
We have examined the limitations of the trim operation in secure deletion. Existing off-chip secure deletion schemes are primarily based on an erase operation, which, for management reasons, prevents them from performing a real-time trim operation. On the contrary, on-chip secure deletion schemes allow a real-time trim operation because they delete data page by page, and they ensure a more complete trim operation than off-chip schemes. However, on-chip secure deletion schemes may cause program disturbance due to the application of pulses to wordlines at the page level. In contrast, the proposed on-chip secure deletion scheme employs ECC modulation through partial programming to simplify secure deletion while minimizing the program disturbance problem. Compared to other on-chip secure deletion schemes, the ECC modulation secure deletion scheme minimizes program disturbance and allows real-time verification that the original data has been deleted. The proposed scheme can be effectively applied to verify data in the invalidated area of the storage device. In the future, the code modulation secure deletion scheme, proposed as an anti-forensic technology for storage devices, is highly likely to be implemented.
|
2306.05951 | Prediction of Transportation Index for Urban Patterns in Small and
Medium-sized Indian Cities using Hybrid RidgeGAN Model | The rapid urbanization trend in most developing countries including India is
creating a plethora of civic concerns such as loss of green space, degradation
of environmental health, clean water availability, air pollution, traffic
congestion leading to delays in vehicular transportation, etc. Transportation
and network modeling through transportation indices have been widely used to
understand transportation problems in the recent past. This necessitates
predicting transportation indices to facilitate sustainable urban planning and
traffic management. Recent advancements in deep learning research, in
particular, Generative Adversarial Networks (GANs), and their modifications in
spatial data analysis such as CityGAN, Conditional GAN, and MetroGAN have
enabled urban planners to simulate hyper-realistic urban patterns. These
synthetic urban universes mimic global urban patterns and evaluating their
landscape structures through spatial pattern analysis can aid in comprehending
landscape dynamics, thereby enhancing sustainable urban planning. This research
addresses several challenges in predicting the urban transportation index for
small and medium-sized Indian cities. A hybrid framework based on Kernel Ridge
Regression (KRR) and CityGAN is introduced to predict transportation index
using spatial indicators of human settlement patterns. This paper establishes a
relationship between the transportation index and human settlement indicators
and models it using KRR for the selected 503 Indian cities. The proposed hybrid
pipeline, we call it RidgeGAN model, can evaluate the sustainability of urban
sprawl associated with infrastructure development and transportation systems in
sprawling cities. Experimental results show that the two-step pipeline approach
outperforms existing benchmarks based on spatial and statistical measures. | Rahisha Thottolil, Uttam Kumar, Tanujit Chakraborty | 2023-06-09T15:05:40Z | http://arxiv.org/abs/2306.05951v1 | Prediction of Transportation Index for Urban Patterns in Small and Medium-sized Indian Cities using Hybrid RidgeGAN Model
###### Abstract
The rapid urbanization trend in most developing countries including India is creating a plethora of civic concerns such as loss of green space, degradation of environmental health, clean water availability, air pollution, traffic congestion leading to delays in vehicular transportation, etc. Transportation and network modeling through transportation indices have been widely used to understand transportation problems in the recent past. This necessitates predicting transportation indices to facilitate sustainable urban planning and traffic management. Recent advancements in deep learning research, in particular, Generative Adversarial Networks (GANs), and their modifications in spatial data analysis such as CityGAN, Conditional GAN, and MetroGAN have enabled urban planners to simulate hyper-realistic urban patterns. These synthetic urban universes mimic global urban patterns and evaluating their landscape structures through spatial pattern analysis can aid in comprehending landscape dynamics, thereby enhancing sustainable urban planning. This research addresses several challenges in predicting the urban transportation index for small and medium-sized Indian cities. A hybrid framework based on Kernel Ridge Regression (KRR) and CityGAN is introduced to predict transportation index using spatial indicators of human settlement patterns. This paper establishes a relationship between the transportation index and human settlement indicators and models it using KRR for the selected 503 Indian cities. This nonlinear KRR model helps in deriving the transportation network for GAN-generated human settlements through the settlement indicators. Our approach leverages human settlement indices, which capture information about demographics and urban land use, to predict the transportation index. The proposed hybrid pipeline, which we call the RidgeGAN model, can evaluate the sustainability of urban sprawl associated with infrastructure development and transportation systems in sprawling cities. Experimental results show that the two-step pipeline approach outperforms existing benchmarks based on spatial and statistical measures. By predicting future urban patterns, this study can help in the creation of more livable and sustainable cities, particularly by improving transportation infrastructure in small and medium-sized Indian cities.
## Introduction
Mapping urban land use dynamics has been valuable research in urban studies over several decades. The advancement in remote sensing technology makes it possible to track the spatiotemporal changes in urban landscape structures with relatively high accuracy and on a required scale[1]. The spatial distribution of land use activities (residential, commercial, industrial, etc.) including the transportation system is important for understanding the current urban centers and for planning future city development[2]. Land use and land cover maps become a starting point for modeling urban patterns and they can infer the future urban growth and the direction of land expansion of cities to inform urban planners and government policymakers towards sustainable urban planning[3, 4, 5]. While urban areas continue to experience rapid growth, they pose new challenges to the nation, especially for developing and underdeveloped countries[6]. Hence, urban growth prediction models and related studies have become a hot topic that has been extensively and deeply investigated. The existing urban growth models[7, 8, 9, 10] include the driving factors which affect urban expansion. These factors influencing urban expansion include population growth, economic development, urbanization, transportation infrastructure, topography, and land use regulations[11]. However, in developing and underdeveloped nations, where urban expansion is more likely to occur, data on driving forces are hard to obtain and are often expensive to collect.
According to the UN report [12], India is the most populous country in the world. From 1901 to 2011, the country's urban population expanded by around 14 times [13]. Although largely unequal, this increase is not confined to a single region of the nation. The skyrocketing living costs in metropolitan areas and increasing house rents discourage enterprises from investing in major cities. Therefore, it is essential to assess the settlement patterns and infrastructure facilities of small and medium towns as an alternative to larger metropolitan cities [14]. These towns, sometimes called the "next billion" markets, will be important in propelling the expansion of the national economy in India [15]. Beyond the intricacy of their urban settlement, these regions also require more basic infrastructure and facilities.
Various reports in recent years have estimated a massive demand for funding urban infrastructure in developing countries. The World Bank estimates that nearly 70,000 billion INR of investment in urban India will be required to meet growing population demands over the next 15 years, until 2036 (in 2020 prices) [16]. For example, the Indian Government introduced a scheme called the Integrated Development of Small and Medium Towns (IDSMT), which aims to encourage the planned and sustainable growth of the nation's small and medium-sized towns [15]. The Ministry of Urban Development, Government of India introduced this scheme in 2005, and it offers financial and technical assistance to local governments to help them develop their towns' infrastructure and fundamental services. The scheme focuses on the expansion of small and medium-sized towns (with populations up to 500K), which may act as growth hubs for nearby rural regions, in order to promote inclusive growth and balanced regional development. By funding the construction of fundamental utilities including water supply, sanitation, solid waste management, and urban transportation, the program seeks to address the infrastructure deficit and service shortages in these communities. The overall goal of the IDSMT program is to support sustainable urban growth and raise the standard of living in India's small and medium-sized towns. As a result, it is important to examine small and medium towns (Tier 3 and above cities with populations up to 500K) in India. Adopting cutting-edge technologies can significantly enhance the Government's effectiveness in improving planning and decision-making, problem-solving, and accelerating development and deployment [17].
To mitigate this urban planning challenge, recent developments in machine learning and deep learning have become handy tools for urban planners and geoscience practitioners. Deep learning models such as Generative Adversarial Networks (GAN) can approximate complex, high-dimensional probability distributions [18]. GANs have achieved numerous state-of-the-art breakthroughs in the fields of computer vision [19], natural language processing [20], and more recently in urban science and the geospatial domain [21, 22]. Several GAN-based models have been proposed to simulate hyper-realistic urban land use maps and generate a synthetic urban universe without considering the driving factors, see for example CityGAN [23] and MetroGAN [24]. Among these, CityGAN [23] simulates urban patterns using a global urban land-use inventory and builds an "urban universe" to reproduce the complex spatial patterns observed in global cities. An extension to CityGAN that incorporates geographical knowledge, called Metropolitan GAN (MetroGAN) [24], learns hierarchical features for urban morphology simulation. Another deep learning method, namely U-Net [25], has also been applied to generate future urban cities using water bodies, digital elevation models, and nighttime lights as inputs [22]. The application of previous GAN-based urban models was limited to the generation of urban patterns. There are limited works on quantifying the urban pattern of GAN-generated images and predicting the transportation metric (representing urban infrastructure) for these new urban regions. Thus, quantification and modeling of landscape patterns of GAN-simulated cities remain an unattempted problem. The structure of a landscape emerges from the characteristics of the individual elements of an ecosystem and their spatial configuration [26, 27, 28, 29]. Human Settlement Indices (HSI), e.g., Class Area (CA), Number of Patches (NP), Largest Patch Index (LPI), Clumpiness Index (CLUMPY), Aggregation Index (AI), and Normalized Landscape Shape Index (NLSI) [30, 31, 32], provide concrete information about landscape structures and therefore contribute to the prediction of the Transportation Index (TI). This paper makes an attempt to answer the following challenging questions:
1. How to generate an urban universe for India based on spatial patterns via learning urban morphology?
2. Is there a relationship between HSI and TI in small and medium cities in India?
3. How to predict (forecast) TI for synthetic urban cities generated by CityGAN for developing countries like India?
To generate small and medium-sized Indian cities with CityGAN, we collected the World Settlement Footprints (WSF 2019) maps which are publicly available and the best representations of urban patterns as input features [33, 34]. Then, we build a city image database of 503 small and medium Indian cities whose populations range between 20K and 500K. Each city image represents \(10.5\times 10.5\) km covering the urban center and surrounding regions. We also explored existing spatial and statistical measures to evaluate the performance of CityGAN in Indian cities due to the complex nature of individual cities with varying structural and hierarchical properties [23]. Assessing the spatial relationship between urban patterns (human settlement) and the transportation index of actual cities can help to build a model for predicting the transportation index for generated cities. We used different linear and nonlinear measures of statistical correlations to establish this spatial relationship. Furthermore, we propose a hybrid model (namely, RidgeGAN) to predict the transportation index for simulated urban patterns. Fig. 2 depicts the methodological framework. In RidgeGAN, a supervised learning model (KRR) builds a relationship between the human settlement patterns and the characteristics of the urban road transportation system and implements this to predict TI
for the GAN-simulated urban universe for India. Our proposal has numerous applications, ranging from understanding urban land patterns to predicting relevant urban infrastructure facilities to guiding policymakers toward a better and more inclusive planning process.
## Background and related work
### Applications of GANs in geospatial field
Deep learning has reached a significant milestone in geospatial research, computer vision, and other cutting-edge technologies [35]. GAN [18], an essential subfield of unsupervised deep learning, has opened a new vista for geoscience research in recent years. GANs are utilized to generate data that is close to a given training set, which can consist of images, text, or tabular data [36]. Geoscientists and urban planners have adopted this deep learning methodology for handling geophysical and remote sensing data. In remote sensing, MARTA GANs were proposed for producing synthetic satellite images of urban environments [37]. This architecture consists of a discriminator network that receives both real and synthetic images as input and predicts whether each image is genuine or synthetic, as well as a generator network that uses random noise as input to create synthetic images. Further, Spatial Generative Adversarial Networks (SpaGANs) [38] were introduced for synthesizing textures by incorporating spatial information (such as the position and orientation of the texture) into the generator and discriminator networks. A comprehensive review of GANs demonstrates promising performance in the built environment, from processing large-scale urban mobility data and remote sensing images at the regional level to performance analysis and design generation at the building level [22].
In addition, GANs have also been applied to modeling global urban patterns. For example, CityGANs [23], conditional GANs [39], and MetroGANs [24] were built to generate urban land patterns by training the generator networks to produce synthetic images of urban areas that closely resemble real ones, which is useful for analyzing urban human settlement data from space-based sensors. For more accurate estimates of urbanization parameters or spatial indices in locations where local data is unavailable or impossible to collect, these models are very effective in simulating urban land use patterns. GANs are used to model hyper-realistic settlement patterns since they do not make any assumptions about the data distribution and can generate realistic samples from the latent space in a straightforward manner. This unique property lends GANs to a variety of geospatial applications, including image synthesis, image attribute editing, image translation, domain adaptation, and other computing fields [40].
### Transportation and urban landscape structures
The spatial structure of a city is extremely complex and is constantly evolving. Therefore, there are significant attempts to analyze cities, and thus to link urban policy to shape cities. Delineating homogeneous/heterogeneous human settlements, quantifying them, and analyzing their diversity and spatial organization are necessary to assess their structures and spatial patterns. Due to this, urban researchers utilized landscape metrics to quantify the qualities of the landscape related to shape, pattern, and area by measuring the structure and spatial distribution of settlements. Landscape metrics were originally introduced in ecological studies that reflected social, cultural, and ecological richness and heterogeneity [41]. Also, progressive and well-functioning urban planning departments can use spatial indicators to regularly monitor urban development and, when necessary, propose regulatory or public investment action [42]. These indicators can also evaluate the geometrical characteristics of ecological processes and landscape elements, as well as their relative locations and distribution [43]. The effect of landscape metrics on spatial patterns was studied to quantify landscape structures and these metrics can statistically determine the outcome [31]. Spatial patterns of urban growth and landscape metrics were studied for various cities in India [30, 44]. The integration of efficient urban structures and comprehensive transportation metrics plays a vital role in fostering sustainable development and improving the overall livability of cities.
Understanding the interaction between transportation infrastructure and urban patterns is critical for delivering smooth urban services. The investigation into mathematical models for studying the relationship between transportation metrics and urban land use began in the early 1960s, and technological advancements have brought us to an era of integrated land use transportation modeling [45]. Several road network models have been developed to solve transportation problems. Most of the existing transportation prediction models now in use are based on simple linear regression. An inverse relationship between urban growth and transportation was found for Middle East regions [46]. Their analysis suggested that urban population growth has increased urban trips and travel demand on transportation infrastructure.
In a recent study, the link between population and the characteristics of the road network in the Lebanese Republic was investigated using a multivariate regression model to estimate the population count based on various data sources and statistical modeling techniques [47]. However, linear regression models have the drawback of excluding all variables that are not linearly related, and multicollinear variables adversely affect the model. Moreover, the relationship between transportation features and their influencing factors is not always linear in nature. Predicting the transportation index is an important step towards minimizing traffic congestion and providing critical information to individual travelers as well as to Government sectors for planning the city in a sustainable way.
A support vector regression (SVR) approach was used to predict traffic flow on California highways using different types of kernels [48]. A prediction model for road network density (one of the transportation indices) was proposed using highway capacity and turning probabilities, with manual methods used to determine the shortest cycle time in metropolitan areas [49]. The model could aid in vehicle distribution and congestion relief in urban areas. Further, the concept of graph theory was used to analyze the topology of road networks in an Indian city to better understand the connectivity and coverage of the existing road transportation system [50]. Their findings indicate that there is a strong relationship between road connectivity and coverage and that improving the road network is essential for a reliable and safe road transportation system. To accurately estimate the connectivity index, the paper [50] proposed a model based on the relationship between the Eta index and Network Density (ND), Edge Graph Density (EGD), and Nodal Graph Density (NGD).
## Results
The scholarly literature on urban challenges primarily focuses on megacities and large urban centers, but there are a great number of small and medium-sized cities in developing countries that should be prioritized. There is a pressing need to address the challenges of developing transportation networks and settlement patterns in these cities. Previous urban studies literature does not adequately address the transportation, unplanned growth, and socioeconomic and environmental challenges of small and medium-sized towns [51]. In this study, we utilize WSF data across India as training data and demonstrate a simple, unconstrained GAN model that generates realistic settlement patterns encompassing the diversity of urban forms. We are primarily interested in how small and medium-sized cities are simulated using unsupervised CityGANs. Subsequently, an effective data-driven hybrid model is developed to predict road network density for a given urban settlement pattern.
### Study area
Our study focuses on small and medium-sized Indian cities (South Asia), in one of the world's fastest-urbanizing regions. The selection of the study area involved identifying the geographical locations and corresponding demographic data, for which population data from the World Cities database ([https://simplemaps.com/data/world-cities](https://simplemaps.com/data/world-cities)) were used. In total, 503 out of roughly 1600 cities were selected for our study, with population sizes ranging between 20K and 500K. The geographical locations of the study areas are marked on the map of India in Fig. 1(b), along with visualizations of human settlement patterns and corresponding transportation networks (also see Fig. 1(a) and 1(c)).
Figure 1: (b) Geographical distribution of 503 small and medium-sized Indian cities included in the study (red colored square grids indicate the selected cities); (a) and (c) are examples of human settlement and transportation maps of two random cities, namely Calicut from the state of Kerala and Panihati from the state of West Bengal.
### Data collection and preprocessing
Settlement footprints and transportation network datasets of all the selected cities were collated from various sources for 2019. Settlement maps were procured from the global built-up inventory called WSF, published by the German Aerospace Center (DLR). These are binary maps (urban-area pixels have a value of 1 and non-urban pixels a value of 0) derived from multi-temporal space-borne satellites, namely Sentinel-1 and Sentinel-2, which aided in estimating the human settlement pattern of an urban area (\(10.5\times 10.5\) km) at a resolution of around 10 m/px. Open-source GIS software (QGIS) was used to pre-process (rectify, project, and crop) the images and build a city database. To measure the Human Settlement Indices (HSI), existing landscape metrics were selected and computed using Fragstats software [52], which provides packages for computing popular landscape metrics through spatial pattern analysis. We use six settlement indices: total class area (CA), number of patches (NP), largest patch index (LPI), clumpiness index (CLUMPY), aggregation index (AI), and normalized landscape shape index (NLSI) to estimate the characteristics of human settlement [30, 31].
CA is a useful metric to depict the spatial extent of the settlements. It is a composition metric that specifies the extent of the landscape made up of a specific class type (e.g., built-up area). The total class area is the sum of the areas (\(m^{2}\)) of all the patches of the relevant patch type divided by 10,000 (converted to hectares), with CA \(>\) 0 and no upper bound. NP in each landscape indicates the degree of fragmentation, counting the number of human settlements or urban patches. The higher the value of NP, the greater the fragmentation; NP has no upper limit. At the class level, LPI estimates the percentage of the total landscape area occupied by the largest patch, as an indicator of dominance. LPI is calculated by dividing the area (\(m^{2}\)) of the largest patch of the relevant patch type by the entire landscape area (\(m^{2}\)) and multiplying the result by 100 (converted to a percentage); i.e., LPI is the percentage of the landscape comprised by the largest patch. LPI values (0 \(<\) LPI \(\leq\) 100) decrease from the city center to the outskirts. Another human settlement metric, CLUMPY, captures aggregation and disaggregation of adjacent settlements. It shows the frequency with which different pairs of patch types appear side by side. Its value ranges from -1 to 1: -1 indicates a maximally disaggregated patch type, 0 a randomly distributed patch type, and 1 a maximally aggregated patch type [52]. The metric AI is calculated using an adjacency matrix, which indicates how frequently distinct pairs of patch types (including adjacencies between the same patch type) appear side by side on the settlement map. Its values range from 0 to 100: low AI values indicate maximum disaggregation, while high AI values indicate a maximally aggregated, single compact patch. Finally, NLSI provides a measure of class aggregation whose values range from 0 to 1, where 0 means the landscape consists of a single square or maximally compact (i.e., almost square) patch. NLSI increases as the patch type becomes increasingly disaggregated and reaches 1 when the patch type is maximally disaggregated [30].
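As an illustration of how the area-based indices follow from these definitions, the sketch below computes CA, NP, and LPI from a binary settlement raster using SciPy; the 10 m pixel size, 8-connectivity, and the assumption of at least one settlement patch are choices made for demonstration only:

```python
import numpy as np
from scipy import ndimage

def settlement_indices(binary_map, pixel_size_m=10.0):
    """CA (hectares), NP, and LPI (%) for a 0/1 settlement raster."""
    structure = np.ones((3, 3))                       # 8-connected patches
    labels, n_patches = ndimage.label(binary_map, structure=structure)
    pixel_area = pixel_size_m ** 2                    # m^2 per pixel
    # Pixel counts per patch; assumes at least one settlement patch exists.
    patch_areas = ndimage.sum(binary_map, labels, range(1, n_patches + 1))
    ca = binary_map.sum() * pixel_area / 10_000       # class area, hectares
    landscape_area = binary_map.size * pixel_area     # whole scene, m^2
    lpi = 100.0 * patch_areas.max() * pixel_area / landscape_area
    return {"CA": ca, "NP": n_patches, "LPI": lpi}
```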
To compute the Transportation Index (TI), a layer of the road network that has been topologically cleaned and converted into polylines is a prerequisite. The application software used here is integrated with QGIS 3.30 for this purpose. Individual cities and their corresponding road networks were extracted for assessing the spatial patterns of the road systems corresponding to the respective cities. As transportation measures need to be calculated in a metric system, a projected coordinate system was used. Instead of common WGS84 - EPSG:4326, which uses degrees as a unit for distance, the Coordinate Reference System WGS84/UTM-EPSG was used here, to measure road length in meters. All categories of roads such as National highways, State highways, major roads, street roads, residential paths, footways, and service roads were included in this study. To measure the development of the urban road network, the network density of the respective cities was computed as follows:
\[\text{Network Density }(ND)=\frac{L}{A}=\frac{\text{Total length of the road network}}{\text{City area}};\]
where \(L\), the total length of the road network, is determined from road maps and calculated using the field calculator in the open-source QGIS software. Network length specifies the total span of the road network, and network density normalizes this length by the area occupied by the road network (the city area), denoted as \(A\). Fig. 2 shows the overall workflow of the proposed hybrid framework used in this study to predict the transportation index for any kind of urban pattern in Indian cities.
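A minimal sketch of this network-density computation (assuming road polylines already reprojected to a metre-based CRS and a hypothetical file name, with geopandas standing in for the QGIS workflow) could read:

```python
import geopandas as gpd

roads = gpd.read_file("city_roads.gpkg")          # hypothetical file name
total_length_m = roads.geometry.length.sum()      # L: total road length (m)
city_area_m2 = 10_500.0 ** 2                      # A: the 10.5 km x 10.5 km scene
network_density = total_length_m / city_area_m2   # ND = L / A
```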
### Validation metrics
In the subsequent section, we estimate the Average Radial Profile (ARP) for real and generated urban patterns to assess the accuracy of CityGAN quantitatively. An example of computing the ARP of a city is illustrated in Fig. 7, and using a peak search algorithm, we determined the polycentricity of actual and simulated scenes from the radial profiles. The ARP (\(h(d)\) or \(x(d)\)) represents how much the human settlement area changes as we move out from the city center. As indicated in Fig. 7(b), we draw rings at a distance \(d\) from the center with a width \(\Delta d\). By averaging the entire settlement area inside rings of width \(\Delta d\) at a distance \(d\) from the city center, we can compute the ARP \(h(d)\). The region inside the ring of radius \(d\) is denoted \(R(d)\), and each pixel inside the ring has some built-up area, that is, an amount of urbanized area denoted \(h(u,v)\), where \((u,v)\) is a point inside the ring, \((u,v)\in R(d)\), with
\[R(d)\equiv\left\{(u,v)\mid(u-u_{0})^{2}+(v-v_{0})^{2}>(d-\Delta d)^{2}\;\;\text{and}\;\;(u-u_{0})^{2}+(v-v_{0})^{2}\leq d^{2}\right\}; \tag{1}\]
\[h(d)=\frac{1}{|R(d)|}\sum_{(u,v)\in R(d)}h(u,v); \tag{2}\]
where \(R(d)\) can be defined as the collection of all the two-dimensional points included within the ring, \(|R(d)|\) indicates the size of set \(R(d)\), and \(h(d)\) is the average radial profile of a city.
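A minimal NumPy sketch of Eqs. (1)-(2) follows. It assumes the city center is the image center and that the ring width \(\Delta d\) equals one pixel step; both are simplifying assumptions.

```python
import numpy as np

def average_radial_profile(h, center=None, dr=1):
    """ARP h(d): mean built-up density inside rings of width dr (pixels)."""
    W = h.shape[0]
    u0, v0 = center if center is not None else (W // 2, W // 2)
    u, v = np.indices(h.shape)
    dist = np.sqrt((u - u0) ** 2 + (v - v0) ** 2)
    radii = np.arange(dr, W // 2, dr)
    profile = []
    for d in radii:
        ring = (dist > d - dr) & (dist <= d)     # the set R(d) of Eq. (1)
        profile.append(h[ring].mean())           # the average h(d) of Eq. (2)
    return radii, np.array(profile)
```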
There are several statistical measures used to assess supervised regression models, e.g., mean squared error (MSE), mean absolute error (MAE), R-squared (\(R^{2}\)), and adjusted R-squared (Adj \(R^{2}\)). MSE is the average squared difference between the predicted and actual values; it is a widely used metric that measures the quality of a regression model. MAE is the average absolute difference between the predicted and actual values; it is a robust metric that is less sensitive to outliers than MSE. R-squared measures the proportion of variance in the target variable that is explained by the model. It ranges from 0 to 1, with higher values indicating that the model explains more of the variance. Adjusted R-squared is a modified version of R-squared that takes into account the number of predictors in the model. It penalizes the model for adding unnecessary variables and is a better measure of a model's goodness of fit [53].
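For concreteness, the four scores can be computed as sketched below with scikit-learn; the adjusted R-squared uses the standard correction for \(n\) samples and \(p\) predictors.

```python
from sklearn.metrics import mean_squared_error, mean_absolute_error, r2_score

def regression_scores(y_true, y_pred, n_features):
    n = len(y_true)
    r2 = r2_score(y_true, y_pred)
    adj_r2 = 1 - (1 - r2) * (n - 1) / (n - n_features - 1)  # penalize extra predictors
    return {"MSE": mean_squared_error(y_true, y_pred),
            "MAE": mean_absolute_error(y_true, y_pred),
            "R2": r2, "Adj R2": adj_r2}
```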
Figure 2: Prediction framework of the proposed hybrid RidgeGAN model: (a) Implementing an unsupervised learning model (CityGAN) to generate small and medium-sized Indian cities; (b) Landscape structures of generated cities are measured in terms of human settlement indices (HSI) using spatial landscape metrics; (c) Characteristics of the road network and landscape structures of real cities are measured in terms of HSI and transportation index (TI); (d) Assessing the relations between the settlement patterns and transportation system and building a supervised learning model to predict the transportation index for GAN-generated urban universe.
### Generating human settlement area from WSF 2019
A dataset of real human settlement images was collected and pre-processed before training the GAN. We utilized square settlement maps of cities from the WSF 2019 database. We clipped them to a 10.5 km \(\times\) 10.5 km spatial extent and resized each image to 256 \(\times\) 256 (43 m/px) for optimization purposes and to avoid overfitting. The final input dataset contains 503 binary images and can be written as \(H=\{h_{1},\ldots,h_{n}\}\), with \(h_{i}\in\mathbb{R}^{W\times W}\), \(W=256\), and \(n=503\). Each \(h_{i}\) is an urban binary map (1 and 0 represent urban and non-urban areas, respectively). The generator network is trained to produce synthetic urban settlement images by transforming random noise into an urban image whose distribution matches that of the real urban images. The discriminator network is trained to distinguish between real and synthetic binary images. Here, the generator \(G\) takes a random noise vector \(z\) as input, which it deterministically transforms (e.g., by passing it through successive deconvolutional layers if \(G\) is a deep CNN) to generate a fake human settlement image \(H_{fake}=G(z)\). The discriminator \(D\) accepts an input image \(H\) (which can be an actual human settlement image \(H_{real}\) from the empirical dataset or an image \(H_{fake}\) synthesized by the generator) and outputs the probability that \(H\) is sampled from the real distribution rather than produced by the generator. Having trained a generator \(G\) (refer to Fig. 2 (a)), we generated 500 synthetic Indian urban settlement patterns (binary images) using the CityGAN model. Fig. 3 illustrates randomly selected real cities (Fig. 3(a)) and simulated urban patterns (Fig. 3(b)). On visual inspection, the simulations are practically indistinguishable from the actual urban patterns, with realistic densities and complexity of settlement patterns. Both input images and generated images exhibit a realistic concentration at the center and a realistic distribution of settlement in the surrounding regions. The quantitative metrics discussed earlier are used to evaluate the performance of the Indian CityGAN model [42].
Among various spatial statistical measures, the ARP [23] is used to compare the real and simulated urban patterns. We utilize Eq. 2 to quantify the polycentric nature of real and generated images via the peak search algorithm illustrated in Fig. 7. The peak search algorithm finds points in a univariate profile whose value (peak height) is at least a fraction \(h\) of the maximum height and that lie at a distance of at least \(d\) from the previously identified peak. We set \(h=80\%\) and \(d=430\) m via cross-validation. Graphical representations of the peak search outcomes are illustrated in Figs. 4 (a) and (b).
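The peak search can be sketched with scipy.signal.find_peaks as below; converting \(d=430\) m into a sample distance assumes one profile sample per 43 m pixel, and the thresholds mirror the values quoted above.

```python
from scipy.signal import find_peaks

def count_peaks(profile, px_size_m=43.0, h_frac=0.80, d_min_m=430.0):
    """Number of peaks at least h_frac of the maximum height and at least
    d_min_m apart, for a radial profile sampled once per pixel."""
    peaks, _ = find_peaks(profile,
                          height=h_frac * profile.max(),
                          distance=max(1, int(d_min_m / px_size_m)))
    return len(peaks)
```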
The distributions of the number of peaks for real and fake cities are compared (refer to Fig. 5(a)), and the two distributions are found to be similar. Further, we cluster the radial profiles of real cities using the K-means algorithm [54, 55] and compare them to the typical profiles of generated cities. Fig. 5 presents a summary of our findings from the CityGAN model. Our analysis suggests that \(K^{*}=10\) (refer to Fig. 4(c)) gives the optimal number of clusters for both actual and synthetic scenes, using a straightforward fraction-of-the-sum-of-squares argument [55]. In Fig. 5(c), the distribution of scenes by class is given, and the distributions have a similar
Figure 3: Comparison of real urban built land use maps (a) and synthetic maps (b) generated by a CityGAN. The pixel values in each case are in the range [0, 1], where 1 represents the portion of land occupied by buildings. Names of the cities are reported in (a) using yellow color text.
shape and are broadly comparable. However, in Fig. 5(b) we find larger disparities for classes 1 and 4 (monocentric) and classes 6, 8, and 11 (sprawled patterns). These discrepancies may result from a sampling technique that favored the abundance of monocentric urban patterns, while the simulation was produced regardless of the location of the urban center. The experimental results show that, using the WSF dataset, CityGAN generates precise urban patterns for Indian cities.
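The clustering step can be sketched as follows with scikit-learn; the 0.9 explained-variance threshold standing in for the fraction-of-the-sum-of-squares rule is an illustrative assumption.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_profiles(profiles, k_max=20, explained_target=0.9):
    """Cluster radial profiles, choosing K by the fraction of the total
    sum of squares explained (K* = 10 in the text)."""
    X = np.asarray(profiles)
    total_ss = ((X - X.mean(axis=0)) ** 2).sum()
    for k in range(2, k_max + 1):
        km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
        if 1 - km.inertia_ / total_ss > explained_target:
            return k, km.labels_
    return k_max, km.labels_
```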
### Relationship between human settlement and transportation network of real cities
HSIs and TI are computed for the selected cities (the workflow is illustrated in Figs. 2 (b) and (c)). The outcomes of the analysis of human settlements using the selected spatial indices are displayed in Table 1. Table 2 provides examples of the calculation of the transportation index (network density), and Table 3 shows descriptive statistics of network density. The results show that the spatial distribution of network density varies among cities. Once the metrics are derived, correlation coefficients (CC) are calculated to determine the relationship between the human settlement indices (CA, NP, CLUMPY, LPI, AI, and NLSI) and the transportation indices (RL, ND). Heat maps of the correlations between transportation indices and human settlement indices are illustrated in Fig. 6.
Fig. 6(a) shows the matrix of Pearson's correlation coefficient (PCC) values as a measure of linear relationship (ranging between \(-1\) and \(1\)). Its value reflects the strength of the link between metrics: positive values indicate that the variables move together, and negative values indicate an inverse relationship. Here, the correlation strength is also displayed by the color intensity; the correlation increases as the color bar rises, and a light yellow color denotes a lower correlation. As shown in Fig. 6(a), the PCC values of the coverage measures RL and ND are highly related to each other; we therefore choose ND as the response variable for TI. Settlement metrics such as CLUMPY and NLSI have the weakest correlation with the transportation metrics (light blue color), while CA demonstrates the strongest correlation with the transportation variables. The correlation coefficients of RL and ND with CA are \(0.7\) and \(0.80\), respectively. According to the PCC, the highest correlations exist between CA and RL and between CA and ND; hence, CA is an indispensable variable in our regression model. Nonlinear relationships between HSIs and TIs are explored using the Chatterjee correlation coefficient (CCC) and reported in Fig. 6(b). In the case of the CCC, the values reflect that most variables have positive correlations, and CA has the highest rank, followed by NP and LPI.
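The PCC matrix behind Fig. 6(a) can be reproduced with pandas, as sketched below; the DataFrame here is a random stand-in for the 503-city table of HSI and TI values.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
cols = ["CA", "NP", "LPI", "CLUMPY", "AI", "NLSI", "RL", "ND"]
# Toy stand-in: in practice each row holds the metrics computed for one real city.
df = pd.DataFrame(rng.random((503, len(cols))), columns=cols)

pcc = df.corr(method="pearson")                 # the matrix shown in Fig. 6(a)
print(pcc["ND"].sort_values(ascending=False))   # HSIs ranked by correlation with ND
```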
### Prediction of transportation index
A supervised machine learning approach (kernel ridge regression) was implemented to predict road network density for given settlement patterns (refer to Fig. 2 (d)). To train and evaluate the prediction model, the raw dataset (the database of 503 real cities) was divided into two parts: training (80%) and testing (20%). We compare the performance of various other supervised regression models, namely support vector regression (SVR), decision tree (DT), gradient boosting (GB), multilayer perceptron regression (MLP), XGBoost (XGB), linear regression (LR), random forest (RF), and simple ridge regression (RR), to select the best-fit model. To validate the models, four statistical scores were used: mean squared error (MSE), mean absolute error (MAE), R-squared, and adjusted R-squared. The results of our model and the comparison with the other models are summarized in Table 4. The experimental results show that KRR outperforms the eight other state-of-the-art regression models, achieving the highest \(R^{2}\) and adjusted \(R^{2}\) and the lowest error metrics (MSE and MAE). Because of its ability to handle nonlinearity and
Figure 5: (a) Comparison of the distribution of the number of peaks of real and generated cities; (b) Comparison of the distribution of average radial profile classes of real and generated cities; (c) The typical radial profiles for real and generated Indian cities (similar profile)
| Transportation metric | Min | Max | Mean | Std. deviation |
| --- | --- | --- | --- | --- |
| RL | 15.7 | 2791.51 | 434.514 | 392.389 |

Table 3: Descriptive statistics of transportation metrics (namely road length)
multicollinearity difficulties within datasets, the KRR regression model performs best. The validation metrics indicate that our model predicts network density well for urban patterns. This implies that the supervised KRR model can be applied to predict the TI for cities newly generated by CityGAN.
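A compact sketch of this train/test protocol with scikit-learn's KernelRidge is given below. The random features and the hyperparameter grid are illustrative assumptions, not the tuned values behind Table 4.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import GridSearchCV, train_test_split

rng = np.random.default_rng(0)
X, y = rng.random((503, 6)), rng.random(503)      # toy stand-ins for HSIs and ND
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

grid = GridSearchCV(KernelRidge(kernel="rbf"),
                    {"alpha": [0.01, 0.1, 1.0], "gamma": [0.1, 1.0, 10.0]},
                    scoring="r2", cv=5).fit(X_tr, y_tr)
print(grid.best_params_, grid.score(X_te, y_te))  # R^2 on the held-out 20%
```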
## Discussion
India is now the most populous country in the world [12], with more than 30% of the population residing in urban areas. The Government of India recently introduced a scheme called IDSMT [15] to improve urban planning and road networks for the development of small and medium-sized cities (population size up to 500k). The Ministry of Urban Development offers financial and technical assistance to local bodies in developing their region's infrastructure and basic services, to promote inclusive growth and balanced regional development. One of the objectives of this scheme is to address the transportation infrastructure deficits and service shortfalls of the eligible cities. However, the Central Government finds it difficult to decide how to allocate funds for transportation infrastructure development.
This study proposes a hybrid model to predict road network density for small and medium-sized settlement patterns in India. We used the publicly available WSF dataset and the CityGAN model to build an unsupervised model that simulates realistic Indian urban patterns. The average radial profile was used to compare real against simulated cities to validate the performance of CityGAN. We also used the K-means technique to cluster the radial profiles, with the optimal number of clusters found to be 10 for both actual and synthetic scenes via a straightforward fraction-of-the-sum-of-squares argument. The landscape structures of the generated cities were measured in terms of human settlement indices using spatial landscape metrics. Then, various supervised machine learning models were implemented for predicting transportation indices from human settlement indices based on the real city dataset. All the regression models were compared based on error measurement metrics, and the Kernel Ridge Regression (KRR) model outperformed the benchmark regression methods. The transportation index
| Accuracy metric | SVR | DT | GB | MLP | XGB | LR | RF | RR | KRR |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| MSE | 6.439 | 5.792 | 5.265 | 4.505 | 4.448 | 4.343 | 4.133 | 3.998 | **3.661** |
| MAE | 1.832 | 1.716 | 1.688 | 1.582 | 1.535 | 1.576 | 1.475 | 1.511 | **1.397** |
| \(R^{2}\) Score | 0.490 | 0.541 | 0.583 | 0.643 | 0.648 | 0.656 | 0.673 | 0.683 | **0.710** |
| Adj \(R^{2}\) Score | 0.463 | 0.517 | 0.561 | 0.624 | 0.629 | 0.638 | 0.655 | 0.667 | **0.695** |

Table 4: Performance metrics for the test set of the real dataset. The best model’s values are highlighted in **bold**.
Figure 6: Heat map of the correlation between transportation indices and human settlement indices, the (a) PCC, and (b) CCC based on the input data.
estimated from the KRR model is compared with the actual test data (from real imagery). KRR has a comparatively low MAE of 1.39 and a high \(R^{2}\) value of 0.71. Thus, the proposed hybrid model can be used to predict the transportation index, in terms of road network density, for CityGAN-generated towns and cities. Our proposal can be treated as a versatile decision-support system for the sustainable planning and development of new small and medium-sized towns and cities in terms of transportation infrastructure. The proposed method establishes a relationship between coverage measures of the transportation system and the HSIs, but the current work does not consider any connectivity measure. Relating connectivity measures to HSIs is a worthwhile direction for future work.
## Methods
In this section, the components of the proposed hybrid framework are described. First, we discuss the correlation measures used in this study. We then present the two models of our two-step pipeline: the popularly used GAN model (CityGAN, in this case) and the KRR model, a nonlinear shrinkage method for regression modeling. Finally, we go over the suggested RidgeGAN method.
### Correlation analysis
Correlation coefficients (CC) are popular statistical measures used to determine the strength of a relationship between two or more variables (which can be numerical or categorical). Pearson's correlation coefficient (PCC) is the most commonly used classical measure of linear association, and its ease of use is advantageous [56]. However, its efficiency may be limited when dealing with non-normal, noisy, closed, or open data (even after applying log ratios to the data). The Chatterjee Correlation Coefficient (CCC) [57], a recently developed method based on the cross-correlation between ranked increments, is a reliable alternative to traditional correlation methods. The CCC can deal with data that contain outliers or have non-normal distributions, and it makes no assumptions about the data distribution [56]. A Python implementation of the CCC is available in the "TripleCpy" package. We can define the CCC mathematically as follows. Given a pair of random variables \((X,Y)\), suppose the realizations \(X_{i}\) and \(Y_{i}\) have no ties. The pairs are rearranged as \(\left(X_{(1)},Y_{(1)}\right),\ldots,\left(X_{(n)},Y_{(n)}\right)\) such that \(X_{(1)}\leq\cdots\leq X_{(n)}\). Let \(r_{i}\) be the rank of \(Y_{(i)}\); then the CCC is defined by the formula:
\[\xi_{n}(X,Y):=1-\frac{3\sum_{i=1}^{n-1}|r_{i+1}-r_{i}|}{n^{2}-1},\]
where \(n\) is the number of observations. Once the relationships between TI and HSIs are assessed, it becomes easier to select the "best" regression model, i.e., one that explains the variability in the response variable with a low prediction error. The ridge regression (RR) method can directly handle multicollinearity structures in the data, along with the instability of least squares estimators [58]. A more effective nonlinear regularized regression technique in machine learning is kernel ridge regression (KRR). The ridge regression method was created to address some of the shortcomings of the least squares method (overfitting and multicollinearity) [59]. One advantage of KRR is that the kernel implementation makes it possible to handle nonlinearity in the data [60, 61, 62].
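Rather than relying on a package, the formula above can be implemented directly; the sketch below assumes, as in the definition, that there are no ties among the samples.

```python
import numpy as np

def chatterjee_xi(x, y):
    """Chatterjee's correlation xi_n for tie-free samples (formula above)."""
    x, y = np.asarray(x), np.asarray(y)
    n = len(x)
    order = np.argsort(x)                       # rearrange pairs so X is increasing
    r = np.argsort(np.argsort(y[order])) + 1    # ranks r_i of the reordered Y
    return 1 - 3 * np.abs(np.diff(r)).sum() / (n ** 2 - 1)
```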
### CityGAN: Generative Adversarial Networks for modeling urban patterns
GAN is a powerful unsupervised deep learning model that learns representations of input data to fit high-dimensional complex distributions [18]. GANs are revolutionary in that they can produce very high-quality (i.e., extremely realistic) samples compared to predecessor models at similar computational cost. In general, a GAN is a system of two neural networks competing against each other in a zero-sum game [63]. The two networks, a generator (\(G\)) and a discriminator (\(D\)), can generate new data conforming to learned patterns through a combined generative and adversarial process. GANs demonstrate promising performance in modeling complex geospatial data with spatial dependence [22]. While the core application of GANs has been computer vision and image processing [18], their use in geoscience has provided urban planners with novel ways of generating "new" samples that can easily outperform state-of-the-art geostatistical tools. In this study, we deploy CityGAN [23] to learn urban patterns from real settlement images and generate hyper-realistic urban settlement images. \(G\) and \(D\) are both deep convolutional neural networks with weight vectors \(\theta_{G}\) and \(\theta_{D}\). Back-propagation is used to learn these weights by alternately minimizing the following loss functions
\[\theta_{D}:\mathcal{L}_{D}=E_{H\sim p_{H}}[\log D(H)]+E_{z\sim p_{z}}[\log(1-D(G(z)))] \tag{3}\]
\[\theta_{G}:\mathcal{L}_{G}=E_{z\sim p_{z}}[\log(1-D(G(z)))] \tag{4}\]
Here, the generator is made up of numerous convolutional blocks, including inverse-convolutional, batch normalization, and rectified linear unit (ReLU) layers, and ends with a hyperbolic tangent layer (which applies the tanh (\(\cdot\)) nonlinearity to each element of the produced map). Recent modifications of GANs also allow conditional generation and domain transformation. It is worth noting that in the GAN training phase the generator network is the one expected to create realistic samples, whereas the discriminator is an auxiliary network that is discarded after training. Once the GAN is trained, CityGAN [23] can be used to generate new synthetic urban images for a variety of applications, such as urban planning, disaster response, and simulations. The training process consists of iteratively optimizing the \(G\) and \(D\) networks: the generator attempts to deceive the discriminator by producing images that resemble real urban images, whereas the discriminator attempts to correctly classify whether an image is real or fake. The networks are updated based on the classification and generation errors until the generator produces images that are indistinguishable from real-world human settlement scenes. When \(G\) is at its optimum, \(H_{fake}\) is implicitly sampled from the data distribution that the generator tries to imitate. However, the GAN-generated images may not be representative of all possible urbanization patterns, because they depend on the training dataset and the GAN architecture used. As a result, before using GAN-generated images for any practical application, it is critical to evaluate them carefully and compare them to real-world urban areas.
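A minimal PyTorch training step implementing the alternating updates of Eqs. (3)-(4) is sketched below. The tiny fully connected networks are placeholders rather than the convolutional CityGAN architecture, and the generator update uses the standard non-saturating surrogate of Eq. (4).

```python
import torch
import torch.nn as nn

latent_dim, img_dim = 100, 64 * 64     # illustrative sizes, not CityGAN's
G = nn.Sequential(nn.Linear(latent_dim, 512), nn.ReLU(),
                  nn.Linear(512, img_dim), nn.Tanh())
D = nn.Sequential(nn.Linear(img_dim, 512), nn.LeakyReLU(0.2),
                  nn.Linear(512, 1), nn.Sigmoid())
opt_G = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_D = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

def train_step(h_real):                         # h_real: (batch, img_dim) maps
    batch = h_real.size(0)
    z = torch.randn(batch, latent_dim)
    # Discriminator step: ascend log D(H) + log(1 - D(G(z))), Eq. (3).
    opt_D.zero_grad()
    loss_D = bce(D(h_real), torch.ones(batch, 1)) + \
             bce(D(G(z).detach()), torch.zeros(batch, 1))
    loss_D.backward(); opt_D.step()
    # Generator step: the usual non-saturating form of Eq. (4).
    opt_G.zero_grad()
    loss_G = bce(D(G(z)), torch.ones(batch, 1))
    loss_G.backward(); opt_G.step()
    return loss_D.item(), loss_G.item()
```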
### Kernel Ridge Regression (KRR)
Regression modeling is a fundamental area of machine learning in which the target variable is quantitative (real-valued). The most classical approach is linear regression using the ordinary least squares method. However, it has notable disadvantages, e.g., overfitting and multicollinearity, which can be addressed via ridge regression. Ridge regression "shrinks" the least squares coefficients using a regularization parameter by minimizing the objective function given below:
\[\mathrm{Ridge}(\beta)=\frac{1}{2}(Y-X\beta)^{T}(Y-X\beta)+\frac{\lambda}{2} \beta^{T}\beta, \tag{5}\]
where \(X\in R^{N\times D}\) is the feature matrix with \(N\) the number of training samples and \(D\) the number of features, \(Y\in R^{N}\) is the real-valued target vector, \(\beta\) is the vector of regression coefficients, and \(\lambda\geq 0\) is the regularizer that helps in dealing with the multicollinearity problem. However, the ridge regression model still has trouble dealing with nonlinear data [64]. A more general framework can be achieved by using a nonlinear mapping function \(\phi(\cdot)\) that maps low-dimensional features to a higher-dimensional space (which helps in learning nonlinear patterns). A kernel function in the form of a dot product is then used to avoid the curse of dimensionality of the nonlinear transformation. Mathematically, the kernel between two points, say \(x_{m}\) and \(x_{n}\), is given by
\[K(x_{m},x_{n})=\phi(x_{m})^{T}\phi(x_{n}), \tag{6}\]
which satisfies Mercer's condition [65]. The major impact of the kernel on ridge regression is that it allows the identification of nonlinear functional relationships between one variable and the remaining features. In this study, we use the radial basis function (RBF) kernel [62], which is defined by:
\[K(x_{m},x_{n})=e^{-\gamma\left\|x_{m}-x_{n}\right\|^{2}}, \tag{7}\]
where \(\gamma\) is the width of the kernel. The KRR prediction for a new test input \(x_{*}\) is given by
\[\beta^{T}\phi(x_{*})=\sum_{n=1}^{N}\left[(K+\lambda I_{N})^{-1}Y\right]_{n}K(x_{n},x_{*}), \tag{8}\]
where \(K\) is the \(N\times N\) Gram matrix with entries \(K_{mn}=K(x_{m},x_{n})\).
We use KRR to establish the relationship between HSIs and TI as shown in Fig. 2 (d).
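For completeness, Eqs. (6)-(8) amount to the closed form sketched below in NumPy; the defaults for \(\lambda\) and \(\gamma\) are illustrative.

```python
import numpy as np

def rbf_kernel(X1, X2, gamma=1.0):
    """K(x_m, x_n) = exp(-gamma * ||x_m - x_n||^2), Eq. (7)."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def krr_fit_predict(X, y, X_new, lam=0.1, gamma=1.0):
    """Dual coefficients alpha = (K + lam*I)^(-1) y, prediction via Eq. (8)."""
    K = rbf_kernel(X, X, gamma)
    alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)
    return rbf_kernel(X_new, X, gamma) @ alpha
```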
### Hybrid model: RidgeGAN
RidgeGAN is a hybrid approach based on the unsupervised CityGAN and the supervised KRR model. KRR [62] has a built-in mechanism to perform nonlinear regularized regression in the presence of multicollinearity. CityGAN [23] has become popular for generating synthetic city images (possible future cities, so to speak) that look like real cities on visual inspection and are statistically consistent with them, by learning the urban morphology. After implementation, we evaluated the performance of CityGAN by comparing the real and simulated cities using the most widely used spatial summary statistic in urban analysis, the average radial profile, together with the peak search algorithm. Each city is represented as a \(10.5\times 10.5\) km image covering the urban center and surrounding regions. Quantifying the transportation index plays a vital role in the development of sustainable city planning and management. We therefore built a supervised KRR model to predict the transportation index by learning the relationship between urban patterns and the road transportation index. The KRR prediction model is integrated with the CityGAN model to predict the transportation index of cities newly generated by CityGAN. To build our hybrid model, we mainly use two models: an unsupervised learning model for generating urban patterns and a supervised learning model to predict the transportation index. To sum up, the workflow of the proposed RidgeGAN is detailed as follows (see also Fig. 2 for a schematic workflow; a toy end-to-end sketch follows the list):
* First, we apply CityGAN, an unsupervised learning model to generate small and medium-sized Indian cities using the available urban morphological features.
* Landscape structures of real and generated cities are measured in terms of Human Settlement Indices (HSI) using spatial landscape metrics.
* We assess the relations between two important features of urban forms (human settlement and transportation system) and build a KRR model to predict the transportation index, namely network density.
* The proposed hybrid model framework can predict the road network density on a given urban pattern for the urban universes generated in the first step.
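Stitching together the hypothetical helpers from the sketches above (G and latent_dim from the GAN step, landscape_metrics, and krr_fit_predict), the whole pipeline reduces to a few lines; the training table below is a random stand-in for the real-city HSI/TI data.

```python
import numpy as np
import torch

rng = np.random.default_rng(1)
X_real, y_real = rng.random((503, 3)), rng.random(503)   # toy (CA, NP, LPI) -> ND

z = torch.randn(1, latent_dim)                           # step (a): sample a city
fake = (G(z).detach().numpy().reshape(64, 64) > 0).astype(int)
m = landscape_metrics(fake)                              # step (b): its HSIs
x_new = np.array([[m["CA"], m["NP"], m["LPI"]]])
print(krr_fit_predict(X_real, y_real, x_new))            # step (d): predicted TI
```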
|
2302.06777 | A convex-block approach for numerical radius inequalities | This article implements a simple convex approach and block techniques to
obtain several new refined versions of numerical radius inequalities for
Hilbert space operators. This includes comparisons among the norms of the
operators, their Cartesian parts, their numerical radii, the numerical radius
of the product of two operators, and the Aluthge transform. | Mohammad Sababheh, Cristian Conde, Hamid Reza Moradi | 2023-02-14T01:28:14Z | http://arxiv.org/abs/2302.06777v1 | # A convex-block approach for numerical radius inequalities
###### Abstract.
This article implements a simple convex approach and block techniques to obtain several new refined versions of numerical radius inequalities for Hilbert space operators. This includes comparisons among the norms of the operators, their Cartesian parts, their numerical radii, the numerical radius of the product of two operators, and the Aluthge transform.
Key words and phrases: Numerical radius, norm inequality, Cartesian decomposition, triangle inequality. 2010 Mathematics Subject Classification: Primary 47A12, 47A30; Secondary 15A60, 47B15.
## 1. Introduction
Let \(\mathcal{B}(\mathcal{H})\) denote the \(C^{*}-\)algebra of all bounded linear operators acting on a Hilbert space \(\mathcal{H}\). An operator \(T\in\mathcal{B}(\mathcal{H})\) is said to be positive if, for every \(x\in\mathcal{H}\), one has \(\langle Tx,x\rangle\geq 0\). In this case, we simply write \(T\geq O.\) Positive operators play an important role in understanding the geometry of a Hilbert space, and they constitute a special class of the wider class of self-adjoint operators, that is, operators with \(T^{*}=T\), where \(T^{*}\) denotes the adjoint of \(T\). Among the most basic properties of self-adjoint operators is the fact that
\[\|T\|=\omega(T)=r(T),\;T\;\text{is normal},\]
where \(\|\cdot\|,\omega(\cdot),\) and \(r(\cdot)\) denote the operator norm, the numerical radius, and the spectral radius respectively. Actually, for a general \(T\in\mathcal{B}(\mathcal{H})\) one has
\[\|T\|\geq\omega(T)\geq r(T).\]
While both \(\|\cdot\|\) and \(\omega(\cdot)\) are norms on \(\mathcal{B}(\mathcal{H})\), \(r(\cdot)\) is not. In fact, we have the equivalence relation [6, Theorem 1.3-1]
\[\frac{1}{2}\|T\|\leq\omega(T)\leq\|T\|,\;T\in\mathcal{B}(\mathcal{H}). \tag{1.1}\]
Sharpening the above inequality and obtaining new relations between \(\|\cdot\|\) and \(\omega(\cdot)\) has been a core interest of numerous researchers, partly because \(\|\cdot\|\) is much easier to compute than \(\omega(\cdot)\), and partly out of intrinsic mathematical interest in such relations.
The Cartesian decomposition of \(T\in\mathcal{B}(\mathcal{H})\) is \(T=\mathfrak{R}T+\mathrm{i}\mathfrak{I}T\), where \(\mathfrak{R}T=\frac{T+T^{*}}{2}\) and \(\mathfrak{I}T=\frac{T-T^{*}}{2\mathrm{i}}\) are the real and imaginary parts of \(T\), respectively. Although \(\|T\|\geq\omega(T)\) is always valid, the following reverses hold for the Cartesian components of \(T\), see [8, Theorem 2.1]
\[\|\mathfrak{R}T\|\leq\omega(T),\ \|\mathfrak{I}T\|\leq\omega(T). \tag{1.2}\]
While the original definition of \(\omega(\cdot)\) is based on a supremum over inner product values (i.e., \(\omega(T)=\sup_{\|x\|=1}|\left\langle Tx,x\right\rangle|\)), the following identity is extremely useful [14]
\[\sup_{\theta\in\mathbb{R}}\,\left\|\mathfrak{R}\mathrm{e}^{\mathrm{i}\theta}T \right\|=\omega\left(T\right). \tag{1.3}\]
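For readers who wish to experiment, identity (1.3) yields a simple way to approximate \(\omega(T)\) for a matrix \(T\): take the supremum over a finite grid of angles. A minimal NumPy sketch (the grid size is an arbitrary choice) follows; since \(\mathfrak{R}\mathrm{e}^{\mathrm{i}(\theta+\pi)}T=-\mathfrak{R}\mathrm{e}^{\mathrm{i}\theta}T\), a grid on \([0,\pi]\) suffices.

```python
import numpy as np

def numerical_radius(T, n_theta=720):
    """Approximate w(T) = sup_theta ||Re(e^{i theta} T)||, identity (1.3)."""
    T = np.asarray(T, dtype=complex)
    best = 0.0
    for theta in np.linspace(0.0, np.pi, n_theta):
        R = (np.exp(1j * theta) * T + np.exp(-1j * theta) * T.conj().T) / 2
        best = max(best, np.linalg.norm(R, 2))   # spectral norm of the real part
    return best

T = np.array([[0, 2], [0, 0]])
print(numerical_radius(T))   # ~1.0 = ||T||/2, the extremal case in (1.1)
```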
Exploring further relations between \(\|\cdot\|\) and \(\omega(\cdot)\), it has been shown in [8, Theorem 2.3] that
\[\|A+B\|\leq 2\omega\left(\left[\begin{array}{cc}O&A\\ B^{*}&O\end{array}\right]\right)\leq\|A\|+\|B\|; \tag{1.4}\]
as an interesting refinement of the triangle inequality of norms, using the numerical radius of a matrix operator.
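Reusing the numerical_radius sketch above, the chain (1.4) is easy to check on random samples:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
M = np.block([[np.zeros((3, 3)), A], [B.conj().T, np.zeros((3, 3))]])

lhs = np.linalg.norm(A + B, 2)
mid = 2 * numerical_radius(M)            # numerical_radius from the sketch above
rhs = np.linalg.norm(A, 2) + np.linalg.norm(B, 2)
print(lhs <= mid + 1e-9 <= rhs + 1e-9)   # True: the chain (1.4)
```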
Having the matrix operator term in (1.4) is not a coincidence. In fact, numerous results have included such terms while studying numerical radius inequalities. For example, it has been shown in [7, Theorem 2.4] that
\[\frac{\max\left\{\omega\left(S+T\right),\omega\left(S-T\right)\right\}}{2} \leq\omega\left(\begin{bmatrix}O&S\\ T&O\end{bmatrix}\right),\ \text{for any}\ S,T\in\mathcal{B}\left(\mathcal{H}\right); \tag{1.5}\]
an inequality which has been reversed in a way or another by the form [7, Theorem 2.4]
\[\omega\left(\begin{bmatrix}O&S\\ T&O\end{bmatrix}\right)\leq\frac{\omega\left(S+T\right)+\omega\left(S-T\right) }{2},\ \text{for any}\ S,T\in\mathcal{B}\left(\mathcal{H}\right). \tag{1.6}\]
The above matrix operator is not only comparable with numerical radius terms, as we also have [13, Theorem 2.1]
\[2\omega\left(\begin{bmatrix}O&A\\ B^{*}&O\end{bmatrix}\right)\leq\max\left\{\|A\|\,,\|B\|\right\}+\frac{1}{2} \left(\left\||A|^{\frac{1}{2}}|B|^{\frac{1}{2}}\right\|+\left\||B^{*}|^{\frac {1}{2}}|A^{*}|^{\frac{1}{2}}\right\|\right), \tag{1.7}\]
for any \(A,B\in\mathcal{B}\left(\mathcal{H}\right)\).
The right-hand side of this latter inequality is related to the Davidson-Power inequality, which has been generalized in [2, Theorem 5] to the form
\[\|A+B^{*}\|\leq\max\{\|A\|,\|B\|\}+\max\{\|\ |A|^{1/2}|B^{*}|^{1/2}\|,\|\ |A^{*}|^{1/2}|B|^{1/2}\|\}. \tag{1.8}\]
An important tool in obtaining matrix inequalities is convexity; whether it is scalar or operator convexity. Recall that a function \(f:J\to\mathbb{R}\) is said to be convex on the interval \(J\) if it satisfies \(f((1-\lambda)a+\lambda b)\leq(1-\lambda)f(a)+\lambda f(b)\) for all \(a,b\in J\) and \(0\leq\lambda\leq 1\). In convex
analysis, the Hermite-Hadamard inequality, which states that for a convex function \(f\) on \(\left[0,1\right]\) one has
\[f\left(\frac{1}{2}\right)\leq\int\limits_{0}^{1}f\left(t\right)dt\leq\frac{f \left(0\right)+f\left(1\right)}{2}, \tag{1.9}\]
is an unavoidable tool. Notice that this inequality provides a refinement of the midpoint convexity condition of \(f\).
Our target in this paper is to further explore numerical radius and operator norm inequalities, via matrix operators and convex functions. For this, we begin by noting that since \(\omega(\cdot)\) and \(||\cdot||\) are norms, one can easily verify that the functions
\[f\left(t\right)=\omega\left(\left(1-t\right)T+tT^{\ast}\right),\text{ and }g(t)=\left\|\left(1-t\right)T+tT^{\ast}\right\|\]
are convex on \(\left[0,1\right]\).
With a considerable amount of research devoted to inequalities of convex functions, the following inequalities which have been shown in [5] for a convex function \(f:\left[0,1\right]\rightarrow\mathbb{R}\) have played a useful role in the literature
\[f\left(t\right)\leq\left(1-t\right)f\left(0\right)+tf\left(1\right)-2r\left( \frac{f\left(0\right)+f\left(1\right)}{2}-f\left(\frac{1}{2}\right)\right),\]
and
\[\left(1-t\right)f\left(0\right)+tf\left(1\right)\leq f\left(t\right)+2R\left( \frac{f\left(0\right)+f\left(1\right)}{2}-f\left(\frac{1}{2}\right)\right),\]
where \(r=\min\left\{t,1-t\right\}\), \(R=\max\left\{t,1-t\right\}\), and \(0\leq t\leq 1\). We refer the reader to [11, 12] for some applications and further discussion of these inequalities.
Applying these latter inequalities to the convex functions \(f\) and \(g\) above implies the following refinements and reverses of (1.2).
\[\frac{\omega\left(T\right)-\omega\left(\left(1-t\right)T+tT^{\ast}\right)}{2R }\leq\omega\left(T\right)-\left\|\mathfrak{R}T\right\|\leq\frac{\omega\left(T \right)-\omega\left(\left(1-t\right)T+tT^{\ast}\right)}{2r}. \tag{1.10}\]
Furthermore,
\[\frac{\left\|T\right\|-\left\|\left(1-t\right)T+tT^{\ast}\right\|}{2R}\leq \left\|T\right\|-\left\|\mathfrak{R}T\right\|\leq\frac{\left\|T\right\|-\left\| \left(1-t\right)T+tT^{\ast}\right\|}{2r}. \tag{1.11}\]
Using this approach, we will be able to present refined versions and generalizations of most of the above inequalities, concluding with some product inequalities that entail interesting relations. We refer to inequalities that govern \(\omega(AB)\) as product inequalities; it is well known that \(\omega(\cdot)\) is not sub-multiplicative, and we refer the reader to [10] for further discussion of this property. Interestingly, our approach will entail a relation between \(\omega(AB)\) and \(\left\|A+B\right\|\)
with an application to the matrix arithmetic-geometric mean inequality that states [3, Theorem IX.4.5]
\[\|A^{1/2}B^{1/2}\|\leq\frac{1}{2}\|A+B\|,\;A,B\in\mathcal{B}(\mathcal{H}),A,B\geq O.\]
Namely, we obtain a new refinement of this inequality using the numerical radius, as a new approach in this direction; see Remark 2.3 below.
To achieve our goal, some auxiliary results are needed as follows.
**Lemma 1.1**.: _Let \(A,B\in\mathcal{B}(\mathcal{H})\)._
1. _If \(n\in\mathbb{N}\), then_ [6, Theorem 2.1-1] \[\omega(A^{n})\leq\omega^{n}(A). \tag{1.12}\]
2. _The operator norm satisfies the identity_ \[\left\|\left[\begin{array}{cc}O&A\\ A^{*}&O\end{array}\right]\right\|=\|A\|. \tag{1.13}\]
## 2. Main Result
In this section we present our results, starting with the following simple consequence, obtained by applying (1.9) to
\[f\left(t\right)=\omega\left(\left(1-t\right)T+tT^{*}\right)\;\text{and}\;g(t )=\|(1-t)\,T+tT^{*}\|\]
yielding refinements of (1.2).
**Proposition 2.1**.: _Let \(T\in\mathcal{B}\left(\mathcal{H}\right)\). Then_
\[\|\mathfrak{R}T\|\leq\int\limits_{0}^{1}\omega\left(\left(1-t\right)T+tT^{*} \right)dt\leq\omega\left(T\right), \tag{2.1}\]
_and_
\[\|\mathfrak{I}T\|\leq\int\limits_{0}^{1}\omega\left(\left(1-t\right)T^{*}-tT \right)dt\leq\omega\left(T\right). \tag{2.2}\]
_Moreover,_
\[\|\mathfrak{R}T\|\leq\int\limits_{0}^{1}\|(1-t)\,T+tT^{*}\|\,dt\leq\|T\|\,, \tag{2.3}\]
_and_
\[\|\mathfrak{I}T\|\leq\int\limits_{0}^{1}\|(1-t)\,T^{*}-tT\|\,dt\leq\|T\|\,. \tag{2.4}\]
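The chain (2.1) can likewise be checked numerically with the numerical_radius sketch from the introduction; the integral is approximated by a uniform Riemann sum over \(t\in[0,1]\).

```python
import numpy as np

rng = np.random.default_rng(1)
T = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
ReT = (T + T.conj().T) / 2

ts = np.linspace(0, 1, 201)
vals = [numerical_radius((1 - t) * T + t * T.conj().T) for t in ts]
integral = float(np.mean(vals))          # uniform Riemann sum of the integral
print(np.linalg.norm(ReT, 2), integral, numerical_radius(T))  # nondecreasing, per (2.1)
```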
The identity (1.3) provides an alternative formula to evaluate the numerical radius without appealing to the inner product. Interestingly, the inequalities (2.1) and (2.2) provide the following alternative identities, which help better understand how the numerical radius behaves.
**Corollary 2.1**.: _Let \(T\in\mathcal{B}\left(\mathcal{H}\right)\). Then_
\[\omega\left(T\right)=\sup_{\theta\in\mathbb{R}}\,\left(\int\limits_{0}^{1} \omega\left(\left(1-t\right)e^{\mathrm{i}\theta}T+te^{-\mathrm{i}\theta}T^{*} \right)dt\right)=\sup_{\theta\in\mathbb{R}}\,\left(\int\limits_{0}^{1}\omega \left(\left(1-t\right)e^{-\mathrm{i}\theta}T^{*}-te^{\mathrm{i}\theta}T\right) dt\right).\]
Proof.: Replacing \(T\) by \(e^{\mathrm{i}\theta}T\) in (2.1), we get
\[\left\|\mathfrak{R}\mathrm{e}^{\mathrm{i}\theta}T\right\|\leq\int\limits_{0}^{1}\omega\left(\left(1-t\right)e^{\mathrm{i}\theta}T+te^{-\mathrm{i}\theta}T^{*}\right)dt\leq\omega\left(T\right).\]
Taking the supremum over \(\theta\in\mathbb{R}\), (1.3) implies the first identity. The second identity follows from (2.2) and noting that
\[\sup_{\theta\in\mathbb{R}}\,\left\|\mathfrak{I}e^{\mathrm{i}\theta}T\right\| =\omega\left(T\right).\]
The following result involves an integral refinement of the second inequality in (1.1).
**Proposition 2.2**.: _Let \(T\in\mathcal{B}\left(\mathcal{H}\right)\). Then_
\[\omega\left(T\right)\leq\min\left\{\lambda_{1},\lambda_{2}\right\}\leq\left\|T \right\|,\]
_where_
\[\lambda_{1}=\sup_{\theta\in\mathbb{R}}\,\left(\int\limits_{0}^{1}\left\|\left( 1-t\right)e^{\mathrm{i}\theta}T+te^{-\mathrm{i}\theta}T^{*}\right\|dt\right) \text{ and }\lambda_{2}=\sup_{\theta\in\mathbb{R}}\,\left(\int\limits_{0}^{1} \left\|\left(1-t\right)e^{-\mathrm{i}\theta}T^{*}-te^{\mathrm{i}\theta}T \right\|dt\right).\]
Proof.: By the inequality (2.3), we have
\[\sup_{\theta\in\mathbb{R}}\left\|\mathfrak{R}\mathrm{e}^{\mathrm{i}\theta}T\right\|\leq\sup_{\theta\in\mathbb{R}}\left(\int\limits_{0}^{1}\left\|\left(1-t\right)e^{\mathrm{i}\theta}T+te^{-\mathrm{i}\theta}T^{*}\right\|dt\right)\leq\|T\|.\]
Finally, by (1.3) we get
\[\omega(T)\leq\sup_{\theta\in\mathbb{R}}\left(\int\limits_{0}^{1}\left\|\left( 1-t\right)e^{i\theta}T+te^{-i\theta}T^{*}\right\|dt\right)\leq\left\|T\right\|.\]
By a similar proof and with the help of (2.4), we also have
\[\omega(T)\leq\sup_{\theta\in\mathbb{R}}\left(\int\limits_{0}^{1}\left\|\left( 1-t\right)e^{-i\theta}T^{*}-te^{i\theta}T\right\|dt\right)\leq\left\|T\right\|.\]
This completes the proof.
The second inequality in the inequalities (2.1) and (2.2) can be reversed as follows.
**Proposition 2.3**.: _Let \(T\in\mathcal{B}\left(\mathcal{H}\right)\). Then_
\[\frac{1}{2}\omega\left(T\right)\leq\left\{\begin{array}{l}\int \limits_{0}^{1}\omega\left(\left(1-t\right)T+tT^{*}\right)dt,\\ \int\limits_{0}^{1}\omega\left(\left(1-t\right)T^{*}-tT\right)dt. \end{array}\right.\]
Proof.: For any \(0\leq t\leq 1\), it can be easily shown that
\[\left|1-2t\right|\omega\left(T\right)\leq\min\left\{\omega\left(\left(1-t \right)T+tT^{*}\right),\omega\left(\left(1-t\right)T^{*}-tT\right)\right\}.\]
Integrating this over the interval \(\left[0,1\right]\) implies the desired result.
The following result holds as well.
**Theorem 2.1**.: _Let \(T\in\mathcal{B}\left(\mathcal{H}\right)\). Then_
\[\left\|T\right\|\leq 2\int\limits_{0}^{1}\left\|\left(1-t\right)T+tT^{*} \right\|dt\leq 2\left\|T\right\|,\]
_and_
\[\omega\left(T\right)\leq 2\int\limits_{0}^{1}\omega\left(\left(1-t\right)T+tT^ {*}\right)dt\leq 2\omega\left(T\right).\]
Proof.: Let \(h(T)=\left\|T\right\|\) for any \(T\in\mathcal{B}\left(\mathcal{H}\right)\). Then, \(h\) is a convex function on \(\mathcal{B}\left(\mathcal{H}\right)\). For each \(t\in\left[0,1\right]\), we have
\[h((1-2t)T)+h((2t-1)T^{*})=h((1-t)A+tB)+h((1-t)B+tA),\]
where \(A=(1-t)T+tT^{*}\) and \(B=-(1-t)T^{*}-tT\). Then,
\[h((1-2t)T)+h((2t-1)T^{*}) =h((1-t)A+tB)+h((1-t)B+tA)\] \[\leq(1-t)h(A)+th(B)+(1-t)h(B)+th(A)\] \[=h(A)+h(B)\] \[=h((1-t)T+tT^{*})+h(-(1-t)T^{*}-tT)\] \[=h((1-t)T+tT^{*})+h((1-t)T^{*}+tT).\]
Integrating, the previous inequality, from \(t=0\) to \(t=1\), we obtain
\[\int_{0}^{1}\left|1-2t\right|(\left\|T\right\|+\left\|T^{*}\right\|)\,dt\leq 2 \int_{0}^{1}\left\|(1-t)T+tT^{*}\right\|dt.\]
Thus,
\[\|T\|=\frac{1}{2}(\|T\|+\|T^{*}\|)\leq 2\int_{0}^{1}\|(1-t)T+tT^{*}\|\;dt.\]
On the other hand,
\[\|(1-t)T+tT^{*}\|\leq(1-t)\|T\|+t\|T^{*}\|=\|T\|;0\leq t\leq 1.\]
Integrating this last inequality and then multiplying by \(2\) complete the proof of the first inequality. The second inequality is proved similarly.
Continuing with the convexity of the norms, the inequality (1.10) may be used to get the following refinement of the first inequality in (1.1).
**Theorem 2.2**.: _Let \(T\in\mathcal{B}\left(\mathcal{H}\right)\). Then for any \(0\leq t\leq 1\),_
\[\frac{1}{2}\left\|T\right\|+\frac{1}{4R}\left(2\omega\left(T\right)-\left( \omega\left(\left(1-t\right)T^{*}-tT\right)+\omega\left(\left(1-t\right)T+tT^ {*}\right)\right)\right)\leq\omega\left(T\right),\]
_where \(R=\max\left\{t,1-t\right\}\)._
Proof.: The first inequality in (1.10) can be written as
\[\|\mathfrak{R}T\|+\frac{\omega\left(T\right)-\omega\left(\left(1-t\right)T+tT ^{*}\right)}{2R}\leq\omega\left(T\right). \tag{2.5}\]
Replacing \(T\) by \(\mathrm{i}T^{*}\) in (2.5), we infer that
\[\|\mathfrak{I}T\|+\frac{\omega\left(T\right)-\omega\left(\left(1-t\right)T^{ *}-tT\right)}{2R}\leq\omega\left(T\right). \tag{2.6}\]
By (2.5) and (2.6), we get
\[\frac{1}{2}\left\|T\right\|+\frac{1}{4R}\left(2\omega\left(T \right)-\left(\omega\left(\left(1-t\right)T^{*}-tT\right)+\omega\left(\left(1 -t\right)T+tT^{*}\right)\right)\right)\] \[=\frac{1}{2}\left\|\mathfrak{R}T+\mathrm{i}\mathfrak{I}T\right\| +\frac{1}{4R}\left(2\omega\left(T\right)-\left(\omega\left(\left(1-t\right)T ^{*}-tT\right)+\omega\left(\left(1-t\right)T+tT^{*}\right)\right)\right)\] \[\leq\frac{1}{2}\left(\|\mathfrak{R}T\|+\|\mathfrak{I}T\|\right)+ \frac{1}{4R}\left(2\omega\left(T\right)-\left(\omega\left(\left(1-t\right)T^{ *}-tT\right)+\omega\left(\left(1-t\right)T+tT^{*}\right)\right)\right)\] \[\qquad\left(\text{by the triangle inequality for the usual operator norm}\right)\] \[\leq\omega\left(T\right).\]
This completes the proof.
As a consequence of Theorem 2.2, we get the following corollaries. Our results considerably refines [7, (4.3)] and [7, (4.2)], respectively.
**Corollary 2.2**.: _Let \(T\in\mathcal{B}\left(\mathcal{H}\right)\). Then,_
\[\frac{1}{2}\left\|T\right\|+\frac{1}{2}\Big{|}\|\mathfrak{I}T\|-\|\mathfrak{R }T\|\Big{|}\leq\frac{1}{2}\left\|T\right\|+\frac{1}{2}\left(2\omega\left(T \right)-\left(\|\mathfrak{I}T\|+\|\mathfrak{R}T\|\right)\right)\leq\omega \left(T\right).\]
Proof.: The second inequality can be deduced from Theorem 2.2 with \(t=\frac{1}{2}.\) On the other hand, we have
\[\frac{1}{2}\Big{|}\|\mathfrak{I}T\|-\|\mathfrak{R}T\|\Big{|}=\frac{1}{2}\Big{|}\|\mathfrak{I}T\|-\omega(T)+\omega(T)-\|\mathfrak{R}T\|\Big{|}\] \[\leq\frac{1}{2}\left(\Big{|}\|\mathfrak{I}T\|-\omega(T)\Big{|}+\Big{|}\omega(T)-\|\mathfrak{R}T\|\Big{|}\right)\] \[=\frac{1}{2}\left(\omega(T)-\|\mathfrak{I}T\|+\omega(T)-\|\mathfrak{R}T\|\right),\quad\text{(by the inequality (1.2))}.\]
This completes the proof.
As a consequence of Corollary 2.2, we characterize when the numerical radius equals half the operator norm. The following result is related to Theorem 3.1, previously obtained by Yamazaki in [14].
**Proposition 2.4**.: _Let \(T\in\mathcal{B}\left(\mathcal{H}\right)\). Then, \(\frac{\|T\|}{2}=\omega(T)\) if and only if \(\|\mathfrak{I}e^{\mathrm{i}\theta}T\|=\|\mathfrak{R}e^{\mathrm{i}\theta}T\|= \frac{\|T\|}{2}\) for any \(\theta\in\mathbb{R}.\)_
Proof.: If \(\|\mathfrak{I}e^{\mathrm{i}\theta}T\|=\|\mathfrak{R}e^{\mathrm{i}\theta}T\|= \frac{\|T\|}{2}\) for any \(\theta\in\mathbb{R},\) then by (1.3) we conclude that \(\omega(T)=\frac{\|T\|}{2}.\) Conversely, we suppose that \(\omega(T)=\frac{\|T\|}{2},\) thus from Corollary 2.2 we conclude that
\[\frac{1}{2}\left\|T\right\|=\frac{1}{2}\left\|T\right\|+\frac{1}{2}\Big{|}\| \mathfrak{I}T\|-\|\mathfrak{R}T\|\Big{|}=\frac{1}{2}\left\|T\right\|+\frac{1 }{2}\left(2\omega\left(T\right)-\left(\|\mathfrak{I}T\|+\|\mathfrak{R}T\| \right)\right)=\omega\left(T\right).\]
Replacing \(T\) by \(e^{\mathrm{i}\theta}T\) with \(\theta\in\mathbb{R}\), we have
\[\frac{1}{2}\left\|T\right\|=\frac{1}{2}\left\|T\right\|+\frac{1}{2}\Big{|}\| \mathfrak{I}e^{\mathrm{i}\theta}T\|-\|\mathfrak{R}e^{\mathrm{i}\theta}T\| \Big{|}=\frac{1}{2}\left\|T\right\|+\frac{1}{2}\left(2\omega\left(T\right)- \left(\|\mathfrak{I}e^{\mathrm{i}\theta}T\|+\|\mathfrak{R}e^{\mathrm{i}\theta} T\|\right)\right)=\omega\left(T\right).\]
This implies that \(\|\mathfrak{I}e^{\mathrm{i}\theta}T\|=\|\mathfrak{R}e^{\mathrm{i}\theta}T\|\) and \(2\omega(T)=\|\mathfrak{I}e^{\mathrm{i}\theta}T\|+\|\mathfrak{R}e^{\mathrm{i} \theta}T\|,\) i.e. for any \(\theta\in\mathbb{R}\) we get
\[\|\mathfrak{I}e^{\mathrm{i}\theta}T\|=\|\mathfrak{R}e^{\mathrm{i}\theta}T\|= \frac{\|T\|}{2}.\]
**Corollary 2.3**.: _Let \(A,B\in\mathcal{B}\left(\mathcal{H}\right)\). Then,_
\[\omega\left(\begin{bmatrix}O&A\\ B&O\end{bmatrix}\right) \geq \frac{1}{2}\left\|\begin{bmatrix}O&A\\ B&O\end{bmatrix}\right\|+\frac{1}{2}\left(2\omega\left(\begin{bmatrix}O&A \\ B&O\end{bmatrix}\right)-\left(\|A-B^{*}\|+\|A+B^{*}\|\right)\right)\] \[\geq \frac{1}{2}\left\|\begin{bmatrix}O&A\\ B&O\end{bmatrix}\right\|+\frac{1}{2}\Big{|}\|A-B^{*}\|-\|A+B^{*}\|\Big{|}.\]
Proof.: This follows clearly from Corollary 2.2 by considering \(T=\begin{bmatrix}O&A\\ B&O\end{bmatrix}\) and equality (1.13).
On the other hand, a reverse for the second inequality in (1.1) may be obtained as follows.
**Theorem 2.3**.: _Let \(T\in\mathcal{B}\left(\mathcal{H}\right)\). Then for any \(0\leq t\leq 1\),_
\[\left\|T\right\|\leq\omega\left(T\right)+\frac{\left\|T\right\|-\left\|\left(1-t \right)T+tT^{*}\right\|}{2r}-\frac{\omega\left(T\right)-\omega\left(\left(1-t \right)T+tT^{*}\right)}{2R},\]
_where \(r=\min\left\{t,1-t\right\}\) and \(R=\max\left\{t,1-t\right\}\). In particular,_
\[\frac{\omega\left(T\right)-\omega\left(\left(1-t\right)T+tT^{*}\right)}{2R} \leq\frac{\left\|T\right\|-\left\|\left(1-t\right)T+tT^{*}\right\|}{2r}.\]
Proof.: The inequalities (1.10) and (1.11) imply
\[\left\|T\right\| \leq\left\|\mathfrak{R}T\right\|+\frac{\left\|T\right\|-\left\| \left(1-t\right)T+tT^{*}\right\|}{2r}\] \[\leq\omega\left(T\right)+\frac{\left\|T\right\|-\left\|\left(1-t \right)T+tT^{*}\right\|}{2r}-\frac{\omega\left(T\right)-\omega\left(\left(1-t \right)T+tT^{*}\right)}{2R}.\]
This proves the first assertion. The second assertion follows from the first, noting that \(\omega(T)\leq\left\|T\right\|\).
Continuing with the theme of this paper, in the following result, the numerical radius of convex combinations of operator matrices is used to refine the triangle inequality, thanks to
\[\omega\left(\begin{bmatrix}O&\left(1-t\right)A+tB\\ \left(1-t\right)B^{*}+tA^{*}&O\end{bmatrix}\right)\leq\omega\left(\begin{bmatrix} O&A\\ B^{*}&O\end{bmatrix}\right);0\leq t\leq 1.\]
**Theorem 2.4**.: _Let \(A,B\in\mathcal{B}\left(\mathcal{H}\right)\). Then for any \(0\leq t\leq 1\),_
\[\left\|A+B\right\|\leq\left\|A\right\|+\left\|B\right\|-\frac{\omega\left( \begin{bmatrix}O&A\\ B^{*}&O\end{bmatrix}\right)-\omega\left(\begin{bmatrix}O&\left(1-t\right)A+tB \\ \left(1-t\right)B^{*}+tA^{*}&O\end{bmatrix}\right)}{R},\]
_where \(R=\max\left\{t,1-t\right\}\)._
Proof.: Let \(T=\begin{bmatrix}O&A\\ B^{*}&O\end{bmatrix}\) on \(\mathcal{H}\oplus\mathcal{H}\). Then by (1.10), we can write
\[\left\|A+B\right\|\] \[=\left\|T+T^{*}\right\|\] \[=2\left\|\mathfrak{R}T\right\|\] \[\leq 2\omega\left(T\right)-\frac{\omega\left(T\right)-\omega\left( \left(1-t\right)T+tT^{*}\right)}{R}\] \[=2\underset{\theta\in\mathbb{R}}{\sup}\,\left\|\mathfrak{R}e^{ \mathrm{i}\theta}T\right\|-\frac{\omega\left(T\right)-\omega\left(\left(1-t \right)T+tT^{*}\right)}{R}\] \[=\underset{\theta\in\mathbb{R}}{\sup}\,\left\|\begin{bmatrix}O& e^{\mathrm{i}\theta}A+e^{-\mathrm{i}\theta}B\\ e^{\mathrm{i}\theta}B^{*}+e^{-\mathrm{i}\theta}A^{*}&O\end{bmatrix}\right\|- \frac{\omega\left(\begin{bmatrix}O&A\\ B^{*}&O\end{bmatrix}\right)-\omega\left(\begin{bmatrix}O&\left(1-t\right)A+tB\\ \left(1-t\right)B^{*}+tA^{*}&O\end{bmatrix}\right)}{R}\] \[=\underset{\theta\in\mathbb{R}}{\sup}\,\left\|e^{\mathrm{i} \theta}A+e^{-\mathrm{i}\theta}B\right\|-\frac{\omega\left(\begin{bmatrix}O&A\\ B^{*}&O\end{bmatrix}\right)-\omega\left(\begin{bmatrix}O&\left(1-t\right)A+tB \\ \left(1-t\right)B^{*}+tA^{*}&O\end{bmatrix}\right)}{R}\] \[\leq\left\|A\right\|+\left\|B\right\|-\frac{\omega\left(\begin{bmatrix }O&A\\ B^{*}&O\end{bmatrix}\right)-\omega\left(\begin{bmatrix}O&\left(1-t\right)A+tB\\ \left(1-t\right)B^{*}+tA^{*}&O\end{bmatrix}\right)}{R},\]
where the triangle inequality for the operator norm has been used to obtain the last inequality. This completes the proof.
**Remark 2.1**.: _Letting \(T=\left[\begin{array}{cc}O&A\\ B^{*}&O\end{array}\right]\), we have_
\[\omega\left(\begin{bmatrix}O&\left(1-t\right)A+tB\\ \left(1-t\right)B^{*}+tA^{*}&O\end{bmatrix}\right)+\omega\left(\begin{bmatrix} O&\left(1-t\right)B-tA\\ \left(1-t\right)A^{*}-tB^{*}&O\end{bmatrix}\right)\] \[=\omega(\left(1-t\right)T+tT^{*})+\omega(\left(1-t\right)T^{*}-tT)\] \[\leq 2\omega(T)\quad(\text{by the triangle inequality})\] \[=2\omega\left(\begin{bmatrix}O&A\\ B^{*}&O\end{bmatrix}\right),\]
_for any \(0\leq t\leq 1\). Thus, noting (1.5) we have_
\[\max\left\{\omega\left(\left(1-t\right)\left(B+A^{*}\right)-t \left(A+B^{*}\right)\right),\omega\left(\left(1-t\right)\left(B-A^{*}\right)+t \left(B^{*}-A\right)\right)\right\}\] \[\quad+\max\left\{\omega\left(\left(1-t\right)\left(A+B^{*} \right)+t\left(B+A^{*}\right)\right),\omega\left(\left(1-t\right)\left(A-B^{* }\right)+t\left(B-A^{*}\right)\right)\right\}\] \[\leq 2\omega\left(\begin{bmatrix}O&\left(1-t\right)A+tB\\ \left(1-t\right)B^{*}+tA^{*}&O\end{bmatrix}\right)+2\omega\left(\begin{bmatrix} O&\left(1-t\right)B-tA\\ \left(1-t\right)A^{*}-tB^{*}&O\end{bmatrix}\right)\] \[\leq 4\omega\left(\begin{bmatrix}O&A\\ B^{*}&O\end{bmatrix}\right).\]
_In particular,_
\[\max\left\{\left\|\mathfrak{I}A-\mathfrak{I}B\right\|,\left\| \mathfrak{R}A-\mathfrak{R}B\right\|\right\}+\max\left\{\left\|\mathfrak{R}A+ \mathfrak{R}B\right\|,\left\|\mathfrak{I}A+\mathfrak{I}B\right\|\right\}\] \[\leq\left\|A+B\right\|+\left\|A-B\right\|\] \[\leq 4\omega\left(\begin{bmatrix}O&A\\ B^{*}&O\end{bmatrix}\right). \tag{2.7}\]
_Also noting (1.6), by the second inequality in (2.7), we get the following interesting inequalities_
\[\frac{\left\|A+B\right\|+\left\|A-B\right\|}{2} \leq 2\omega\left(\begin{bmatrix}O&A\\ B^{*}&O\end{bmatrix}\right)\] \[\leq\omega\left(A+B^{*}\right)+\omega\left(A-B^{*}\right).\]
The following result provides an integral version of (1.4); where the numerical radius of convex combinations of operator matrices is used to refine the triangle inequality. Since its proof is similar to Theorem 2.4, we state it without details.
**Theorem 2.5**.: _Let \(A,B\in\mathcal{B}\left(\mathcal{H}\right)\). Then_
\[\left\|A+B\right\|\leq 2\int\limits_{0}^{1}\omega\left(\begin{bmatrix}O& \left(1-t\right)A+tB\\ \left(1-t\right)B^{*}+tA^{*}&O\end{bmatrix}\right)dt\leq\left\|A\right\|+ \left\|B\right\|.\]
The matrix operator \(\left[\begin{array}{cc}O&A\\ B^{*}&O\end{array}\right]\) is further used to obtain the following improvement of (1.8).
**Theorem 2.6**.: _Let \(A,B\in\mathcal{B}\left(\mathcal{H}\right)\). Then for any \(0\leq t\leq 1\),_
\[\left\|A+B\right\|+\frac{\omega\left(\begin{bmatrix}O&A\\ B^{*}&O\end{bmatrix}\right)-\omega\left(\begin{bmatrix}O&\left(1-t\right)A+tB\\ \left(1-t\right)B^{*}+tA^{*}&O\end{bmatrix}\right)}{R}\] \[\leq\max\left\{\left\|A\right\|,\left\|B\right\|\right\}+\frac{1}{2}\left(\left\||A|^{\frac{1}{2}}|B|^{\frac{1}{2}}\right\|+\left\||B^{*}|^{\frac{1}{2}}|A^{*}|^{\frac{1}{2}}\right\|\right),\]
_where \(R=\max\left\{t,1-t\right\}\). In particular, if \(A\) and \(B\) are self-adjoint, we get_
\[\left\|A+B\right\|+\frac{\omega\left(\begin{bmatrix}O&A\\ B&O\end{bmatrix}\right)-\omega\left(\begin{bmatrix}O&(1-t)\,A+tB\\ (1-t)\,B+tA&O\end{bmatrix}\right)}{R}\leq\max\left\{\left\|A\right\|,\left\|B\right\|\right\}+\left\||A|^{\frac{1}{2}}|B|^{\frac{1}{2}}\right\|.\]
Proof.: Combining (1.7) with the inequality (2.5), we infer the desired result.
**Remark 2.2**.: _It is worthwhile to mention here that if \(A\) and \(B\) are positive operators, then Theorem 2.6 reduces to [9]_
\[\left\|A+B\right\|\leq\max\left\{\left\|A\right\|,\left\|B\right\|\right\}+ \left\|A^{\frac{1}{2}}B^{\frac{1}{2}}\right\|.\]
_This follows from the following point for positive operators [1]_
\[\omega\left(\begin{bmatrix}O&(1-t)\,A+tB\\ (1-t)\,B+tA&O\end{bmatrix}\right)=\omega\left(\begin{bmatrix}O&A\\ B&O\end{bmatrix}\right)=\frac{1}{2}\left\|A+B\right\|. \tag{2.8}\]
Now we move to study inequalities for \(\omega(AB)\), where \(A,B\in\mathcal{B}(\mathcal{H})\). Interestingly, the following numerical radius inequality leads to a new proof of the arithmetic-geometric mean inequality for positive operators, as we shall see in Remark 2.3 below.
**Theorem 2.7**.: _Let \(A,B\in\mathcal{B}\left(\mathcal{H}\right)\). Then for any \(0\leq t\leq 1\),_
\[\omega^{\frac{1}{2}}\left(AB\right)\leq\frac{1}{2}\left\|A+B^{*}\right\|+\frac {\omega\left(\begin{bmatrix}O&A\\ B&O\end{bmatrix}\right)-\omega\left(\begin{bmatrix}O&(1-t)\,A+tB^{*}\\ (1-t)\,B+tA^{*}&O\end{bmatrix}\right)}{2r},\]
_where \(r=\min\left\{t,1-t\right\}\)._
Proof.: By the second inequality in (1.10), we have
\[2\omega\left(\begin{bmatrix}O&A\\ B^{*}&O\end{bmatrix}\right)\leq\left\|A+B\right\|+\frac{\omega\left( \begin{bmatrix}O&A\\ B^{*}&O\end{bmatrix}\right)-\omega\left(\begin{bmatrix}O&(1-t)\,A+tB\\ (1-t)\,B^{*}+tA^{*}&O\end{bmatrix}\right)}{r}.\]
Thus,
\[2\omega^{\frac{1}{2}}\left(AB\right)\] \[\leq 2\max\left\{\omega^{\frac{1}{2}}\left(AB\right),\omega^{\frac{1}{2}}\left(BA\right)\right\}\] \[=2\omega^{\frac{1}{2}}\left(\begin{bmatrix}AB&O\\ O&BA\end{bmatrix}\right)\] \[=2\omega^{\frac{1}{2}}\left(\begin{bmatrix}O&A\\ B&O\end{bmatrix}^{2}\right)\] \[\leq 2\omega\left(\begin{bmatrix}O&A\\ B&O\end{bmatrix}\right)\quad\text{(by (1.12))}\] \[\leq\|A+B^{*}\|+\frac{\omega\left(\begin{bmatrix}O&A\\ B&O\end{bmatrix}\right)-\omega\left(\begin{bmatrix}O&(1-t)\,A+tB^{*}\\ (1-t)\,B+tA^{*}&O\end{bmatrix}\right)}{r},\]
which completes the proof.
Now we use Theorem 2.7 to prove the following arithmetic-geometric mean inequality for positive operators.
**Remark 2.3**.: _Let \(A,B\in\mathcal{B}\left(\mathcal{H}\right)\) be two positive operators. It follows from Theorem 2.7,_
\[\left\|A^{\frac{1}{2}}B^{\frac{1}{2}}\right\| =r^{\frac{1}{2}}\left(AB\right)\quad\text{(since }r(AB)=\big{\|}A^{\frac{1}{2}}B^{\frac{1}{2}}\big{\|}^{2}\text{ for positive }A,B)\] \[\leq\omega^{\frac{1}{2}}\left(AB\right)\] \[\leq\frac{1}{2}\left\|A+B\right\|,\]
_where (2.8) has been used together with the fact that \(r(T)\leq\omega(T)\) for any \(T\in\mathcal{B}(\mathcal{H})\)._
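The refined arithmetic-geometric mean chain of Remark 2.3 can also be tested numerically, again reusing the numerical_radius sketch; psd_sqrt below is a hypothetical helper computing the positive square root via the spectral decomposition.

```python
import numpy as np

def psd_sqrt(A):
    """Positive square root via the spectral decomposition (hypothetical helper)."""
    w, V = np.linalg.eigh(A)
    return V @ np.diag(np.sqrt(np.clip(w, 0, None))) @ V.conj().T

rng = np.random.default_rng(2)
M = rng.standard_normal((4, 4)); A = M @ M.T     # random positive operators
M = rng.standard_normal((4, 4)); B = M @ M.T

lhs = np.linalg.norm(psd_sqrt(A) @ psd_sqrt(B), 2)
mid = np.sqrt(numerical_radius(A @ B))           # numerical_radius sketch, Section 1
rhs = 0.5 * np.linalg.norm(A + B, 2)
print(lhs <= mid + 1e-9 <= rhs + 1e-9)           # True on random positive samples
```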
While Theorem 2.7 provides an upper bound of \(\omega(AB)\) in terms of \(\begin{bmatrix}O&A\\ B&O\end{bmatrix}\), we have the following lower bound in terms of the same matrix operator.
**Theorem 2.8**.: _Let \(A,B\in\mathcal{B}\left(\mathcal{H}\right)\). Then_
\[\omega\left(\begin{bmatrix}O&A\\ B&O\end{bmatrix}\right)\leq\sqrt{\max\left\{\omega\left(AB\right),\omega \left(BA\right)\right\}+\inf_{\lambda\in\mathbb{C}}\left\|\begin{bmatrix}- \lambda I&A\\ B&-\lambda I\end{bmatrix}\right\|^{2}},\]
_where \(I\) is the identity operator in \(\mathcal{B}(\mathcal{H})\)._
Proof.: By the main result of [4], applied to \(T=\begin{bmatrix}O&A\\ B^{*}&O\end{bmatrix}\), we can write
\[\max\left\{\omega\left(AB^{*}\right),\omega\left(B^{*}A\right)\right\} =\omega\left(\left[\begin{matrix}AB^{*}&O\\ O&B^{*}A\end{matrix}\right]\right)\] \[=\omega\left(T^{2}\right)\] \[\geq\omega^{2}\left(T\right)-\inf_{\lambda\in\mathbb{C}}\left\|T- \lambda I\right\|^{2}\] \[=\omega^{2}\left(\left[\begin{matrix}O&A\\ B^{*}&O\end{matrix}\right]\right)-\inf_{\lambda\in\mathbb{C}}\left\|\left[ \begin{matrix}-\lambda I&A\\ B^{*}&-\lambda I\end{matrix}\right]\right\|^{2},\]
which completes the proof.
**Remark 2.4**.: _It follows from Theorem 2.8 that for \(X_{i}\in\mathcal{B}(\mathcal{H})\)\((i=1,2,3,4)\),_
\[\omega\left(\left[\begin{matrix}X_{1}&X_{2}\\ X_{3}&X_{4}\end{matrix}\right]\right)\] \[=\omega\left(\left[\begin{matrix}X_{1}&O\\ O&X_{4}\end{matrix}\right]+\left[\begin{matrix}O&X_{2}\\ X_{3}&O\end{matrix}\right]\right)\] \[\leq\omega\left(\left[\begin{matrix}X_{1}&O\\ O&X_{4}\end{matrix}\right]\right)+\omega\left(\left[\begin{matrix}O&X_{2}\\ X_{3}&O\end{matrix}\right]\right)\] \[\leq\max\left\{\omega\left(X_{1}\right),\omega\left(X_{4}\right) \right\}+\sqrt{\max\left\{\omega\left(X_{2}X_{3}\right),\omega\left(X_{3}X_{2} \right)\right\}+\inf_{\lambda\in\mathbb{C}}\left\|\left[\begin{matrix}-\lambda I &X_{2}\\ X_{3}&-\lambda I\end{matrix}\right]\right\|^{2}}.\]
**Remark 2.5**.: _Notice that_
\[r\left(X_{1}X_{2}+X_{3}X_{4}\right) =r\left(\left[\begin{matrix}X_{1}X_{2}+X_{3}X_{4}&O\\ O&O\end{matrix}\right]\right)\] \[=r\left(\left[\begin{matrix}X_{1}&X_{3}\\ O&O\end{matrix}\right]\left[\begin{matrix}X_{2}&O\\ X_{4}&O\end{matrix}\right]\right)\] \[=r\left(\left[\begin{matrix}X_{2}&O\\ X_{4}&O\end{matrix}\right]\left[\begin{matrix}X_{1}&X_{3}\\ O&O\end{matrix}\right]\right)\] \[=r\left(\left[\begin{matrix}X_{2}X_{1}&X_{2}X_{3}\\ X_{4}X_{1}&X_{4}X_{3}\end{matrix}\right]\right)\] \[\leq\omega\left(\left[\begin{matrix}X_{2}X_{1}&X_{2}X_{3}\\ X_{4}X_{1}&X_{4}X_{3}\end{matrix}\right]\right).\]
_If in the above inequality we put \(X_{1}=e^{\mathrm{i}\theta}A\), \(X_{2}=B\), \(X_{3}=e^{-\mathrm{i}\theta}B^{*}\), and \(X_{4}=A^{*}\), we reach_
\[\left\|\mathfrak{R}\mathrm{e}^{\mathrm{i}\theta}AB\right\|\leq\frac{1}{2}\omega\left(\begin{bmatrix}e^{\mathrm{i}\theta}BA&e^{-\mathrm{i}\theta}BB^{*}\\ e^{\mathrm{i}\theta}A^{*}A&\left(e^{\mathrm{i}\theta}BA\right)^{*}\end{bmatrix}\right). \tag{2.9}\]
_This indicates the relation between the numerical radius of the product of two operators and the numerical radius of \(2\times 2\) operator matrices._
_The case \(A=U|T|^{1-t}\) and \(B=|T|^{t}\), in (2.9), implies_
\[\left\|\mathfrak{R}\mathrm{e}^{\mathrm{i}\theta}T\right\|\leq\frac{1}{2}\omega\left(\begin{bmatrix}e^{\mathrm{i}\theta}\widetilde{T}_{t}&e^{-\mathrm{i}\theta}|T|^{2t}\\ e^{\mathrm{i}\theta}|T|^{2(1-t)}&\left(e^{\mathrm{i}\theta}\widetilde{T}_{t}\right)^{*}\end{bmatrix}\right),\quad 0\leq t\leq 1,\]
_where \(\widetilde{T}_{t}\) is the weighted Aluthge transform of \(T\), defined by \(\widetilde{T}_{t}=|T|^{t}U|T|^{1-t}\), \(U\) being the partial isometry appearing in the polar decomposition \(T=U|T|\)._
_Notice that if we replace \(A\) and \(B\) in (2.9) by \(\sqrt{\frac{\left\|B\right\|}{\left\|A\right\|}}A\) and \(\sqrt{\frac{\left\|A\right\|}{\left\|B\right\|}}B\), respectively, we also have_
\[\left\|\mathrm{Re}\left(e^{\mathrm{i}\theta}AB\right)\right\|\leq\frac{1}{2}\omega\left(\begin{bmatrix}e^{\mathrm{i}\theta}BA&e^{-\mathrm{i}\theta}\frac{\left\|A\right\|}{\left\|B\right\|}BB^{*}\\ e^{\mathrm{i}\theta}\frac{\left\|B\right\|}{\left\|A\right\|}A^{*}A&\left(e^{\mathrm{i}\theta}BA\right)^{*}\end{bmatrix}\right), \tag{2.10}\]
_and_
\[\left\|\mathrm{Re}\left(e^{\mathrm{i}\theta}T\right)\right\|\leq\frac{1}{2}\omega\left(\begin{bmatrix}e^{\mathrm{i}\theta}\widetilde{T}_{t}&e^{-\mathrm{i}\theta}\|T\|^{1-2t}|T|^{2t}\\ e^{\mathrm{i}\theta}\|T\|^{2t-1}|T|^{2(1-t)}&\left(e^{\mathrm{i}\theta}\widetilde{T}_{t}\right)^{*}\end{bmatrix}\right).\]
To better understand how the above relations help obtain the numerical radius of the product of two operators, we give an example. Recall that in [1, Corollary 2], Abu-Omar and Kittaneh proved that if \(\mathcal{H}_{1}\) and \(\mathcal{H}_{2}\) are Hilbert spaces and \(\mathbb{X}=\begin{bmatrix}X_{1}&X_{2}\\ X_{3}&X_{4}\end{bmatrix}\) is an operator matrix with \(X_{1}\in\mathcal{B}(\mathcal{H}_{1})\), \(X_{2}\in\mathcal{B}(\mathcal{H}_{2},\mathcal{H}_{1})\), \(X_{3}\in\mathcal{B}(\mathcal{H}_{1},\mathcal{H}_{2})\), and \(X_{4}\in\mathcal{B}(\mathcal{H}_{2})\), then
\[\omega\left(\mathbb{X}\right)\leq\frac{1}{2}\left(\omega\left(X_{1}\right)+ \omega\left(X_{4}\right)+\sqrt{\left(\omega\left(X_{1}\right)-\omega\left(X_{4 }\right)\right)^{2}+4\omega^{2}\left(\mathbb{E}\right)}\right),\]
where \(\mathbb{E}=\begin{bmatrix}O&X_{2}\\ X_{3}&O\end{bmatrix}\). In the same paper (see [1, Remark 6]), it has been shown that
\[\omega\left(\mathbb{E}\right)\leq\min\left\{\alpha_{1},\alpha_{2}\right\}\]
where
\[\alpha_{1}=\frac{1}{2}\sqrt{\left\|\left|X_{2}\right|^{2}+\left|X_{3}^{*}\right|^{2}\right\|+2\omega\left(X_{3}X_{2}\right)}\ \ \text{and}\ \ \alpha_{2}=\frac{1}{2}\sqrt{\left\|\left|X_{2}^{*}\right|^{2}+\left|X_{3}\right|^{2}\right\|+2\omega\left(X_{2}X_{3}\right)}.\]
Combining these two inequalities we get
\[\omega\left(\begin{bmatrix}X_{1}&X_{2}\\ X_{3}&X_{4}\end{bmatrix}\right)\leq\frac{1}{2}\left(\omega\left(X_{1}\right)+ \omega\left(X_{4}\right)+\sqrt{\left(\omega\left(X_{1}\right)-\omega\left(X_{4 }\right)\right)^{2}+4\min\left\{\alpha_{1}^{2},\alpha_{2}^{2}\right\}}\right).\]
Now, using this and (2.10), we have
\[\left\|\mathrm{Re}\left(e^{\mathrm{i}\theta}AB\right)\right\|\leq\frac{1}{2}\left(\omega\left(BA\right)+\min\left\{\beta_{1},\beta_{2}\right\}\right),\]
where
\[\beta_{1}=\frac{1}{2}\sqrt{\left\|\frac{\left\|A\right\|^{2}}{\left\|B\right\|^{2}}\left|B^{*}\right|^{4}+\frac{\left\|B\right\|^{2}}{\left\|A\right\|^{2}}\left|A\right|^{4}\right\|+2\,\omega\left(\left|A\right|^{2}\left|B^{*}\right|^{2}\right)},\]
and
\[\beta_{2}=\frac{1}{2}\sqrt{\left\|\frac{\left\|A\right\|^{2}}{\left\|B\right\|^{2}}\left|B^{*}\right|^{4}+\frac{\left\|B\right\|^{2}}{\left\|A\right\|^{2}}\left|A\right|^{4}\right\|+2\,\omega\left(\left|B^{*}\right|^{2}\left|A\right|^{2}\right)}.\]
This implies,
\[\omega\left(AB\right)\leq\frac{1}{2}\omega\left(BA\right)+\frac{1}{4}\sqrt{\left\|\frac{\left\|A\right\|^{2}}{\left\|B\right\|^{2}}\left|B^{*}\right|^{4}+\frac{\left\|B\right\|^{2}}{\left\|A\right\|^{2}}\left|A\right|^{4}\right\|+2\min\left\{\omega\left(\left|A\right|^{2}\left|B^{*}\right|^{2}\right),\omega\left(\left|B^{*}\right|^{2}\left|A\right|^{2}\right)\right\}}.\]
We also have by (2.9),
|
2302.09444 | mBEST: Realtime Deformable Linear Object Detection Through Minimal
Bending Energy Skeleton Pixel Traversals | Robotic manipulation of deformable materials is a challenging task that often
requires realtime visual feedback. This is especially true for deformable
linear objects (DLOs) or "rods", whose slender and flexible structures make
proper tracking and detection nontrivial. To address this challenge, we present
mBEST, a robust algorithm for the realtime detection of DLOs that is capable of
producing an ordered pixel sequence of each DLO's centerline along with
segmentation masks. Our algorithm obtains a binary mask of the DLOs and then
thins it to produce a skeleton pixel representation. After refining the
skeleton to ensure topological correctness, the pixels are traversed to
generate paths along each unique DLO. At the core of our algorithm, we
postulate that intersections can be robustly handled by choosing the
combination of paths that minimizes the cumulative bending energy of the
DLO(s). We show that this simple and intuitive formulation outperforms the
state-of-the-art methods for detecting DLOs with large numbers of sporadic
crossings ranging from curvatures with high variance to nearly-parallel
configurations. Furthermore, our method achieves a significant performance
improvement of approximately 50% faster runtime and better scaling over the
state of the art. | Andrew Choi, Dezhong Tong, Brian Park, Demetri Terzopoulos, Jungseock Joo, Mohammad Khalid Jawed | 2023-02-18T23:45:29Z | http://arxiv.org/abs/2302.09444v5 | # mBEST: Realtime Deformable Linear Object Detection
###### Abstract
Robotic manipulation of deformable materials is a challenging task that often requires realtime visual feedback. This is especially true for deformable linear objects (DLOs) or "rods", whose slender and flexible structures make proper tracking and detection nontrivial. To address this challenge, we present _mBEST_, a robust algorithm for the realtime detection of DLOs that is capable of producing an ordered pixel sequence of each DLO's centerline along with segmentation masks. Our algorithm obtains a binary mask of the DLOs and then thins it to produce a skeleton pixel representation. After refining the skeleton to ensure topological correctness, the pixels are traversed to generate paths along each unique DLO. At the core of our algorithm, we postulate that intersections can be robustly handled by choosing the combination of paths that minimizes the cumulative bending energy of the DLO(s). We show that this simple and intuitive formulation outperforms the state-of-the-art methods for detecting DLOs with large numbers of sporadic crossings and curvatures with high variance. Furthermore, our method achieves a significant performance improvement of approximately 40 FPS compared to the 15 FPS of prior algorithms, which enables realtime applications.
## I Introduction
As robots become increasingly intelligent and capable, developing robust and effective deformable material manipulation skills has received growing research attention [1]. Among various deformable objects, deformable linear objects (DLOs) -- typically referred to as "rods" by the mechanics community -- are a special group, including everyday objects such as cables, ropes, tubes, and threads. Due to their unique geometric characteristic (width \(\sim\) height \(\ll\) length), DLOs are widely used in various domestic and industrial applications, including surgical suturing [2], knot fastening [3, 4], cable manipulation [5, 6], food manipulation [7], mechanics analysis [8], and more. However, because of their flexibility, DLOs are often prone to complex tangling, which complicates manipulation. In addition, the complicated structures made by DLOs usually have unique topology-induced mechanical properties [9, 10, 11, 12, 13] and are, therefore, used to tie knots for sailing, fishing, climbing, and various other engineering applications. Given all of the above, a robust, efficient, and accurate perception algorithm for DLOs is crucial to both deformable material manipulation and soft robotics.
In this paper, we present an algorithm named _mBEST_ (Minimal Bending Energy Skeleton pixel Traversals) for robust, accurate, and fast instance segmentation of DLOs. Without any prior knowledge regarding the geometries, colors, and total number of DLOs, _mBEST_ takes a raw RGB image as input and provides a series of ordered pixels expressing the centerline of each unique DLO, thus allowing for the configurations of different DLOs to be easily incorporated into motion planning and manipulation schemes.
We implement the following sequence of processing procedures to achieve instance segmentation of DLOs. Like previous work [14], we first apply semantic image segmentation to achieve a binary mask of the DLOs against the background. We try two options for semantic segmentation. The first involves using a Deep Convolutional Neural Network (DCNN) segmentation model, resulting in binary masks of varying quality. The second is color filtering, which can achieve hyper-accurate binary masks but requires adequate color contrast between the DLOs and background. These two options are discussed in further detail in Sec. IV.
After a binary mask is achieved, we apply a thinning algorithm to the mask to obtain a skeleton pixel representation of the DLOs. The resulting representation preserves the connectivity and centerlines of the binary mask while being only a single pixel in width. Therefore, key points such as ends and intersections are easily detected. After a series of refinement steps to ensure topological correctness, the skeleton is then traversed, one end at a time, in a way that minimizes the cumulative bending energy of the DLOs until another end is encountered. Each traversal results in a unique DLO's centerline pixel coordinates, which can then be used to optionally produce segmentation masks. Fig. 1 presents a high-level overview of the _mBEST_ processing pipeline.
Overall, our main contributions in this article are to
1. develop a robust end-to-end pipeline for obtaining ordered centerline coordinates and segmentation masks of DLOs from images;
2. showcase that the relatively simple (and physically insightful) optimization objective of minimizing cumulative bending energy outperforms the state of the art (SOTA); and
3. achieve realtime performance that more than doubles the speed of the previous SOTA.
In addition to the above, we release all the source code and datasets (with ground truth) used at [https://github.com/StructuresComp/mBEST](https://github.com/StructuresComp/mBEST).
The remainder of the article is organized as follows: We proceed with a review of related work in Sec. II. The algorithmic formulation of _mBEST_ is then discussed extensively in Sec. III. In Sec. IV, we report our experimental results comparing _mBEST_ with SOTA approaches. Finally, we offer concluding remarks and discuss potential future research directions in Sec. V.
## II Related Work
Although research into developing manipulation skills for DLOs has been highly prevalent, the perception algorithms used in these efforts are often underdeveloped. For example, in the work of Tong et al. [8], attached markers are required for extracting the configuration of the manipulated DLO. Zhu et al. [5] carefully adjusted the workspace to increase the contrast between the manipulated DLOs (cables) and their background. Although these prior efforts successfully completed their target manipulation tasks, the simplistic perception algorithms restrict real-world applicability.
Consequently, DLO detection algorithms featuring various methodologies have been proposed. For example, Keipour et al. [15] evaluate both curvatures and distances to fit a continuous DLO. Using data-driven methods, Yan et al. [16] train a neural network to reconstruct the topology of a DLO based on a coarse-to-fine nodal representation. Though these methods achieve good results for some datasets, they work under the strict assumption that only one DLO exists within the scene, thus dramatically restricting their applicability.
One of the first perception algorithms capable of detecting multiple DLOs, _Ariadne_[17], segmented images into superpixels and traversed the superpixels belonging to DLOs in order to produce paths. The ambiguity of intersections is handled using a multi-faceted cost function that takes into consideration color, distance, and curvature. Despite its satisfactory performance, this early approach suffered from a large number of hyperparameters, an overreliance on DLOs being a uniform color, and the tedious requirement on the user to manually select the ends of DLOs. Furthermore, the processing speed of _Ariadne_ was on the order of seconds, precluding realtime operation.
In recent years, data-driven methods have attracted increasing attention in instance segmentation. In particular, researchers have shown that general instance segmentation problems can be tackled efficiently and accurately using Deep Convolutional Neural Networks (DCNNs) [18, 19, 20, 21]. Furthermore, several tools have been introduced to help synthetically generate large quantities of photorealistic data in order to adequately train such models [22, 23]. Using DCNNs, Zanella et al. [24] created segmentations of DLOs such as wires; however, the segmentations did not distinguish between each DLO.
Improving upon _Ariadne_, _Ariadne+_[25], like [24], utilizes a DCNN model to extract an initial binary mask of the DLOs. This allows the algorithm to then apply superpixel segmentation purely on the binary mask itself, significantly reducing the computation time. Paths are then generated in a similar fashion to the original _Ariadne_ algorithm by traversing superpixels while intersections are handled using a neural network to predict the most probable paths. The neural network uses the same three inputs as _Ariadne_: color, distance, and curvature. Despite these improvements, _Ariadne+_ is sub-realtime; i.e., less than 3 FPS.
The state-of-the-art _FASTDLO_[14] improves upon _Ariadne+_'s speed by forgoing superpixel segmentation altogether. Instead, it uses a skeleton pixel representation of the DLO binary mask for path traversals. Intersections are then also handled by a neural network. By replacing superpixel segmentation with skeletonization, _FASTDLO_ is reportedly able to achieve a realtime performance of 20 FPS for images of size \(640\times 360\) pixels.
Though _Ariadne+_ and _FASTDLO_ are considered state-of-the-art DLO perception algorithms, both algorithms have been evaluated only on scenes with DLOs with relatively smooth curvatures and minimal self-loops. Our experiments will show that both algorithms struggle to solve complicated
Fig. 1: Pipeline overview of _mBEST_. An input image (a) is converted to a binary mask (b) using a segmentation method. This binary mask is then converted to a skeleton pixel representation (c), where the connectivity and centerlines of the DLOs are preserved as a single-pixel-width structure, and keypoints such as intersections and ends are then detected. This is followed by a series of refinement steps to maintain the topological correctness of the skeleton: split ends (d1) are pruned (d2) and pixels representing a single topological intersection (e1) are clustered and replaced with a more intuitive intersection (e2). Finally, paths for each DLO are generated (f) by traversing skeleton pixels and choosing the path that minimizes cumulative bending energy.
configurations (e.g., DLOs with highly variable curvatures resulting in many crossings and tangles) correctly.
By comparison, our _mBEST_ robustly solves complex scenes using the simple notion that the most probable path is that which minimizes cumulative bending energy. Not only does it outperform the SOTA on complex scenes, but we also achieve realtime performance more than double that of _FASTDLO_. The key algorithmic differences between _mBEST_ and the SOTA are summarized in Table I. Overall, we argue that the use of neural networks to solve intersections yields unsatisfactory results once input scenes stray away from the training data distributions, and we show that using a more principled formulation with physical insight can outperform black box neural network approaches for a wide variety of scenes containing DLOs.
## III Methodology
Our algorithm may be divided into the following steps:
1. DLO Segmentation
2. Skeletonization
3. Keypoint Detection
4. Pruning Split Ends
5. Intersection Clustering and Replacement
6. Minimal Bending Energy Path Generation
7. Computing DLO Radii and Crossing Order
The following sections will describe each step in detail.
### _DLO Segmentation_
The first step in detecting the DLOs is to obtain a binary mask \(\mathbf{M}_{\text{dlo}}\) of the image that distinguishes all DLO-related pixels from the background. As mentioned previously, we use two semantic segmentation methods: a DCNN segmentation model and color filtering. In particular, we use _FASTDLO_'s pretrained DCNN model [14] in our experiments. To eliminate noise, morphological closing and opening operations are performed on the binary mask to remove any hollow areas.
The initial semantic segmentation method is not a key contribution of _mBEST_. Rather, it is a modular component of the pipeline, allowing for different methods to be plugged in depending on the use case.
### _Skeletonization_
The next step of the algorithm is to convert \(\mathbf{M}_{\text{dlo}}\) to a skeleton mask \(\mathbf{M}_{\text{sk}}\) as shown in Fig. 1(b-c). \(\mathbf{M}_{\text{sk}}\) is useful as both the connectivity and general topology of the DLOs are maintained. Furthermore, as segments are only 1 pixel in width, traversals along segments are not susceptible to path ambiguity. To achieve skeletonization, we use an efficient thinning algorithm designed specifically for 2D images; refer to [26] for the details.
### _Keypoint Detection_
Once a skeleton pixel representation is obtained, we can then detect two types of key points: ends and intersections. Locating ends is crucial as they serve as the start and finish points for skeleton pixel traversals. They also indirectly tell us the number of DLO(s) in the image as \(n_{\text{dlo}}=n_{\text{ends}}/2\) (assuming the initial binary mask maintained connectivity). Locating intersections is crucial as these represent the only points at which a pixel traversal will have multiple possible routes. Therefore, care must be given in choosing the correct path when passing through an intersection.
To detect ends and intersections, a skeleton pixel classification kernel,
\[\mathbf{K}=\begin{bmatrix}1&1&1\\ 1&10&1\\ 1&1&1\end{bmatrix},\]
can be used to apply a convolution \(\mathbf{M}_{\text{sk}}\circledast\mathbf{K}\) along the skeleton mask. We can then obtain all end pixels \(\mathbf{E}\) as pixels that have a value of 11 (1 neighbor) and all intersection pixels \(\mathbf{I}\) as pixels having a value greater than 12 (3 or more neighbors).
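As an illustration, this keypoint classification amounts to a few lines of Python. The sketch below is ours (not taken from the released implementation) and assumes an 8-connected binary skeleton mask:

```
import numpy as np
from scipy.ndimage import convolve

def detect_keypoints(skel):
    # skel: binary skeleton mask (H, W) with values in {0, 1}
    K = np.array([[1,  1, 1],
                  [1, 10, 1],
                  [1,  1, 1]])
    conv = convolve(skel.astype(int), K, mode='constant') * skel  # keep skeleton pixels only
    ends = np.argwhere(conv == 11)           # exactly 1 neighbor
    intersections = np.argwhere(conv >= 13)  # 3 or more neighbors
    return ends, intersections
```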
After obtaining both \(\mathbf{E}\) and \(\mathbf{I}\), additional work must be done to obtain the correct representative sets. For example, end pixels that are unindicative of a topological end may be produced from a noisy binary mask. These "split ends" will then inadvertently produce intersection pixels themselves as shown in Fig. 2. Additionally, a single topological intersection will result in either two Y-shaped divides or a single X-shaped divide as shown in Fig. 3(a). Such pixels must be clustered together accordingly and a single point for the intersection must be determined. In the case of a skeleton possessing two Y-shaped divides in regards to a single intersection, the intersection must also be replaced with an X-shaped divide that more accurately represents the true centerlines of the DLOs. The next two sections will cover the pruning and clustering operations in detail.
### _Pruning Split Ends_
When the boundary of the binary mask \(\mathbf{M}_{\text{dlo}}\) is jagged, the skeleton mask \(\mathbf{M}_{\text{sk}}\) may contain several types of split ends as shown in Fig. 2. Such split ends must be identified and pruned as they do not accurately represent the topology of the DLO(s) and will result in incorrect start points as well as cause path ambiguity during pixel traversals.
Note that the length of a split end can be at most the radius of the DLO it is sprouting from. Therefore, the length of every split end should fall within a threshold \(\delta\ll\) the length of the DLO. Given this, for every end in \(\mathbf{E}\), we can traverse along its segment until one of three things occurs:
1. an intersection is encountered before traversing \(\delta\) pixels,
2. an end is encountered before traversing \(\delta\) pixels,
3. or neither was encountered after traversing \(\delta\) pixels.
For conditions (1) and (2), we remove the segment that was just traversed from \(\mathbf{M}_{\text{sk}}\) as well as the corresponding end from \(\mathbf{E}\). For condition (1) specifically, we must also remove from \(\mathbf{I}\) all intersection pixels that were produced by the pruned split end. For any endpoint that satisfies condition (3), we simply do nothing. This concludes the elimination of noise-induced end and intersection pixels.
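A minimal sketch of this pruning test (ours, assuming a clean 8-connected skeleton that does not touch the image border) could look as follows:

```
import numpy as np

OFFSETS = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]

def prune_split_end(skel, end, intersections, delta):
    # Walk from an end pixel for at most delta steps; return the traversed
    # segment when it terminates early (conditions (1) and (2)), else None.
    inter = {tuple(p) for p in intersections}
    path, prev, cur = [tuple(end)], None, tuple(end)
    for _ in range(delta):
        nbrs = [(cur[0] + dy, cur[1] + dx) for dy, dx in OFFSETS
                if skel[cur[0] + dy, cur[1] + dx] and (cur[0] + dy, cur[1] + dx) != prev]
        if cur in inter or len(nbrs) != 1:   # hit an intersection or another end
            return path                      # condition (1) or (2): prune this segment
        prev, cur = cur, nbrs[0]
        path.append(cur)
    return None                              # condition (3): a genuine end, keep it
```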
### _Intersection Clustering and Replacement_
As mentioned previously in Sec. III-C, a single topological intersection can result in either a 2Y or X-shaped branching as shown in Fig. 3(a). Furthermore, each of these branches may have several intersection pixels, i.e., pixels with 3 or more neighbors. Our goal then is to group each pixel in \(\mathbf{I}\) to a single branch and then group each branch to its true topological intersection. With all the intersection pixels properly grouped, we can then define a single intersection pixel that represents the true center of a crossing for all crossings.
We perform clustering using Density-based Spatial Clustering of Applications with Noise (DBSCAN) [27], a clustering algorithm that groups data points lying within a distance threshold \(\epsilon\) of each other. In our case, \(\epsilon\) refers to Euclidean pixel distance. This algorithm is highly convenient as it does not necessitate prior knowledge of the number of clusters and, thus, of the number of intersections.
To properly group all the intersection pixels in \(\mathbf{I}\), two phases of clustering are performed:
1. (\(\epsilon=2\)) All adjacent pixels in \(\mathbf{I}\) are clustered together. Each cluster is then averaged to create a new \(\mathbf{I}\).
2. (\(\epsilon>2\)) All pixels in the new \(\mathbf{I}\) are clustered together with a higher \(\epsilon\). This is to account for intersections that produced multiple Y-shaped branches. A new \(\mathbf{I}\) is then created using the averages of the clusters, as in step 1.
This two-step clustering process (shown in Fig. 3(b)) is crucial as skipping the first clustering step can result in the final center intersection pixel being heavily biased towards a particular branch. After the correct set of intersection pixels \(\mathbf{I}\) is obtained, we can then replace all clustered Y-shaped branches with an X-shaped branch as shown in Fig. 3(c). This is done by simply removing all segments within a square window of the new intersection pixel and then creating new segments that sprout from the new intersection. Note that Fig. 3(c) shows that we record new "ends" when replacing the intersection. These ends are recorded so that we know that an intersection is upcoming during a pixel traversal. This allows us to then take the correct precomputed path, which is discussed in the next section.
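Sketched with scikit-learn's DBSCAN, the two-phase grouping might read as follows (the helper and its defaults are ours; the second-phase \(\epsilon\) corresponds to the clustering threshold swept in Sec. IV):

```
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_intersections(I, eps_near=2, eps_far=45):
    # Phase 1: merge adjacent intersection pixels into per-branch clusters.
    labels = DBSCAN(eps=eps_near, min_samples=1).fit_predict(I)
    branches = np.array([I[labels == k].mean(axis=0) for k in np.unique(labels)])
    # Phase 2: merge nearby branch centroids belonging to one topological intersection.
    labels = DBSCAN(eps=eps_far, min_samples=1).fit_predict(branches)
    return np.array([branches[labels == k].mean(axis=0) for k in np.unique(labels)])
```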
### _Minimal Bending Energy Path Generation_
For rods that have nonuniform curvatures, bending energy must be computed in a discretized fashion. Given this, if we discretize a rod into \(N\) nodes and \(N-1\) edges, then the total bending energy becomes
\[E_{b}=\frac{1}{2}\frac{EI}{dl}\sum_{k=1}^{N-2}(\kappa_{k}-\kappa_{k}^{0})^{2}, \tag{1}\]
where \(EI\) is the bending stiffness, \(\kappa_{k}\) and \(\kappa_{k}^{0}\) are the deformed and undeformed discrete dimensionless curvatures at node \(k\in[1,N-2]\), and \(dl\) is the Voronoi length. For our DLOs, we make the assumption that the undeformed curvature is always a straight configuration (\(\kappa^{0}=0\)). It is then trivial to see that minimizing the bending energy of an elastic rod is the same as minimizing the discrete curvatures.
The norm of the discrete dimensionless curvature for a node \(k\) can easily be computed using the unit tangent vectors
Fig. 2: Examples of split ends that may occur during the skeletonization process. Row (a) showcases split ends that may occur at an actual topological end, while row (b) showcases a split end along a segment produced by a jagged mask. For both examples, the first column shows the binary mask; the second column shows the split end after skeletonization, and the third column showcases the topologically correct structure after pruning.
of the adjacent edges [28]:
\[\bar{\kappa}_{k}=\left\|\frac{2\mathbf{t}^{k-1}\times\mathbf{t}^{k}}{1+\mathbf{t }^{k-1}\cdot\mathbf{t}^{k}}\right\|, \tag{2}\]
where \(\mathbf{t}^{k-1}\) and \(\mathbf{t}^{k}\) are the unit tangent vectors of the \(k-1\)-th and \(k\)-th edges, respectively.
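For reference, Eqs. (1) and (2) translate directly into NumPy. The sketch below (ours) takes an ordered \((N,2)\) array of centerline pixels and assumes a straight undeformed configuration, \(\kappa^{0}=0\):

```
import numpy as np

def discrete_curvatures(path):
    # path: (N, 2) ordered centerline coordinates
    e = np.diff(path, axis=0).astype(float)           # N-1 edges
    t = e / np.linalg.norm(e, axis=1, keepdims=True)  # unit tangents
    cross = t[:-1, 0]*t[1:, 1] - t[:-1, 1]*t[1:, 0]   # scalar 2D cross product
    dot = (t[:-1] * t[1:]).sum(axis=1)
    return np.abs(2 * cross / (1 + dot))              # |kappa_k|, Eq. (2)

def bending_energy(path, EI=1.0, dl=1.0):
    k = discrete_curvatures(path)
    return 0.5 * (EI / dl) * np.sum(k**2)             # Eq. (1) with kappa^0 = 0
```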
Note that the only time we have to choose between multiple paths is at an intersection as traversals through segments are unambiguous. Also recall that all intersections have been replaced with new segments in an X-shaped pattern. Using the new ends shown in Fig. 3(c), we can then compute the combination of paths that minimizes the cumulative bending energy of the DLOs using the optimization scheme
\[(\mathbf{p}_{1}^{1},\mathbf{p}_{1}^{2}),(\mathbf{p}_{2}^{1},\mathbf{p}_{2}^{2})=\mathop{\arg\min}_{\mathbf{a},\mathbf{b},\mathbf{c},\mathbf{d}}\lVert\kappa_{1}\rVert+\lVert\kappa_{2}\rVert, \tag{3}\]
\[\mathbf{t}_{1}^{1}=\frac{\mathbf{i}-\mathbf{p}_{1}^{1}}{\lVert\mathbf{i}-\mathbf{p}_{1}^{1}\rVert},\ \mathbf{t}_{1}^{2}=\frac{\mathbf{p}_{1}^{2}-\mathbf{i}}{\lVert\mathbf{p}_{1}^{2}-\mathbf{i}\rVert},\ \mathbf{t}_{2}^{1}=\frac{\mathbf{i}-\mathbf{p}_{2}^{1}}{\lVert\mathbf{i}-\mathbf{p}_{2}^{1}\rVert},\ \mathbf{t}_{2}^{2}=\frac{\mathbf{p}_{2}^{2}-\mathbf{i}}{\lVert\mathbf{p}_{2}^{2}-\mathbf{i}\rVert},\]
\[\kappa_{1}=\frac{2\,\mathbf{t}_{1}^{1}\times\mathbf{t}_{1}^{2}}{1+\mathbf{t}_{1}^{1}\cdot\mathbf{t}_{1}^{2}},\quad\kappa_{2}=\frac{2\,\mathbf{t}_{2}^{1}\times\mathbf{t}_{2}^{2}}{1+\mathbf{t}_{2}^{1}\cdot\mathbf{t}_{2}^{2}},\]
where \(\mathbf{i}\) is the intersection pixel and \((\mathbf{a},\mathbf{b},\mathbf{c},\mathbf{d})\) are the new recorded ends from the intersection replacement. A visual example of this optimization can be seen in Fig. 3(d) where out of the 3 possible combinations of paths, the one that minimizes total curvature is selected. With the paths through intersections properly precomputed, the skeleton pixel traversals to obtain each DLO's centerline can now take place. Pseudocode pertaining to the full pipeline can be seen in Alg. 1.
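A compact sketch of this search (ours, not the released implementation) scores the three possible pairings of the four recorded ends with Eq. (2) and keeps the combination that is cheapest per Eq. (3); antiparallel tangents yield an unbounded cost and are thus never selected:

```
import numpy as np

def _bend(p_in, i, p_out):
    # discrete curvature magnitude of the path p_in -> i -> p_out, per Eq. (2)
    t1 = (i - p_in) / np.linalg.norm(i - p_in)
    t2 = (p_out - i) / np.linalg.norm(p_out - i)
    return abs(2 * (t1[0]*t2[1] - t1[1]*t2[0]) / (1 + np.dot(t1, t2)))

def best_pairing(i, ends):
    # ends: the four stub endpoints (a, b, c, d) recorded around intersection i
    a, b, c, d = (np.asarray(e, dtype=float) for e in ends)
    i = np.asarray(i, dtype=float)
    pairings = [((a, b), (c, d)), ((a, c), (b, d)), ((a, d), (b, c))]
    costs = [_bend(p1, i, p2) + _bend(q1, i, q2) for (p1, p2), (q1, q2) in pairings]
    return pairings[int(np.argmin(costs))]
```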
### _Computing DLO Radii and Crossing Order_
The final step of the pipeline concerns computing the pixel radii of the DLOs (to create the optional segmentation masks) as well as ascertaining which DLO is resting on top of the other at intersections. To solve both problems, we use modified versions of _FASTDLO_'s solutions [14].
Similar to _FASTDLO_, to compute the radii of the DLOs, we first perform a distance transform on the binary mask \(\mathbf{M}_{\text{dlo}}\), which results in a new mask \(\mathbf{M}_{\text{dist}}\) containing, at each pixel, the distance to the closest 0-valued pixel. Departing from _FASTDLO_, we then use the average distance value of \(\mathbf{M}_{\text{dist}}\) along a DLO's centerline as the radius of that DLO when creating segmentation masks. Using the average in this manner allows for smooth segmentations even if the initial binary mask itself is noisy. For segments of the DLO near the edges of the image, we use the pixel-specific values of \(\mathbf{M}_{\text{dist}}\) themselves.
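As an illustration, our sketch of this step uses SciPy's Euclidean distance transform (the released code may use a different transform):

```
import numpy as np
from scipy.ndimage import distance_transform_edt

def dlo_radius(mask, centerline):
    # mask: binary DLO mask; centerline: (n, 2) ordered (row, col) pixels
    dist = distance_transform_edt(mask)            # per-pixel distance to background
    r = dist[centerline[:, 0], centerline[:, 1]]
    return r.mean(), r  # mean radius for masks; per-pixel values near image borders
```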
To compute crossing order at intersections, we use the precomputed optimal paths created in Fig. 3(d). Crossing order can then be determined by computing the sum of the standard deviations of the RGB channels of the pixels along each path. The path that has the lower sum is then assumed to be the one on top. Though this solution from _FASTDLO_ works fairly well, we noticed that failures could occur due to glare along the centerline. This glare issue could even cause failures for intersections involving two DLOs of completely different colors. Therefore, to eliminate the influence of glare, we compute the standard deviations of the intersection path pixels on a blurred image, rather than the original input image.
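A sketch of this glare-robust test (ours; the Gaussian blur and its \(\sigma\) are illustrative assumptions) could read:

```
import numpy as np
from scipy.ndimage import gaussian_filter

def top_path(img, path_a, path_b, sigma=3.0):
    # Blur first so specular glare along a centerline does not inflate the score.
    blurred = gaussian_filter(img.astype(float), sigma=(sigma, sigma, 0))
    def color_spread(path):
        px = blurred[path[:, 0], path[:, 1]]  # (n, 3) RGB samples along the path
        return px.std(axis=0).sum()           # sum of per-channel standard deviations
    return path_a if color_spread(path_a) < color_spread(path_b) else path_b
```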
## IV Experimental Results
### _Experiments and Datasets_
We conduct a total of 3 experiments using two datasets. The first experiment consists of segmenting a dataset of 10 images from [14] where a series of wires can be seen against complex backgrounds. Though the background is complex, the wires themselves possess relatively smooth curvatures and no self-loops. For this "complex background" dataset, the DLO segmentation method used is _FASTDLO_'s DCNN model. Ground truth images for the complex background dataset are created using the photo editing software Photopea [29].
The second and third experiments consist of segmenting a dataset of 50 images consisting of up to two uniquely colored elastic rods against a simple black background using DCNN and color filter segmentation, respectively. Using both methods lets us observe the effects of the initial DLO segmentation on overall performance. This "simple background" dataset consists of DLOs with highly varying
Fig. 3: The intersection clustering, replacement, and optimal path generation pipeline. Two sample intersections are shown where skeletonization results in a (a1) 2Y-shaped crossing and an (a2) X-shaped crossing. As 2Y-shaped crossings are topologically incorrect, we replace them by (b) clustering the intersection pixels in two stages. The first stage involves grouping adjacent pixels, while the second involves grouping nearby clusters. Using the centroid location of the cluster, we can then (c) replace the intersection by creating new ends and having new segments sprout and connect to the centroid. Finally, the new generated ends and segments can be used to (d) discover the combination of paths that minimizes the cumulative bending energy of the DLO.
curvatures and numerous self-loops. Ground truth images for the simple background dataset are created simply using the color filters themselves. We focus mostly on images with a simple black background since the initial binary mask segmentation is not a key focus of our algorithm. Having said this, we show that _mBEST_ still works for complex backgrounds in Fig. 6. All experiments were run using an Intel i9-9900KF CPU and an NVIDIA RTX 2080 Ti GPU.
### _Baselines and Parameters_
We test our algorithm against two state-of-the-art baselines: _Ariadne+_[25] and _FASTDLO_[14]. For complex background images, the number of superpixels for _Ariadne+_ was set to 75, while simple background images used 200 superpixels. Both these values were chosen as optimal after performing a parameter sweep on each dataset.
For all experiments involving the DCNN model, pixel segmentation thresholds of 50 and 77 (0-255) were used for the simple and complex backgrounds, respectively. Furthermore, though _Ariadne+_ has its own neural network for initial segmentation of the DLOs, we exchange this for _FASTDLO_'s DCNN model for consistency and because the latter model performs better. Parameters for _mBEST_ involve an intersection clustering threshold of \(\epsilon=75\) and minimum pixel length threshold of \(\delta=40\) for complex background images. For simple background images, (\(\epsilon=45,\delta=25\)) and (\(\epsilon=40,\delta=25\)) were used for segmentations with DCNN and color filtering, respectively.
### _Results_
For experiments, we report two key metrics. First, we look at segmentation accuracy using the popular metric DICE. As all methods share the same initial segmentation method for creating the binary mask, we do not treat the background as one of the classes when computing the DICE scores. In addition to this, we also report the average run times for each algorithm in terms of frames per second (FPS). Both metrics can be seen listed in Table II for all experiments.
Fig. 4 showcases results for the complex background dataset. For this dataset, all three algorithms have relatively similar performance, with _mBEST_ slightly edging out the others. As this dataset was originally used to evaluate _FASTDLO_, it comes as no surprise that all three algorithms perform well, especially since the configurations of the DLOs themselves are quite simple. Though the segmentation accuracy of the algorithms is similar, _mBEST_ achieves roughly a
Fig. 4: Sample segmentations for the complex background dataset. Each row showcases segmentation results for a different image. From left to right, the first column showcases the original image. Columns 2-4 showcase _Ariadne+_, _FASTDLO_, and _mBEST_ results, respectively. Finally, the fifth column showcases ground truth.
12\(\times\) runtime improvement over _Ariadne+_ and 1.67\(\times\) runtime improvement over _FASTDLO_.
Where we start to see a large improvement in performance is in the images containing complex configurations of DLOs as shown in Fig. 5. For these images, _mBEST_ and _Ariadne+_ significantly outperform _FASTDLO_ when using both DCNN and color filtering approaches, with _mBEST_ slightly beating the latter. We suspect that _Ariadne+_'s use of curvature as one of its neural network inputs helps it keep up with _mBEST_. Despite this, we can see that _Ariadne+_ is still susceptible to choosing incorrect paths when encountering intersections with high curvatures as shown in various examples in Fig. 5. In terms of runtime, we see that _mBEST_ achieves up to a 2.66\(\times\) speedup over _FASTDLO_ and up to a staggering 53.86\(\times\) speedup over _Ariadne+_ when using color filtering. These improvements decrease when using the DCNN model, which indicates that the DCNN forward pass alone takes a significant amount of computation time. Overall, we show that our physically insightful intersection handling scheme can lead to more robust results when compared to blackbox neural networks.
### _Failure Cases_
Though _mBEST_ performs quite well for detecting complex tangles, it is still reliant on correctly setting \(\epsilon\). Setting \(\epsilon\) off by even a few pixels can result in improper handling
Fig. 5: Sample segmentations for the simple background dataset. Each row showcases segmentation results for a different image. From left to right, the first column showcases the original image. Columns 2-4 showcase results for _Ariadne+_, _FASTDLO_, and _mBEST_ using DCNN segmentation, respectively. Columns 5-7 do the same except with color filter segmentation. Finally, ground truth can be seen in the last column.
Fig. 6: Examples of _mBEST_ with DCNN segmentation working for scenes with complex DLO configurations and backgrounds.
of intersections, as shown in Fig. 7, where \(\epsilon=40\) results in several topological intersections being incorrectly clustered as one. Removing this hyperparameter reliance will be a focus of future work.
## V Conclusion
In this work, we have introduced _mBEST_, an end-to-end pipeline for DLO segmentation that improves upon the SOTA in both accuracy and computational speed. Through a wide variety of experiments, we have shown that _mBEST_ can robustly handle scenes with highly tangled DLOs by simply generating paths that minimize cumulative bending energy. For future work, we would like to explore solutions that account for occlusions and strands touching in parallel, as well as remove the dependency on the hyperparameter \(\epsilon\).
|
2301.10445 | High-Throughput Rate-Flexible Combinational Decoders for Multi-Kernel
Polar Codes | Polar codes have received growing attention in the past decade and have been
selected as the coding scheme for the control channel in the fifth generation
(5G) wireless communication systems. However, the conventional polar codes have
only been constructed by binary (2x2) kernel which poses block length
limitation to powers of 2. To attain more flexible block lengths, multi-kernel
polar codes are proposed. In this paper, a combinational architecture for
multi-kernel polar codes with high throughput is proposed based on successive
cancellation decoding algorithm. The proposed scheme can decode pure-binary,
pure-ternary (3x3), and binary-ternary mixed polar codes. The decoder's
architecture is rate-flexible meaning that a new code rate can be assigned to
the decoder at every clock cycle. The proposed architecture is validated by
FPGA implementation and the results reveal that a code of size N=81 gains the
coded throughput of 1664.5 Mbps. A novel Python-based polar compiler is also
proposed to automatically generate the HDL modules for target decoders. A
designer can input the target block length and kernel ordering of a polar code,
and get the required VHDL files automatically. Based on our simulations, the
majority of the required HDL files can be generated in less than 0.4 seconds. | Hossein Rezaei, Nandana Rajatheva, Matti Latva-aho | 2023-01-25T07:45:51Z | http://arxiv.org/abs/2301.10445v1 | # High-Throughput Rate-Flexible Combinational Decoders for Multi-Kernel Polar Codes
###### Abstract
Polar codes have received growing attention in the past decade and have been selected as the coding scheme for the control channel in the fifth generation (5G) wireless communication systems. However, the conventional polar codes have only been constructed by binary \((2\times 2)\) kernel which poses block length limitation to powers of \(2\). To attain more flexible block lengths, multi-kernel polar codes are proposed. In this paper, a combinational architecture for multi-kernel polar codes with high throughput is proposed based on successive cancellation decoding algorithm. The proposed scheme can decode pure-binary, pure-ternary \((3\times 3)\), and binary-ternary mixed polar codes. The decoder's architecture is rate-flexible meaning that a new code rate can be assigned to the decoder at every clock cycle. The proposed architecture is validated by FPGA implementation and the results reveal that a code of size \(N=81\) gains the coded throughput of \(1664.5\) Mbps. A novel Python-based polar compiler is also proposed to automatically generate the HDL modules for target decoders. A designer can input the target block length and kernel ordering of a polar code, and get the required VHDL files automatically. Based on our simulations, the majority of the required HDL files can be generated in less than \(0.4\) seconds.
Polar code, successive-cancellation decoder, multi-kernel, error-correcting codes, hardware implementation, polar compiler.
## I Introduction
Polar codes were introduced by Arikan [1] as a family of error-correcting codes with the capability to achieve the symmetric channel capacity of the binary-input discrete memoryless channel when the code length approaches infinity. Using a recursive construction, a polar code of size \(N=2^{n}\) can be constructed by the \(n\)th Kronecker power of the binary matrix \(T_{2}=\left[\begin{smallmatrix}1&0\\ 1&1\end{smallmatrix}\right]\), also known as Arikan's kernel. This construction converts the physical channel into \(N\) virtual synthetic channels whose reliabilities approach either zero or one as the code length grows.
Arikan in [1] also proved that polar codes can achieve the symmetric channel capacity using the successive cancellation (SC) decoding algorithm. Thenceforth, researchers have intensively sought to improve polar codes of limited size in terms of decoding latency under SC, complexity, power, and error-correction performance. Successive cancellation list (SCL) [2] decoding concatenated with CRC [3] improves the error-correction performance of polar codes, allowing them to compete with other channel coding methods like low-density parity-check (LDPC) codes. These efforts laid the foundation for the adoption of polar codes in the 3GPP fifth generation new radio (5G-NR) wireless communication standards [4].
The majority of current research, however, has focused on polar codes constructed by Arikan's kernel [1]. Using a \(2\times 2\) polarization matrix limits the block length of polar codes to powers of \(2\), which cannot address all demanding code lengths and rates in beyond-5G networks. Rate-matching schemes such as puncturing and shortening [5, 6] have been proposed to cover non-power-of-two block lengths. However, the a priori performance and optimality of punctured and shortened codes are hard to evaluate. The polarization phenomenon attained by Kronecker powers of the binary kernel can also be extended to other kernels. The ternary (\(3\times 3\)) kernel in particular has been receiving increased attention due to its low complexity and polarization optimality. Using this approach, multi-kernel (MK) polar codes [7, 8, 9] offer flexible code lengths by employing kernels with different dimensions. They are characterized by the same computational complexity as Arikan's polar codes and outperform similar punctured and shortened codes in terms of error-correction performance and complexity [8, 10].
In terms of hardware implementation, multiple decoders have been proposed for decoding Arikan's polar codes. The work in [11] adapts the architecture for Arikan's polar codes of [12] to MK codes constructed by Arikan's and ternary kernels. An architecture for decoding MK polar codes with reduced latency is proposed in [10]. However, the two aforementioned MK decoders of [10] and [11] use complex memory interfaces for reading/writing data from/to the memory in binary and ternary stages, resulting in reduced coded throughput.
The motivation of this paper is to develop a flexible architecture to obtain high-throughput MK polar decoders with low power consumption based on the SC algorithm. We address this motivation by using pure combinational architectures. The recursive and feed-forward structure of the SC algorithm facilitates the adaption of pure combinational logic to polar decoders. The SC-based combinational decoders are fully scalable and operate at considerably lower frequencies compared to their sequential counterparts. However, they are able to decode an entire codeword in one long clock cycle, which substantially reduces the dynamic power. A key characteristic of the proposed architecture is online rate assignment for a given block length, meaning that a new code rate can be assigned to the decoder at every clock cycle.
In [13] we proposed the first MK combinational decoder. It has, however, two limitations. First, it does not support pure-ternary polar codes. Second, it does not provide the required memory to load the next log-likelihood ratio (LLR) frame and frozen bit indicator, and also to offload the estimated codeword. Lacking these memories limits throughput, since the input data is not ready when the previous frame has been decoded. Along with detailed performance and complexity analysis, these limitations are addressed in this work. The proposed architecture supports 55 different block lengths with a maximum block length of \(N_{max}=4096\) constructed by pure-binary, pure-ternary, and binary-ternary mixed kernels. An FPGA implementation and comparison to state-of-the-art MK decoders is conducted in order to validate the architecture.
Finally, we propose a Python-based hardware compiler to automate the process of generating the VHDL files required for the FPGA implementation of the proposed decoders. The motivation is that by changing the block length or kernel ordering, the entire set of VHDL modules is subject to change. Using the proposed polar compiler, once the user inputs the block length and kernel ordering, the compiler automatically outputs all necessary VHDL files. In case the user does not enter a kernel order, the compiler automatically assigns the kernel ordering with the highest error-correction performance.
The remainder of this paper is organized as follows. In Section II, we present a background on Arikan's and MK polar codes. The code construction method along with the proposed architecture of the MK decoder and complexity analysis are explained in Section III. Section IV discusses the process of automatic generation of VHDL files for target decoders using the proposed polar compiler. The implementation results and comparison to previous works are detailed in section V. Finally, a conclusion will be given in section VI.
## II Preliminaries
In this section along with a background on polar codes, we provide the code construction methods of Arikan's and MK polar codes. Then, the SC algorithms for decoding Arikan's and MK codes will be summarised.
### _Arikan's Polar Codes_
\(\mathcal{PC}(N,K)\) denotes a polar code of size \(N\)=\(2^{n}\) with \(K\) bits of information, where the code rate can be calculated as \(\mathcal{R}=K/N\). The channel polarization phenomenon was proved by Arikan in [1] for binary polar codes; it can be used to transform the physical channel \(W\) into \(N\) individual virtual channels \(W_{i}^{N}\) (\(1\leq i\leq N\)). The resulting virtual channels feature relatively increased or decreased reliabilities, and as \(N\rightarrow\infty\), the reliability of each channel approaches either \(1\) (perfectly reliable) or \(0\) (perfectly unreliable). Either the Bhattacharyya parameters [1] or the Gaussian approximation [14] can be used to designate the individual reliable channels. The set of \(K\) most reliable bit positions is called the information set, indicated by \(\mathcal{I}\), and the remaining \(N\)-\(K\) bit locations are called the frozen set, denoted by \(\mathcal{F}\). The bit values in frozen set locations are set to zero.
A linear transformation can be used to construct polar codes. It is expressed as \(x=uG\), where \(x\) indicates the encoded stream, \(u\) denotes an \(N\)-bit input vector to the encoder, and \(G\) is the generator matrix. The input vector \(u\) is constructed by inserting the message and frozen data into the reliable and unreliable positions, respectively. The generator matrix \(G=T_{2}^{\otimes n}\) is constructed by the \(n\)-th Kronecker power of Arikan's kernel \(T_{2}=\left[\begin{smallmatrix}1&0\\ 1&1\end{smallmatrix}\right]\). It can be seen from the definition that \(G\) is constructed in a recursive way. As a result, a polar code of size \(N\) can be constructed by concatenating two codes of size \(N/2\).
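For illustration, the recursive Kronecker construction and the encoding map amount to a few lines of NumPy (our sketch, using the kernel defined above):

```
import numpy as np

T2 = np.array([[1, 0],
               [1, 1]])

def generator(n):
    G = np.array([[1]])
    for _ in range(n):
        G = np.kron(G, T2)  # G = T2 kron ... kron T2 (n times)
    return G

def encode(u, G):
    return (u @ G) % 2      # x = uG over F_2

x = encode(np.array([0, 0, 0, 1, 0, 1, 1, 1]), generator(3))  # N = 8 example
```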
### _MK Polar Codes_
By exploiting \(T_{2}\) as the only kernel of the generator matrix, the block lengths of the resulting codes are limited to powers of \(2\). However, taking the LDPC WiMAX [15] code lengths as a guideline, we find that code lengths constructed by kernels other than \(T_{2}\) are needed. Utilizing only one or a few non-binary kernels, however, provides most of the desired code lengths. A series of Kronecker products between various kernels can construct the generator matrix as
\[G\triangleq T_{n_{0}}\otimes T_{n_{1}}\otimes\ldots\otimes T_{n_{s}} \tag{1}\]
for a code of size \(N=n_{0}\times n_{1}\times\ldots\times n_{s}\), with the \(n_{i}\)'s (\(0\leq i\leq s\)) being not necessarily distinct prime numbers and the \(T_{n_{i}}\)'s being square matrices. Any prime number can serve as a kernel dimension. However, the least complex and most desirable non-binary kernel, the ternary kernel, is defined as \(T_{3}=\left[\begin{smallmatrix}1&1&1\\ 1&0&1\\ 0&1&1\end{smallmatrix}\right]\)[8]. The polarization optimality of \(T_{3}\) is proved in [16], though it has a lower polarization exponent compared to \(T_{2}\). In this paper, we investigate codes constructed by any combination of binary and ternary kernels, which translates to pure-binary (Arikan's), pure-ternary, and binary-ternary mixed polar codes.
The block length of the MK codes in this paper can be formulated as \(N=2^{n}\cdot 3^{m}\), with a generator matrix \(G=\otimes_{i=1}^{n+m}T_{n_{i}}\), where \(n,m\in\mathbb{N}\) and \(1\leq i\leq n+m\). As an example, we consider the simplest MK polar code of size \(N=6\). There are two possible kernel sequences, \(T_{2}\otimes T_{3}\) and \(T_{3}\otimes T_{2}\), which result in two different generator matrices since the Kronecker product is not commutative. Thus, different kernel orderings shape distinct polar codes with distinctive performance characteristics. The Tanner graph of the MK code of size \(N=6\) with \(G=T_{2}\otimes T_{3}\) is illustrated in Fig. 1 (a).
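The non-commutativity is easy to verify numerically; the following sketch (ours) builds both \(N=6\) generator matrices:

```
import numpy as np

T2 = np.array([[1, 0],
               [1, 1]])
T3 = np.array([[1, 1, 1],
               [1, 0, 1],
               [0, 1, 1]])

def mk_generator(kernels):
    G = np.array([[1]])
    for T in kernels:
        G = np.kron(G, T)
    return G

G23 = mk_generator([T2, T3])     # G = T2 kron T3
G32 = mk_generator([T3, T2])     # G = T3 kron T2
print(np.array_equal(G23, G32))  # False: the Kronecker product is not commutative
```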
### _Arikan's and MK Successive-Cancellation Decoding_
The SC decoding algorithm was originally proposed by Arikan in [1]. This algorithm can be expanded to decode MK polar codes. The decoder tree of Fig. 1 (b) corresponds to the Tanner graph of Fig. 1 (a). The soft information (\(\alpha_{c}\)) enters the root of the tree in the form of LLRs. Under the convention that the left child is visited first, the LLRs need to traverse the tree and visit all leaves sequentially so that a codeword can be estimated. To this end, three functions are required at a given node \(\nu\). The first is \(\alpha_{v_{l}}\), the set of LLRs to be transferred to the left branch, which can be computed as
\[\alpha_{v_{l}}^{b}[i]=\operatorname{sgn}\left(\alpha_{v}[i]\cdot\alpha_{v}[i+2^{(\lambda-1)}]\right)\min\left(|\alpha_{v}[i]|,|\alpha_{v}[i+2^{(\lambda-1)}]|\right) \tag{2}\]
where \(i\in[0:2^{(\lambda-1)}-1]\) and \(\lambda\in[0,n]\) is the level of node \(\nu\) in the binary tree. After estimating the hard decisions of the left branch at node \(\nu\) (\(\beta_{v_{l}}^{b}\)), we can calculate the set of LLRs to be sent to the right branch using
\[\alpha_{v_{r}}^{b}[i]=(1-2\beta_{v_{l}}^{b}[i])\,\alpha_{v}[i]+\alpha_{v}[i+2^{(\lambda-1)}]\ \text{ for }i\in[0:2^{(\lambda-1)}-1] \tag{3}\]
where \(\alpha_{v_{r}}^{b}\) is the LLR of the right branch. After estimating the hard decision bits at the left and right branches, we can combine them to calculate the hard decisions corresponding to node \(\nu\) by
\[[\beta_{i}^{\nu b},\beta_{i+2^{(\lambda-1)}}^{\nu b}]=[\beta_{i}^{\nu b_{l}}\oplus\beta_{i}^{\nu b_{r}},\,\beta_{i}^{\nu b_{r}}], \tag{4}\]
where \(\oplus\) represents addition over \(\mathbb{F}_{2}\). In case \(\nu\) is a leaf node, the hard decision can be estimated as
\[\beta_{v}=\begin{cases}h(\alpha_{v}),&\text{if }v\in\mathcal{I},\\ 0,&\text{if }v\in\mathcal{F}\end{cases},\ h(x)=\begin{cases}0,&\text{if }x\geq 0,\\ 1,&\text{otherwise}.\end{cases} \tag{5}\]
Throughout the paper, we refer to (2), (3) and (4) as \(f^{b}\), \(g^{b}\) and \(C^{b}\), respectively, as shown in Fig. 1 (b). There are also two general functions that need to be defined: a binary sign function \(s(x)\) and a frozen-bit indicator vector \(a\), formulated as
\[s(x)=\begin{cases}0,&\text{if }x\geq 0,\\ 1,&\text{otherwise},\end{cases}\qquad a_{i}=\begin{cases}0,&\text{if }i\in\mathcal{F},\\ 1,&\text{if }i\in\mathcal{I}.\end{cases} \tag{6}\]
A precise pseudo-code of binary SC decoding algorithm is given in the Algorithm 1.
```
N = length(α)
if N == 2 then
    β₀ ← s(f^b(α)) · a(0)
    β₁ ← s(g^b(α, β₀)) · a(1)
    return β ← (β₀, β₁)
else
    α′ ← f^b_{N/2}(α)
    a′ ← a(0 to N/2-1)
    β′ ← Decode(α′, a′)
    v′ ← C^b(β′(0 to N/4-1), β′(N/4 to N/2-1))
    α″ ← g^b_{N/2}(α, v′)
    a″ ← a(N/2 to N-1)
    β″ ← Decode(α″, a″)
    v″ ← C^b(β″(0 to N/4-1), β″(N/4 to N/2-1))
    return β^b ← C^b(v′, v″)
end if
```
**Algorithm 1**\(\beta^{b}=\) Decode(\(\alpha\), \(a\)) using binary SC
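For reference, a straightforward software model of Eqs. (2)-(5) (our Python sketch, not the combinational hardware description) reads:

```
import numpy as np

def f_b(a1, a2):              # Eq. (2): min-sum check-node update
    return np.sign(a1) * np.sign(a2) * np.minimum(np.abs(a1), np.abs(a2))

def g_b(a1, a2, b):           # Eq. (3): b holds the left branch's hard decisions
    return (1 - 2*b) * a1 + a2

def sc_decode(alpha, a):
    # Recursive SC per Eqs. (2)-(5). Returns (u_hat, beta), where beta is the
    # partial-sum vector combined by C^b of Eq. (4) and passed up the tree.
    N = len(alpha)
    if N == 1:
        u = int(alpha[0] < 0) if a[0] else 0  # Eq. (5)
        leaf = np.array([u])
        return leaf, leaf
    a1, a2 = alpha[:N//2], alpha[N//2:]
    u_l, b_l = sc_decode(f_b(a1, a2), a[:N//2])
    u_r, b_r = sc_decode(g_b(a1, a2, b_l), a[N//2:])
    return np.concatenate([u_l, u_r]), np.concatenate([b_l ^ b_r, b_r])

# Noiseless round-trip check (LLR > 0 encodes bit 0), with G built as in the
# earlier sketch and u set to zero on frozen positions:
#   x = (u @ G) % 2;  u_hat, _ = sc_decode(1.0 - 2.0*x, a);  u_hat == u
```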
In the case of a pure-ternary node \(\nu\) located at level \(\lambda\), four functions need to be defined to meet the message-passing criterion. The decoding functions corresponding to the left, middle, and right branches are denoted by \(\alpha_{v_{l}}^{t}\), \(\alpha_{v_{c}}^{t}\), and \(\alpha_{v_{r}}^{t}\), respectively. For \(i\in[0:3^{(\lambda-1)}-1]\), \(\alpha_{v_{l}}^{t}\) is calculated as
\[\alpha_{v_{l}}^{t}[i]=\operatorname{sgn}\left(\alpha_{v}[i]\cdot\alpha_{v}[i+3^{(\lambda-1)}]\cdot\alpha_{v}[i+2\times 3^{(\lambda-1)}]\right)\min\left(|\alpha_{v}[i]|,|\alpha_{v}[i+3^{(\lambda-1)}]|,|\alpha_{v}[i+2\times 3^{(\lambda-1)}]|\right). \tag{7}\]
After calculating the hard decisions from the left branch (\(\beta_{v_{l}}^{t}\)), the LLRs can travel to the middle branch using
\[\alpha_{v_{c}}^{t}[i]=(1-2\beta_{v_{l}}^{t}[i])\,\alpha_{v}[i]+f^{b}\left(\alpha_{v}[i+3^{(\lambda-1)}],\alpha_{v}[i+2\times 3^{(\lambda-1)}]\right). \tag{8}\]
Finally, using the hard decisions from the left and middle branches, the LLR vector can proceed to the right branch by
\[\alpha_{v_{r}}^{t}[i]=(1-2\beta_{v_{l}}^{t}[i])\,\alpha_{v}[i+3^{(\lambda-1)}]+(1-2(\beta_{v_{l}}^{t}[i]\oplus\beta_{v_{c}}^{t}[i]))\,\alpha_{v}[i+2\times 3^{(\lambda-1)}]. \tag{9}\]
Now, the hard decisions can be combined at node \(\nu\) using
\[[\beta_{i}^{\nu t},\beta_{i+3^{(\lambda-1)}}^{\nu t},\beta_{i+2\times 3^{(\lambda-1)}}^{\nu t}]=[\beta_{i}^{\nu t_{l}}\oplus\beta_{i}^{\nu t_{c}},\,\beta_{i}^{\nu t_{l}}\oplus\beta_{i}^{\nu t_{r}},\,\beta_{i}^{\nu t_{l}}\oplus\beta_{i}^{\nu t_{c}}\oplus\beta_{i}^{\nu t_{r}}]. \tag{10}\]
As can be seen in Fig. 1 (b), equations (7), (8), (9) and (10) are referred to as \(f^{t}\), \(g_{1}^{t}\), \(g_{2}^{t}\) and \(C^{t}\), respectively. An accurate statement of ternary SC decoding is given in Algorithm 2.
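The ternary update rules have an equally direct software model; the sketch below is ours, assumes integer hard-decision arrays, and mirrors Eqs. (7)-(10):

```
import numpy as np

def f_b(a1, a2):
    return np.sign(a1) * np.sign(a2) * np.minimum(np.abs(a1), np.abs(a2))

def f_t(a1, a2, a3):                 # Eq. (7)
    s = np.sign(a1) * np.sign(a2) * np.sign(a3)
    return s * np.minimum(np.abs(a1), np.minimum(np.abs(a2), np.abs(a3)))

def g1_t(a1, a2, a3, b_l):           # Eq. (8)
    return (1 - 2*b_l) * a1 + f_b(a2, a3)

def g2_t(a2, a3, b_l, b_c):          # Eq. (9)
    return (1 - 2*b_l) * a2 + (1 - 2*(b_l ^ b_c)) * a3

def c_t(b_l, b_c, b_r):              # Eq. (10): combine the three hard-decision thirds
    return np.concatenate([b_l ^ b_c, b_l ^ b_r, b_l ^ b_c ^ b_r])
```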
## III MK codes: Construction and Architecture
### _MK Code Construction_
A code construction method for MK polar codes that yields a significant error-correction performance improvement over the puncturing [17] and shortening [18] methods is proposed in [8]. The encoding complexity of this method remains low, and the same general structure of Arikan's polar codes can be used for decoding. The error-correction performance of the MK polar code of size \(N=72\) and \(\mathcal{R}=1/2\) with generator matrix \(G=T_{3}\otimes T_{2}\otimes T_{2}\otimes T_{2}\otimes T_{3}\) is depicted in Fig. 2. Binary phase-shift keying (BPSK) modulation over an additive white Gaussian noise (AWGN) channel is used in our simulations. Clearly, the performance of the MK code exceeds that of punctured and shortened codes constructed using a mother code of \(N^{\prime}=128\).
Arbitrary kernel orderings can be employed to construct the MK polar codes. However, given that the Kronecker product is not commutative, different kernel orderings represent distinctive error-correction performance behaviors [8, 9]. At the present time, no theoretical way is available to find the best kernel ordering. Therefore, for a given code
we need to perform simulations to find the kernel order offering the best error-correction performance. Throughout this paper, the method proposed in [9] is used to find the kernel orderings since it outperforms [8] in terms of error-correction performance as depicted in Fig. 3. As mentioned earlier, the LDPC WiMAX block lengths [15] are used as our guideline. Thus we mainly focus on the codes desired by this standard.
In terms of complexity, MK codes feature substantially lower complexity than the puncturing and shortening methods. This stems from the fact that the Tanner graph of MK polar codes is smaller in comparison to that of the puncturing and shortening methods, which employ a mother code of size \(N^{\prime}=2^{\lceil\log_{2}N\rceil}\). The mother code determines the complexity of punctured and shortened codes. A complexity metric can be defined as the overall required number of LLR computations in each method. We assume that the number of kernels used in the code construction is \(s\), which equals the number of stages in the code's Tanner graph. Using \(N\times s\) and \(N^{\prime}\log_{2}N^{\prime}\), we can compute the complexity metric of MK and punctured/shortened codes, respectively. Fig. 4 demonstrates the complexity gain of MK codes with reference to punctured and shortened codes for a variety of block lengths. Obviously,
Fig. 3: The error-correction performance of MK code of size \(\mathcal{PC}(144,72)\) constructed by methods of [8] (MK-SCL) and [9] (MK-SCL-MD).
Fig. 2: The error-correction performance of MK code of \(\mathcal{PC}(72,36)\) compared to that of puncturing and shortening methods.
MK codes offer lower LLR computational complexity ranging from \(32.5\%\) to \(62.5\%\) compared to that of punctured and shortened codes.
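The metric is simple enough to evaluate inline; for example (our sketch):

```
import math

def complexity_gain(N, s):
    # MK metric: N*s LLR computations; punctured/shortened mother code: N' log2 N'
    Np = 2 ** math.ceil(math.log2(N))
    return 1 - (N * s) / (Np * math.log2(Np))

# e.g., N = 72 = 2*2*2*3*3, i.e., s = 5 stages
print(f"{complexity_gain(72, 5):.1%}")  # ~59.8%, within the reported 32.5%-62.5% range
```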
### _Proposed Decoder Architecture_
Given that the SC algorithm contains no loops, we can design a purely combinational architecture that includes no memory elements between the input and output stages. The prime objective is to obtain high throughput. In this section, we first describe the method of implementing the belief propagation functions. Afterwards, the proposed combinational architectures of Arikan's, pure-ternary and binary-ternary mixed polar codes will be described.
#### III-B1 Belief Propagation functions
To design the combinational functions, similar to [19, 20] and in order to avoid conversions between different representations, we use \(Q\) bits to represent the channel observation LLRs in sign-magnitude format. By directly employing equation (2), the \(f^{b}\) function can be implemented with a comparator and a multiplexer. The pre-computation look-ahead approach can be used at any depth to decrease the latency of the SC algorithm [21] at the cost of hardware complexity. Using this technique, all possible output candidates can be pre-computed in one clock cycle, and the correct candidate can be selected afterwards. In all of Arikan's codes of this paper, the polar code of size \(N=4\) is used as the basic building block. The \(g_{2}^{b}\) function in the kernel is implemented by exploiting the pre-computation method. The proposed logic for the pre-computation circuitry exploited in the implementation of the binary basic building block of size \(N=4\) is depicted in Fig. 5. The frozen bit indicators (\(a\)) are not shown here. The implementation of the proposed binary decision logic circuit corresponding to a polar code of size \(N=2\), using only comparators, multiplexers, and logic gates, is illustrated in Fig. 6. To further alleviate the latency and complexity of Arikan's polar codes, the decision logic of Fig. 6 estimates the bits with indices of the form \(2i\) and \(2i\)+\(1\) (\(0\leq i<N/2\)) as
\[\beta_{2i}=(s(\alpha_{2i})\oplus s(\alpha_{2i+1}))\cdot a_{2i}, \tag{11}\]
\[\beta_{2i+1}=\begin{cases}s(\alpha_{2i+1})\cdot a_{2i+1}&\text{if }|\alpha_{2i+1}|\geq|\alpha_{2i}|\\ (s(\alpha_{2i})\oplus\beta_{2i})\cdot a_{2i+1}&\text{otherwise,}\end{cases} \tag{12}\]
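A bit-accurate software model of this decision logic is straightforward and can serve as a reference for hardware verification. The sketch below implements (11) and (12) on signed LLRs; variable names are ours, and the sign-magnitude packing of the actual hardware is abstracted away:

```
# Minimal software model of the N = 2 decision logic of (11)-(12).
# s(x) is the sign bit (1 if negative); a0, a1 are frozen-bit
# indicators (0 forces the estimated bit to zero), as in Fig. 6.
def sign(llr):
    return 1 if llr < 0 else 0

def decide_n2(alpha0, alpha1, a0, a1):
    beta0 = (sign(alpha0) ^ sign(alpha1)) & a0      # equation (11)
    if abs(alpha1) >= abs(alpha0):                  # equation (12)
        beta1 = sign(alpha1) & a1
    else:
        beta1 = (sign(alpha0) ^ beta0) & a1
    return beta0, beta1

# e.g. decide_n2(-3, 5, 1, 1) -> (1, 0)
```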
The pseudo-code of the proposed SC decoding of binary polar codes is summarized in Algorithm 3. There are two key differences between the proposed algorithm and that of [19]. The first is the way we implement the decision logic. Second, the architecture of [19] exploits an encoder of size \(N/2\) in each stage as part of the glue logic, whereas our proposed architecture employs \(C^{b}\) instead. This leads to consuming substantially fewer XOR gates. This fact translates directly into lower latency, as \(C^{b}\) is composed of only one layer of XOR gates, which has considerably lower latency than an encoder of size \(N/2\). For instance, in a polar code of size \(N=32\), \(42.3\%\) of the overall XOR gates can be saved. This modification also reduces the consumption of interconnect resources. Saving interconnect resources is important because interconnect congestion is a phenomenon that limits the performance of large combinational circuits implemented on FPGAs.
In order to implement the combinational ternary decoder, the three functions \(f^{t}\), \(g_{1}^{t}\) and \(g_{2}^{t}\) need to be implemented. Similar to \(f^{b}\), and using equation (7), we can implement the \(f^{t}\) function with comparators and multiplexers.
Fig. 4: The complexity gain of MK codes versus punctured and shortened codes using the defined complexity metric.
Fig. 5: The binary pre-computation circuitry employed in the binary basic building block of size \(N=4\).
Fig. 6: The binary decision logic equivalent to a polar code of size \(N=2\).
```
N = length(alpha)
if N == 2 then
    beta_0 <- (s(alpha(0)) XOR s(alpha(1))) . a(0)
    if abs(alpha(1)) >= abs(alpha(0)) then
        beta_1 <- s(alpha(1)) . a(1)
    else
        beta_1 <- (s(alpha(0)) XOR beta_0) . a(1)
    end if
    return beta <- (beta_0, beta_1)
else if N == 4 then
    alpha' <- f^b_2(alpha)
    a' <- a(0 to 1)
    beta' <- Decode(alpha', a')
    v' <- C^b(beta'(0), beta'(1))
    alpha''_0 <- g^b_2(alpha, (0,0))    -- pre-computation of all four
    alpha''_1 <- g^b_2(alpha, (0,1))    -- candidate partial-sum inputs
    alpha''_2 <- g^b_2(alpha, (1,0))
    alpha''_3 <- g^b_2(alpha, (1,1))
    a'' <- a(2 to 3)
    beta''_0 <- Decode(alpha''_0, a'')
    beta''_1 <- Decode(alpha''_1, a'')
    beta''_2 <- Decode(alpha''_2, a'')
    beta''_3 <- Decode(alpha''_3, a'')
    if v'(0) == 0 then
        if v'(1) == 0 then v'' <- C^b(beta''_0(0), beta''_0(1))
        else v'' <- C^b(beta''_1(0), beta''_1(1)) end if
    else
        if v'(1) == 0 then v'' <- C^b(beta''_2(0), beta''_2(1))
        else v'' <- C^b(beta''_3(0), beta''_3(1)) end if
    end if
    return beta <- (v', v'')
else
    alpha' <- f^b_{N/2}(alpha)
    a' <- a(0 to N/2-1)
    beta' <- Decode(alpha', a')
    v' <- C^b(beta'(0 to N/4-1), beta'(N/4 to N/2-1))
    alpha'' <- g^b_{N/2}(alpha, v')
    a'' <- a(N/2 to N-1)
    beta'' <- Decode(alpha'', a'')
    v'' <- C^b(beta''(0 to N/4-1), beta''(N/4 to N/2-1))
    return beta <- C^b(v', v'')
end if
```
**Algorithm 3** \(\beta^{b}=Decode(\alpha,a)\) using the proposed approach
Fig. 9 depicts the generalized combinational architecture of a decoder of size \(N\) constructed using Arikan's kernel. An Arikan's decoder of size \(N\) is composed of two decoders of size \(N/2\) glued by a \(f^{b}\), a \(g^{b}\), and a \(C^{b}\). A decoder of size \(N=4\) is exploited as the basic building block of Arikan's decoders. Unlike the decoder of [19], the proposed decoder includes the required memory to load the next LLR frame and frozen bit indicator set, and also the necessary memory to offload the earlier estimated codeword.
Fig. 10 illustrates the proposed combinational architecture of a pure-ternary polar code of size \(N\). A decoder of size \(N\) is constructed by three decoders of size \(N/3\). The glue logic includes one \(f^{t}\), one \(g^{t}_{1}\), one \(g^{t}_{2}\) and two \(C^{t}\) of size \(N/3\). A decoder of size \(N=3\) is employed as the basic building block in this case. The memory structure is similar to that of the proposed Arikan scheme.
As a result of the recursive nature of the SC algorithm, a MK code of size \(N\) can be constructed by a mixed design of Arikan's and ternary architectures. The kernel order determines whether to use \(N=3\) or \(N=4\) as the decision logic circuitry. In case the kernel order does not meet the condition to use \(N=4\) as the decision circuitry, it can be replaced by \(N=2\). The only difference is that the pre-computation method is no longer used. However, in all MK codes of this paper, the condition for using \(N=4\) as the decision circuit is met. After selecting the basic building block, the kernel order determines whether to use a binary or ternary glue circuitry as the top stage. We continue employing the glue logic stages until the target block code is constructed.
The architecture of a MK polar code of size \(N=6\) corresponding to the decoder tree of Fig. 1 (b) with \(G=T_{3}\otimes T_{2}\) is displayed in Fig. 11. To avoid congestion, the registers and frozen bit indicators are not shown here. A binary decision-making circuitry (\(N=2\)) is exploited as the basic building block of this decoder, as the last kernel in the kernel sequence is a \(T_{2}\). Having a \(T_{3}\) as the next kernel, the glue logic is composed of one \(f^{t}\), one \(g^{t}_{1}\), one \(g^{t}_{2}\), and three binary combine logics (\(C^{b}\)). A ternary combine function (\(C^{t}\)) is also employed to receive the estimated codeword at the root of the tree before writing it into the output registers. Whether a binary or ternary combine logic is used before writing the estimated codeword into the output registers is determined by the first kernel in the kernel sequence (\(T_{3}\) here).
#### III-B3 Memory Architecture
The proposed Arikan's, pure-ternary and MK combinational architectures occupy \(N\times(Q+2)\) register bits. These memory elements are utilized to store the input LLRs (\(N\times Q\) bits), estimated codeword (N bits),
Fig. 10: The proposed combinational architecture of the pure-ternary decoder.
Fig. 9: The proposed combinational architecture of the Arikan’s decoder.
and frozen bit indicator set (N bits). It can be seen from Fig. 9, Fig. 10 and Fig. 11 that there are no synchronous logic elements, i.e., registers or RAM arrays, between the input and output registers. This characteristic of combinational decoders leads to power and processing-time efficiency. Removing the RAM routers also reduces hardware complexity and eliminates long read/write latencies. The latency of the decoder is one clock cycle, since it generates the estimated codeword in one long clock cycle after accepting the input LLRs. Therefore, the logic between the input and output registers determines the overall critical path.
### _Complexity Analysis of the MK Decoder_
The complexity of the proposed architecture can be expressed as the total number of the basic building blocks i.e. comparators, adders, and subtractors in the design. Let's assume that \(c_{N}^{b}\) indicates the number of comparators utilized in implementing \(f^{b}\). It can be seen in Algorithm 3 that the initial value of the consumed comparators for the Arikan's decoder is \(c_{4}^{b}=2\). It is shown in [19] that the total number of basic building blocks of a combinational-logic-based Arikan's decoder of size \(N\) with \(c_{4}^{b}=2\) can be estimated as
\[c_{N}^{b}+s_{N}^{b}+r_{N}^{b}=N(\frac{3}{2}\log_{2}(N)-1)\approx\frac{3}{2}N \log_{2}(N), \tag{16}\]
where \(s_{N}^{b}\) and \(r_{N}^{b}\) express the number of comparators consumed in the decision logic and the total number of adders and subtractors employed in implementing \(g^{b}\), respectively. Equation (16) proves that the complexity of the Arikan's combinational decoder is in the order of \(\mathcal{O}(N\log_{2}(N))\).
For a pure-ternary polar code, the number of comparators (\(c_{N}^{t}\)) used for implementing \(f^{t}\) and \(g_{1}^{t}\) for a decoder of size \(N\) has the recursive relationship of
\[c_{N}^{t}=3c_{N/3}^{t}+N=3(3c_{N/9}^{t}+\frac{N}{3})+N=\ldots. \tag{17}\]
Using Algorithm 4, we can initialize \(c_{N}^{t}\) with \(c_{3}^{t}=3\). The recursion in (17) then evaluates exactly to \(N\log_{3}(N)\). Knowing that the number of comparators used in the decision logic for \(N=3\) is \(s_{3}^{t}=3\), we obtain \(s_{N}^{t}=N\).
Finally, the number of adders and subtractors can be estimated. The function \(g_{1}^{t}\) is implemented with one adder and one subtractor, and the function \(g_{2}^{t}\) with two adders and two subtractors. As a result, the total number of adders and subtractors can be computed as \(r_{N}^{t}=\frac{3}{2}c_{N}^{t}\). Thus, the number of basic logic blocks of a pure-ternary decoder can be estimated as
\[c_{N}^{t}+s_{N}^{t}+r_{N}^{t}=N(\frac{5}{2}\log_{3}(N)+1)\approx\frac{5}{2}N\log_{3}(N). \tag{18}\]
Equation (18) verifies that the complexity of the pure-ternary combinational decoder is in the order of \(\mathcal{O}(N\log_{3}(N))\). Using (16) and (18), it is obvious that, comparing Arikan's and pure-ternary polar codes with block lengths in the same range (take \(N^{b}=2048\) and \(N^{t}=2187\) as an example), the complexity of Arikan's codes is lower than that of pure-ternary codes. This is due to the fact that the ternary belief propagation functions and decision logic circuits are more complex than Arikan's. Therefore, we can conclude that the complexity of the proposed mixed-kernel decoders with \(N_{min}=2\) and \(N_{max}=4096\) is lower-bounded by \(\frac{3}{2}N\log_{2}(N)\) and upper-bounded by \(\frac{5}{2}N\log_{3}(N)\), i.e.
\[\frac{3}{2}N\log_{2}(N)\leq c_{N}^{MK}\leq\frac{5}{2}N\log_{3}(N). \tag{19}\]
It should be noted that providing a general equation for complexity analysis of MK codes is not possible as it directly depends on the number and location of different kernels in the kernel sequence.
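The closed forms (16) and (18) and the bound (19) can nevertheless be evaluated numerically for any block length of interest, as in the following sketch (our own code, purely illustrative):

```
# Numeric evaluation of the complexity expressions above; no decoder logic.
from math import log

def binary_blocks(n):          # equation (16)
    return n * (1.5 * log(n, 2) - 1)

def ternary_blocks(n):         # equation (18)
    return n * (2.5 * log(n, 3) + 1)

def mk_bounds(n):              # equation (19)
    return 1.5 * n * log(n, 2), 2.5 * n * log(n, 3)

# e.g. binary_blocks(2048) = 31744 < ternary_blocks(2187) = 40460,
# matching the comparison of N^b = 2048 and N^t = 2187 above.
for n in (72, 144, 2048, 2187):
    lo, hi = mk_bounds(n)
    print(f"N={n:5d}: lower bound {lo:9.0f}, upper bound {hi:9.0f}")
```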
## IV Auto-Generation of Combinational Polar Decoders
### _High-Level Synthesis_
The process of transforming a higher-level description of algorithms or behaviors into a register-transfer level (RTL) description is known as high-level synthesis (HLS) [22]. HLS tools transform high-level programming languages such as C/C++ or Python into a hardware description language (HDL). In the MK combinational architecture, all the HDL sub-modules need to be modified when the block length or the kernel order is changed. The proposed algorithms can automatically be transformed into HDL to speed up the development process of combinational decoders.
### _Generation Process_
Using Algorithms 3 and 4, we developed a polar compiler [23] scripted in Python to automate the process of generating HDL files for decoders of various sizes with different kernel orderings. The user needs to enter the block length and, optionally, the kernel order of the target polar code. In case the user does not enter a kernel order, the compiler automatically assigns the kernel ordering with the highest error-correction performance. There are several functions corresponding to formulas (2)-(10), the basic building blocks, the top modules and sub-modules, and the interface. The polar compiler calls the relevant functions based on predefined rules. The functions take their inputs from the compiler's top module and output the requested VHDL files. Generally, the process of compilation is similar to the HLS flow:
Parameters (specified by user) \(\rightarrow\) functions (high-level description) \(\rightarrow\) VHDL files (HDL)
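As an illustration of this flow, the sketch below plans the sub-module list for a given kernel sequence; the function names and printed module descriptions are hypothetical and do not reproduce the actual compiler's internals:

```
# Hedged sketch of the module-planning step of such a compiler.
# The paper merges two trailing T2 kernels into an N = 4 basic block
# when possible; we keep the last kernel as the leaf for simplicity.
from math import prod

def plan_decoder(kernels):
    """List the sub-modules a combinational decoder needs, given its
    kernel sequence (outermost kernel first)."""
    size = prod(kernels)
    plan = []
    for k in kernels[:-1]:
        glue = "f^t, g1^t, g2^t, C^t" if k == 3 else "f^b, g^b, C^b"
        plan.append(f"size-{size} glue stage for T{k}: {glue}")
        size //= k
    plan.append(f"basic building block of size N={size}")
    plan.append("top module: input/output registers and frozen-bit memory")
    return plan

for line in plan_decoder([3, 2]):   # the N = 6 example of Fig. 11
    print(line)
```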
### _Time Efficiency_
The time efficiency of the proposed polar compiler is evaluated using an AMD Ryzen 7 PRO 5850U x64 CPU operating at 1.90 GHz frequency. The required time for generating all
Fig. 11: The combinational MK polar decoder for \(N=6\) and \(G=T_{3}\otimes T_{2}\).
necessary VHDL files for polar decoders of various sizes is shown in Fig. 12. Each data value is measured by running the proposed compiler \(20\) times and calculating the average. The CPU time changes based on the required number and complexity of sub-modules, which is directly affected by the block length as well as the ordering of kernels. Therefore, it is expected that the time required for compiling MK decoders is higher than that of Arikan's decoders, as can be seen in Fig. 12. However, for the majority of polar codes, the compile time is less than \(0.4\) seconds, whereas that of the longest MK polar code is \(0.88\) seconds. Thus, using the proposed polar compiler is an efficient way of generating all the required VHDL files for combinational polar decoders.
## V Implementation Results and Comparison
All polar codes of this paper are described in VHDL in the Xilinx Vivado 2019.1 environment. In order to validate the design, logic synthesis, technology mapping, and place-and-route are conducted targeting a Xilinx Virtex-6 FPGA (40 nm). Using BPSK modulation over an AWGN channel, a software program generates random codewords and transfers them to the FPGA. The flexibility and scalability of the proposed decoder are evaluated by implementing different codes with different kernel orderings. In our experiment, we use an extra set of registers to store the input, output, and frozen-pattern data. This method allows the decoder to decode a frame with a given frozen pattern while loading another frame and its corresponding frozen set, which prevents performance degradation. Likewise, the estimated codeword can be transferred while another decoding is ongoing.
To facilitate the comparison between different schemes, the decoding latency is defined as the time required for decoding a frame. Similar to the binary case, the coded and information throughput of MK polar codes can be calculated as \(\mathcal{T}_{\mathcal{C}}=N\cdot f\) and \(\mathcal{T}_{\mathcal{I}}=N\cdot f\cdot\mathcal{R}\), respectively.
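For instance, the following two-liner evaluates these formulas; the clock frequency used here is an assumed value, chosen only to be consistent with the 1664.5 Mbps coded-throughput figure quoted in the conclusion (the measured frequencies are those of Tables I and II), and the rate is likewise assumed:

```
# Coded and information throughput, one codeword per clock cycle.
def throughput(n, f_hz, rate):
    coded = n * f_hz           # T_C = N * f
    info = coded * rate        # T_I = N * f * R
    return coded, info

coded, info = throughput(81, 20.55e6, 0.5)   # assumed f and R
print(f"{coded / 1e6:.0f} Mbps coded, {info / 1e6:.0f} Mbps information")
# -> ~1665 Mbps coded, ~832 Mbps information
```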
### _Error-Correction Performance and Quantization_
As mentioned earlier, the LDPC WiMAX standard [15] states that a considerable number of block lengths can be constructed by using only one or a few non-binary kernels. The error-correction performance of codes exploiting only one non-binary kernel and representing LLRs in floating-point format is depicted in Fig. 13 as this standard pays special attention to such codes.
We define the quantization scheme as \(Q(Q_{i},Q_{c})\), where \(Q_{i}\) and \(Q_{c}\) stand for the number of bits used to represent the internal and channel LLRs, respectively. Fig. 14 compares the performance loss of \(Q(4,4)\), \(Q(5,5)\), and \(Q(6,6)\) for a polar code of \(\mathcal{PC}(1024,512)\). Clearly, \(Q(5,5)\) leads to an error-correction performance fairly close to that of the floating-point counterpart, with a negligible margin compared to \(Q(6,6)\). Therefore, in this work we select \(Q(5,5)\) as the quantization scheme.
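A minimal software model of this sign-magnitude quantization, ignoring any fractional scaling the hardware may apply, is sketched below (our own illustration):

```
# Sign-magnitude quantization of an LLR into Q bits:
# one sign bit plus (Q - 1) magnitude bits, with saturation.
def quantize_sm(llr, q):
    max_mag = (1 << (q - 1)) - 1            # e.g. 15 for Q = 5
    mag = min(int(abs(llr)), max_mag)       # saturate the magnitude
    return (1 if llr < 0 else 0, mag)       # (sign bit, magnitude)

print(quantize_sm(-3.7, 5))   # (1, 3)
print(quantize_sm(40.0, 5))   # (0, 15) -- saturated
```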
### _FPGA Implementation Results and Comparison_
Routing in a combinational decoder gets more complicated as the code length increases due to the consumption of a larger number of logic blocks. The interconnect delay, therefore, increases as the code length grows. This phenomenon especially reveals itself in FPGAs (as opposed to ASICs) due to the use of pre-fabricated routing resources.
Table I summarizes the FPGA utilization and timing performance of the proposed combinational decoder versus that of [19] for a wide range of block lengths. It can be seen that in all cases, the proposed decoder roughly doubles the operating frequency and coded throughput. Similar to [19], the performance of the proposed decoder drops as the code length grows due to interconnect delay. However, this effect is smaller in our implementation, since it consumes fewer logic resources (savings range from 16% to 47%), which is mainly due to replacing the encoders with combine logics in the decoder's architecture. It should be noted that no look-up tables (LUTs) are used as memory. The registers in our design are used for retaining the input LLRs and the frozen bit pattern, as well as the estimated output. They are also employed for implementing small logic circuits. The proposed combinational decoder consumes 2.6 to 3.09 times the number of registers of [19]. The main reason is that [19] does not support loading the next frame (and its corresponding frozen pattern bits) while decoding another. It also does not contain the required memory for offloading the previously estimated codeword while another frame is being decoded. Finally, the proposed decoder does not consume any RAM, while [19] occupies 206 to 7168 bits of RAM. From Table I, it can also be seen that the scalability of the decoder is improved. The resource
Fig. 12: Compile time for various size polar codes.
consumption of a code of length \(N\) is 2 times greater than that of a code of length \(N/2\), plus some overhead stemming from the glue logic that connects the two \(N/2\) codes.
The FPGA utilization and performance parameters of various pure-ternary and MK codes are tabulated in Table II. We have tried to incorporate all possible kernel orderings. The resource consumption is determined by the number of kernels used to construct a given code. Although ternary layers occupy more resources than binary layers, they construct bigger codes. For instance, the two codes of size 512 (9 kernels) from Table I and 576 (8 kernels) from Table II occupy almost the same amount of LUTs and registers. Therefore, the resource consumption is proportional to the target block length, not to the kernel order nor the basic building block, which again shows the scalability of the design. Table II also provides the performance parameters of the same block lengths reported in [11] for the sake of comparison. The proposed scheme improves latency in the range of 63% to 75%. In terms of operating frequency, our architecture operates at a frequency 2 to 3 orders of magnitude lower than the architecture of [11]. The lower operating frequency directly translates to dynamic power savings. Finally, the proposed decoder offers \(1.68\times\) to \(3.05\times\) higher throughput with respect to the codes of [11].
The FPGA utilization and timing performance of various state-of-the-art decoders implemented under SC [11], unrolled SC [24], fast-SSC [25] and MK fast-SSC [10] is summarized in Table III. It should be noted that only [10] supports MK polar codes. To have a fair comparison, all designs are either implemented or scaled to 40 nm technology using the scaling techniques from [27]. Our scheme consumes an almost equal
Fig. 14: Impact of LLR quantization on the error-correction performance of MK code of \(\mathcal{PC}(1024,512)\).
Fig. 13: The error-correction performance of MK polar codes of rate \(R=\frac{1}{2}\).
amount of memory as [11], while offering 31% lower latency, an operating frequency 3 orders of magnitude lower, and \(2.07\) times the throughput. With respect to [24], both schemes occupy roughly the same amount of logic (LUTs), while [24] consumes 72% more registers. In terms of timing performance, [24] has 81% higher latency and a 2 orders of magnitude higher operating frequency. It does, however, gain 23.5% in throughput. In comparison to the designs based on fast-SSC [25] and [10], the proposed scheme consumes approximately \(5\times\) the logic and needs nearly 20 Kbits more registers. Note, however, that our decoder utilizes no RAM, while [25] and [10] require 36.8 and 39.26 Kbits of RAM, respectively. Furthermore, the proposed decoder achieves \(2.09\times\) and \(1.81\times\) higher throughput in comparison to [25] and [10], respectively. Finally, our proposed decoder and [10] support \(55\) different block lengths, whereas the other decoders support only \(15\) different codes.
## VI Conclusion
In this paper, we proposed a combinational-logic-based hardware architecture for decoding MK polar codes based on the SC algorithm. The proposed architecture offers high throughput and supports an online rate-assignment mechanism. It can decode an entire codeword in only one clock cycle, which lowers the operating frequency and dynamic power consumption with reference to synchronous SC-based architectures. FPGA utilization for a variety of block lengths and kernel orderings is reported. Based on the implementation results, the proposed decoder can achieve a coded throughput of up to 1664.5 Mbps for a code of size \(N=81\). A complexity analysis is also provided, which corroborates the implementation results.
Finally, we built a Python-based polar compiler that can automatically generate the VHDL files needed for the FPGA implementation of the proposed decoders. Given the block length and kernel order, the polar compiler outputs all the required VHDL modules automatically. The compile time of the polar compiler is also reported.
## Acknowledgment
This research has been supported by the Academy of Finland, 6G Flagship program under Grant 346208.
|
2308.12981 | An approach based on Open Research Knowledge Graph for Knowledge
Acquisition from scientific papers | A scientific paper can be divided into two major constructs which are
Metadata and Full-body text. Metadata provides a brief overview of the paper
while the Full-body text contains key-insights that can be valuable to fellow
researchers. To retrieve metadata and key-insights from scientific papers,
knowledge acquisition is a central activity. It consists of gathering,
analyzing and organizing knowledge embedded in scientific papers in such a way
that it can be used and reused whenever needed. Given the wealth of scientific
literature, manual knowledge acquisition is a cumbersome task. Thus,
computer-assisted and (semi-)automatic strategies are generally adopted. Our
purpose in this research was twofold: curate Open Research Knowledge Graph
(ORKG) with papers related to ontology learning and define an approach using
ORKG as a computer-assisted tool to organize key-insights extracted from
research papers. This approach was used to document the "epidemiological
surveillance systems design and implementation" research problem and to prepare
the related work of this paper. It is currently used to document "food
information engineering", "Tabular data to Knowledge Graph Matching" and
"Question Answering" research problems and "Neuro-symbolic AI" domain. | Azanzi Jiomekong, Sanju Tiwari | 2023-08-23T20:05:42Z | http://arxiv.org/abs/2308.12981v1 | # An approach based on Open Research Knowledge Graph for Knowledge Acquisition from scientific papers
###### Abstract
A scientific paper can be divided into two major constructs, which are Metadata and Full-body text. Metadata provides a brief overview of the paper, while the Full-body text contains key-insights that can be valuable to fellow researchers. To retrieve metadata and key-insights from scientific papers, knowledge acquisition is a central activity. It consists of gathering, analyzing and organizing knowledge embedded in scientific papers in such a way that it can be used and reused whenever needed. Given the wealth of scientific literature, manual knowledge acquisition is a cumbersome task. Thus, computer-assisted and (semi-)automatic strategies are generally adopted. Our purpose in this research was twofold: curate Open Research Knowledge Graph (ORKG) with papers related to ontology learning and define an approach using ORKG as a computer-assisted tool to organize key-insights extracted from research papers. This approach was used to document the "epidemiological surveillance systems design and implementation" research problem and to prepare the related work of this paper. It is currently used to document the "food information engineering", "Tabular data to Knowledge Graph Matching" and "Question Answering" research problems and the "Neuro-symbolic AI" domain.
keywords: Digital libraries, Scientific papers, Open Research Knowledge Graph, Knowledge Acquisition, Knowledge management applications, Data and knowledge visualization
## 1 Introduction
Scientific papers are one of the greatest assets for scientists. They constitute one of the primary sources of knowledge for researchers, and sometimes
for decision makers [1; 2]. They are recorded, indexed and disseminated in scientific publication repositories such as ISI Web of Knowledge, IEEE Xplore, Springer, ACM, ScienceDirect, Scopus, Semantic Scholar, etc. In consequence, the body of scientific literature is growing at an enormous rate [2; 3; 4; 5]. This wealth of scientific knowledge is widely disseminated to users, who now face an unprecedented problem of access to scientific literature [4; 5; 6]. In effect, this increase in scientific content poses significant challenges for researchers who want to sort through, read, understand, compare, and build upon it to determine, for instance, the state of the art in their respective field of interest [4; 7].
Globally, a scientific paper can be divided into two major constructs which are Metadata and Full-body text [4; 5]. Metadata provides a brief overview of the scientific papers and the Full-body text contains valuable information that is beneficial to fellow researchers. To retrieve metadata and key-insights from scientific papers, Knowledge Acquisition (KA) [8] is a central activity in research.
Knowledge consists of facts, information and skills acquired through experience or education for the understanding of a subject area [9]. Concerning scientific papers, knowledge comprises the metadata provided by editors and authors, and the key-insights provided by authors, which are used by fellow researchers to understand the scientific paper's content. Knowledge Acquisition from scientific papers refers to the method for gathering, analyzing and organizing the knowledge embedded in these papers. This involves the extraction of structured content in the form of entities, relations, facts, terms, and other types of information that may help researchers to understand the papers and get insights from them [6]. After its acquisition, knowledge is organized in such a way that it can be used and reused whenever needed. Globally, knowledge acquisition can happen through a wide variety of strategies that vary from completely manual to totally automated [1; 7; 8]. Concerning knowledge acquisition from scientific papers, we distinguish between the manual process [1; 7; 10; 11] and the (semi-)automatic process.
Given the amount of scientific papers that a domain may have, the manual process can be cumbersome, time-consuming, unscalable and inefficient. To reduce the burden of KA, computer-assisted and (semi-)automated strategies are proposed [2] for processing and cataloging scientific knowledge, and for assisting researchers in choosing their papers, navigating among papers, comparing them and getting insights from them.
During the last decades, many researchers have contributed to the automatic extraction of metadata from scientific papers. Multiple rule-based, machine learning and NLP techniques have been proposed [1; 4; 5].
Concerning knowledge extraction from the full-body text, it has been reported that key-insights are deeply hidden in the text and are difficult to extract [1; 2; 3; 4; 12]. To allow researchers to collaboratively build the body of knowledge of their domain and research interests, we propose a computer-assisted knowledge acquisition approach. It is based on the use of Open Research Knowledge Graph (ORKG) [3] for the automatic acquisition of metadata and the manual annotation of papers with key-insights, producing a semantic description of the scientific knowledge of the domain in a Knowledge Graph (KG). Once extracted and organized, research contributions can be compared using annotated tables and graphics.
This approach is inspired by our use of ORKG over the past three years to: (1) Organize and compare research contributions so as to build a large dataset of up-to-date knowledge for the following research problems: "ontology learning", "epidemiological surveillance systems design and implementation", "food information engineering", "Tabular data to Knowledge Graph Matching", "Question Answering" and "information extraction from scientific papers", as well as the "Neuro-symbolic AI" domain. (2) Organize research so as to facilitate updates and improvements with the contributions of fellow researchers working on the same research problem or in the same domain.
In the rest of the paper, we describe the structure of scientific papers in Section 2, then present Open Research Knowledge Graph in Section 3 and the research methodology in Section 4. In Section 5, we present the approach we propose for Knowledge Acquisition from scientific papers using ORKG, and in Section 6 we present the use of this approach in 3 use cases: "epidemiological surveillance systems", "food information engineering" and "knowledge extraction from scientific papers". The latter use case was used to write the related work of this paper (Section 7). Finally, in Section 8, we conclude.
## 2 Scientific papers description
On the basis of its structure, knowledge contained in a scientific paper is broadly classified into two major categories which are metadata (see Section 2.1) and key-insights (see Section 2.2).
### Metadata
Metadata information is used either for scientific paper recommendation by research repositories, or to furnish a brief overview of a scientific paper. The latter allows a researcher to judge the paper's relevance to their domain of interest [4]. Metadata comprises two main components:
those that are assigned by the authors (such as the title of the paper, Abstract, Keywords, etc.) and those that are assigned by the editors (such as BibTex and/or DOI, Copyright, Date of publication, etc.).
_Metadata extraction._ Metadata extraction (ME) refers to the identification and extraction of metadata elements. To perform ME, multiple datasets exist that vary on the basis of the articles' sources, publication venues, etc. On these datasets, multiple automatic approaches are applied. They use the DOI, BibTex or the title of the paper to search for and fetch papers from scientific repositories. Thereafter, rule-based and/or Machine Learning techniques are used to extract the metadata [4].
### Key-insights
The full body text of the paper hides the key-insights/knowledge that the readers need to extract in order to understand the paper. Even if the authors can choose their own way to organize the full-body text, the journal's Guide for Authors provides to the authors a template composed of the different sections that the paper may include. Whatever the organization of the paper provided by the authors, one can identify the introduction, Research methods and methodologies, Results, Discussion, Related work and/or literature review and conclusion.
Knowledge extraction from the full-content of a scientific paperFrom the full-body text of a scientific paper, entities such as research domain, research problem, methodology of the research, methods, models, algorithms, processes, data-source, data-sets, tools, evaluation measures, results achieved, limitations of the research, future directions, etc. are extracted by the readers in order to understand the paper. These entities once extracted can be organized into instances. These instances can be grouped into classes with associated properties.
From classes, the following relations can be extracted:
* **Taxonomy:** this relation organizes classes hierarchically. For instance, we used it to organize the research problems related to ontology learning into a taxonomy. This taxonomy shows that ontology learning research can be divided into the following research problems: "Ontology learning from unstructured sources", "Ontology learning from semi-structured sources", and "Ontology learning from structured sources". These research problems can be further subdivided by considering the different data sources.
* **Association:** This is the link used to state that two classes are related to each other. For example, in the sentence "Jiomekong et al. proposed to use Hidden Markov Models to extract knowledge from source code", we identify the classes "Technique" and "Knowledge source". A relation named "extract" can thus be established between the class "Technique" and the class "Knowledge source". The instance of the class "Technique" is "Hidden Markov Models", the instance of the class "Knowledge source" is "Source code", and we obtain the following statement: "Hidden Markov Models are used to extract knowledge from source code" (a minimal RDF encoding of this statement is sketched after this list).
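As an illustration, the following Python sketch encodes the statement above as RDF triples with rdflib; the namespace and resource names are ours, chosen only for the example, since ORKG assigns its own internal identifiers:

```
# Minimal sketch: the "extract" association as subject-predicate-object
# triples. The example.org namespace is a placeholder, not an ORKG URI.
from rdflib import Graph, Literal, Namespace, RDF, RDFS

EX = Namespace("http://example.org/orkg-demo/")
g = Graph()

g.add((EX.HiddenMarkovModels, RDF.type, EX.Technique))
g.add((EX.SourceCode, RDF.type, EX.KnowledgeSource))
g.add((EX.HiddenMarkovModels, EX.extract, EX.SourceCode))
g.add((EX.HiddenMarkovModels, RDFS.label, Literal("Hidden Markov Models")))

print(g.serialize(format="turtle"))
```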
Once extracted, key-insights are grouped into research contributions and used to write state-of-the-art reviews. In the latter, tables and graphics are used to compare the research contributions of several authors.
The extraction of key-insights from scientific papers is generally manual. However, the extracted knowledge is scattered across different data sources (scientific papers, researchers' computers, etc.), with the risk of being forgotten or lost, and this makes it difficult to compare past research problems to up-to-date ones. In the next section, we present how Open Research Knowledge Graph can be used as a computer-assisted tool to solve these problems.
## 3 Open Research Knowledge Graph
In this section, we present an overview of ORKG (Section 3.1) and the main features used during this research (Section 3.2).
### Overview of ORKG
ORKG is an open research infrastructure designed to acquire, publish and process structured scholarly knowledge published in the scholarly literature [3; 13]. It is built according to the principles of Open Science, Open Data, and Open Source.
* **Open Science:** ORKG resources such as comparisons of scientific papers and the smart reviews can be developed through collaborative network of researchers. Once published, these resources are freely available to anyone who wants to learn about the research question and/or to contribute.
* **Open Data:** All the data ingested in ORKG is in a machine readable format and open to everyone who needs to share, use, re-use, modify,
and share the modified version. The only restriction concerns contributing to an ORKG resource. This restriction consists of having an ORKG account.
* **Open Source:** the source code of ORKG is available to the general public.
Thus, all the ORKG source code, information and data are available under open licenses [3]. To date, ORKG indexes more than 10,000 research papers corresponding to more than 5000 research problems (corresponding to 1237 research fields), more than 1000 comparisons, 224 templates, 1000 users and 2216 benchmarks1. Footnote 1: [https://www.orkg.org/orkg/stats](https://www.orkg.org/orkg/stats)
### ORKG features
To help researchers structure and organize the research contributions extracted from scientific papers, ORKG provides a set of features. In this Section, we present the ones we used during our research.
_Add research problems._ The research problems of a research area can be described independently, provided with relevant sources and assigned to a taxonomy of research problems [3]. For instance, with ORKG, we can define a taxonomy of research problems related to ontology learning.
_Add papers._ ORKG represents an article with [3]:
1. **Article metadata:** The article metadata involves the bibliographic information such as the article title, authors, journal, book title, etc.;
2. **Semantic description of the article:** These are the key-insights of the paper, extracted and annotated by researchers following the Subject-Predicate-Object triple principle.
The article metadata and its semantic description are used to annotate the paper. To this end, researchers are allowed to add papers manually or (semi-)automatically to ORKG (see Fig. 1) [3]:
* During the manual process, all the key-metadata (title, author, etc.) and key-insights (research domain, research problem, research tools, etc.) of the papers are manually acquired by the researchers and added to ORKG using a wizard provided by the system.
* To semi-automatically add an article to the system, the key-metadata of the article, such as the paper title, DOI or BibTex, are provided to the ORKG wizard. This information is used by the system to fetch the article's key-metadata (see the sketch after this list). Once extracted, this information is presented to the users so that they can complete the missing metadata. Once the metadata are added to the paper, the researchers use a wizard provided by ORKG to semantically describe the paper with the key-insights they extracted manually.
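The metadata-fetching step of the (semi-)automatic process can be illustrated with the public Crossref API, as in the hedged sketch below; ORKG's actual backend calls are not documented in this paper, so this is our own stand-in:

```
# Illustrative DOI-based metadata fetching via the public Crossref API;
# this is not the ORKG implementation, only an analogous example.
import requests

def fetch_metadata(doi):
    r = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    r.raise_for_status()
    msg = r.json()["message"]
    return {
        "title": msg.get("title", [""])[0],
        "authors": [f'{a.get("given", "")} {a.get("family", "")}'
                    for a in msg.get("author", [])],
        "venue": (msg.get("container-title") or [""])[0],
    }

# e.g. fetch_metadata("10.1145/3360901.3364435")
```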
_Semantic description of research papers._ The semantic description of research papers consists of annotating these papers with the key-insights extracted from them and organizing these elements into research contributions. This puts the paper in machine-readable form following the RDF subject-predicate-object paradigm. The ORKG annotation feature is a flexible system that allows users to reuse existing predicates and functions or to create and add their own predicates (properties or attributes). The description of the entities in human-readable form allows researchers to have a common understanding of the data between the various stakeholders. Fig. 2 presents an example of a graph representation of a paper with its metadata and key-insights organized into paper contributions.
Figure 1: Manual (first picture) and automatic (second picture) acquisition of the metadata of a paper
The graph of Fig. 2 presents the paper entitled "Knowledge extraction from java source code using Hidden-Markov Models". The key-insights extracted are:
* The research problem which is "Knowledge extraction from source code",
* Different types of knowledge that are extracted,
* Techniques that are used during the extraction process,
* The programming language from which the source code was written.
_Add research contributions to a paper._ In ORKG, each paper consists of at least one research contribution, which addresses at least one research problem and is further described with contribution data including materials, methods, implementation, results or other key-insights. The paper in Fig. 2 presents one research contribution. These contributions can be compared with one another or with contributions from other papers [13] in an ORKG
Figure 2: RDF graph representation of a paper with their metadata and contributions
comparison table. Papers can be added to ORKG during the creation of the comparison table, as presented in Fig. 3.
We consider in this research that all the key-insights in a research paper, such as the definition of the research problem, the materials and methods used, the results obtained, lessons learned, etc., are grouped into research contributions. During the paper-adding process, a default research contribution containing key-insights such as the research domain and research problem is filled in by the user. Research contributions are described in a structured and semantic way as a Knowledge Graph (see Fig. 2). Therefore, the information will not only be readable by humans, but also by machines [3].
_Comparing research papers._ The structured content descriptions of scientific contributions presented above are organized in such a way that a contribution becomes comparable with other articles of the research domain.
Figure 4: A table presenting the comparison of research contributions of papers related to ontology learning from source code
Figure 3: Adding a paper using the comparison table wizard
Therefore, the structured semantic representation of scientific knowledge in the knowledge graph makes it possible to automatically create literature comparisons. Allard et al. [13] present a workflow designed to compare research contributions in ORKG. Fig. 4 presents an example of a comparison table built using this workflow. This is a comparison of the research contributions of papers related to ontology learning from source code. The comparison table can be published with a DOI, exported in various formats such as RDF, LaTeX, PDF and CSV, and integrated in a literature review. The comparison table link can be shared with other researchers, so that they can improve the comparison by correcting errors or adding missing information.
_Templates._ Scientific papers usually lack a formal structure. They comprise full grammatical sentences and paragraphs in which key-insights are hidden. Identifying and structuring the research contributions found in scientific papers is not always an easy task for a research student or for newcomers to the domain. This is because the description of scientific findings is complex and based on expert knowledge. On the other hand, the researcher should decide at which granularity a research contribution should be described so as to be comparable.
The goal of the template is to highlight, for a research problem, a set of key-insights that may be found in a scientific paper addressing this research problem. It specifies the structure of scientific information so that [3]: (1) Fellow researchers can complete it with more key-insights, (2) New researchers can rapidly get insights into the research domain.
Templates can then be reused in the description of research contributions to facilitate data entry and ensure comparability. For instance, we built a template for documenting existing datasets for metadata extraction from scientific papers 2.
Footnote 2: [https://www.orkg.org/orkg/template/R277000](https://www.orkg.org/orkg/template/R277000)
_Graph visualization._ Once a paper is added, the graph representing its research contributions is generated. This graph can be used for the exploration of scientific contributions.
_Importing survey papers._ Survey articles present an overview of the state-of-the-art for a specific area. Within survey articles, overviews or summaries are often presented in (semi-)structured tabular format. From these tables, information on the key-insights of the papers involved in the literature review can be extracted (semi-)automatically as follows: the first step
consists of extracting the key-metadata and the key-insights from the table and building a comparison table; the second step involves fixing potential extraction errors and adding additional metadata or key-insights that were not automatically extracted. Fig. 5 presents the extraction wizard.
_Smart Review._ After the creation of a comparison, a researcher may create a smart review giving an overview of the research addressing a particular research question. To this end, ORKG furnishes a "What You See Is What You Get" (WYSIWYG) editor allowing researchers to create a structured overview of the literature.
_Collaborative work on literature review._ In ORKG, collaborative work allows a whole community of researchers to collaboratively build the state of the art of a research problem. In effect, many authors working on the same research problems can gather to add and modify the research contributions of a scientific paper. Once these contributions are compared using ORKG comparison tables and used to write smart reviews, they can be shared among other researchers in order to get their viewpoints. To this end, contributions and smart reviews are versioned so that all changes can be discussed by the
Figure 5: Extracting key-insights on graph databases using ORKG
professional community, updated and new versions published. If new literature is published, it is easy to continuously expand the comparison, which thus continues to reflect the current state of knowledge in a comparable way.
## 4 Research Methodology
The research methodology consists of action research. The action research methodology is used when major challenges cannot be studied without implementing them, and where implementation implies a long-term commitment because effects may take time to emerge [14]. In our case, we wanted to explore, test and evaluate the different features that can be used for knowledge acquisition from scientific papers using Open Research Knowledge Graph as a computer-assisted tool. Given that action research allows us to plan, implement, revise, then implement again, lending itself to an ongoing process of reflection and revision, we thought it necessary to use this research methodology.
Globally, the research methodology consisted of a set of aggregated interventions to curate the papers. These interventions involved a series of actions taken during the curation of scientific papers. At their end, we came up with the research methodology reported in this section. It consists of the Pre-Intervention (Section 4.1) and the Intervention (Section 4.2) phases of the action research methodology. The Post-Intervention, presented in Section 6, consists of the use of the methodology presented in this section in three use cases.
### Pre-Intervention
During the ORKG curation, we particularly worked in the domain of Semantic Web. The Pre-Intervention step consists of the definition of the research objective and the organization of the curation.
#### 4.1.1 Research objective
We started this research in 2021 with the objective of documenting the research problem "ontology learning". In effect, ontology learning is the automatic or semi-automatic extraction of ontological knowledge from unstructured, semi-structured or fully structured knowledge sources in order to build an ontology from them with little human intervention [15; 16; 17; 18]. This choice was motivated by our recent work on ontology learning from source code [15], i.e., the automatic extraction of ontological knowledge from software source code. Thus, we decided to curate this paper first. Thereafter, we curated the related work of this paper.
#### 4.1.2 Selection of papers to curate
We started with the selection of the paper we wrote on ontology learning from source code [15]. Thereafter, we selected all the papers related to ontology learning from other data sources that were cited in this paper. For the ontology learning data sources that were not covered, such as "ontology learning from folksonomies" or "ontology learning from thesaurus", we used the well-known research repository Semantic Scholar to search for relevant papers. The keyword "ontology learning from xxx" (where xxx represents the data source) was entered in the search bar of the Semantic Scholar platform. We used the paper titles, the short abstracts provided by Semantic Scholar and the paper abstracts provided by the authors to select relevant papers. Given that our goal was mainly to curate some papers and understand how to use ORKG for knowledge acquisition from scientific papers, we only chose the papers on the first page of results. Key-insights were extracted from these papers, and comparison metrics were defined and used to compare these papers.
#### 4.1.3 Work organization
Globally, the curation of ORKG involves two groups of people: the ORKG team and the curators. The ORKG team is a group of persons responsible for the organization of the curation meetings, the description of the curators' tasks, the training of curators on the use of the tool, and support when they have any difficulties. Before we started the curation in June 2021, a training session was given by the ORKG team. This session focused on the presentation of ORKG features and the creation of comparisons using the ORKG comparison editor. During the curation period, many demos on the creation of comparisons, templates, and smart reviews were given.
To support the curators and respond to all their difficulties, a mailing list and a Skype group were created and a bi-monthly meeting was set up. During these meetings, we had 5-10 minutes to present our work: adding papers, creating comparison tables, templates, smart reviews, etc. Thereafter, questions and remarks were raised in order to help improve the work. The meetings were recorded with Skype so that we could watch them later. During these meetings, the comparisons of papers made by the curators were discussed so that they could update them and correct errors. Examples of discussions concern the definition of classes and properties, the coding of knowledge extracted from the scientific papers, etc.
### Intervention
During the intervention phase, we extracted key-insights from scientific papers and we used these key-insights to create comparison criteria (these
are ORKG properties). Thereafter, these comparison criteria were used to compare these scientific papers using the ORKG comparison table. This is an iterative and incremental process, during which the experience gained in creating one comparison table was used to improve it and to create new ones. Comparison tables were evaluated by the ORKG team and fellow researchers, and refined. For instance, the first comparison3 was refined until it was accepted as well organized by the ORKG team and some colleagues working in the domain of ontology learning. Globally, papers related to the following themes were curated:
Footnote 3: [https://www.orkg.org/orkg/comparison/R138057](https://www.orkg.org/orkg/comparison/R138057)
* Ontology learning from Thesaurus (5 papers),
* Ontology learning from Glossaries (2 papers),
* Ontology learning from taxonomies (2 papers),
* Ontology learning from XML (15 papers),
* Ontology learning from UML (4 papers),
* Ontology learning from source code (9 papers)
* Ontology learning from folksonomies (6 papers)
* Ontology learning from images (2 papers)
* Ontology Learning from Entity Relation Model (9 papers).
In the end, 54 papers were curated, 9 comparison tables were created using these papers, and one smart review on ontology learning from images was written. In the following paragraphs, we present how we proceeded to create these comparisons, the lessons learned, and the main findings that were used to improve our work.
#### 4.2.1 Creation of the first comparison
The first work we did was to create the first comparison of papers. Nine papers related to the "ontology learning from source code" research problem were read, and knowledge was extracted and ingested into the ORKG platform. To this end, we first created a comparison table and, using the ORKG comparison table wizard, we added papers to ORKG. These papers were added manually and (semi-)automatically to ORKG:
* We used the manual process for the papers that do not have DOI or BibTex. During this process, all the key-metadata (title, author, etc.) and key-insights (research domain, research problem, etc.) of the papers are manually acquired and added to ORKG using a wizard provided by the system.
* To (semi-)automatically add an article to ORKG, we used the DOI or BibTex to automatically fetch the article's metadata. Once extracted, missing information is completed and the paper is annotated with the key-insights extracted manually.
Once a paper is added, a graph representing the research contribution allows us to visualize and verify that the information on the paper is well structured.
The comparison table of ontology learning papers from source code contains the following elements:
* The first column of the table contains properties, which can also be seen as a comparison criteria.
* The rest of the column corresponds to papers that are compared.
* For each row, the corresponding insight extracted from the paper is presented, so that these elements can be used to compare papers together.
From this comparison, we learned how to organize research contributions using ORKG. The exchange with the ORKG team and some colleagues working in the domain of knowledge engineering allowed us to improve this comparison, and a new version was published. We found the tool useful for saving our work so as to reuse it later in scientific papers as additional material or related work. This motivated us to create more comparisons and explore the other features of the system.
#### 4.2.2 Creation of other comparisons
The creation of the first comparison allowed us to master the use of the comparison wizard. Therefore, 7 more comparisons were created. These comparisons gave rise to a refinement iteration in order to identify all the potential knowledge that would be converted into classes, relations and properties, and that would be used to build high-quality and comparable structured scientific knowledge for "ontology learning" research problems. The aim of this structure is to create a common Semantic Model to reflect contributions
to "ontology learning" research problems. For instance, for ontology learning methods such as "TF.IDF", "Unsupervised Learning", "deep learning", "Neural Network", we decided to group them and to create a class labeled "Learning method".
_Lessons learned._ The comparisons presented above led to the following lessons:
* Structuring and describing research contributions is not an easy task: During the creation of the comparisons presented above, we learned that structuring and describing a research paper is not easy. In effect, describing research contributions and making them comparable is complex because the granularity of comparison must be decided. For instance, should we consider the comparison of methods for knowledge extraction from "unstructured sources" and "structured sources", or should we go further and compare unstructured data sources such as "text" and "images" with structured ones such as "databases" and "UML models"? Given that we wanted fellow researchers to see the methodologies, methods and tools for ontological knowledge extraction from knowledge sources, we decided to add a property that indicates whether the data source is unstructured, as well as the type of the data source (e.g., "text", "database", etc.).
* Finding the right property for the comparison is not an easy task: It is recommended to reuse as much as possible the existing ORKG properties created by other researchers. However, we found this difficult because one has to scroll down every time one wants to add a property to a contribution (time-consuming). On the other hand, after some time, the description of a property can be forgotten or unknown (for those who did not input it). This makes it difficult to find the right property to use in the comparison tables. Fortunately, the ORKG wizard provides the properties' descriptions. However, many properties had the same name and no description.
Insight: To solve the above problems, we found it necessary to use the ORKG template feature to structure scientific papers related to "ontology learning". This template is supposed to contain all the properties that should be compared. To make it more accessible, we decided to add descriptions to all the properties used. Thus, to add a contribution from a paper related to the "ontology learning" research problem, this template is used. It is a standardized tool that can be refined and used to compare as many scientific papers on this research problem as needed. The creation and use of this template are presented in the following paragraphs.
#### 4.2.3 Template creation
After many comparisons, we found it necessary to provide a structure to organize the knowledge extracted from papers related to ontology learning. This structure facilitates the organization of further relevant papers, independently of the curator, into a highly consistent knowledge graph.
To create the template, we used the properties we had already added to the system for ontology learning from source code, databases, UML models, etc. The template involves classes and properties (presented in Tables 1 and 2) applicable to a considerable number of papers related to ontology learning. The comparison elements created using this template are composed of instances of these classes and of the relations included in the template.
Each class is associated with a property that will appear as a comparison criterion in the property column of the comparison table. In addition to these properties, other properties of basic data types are also added to the template; they are presented in Table 2.
| **Class label** | **Example of instances** |
| --- | --- |
| Knowledge source | Text, databases, source code, etc. |
| Learning purpose | Constructing a new ontology, updating an existing ontology |
| Application domain | Medicine, Geography |
| Learning data source | Java source code, XSD documents |
| Has dataset | 300 source code files selected in the data source |
| Training corpus | 70% of the dataset |
| Output format | .txt, .owl, .json, .rdf, .xml |
| Input format | .txt, .XML, .png |
| Learning method | Parser-based, Machine Learning-based, HMM, CNN |
| Learning tool | on-to-text, source2onto |
| Technologies | Java, Python, TensorFlow |
| Terms learning | Entities, shape, feature, aspects |
| Relationship | Topological relation, Direction relation |
| Property | DataProperties, ObjectProperties |
| Axiom | Transitive relation, reflexive relation |
| Rule | if(age<10) then children |
| Evaluation | User evaluation, comparison to a gold standard |
| Knowledge assessment | Empirical measure, human intervention, domain expert |

Table 1: Classes of the template used to describe contributions of papers related to ontology learning
#### 4.2.4 Using the template to create a new comparison
The template presented in the section above was used to create 14 contributions. These contributions come from 2 papers related to ontology learning from images. To create these contributions, we identified the DOI of the papers found using Semantic Scholar. The DOI was entered using the "adding paper wizard" of ORKG. The system automatically extracts the papers' metadata. Thereafter, knowledge was extracted manually and added to the system using the template. These contributions were finally used to create a comparison table. The graph visualization was used for the exploration of scientific contributions. It allowed us to realize that there was some confusion in our comparison. This confusion was corrected, the template and the comparison were refined, and new versions were published. A video presenting the curation of papers related to ontology learning from images was published by the ORKG team4.
Footnote 4: [https://www.youtube.com/watch?v=EwfLJdPRr6o](https://www.youtube.com/watch?v=EwfLJdPRr6o)
#### 4.2.5 Creation of smart review
Once the information was extracted from the papers related to ontology learning from images, it was used to write a smart review. The goal of this review was to present and compare related work on ontology learning from image data.
| **Property label** | **Description** |
| --- | --- |
| Class learning | True when the authors extract classes from the data source |
| Instance learning | True when the authors extract instances from the data source |
| Taxonomy learning | True when the authors extract taxonomies of classes or properties from the data source |
| Class hierarchy learning | True when the authors extract class hierarchies from the data source |
| Validation tool | Presents the technologies used to validate/develop the validation tool |
| Validation comments | Any comments of the authors concerning the validation |
| Recall | The recall of the learning tool |
| Precision | The precision of the learning tool |
| F-measure | The F-measure of the learning tool |

Table 2: Properties for comparing research contributions
#### 4.2.6 Collaborative work on literature review
In this research we did not consider only our own viewpoint during the creation of the template and comparison tables. We discussed with colleagues, with other researchers using ORKG and with the ORKG team, to whom we sent the links to these resources. This allowed us to refine them and create new versions. It should be noted that any fellow researcher can improve these resources with new information. For instance, if new literature is published, anyone can add a new contribution to the comparison table and publish a new version.
## 5 An approach for knowledge acquisition from scientific papers
Acquiring knowledge from scientific papers from scratch is costly in time and resources. The approach we propose in this paper aims to reduce this cost during the knowledge acquisition process by allowing researchers to create structured repositories of scientific papers related to a research problem and/or a research domain. This approach is inspired by our use of ORKG over the past three years to:
* Organize and compare research contributions so as to build a large dataset of prior and up-to-date knowledge in our research domain;
* Organize research so as to facilitate updates and improvements through the contributions of fellow researchers working on the same research problem.
Previously, to carry out state-of-the-art research, we searched for relevant scientific papers on the Internet, read these papers, summarized them in text format and built comparison tables using LibreOffice Calc and Google Sheets. After the curation of ORKG in 2021, we gained new insights into how to acquire and organize scientific literature. These insights are developed in this section as a computer-assisted knowledge acquisition approach from scientific papers (presented in Fig. 6). It describes how knowledge can be extracted from research papers and stored in a knowledge graph in order to facilitate access to the key-insights hidden in research papers. It consists of six steps during which classes, properties and relations are extracted from scientific papers and used to build a template. Thereafter, the template is used to represent contributions of papers related to the same research problem. Finally, the contributions are used to build comparison tables, which themselves can be used to write a smart review. These steps are: Knowledge elicitation (Section 5.1), Knowledge analysis and interpretation (Section 5.2), Templates creation (Section 5.3), Knowledge representation (Section 5.4), Knowledge use (Section 5.5) and Knowledge verification and validation (Section 5.6).

Figure 6: The description of the knowledge acquisition approach proposed in this paper
### Step 1: Knowledge elicitation
First and foremost, the researcher should determine the research domain that he wants to document. Thereafter, he should identify the research problem related to this research domain. Once the research domain and the research problem are identified, this information is used to search for relevant papers using search engines like Google Search or the search engines of digital research repositories like Semantic Scholar, Springer, Elsevier, IEEE, etc. For instance, in the domain of nutrition, a researcher may be interested in food recommendation for people on a diet. Thus, the following research questions may be elicited: "How to recommend food to people on a diet?" or "Which techniques, methods and methodologies are used for food recommendation?". These research questions are used to search for scientific papers. Relevant papers related to this research domain and research problem are identified using several criteria, which can be the title of the paper, the authors, references or citation analysis. Reference analysis can be used, for instance, to identify papers relevant to the research problem. During the selection of papers, the importance of a paper is defined as how close it is to the research domain and research problem; this is assessed by reading the abstract or the full paper. Only papers that are closely related to the research problem are selected. Once the research papers are found, some of them are selected for knowledge elicitation.
During the knowledge acquisition activity, the researcher should read the previously selected papers, and identify and extract keywords, clauses, sentences, scientific claims, etc. Globally, all the information that is relevant to understanding the paper is identified. This is an iterative process (see Fig. 6) at the end of which the researcher should be sure that he has identified everything relevant. In early iterations of the cycle, the knowledge identified can refer to entities which are grouped and will give classes. These classes are then put in relation with each other. At the end of this step, all the relevant knowledge has been extracted.
The identification and extraction process can be done using handwritten notes, spreadsheets, or underlining in order to highlight all the key-insights. Thereafter, each piece of highlighted information can be labeled with the type of knowledge it represents. For instance, if we highlighted "HMM is used to extract information from source code", then we can label "HMM" as a Machine Learning method, "Source code" as a knowledge source and "extract information from" as a relation between the ML method and the knowledge source (Fig. 7 presents this triple).
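As an illustration of how such a labeled triple can be made machine-readable, the following minimal Python sketch serialises it with the rdflib library; the namespace URI and term names are purely illustrative and are not actual ORKG identifiers:

```python
from rdflib import Graph, Literal, Namespace, URIRef

# Illustrative namespace; not an actual ORKG vocabulary.
EX = Namespace("http://example.org/ontology-learning/")

g = Graph()
# The triple: "HMM is used to extract information from source code".
g.add((EX.HMM, EX.extractsInformationFrom, EX.SourceCode))
# A human-readable label for the subject.
g.add((EX.HMM, URIRef("http://www.w3.org/2000/01/rdf-schema#label"),
       Literal("Hidden Markov Model")))

print(g.serialize(format="turtle"))
```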
Given that survey papers often present overviews or summaries in (semi-)structured tabular format, the comparison criteria in these tables should be identified and extracted. This information can be extended with additional information extracted from the survey paper itself or from the other selected papers.
Globally, two kinds of information can be identified in the papers. We refer to them as keywords and keyphrases:
* **Keywords:** keywords are words that are used to represent knowledge. For instance, if we consider the evaluation of ML techniques, we can identify the following keywords: "HMM", "Recall", "Precision", "Accuracy".
* **Keyphrases:** keyphrases are composed of a set of words that are used to represent a part of knowledge. For example, we have "Source code", "Wind power forecasting using time series".
### Step 2: Knowledge analysis and interpretation
Knowledge analysis and interpretation consists of reviewing the extracted elements, identifying the key pieces of knowledge and providing a definition for each of these elements. Thereafter, this knowledge is assembled into related groups. Redundant information is identified and only one term is selected. A definition for each keyword and keyphrase is provided.
The knowledge obtained after this task is classified into classes, relations, properties and instances. The terms in keywords and keyphrases are used to create the labels of these entities. During this task, the main challenge is to keep the keywords and keyphrases simple and descriptive.
Figure 7: Representation of the triple: "HMM is used to extract information from source code"
### Step 3: Template creation
The classes, properties, relations and instances are used to create a template using the ORKG template editor. This template is a conceptual model of papers dealing with the research domain and research problem addressed by its creator.

The template allows researchers to put the key-insights hidden in research papers into a machine-readable form. However, classes, relations, properties and instances should also have human-readable definitions so that any human operator can use the template to register knowledge extracted from a paper. In order to create a consensus, the template link can be sent to researchers working in the research domain to obtain their points of view. To facilitate its improvement, the author of the template can make it editable, so that other researchers can update it.
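To make the idea of a template concrete, here is a rough sketch of how the classes and properties of Tables 1 and 2 could be mirrored locally in Python before entering them into the ORKG template editor; the field names follow the tables, but the class itself and the sample values are hypothetical:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class OntologyLearningContribution:
    """Hypothetical local mirror of the 'ontology learning' template."""
    knowledge_source: str                 # e.g. "Text", "Source code"
    learning_method: str                  # e.g. "HMM", "CNN", "Parser-based"
    learning_purpose: str                 # e.g. "Constructing a new ontology"
    application_domain: Optional[str] = None  # e.g. "Medicine"
    class_learning: bool = False          # True when classes are extracted
    instance_learning: bool = False       # True when instances are extracted
    recall: Optional[float] = None        # evaluation of the learning tool
    precision: Optional[float] = None
    f_measure: Optional[float] = None
    technologies: List[str] = field(default_factory=list)  # e.g. ["Java"]

# Invented example of one structured contribution:
contribution = OntologyLearningContribution(
    knowledge_source="Source code",
    learning_method="Parser-based",
    learning_purpose="Constructing a new ontology",
    class_learning=True,
    recall=0.82, precision=0.79, f_measure=0.80,
)
```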
### Step 4: Knowledge representation
The knowledge representation step consists of using the template built in step 3 to annotate research papers related to the research domain and the research problem. Thus, the research contribution becomes machine- and human-readable. Using the template, knowledge related to the research problem and research domain is continually refined and updated with additional knowledge from new scientific papers.
Globally, annotating a paper using ORKG and the template built in step 3 can be manual or (semi-)automatic. During the automatic process, the paper title, DOI or BibTeX is entered in the add-paper wizard. These metadata are used to fetch the paper and automatically extract further metadata. The next step of the process consists of selecting the research domain, defining the research problem and choosing the template to use in order to fill in the other key-insights. Importing survey tables is also done semi-automatically: once a table is imported, the curator can correct the extracted information and add additional key-insights. The manual process consists of adding the metadata and the key-insights by hand.
Once ingested into ORKG, research contributions can be visualized as a semantic network. This graph can be used for the exploration of scientific contributions.
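For curators who prefer scripting over the wizard, the same annotation can in principle be pushed programmatically; the sketch below uses plain HTTP, but the endpoint path and payload shape are assumptions to be checked against the current ORKG API documentation, and the paper details are invented:

```python
import requests

ORKG_API = "https://orkg.org/api"  # assumed base URL of the ORKG REST API

# Assumed payload shape; the real schema must be taken from the ORKG docs.
payload = {
    "paper": {
        "title": "Ontology learning from source code",  # hypothetical paper
        "doi": "10.1234/example.doi",                   # hypothetical DOI
        "contributions": [{
            "name": "Contribution 1",
            # Property/value pairs following the template (labels illustrative).
            "values": {
                "Knowledge source": "Source code",
                "Learning method": "Parser-based",
            },
        }],
    }
}

resp = requests.post(f"{ORKG_API}/papers", json=payload, timeout=30)
resp.raise_for_status()
print(resp.json())
```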
### Step 5: Knowledge use
Extracting knowledge from knowledge sources is not an end in itself. Once represented in a machine-readable form, the knowledge acquired should be used. In our case, the knowledge acquired can be used to compare research papers and write smart reviews. Indeed, the structured semantic representation of scientific knowledge in the KG makes it possible to automatically create literature comparisons. We are currently using these resources in our papers. One of these papers, concerning "Food Composition Tables", is already published. The second one, on "Food Information Engineering", was accepted at the AAAI conference.
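Conceptually, a comparison table is a properties-by-papers matrix; the short sketch below rebuilds one locally with pandas from two invented contributions, mirroring the row/column layout of an ORKG comparison:

```python
import pandas as pd

# Invented contributions: each maps comparison properties to the insight
# extracted from one paper.
contributions = {
    "Paper A": {"Knowledge source": "Source code",
                "Learning method": "Parser-based", "F-measure": 0.80},
    "Paper B": {"Knowledge source": "Text",
                "Learning method": "CNN", "F-measure": 0.85},
}

# Rows are properties (the comparison criteria), columns are the papers.
comparison = pd.DataFrame(contributions)
print(comparison)
```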
### Step 6: Verification and validation
The approach we present in this paper uses ORKG as an intelligent tool for assisting researchers in their work of organizing and comparing key-insights extracted from the existing literature. Thus, in steps 4 and 5, we show how it can be used to create research contributions and compare scientific papers. To ensure that the templates, contributions, comparison tables and smart reviews contain the necessary elements and that these elements are well structured and presented, they should be verified and validated. To this end, any researcher with an account on the ORKG platform can edit any comparison or template, modify and save it (for templates) or publish a new version (for comparisons).
## 6 Use cases
Knowledge acquired during the intervention phase of the Action research methodology presented in Section 4.2 was used to propose an approach using ORKG for knowledge acquisition from scientific papers (Section 5). This section constitutes the Post-Intervention phase of the Action research, during which this methodology is used in real-world settings to solve related problems. This approach was used to curate over 200 papers corresponding to the "ontology learning", "epidemiological surveillance systems design and implementation", "food information engineering", "Tabular data to Knowledge Graph Matching", "Question Answering", and "information extraction from scientific papers" research problems and the "Neuro-symbolic" domain5. From these research problems, we ingested over 800 contributions into the ORKG platform and used these contributions to build over 100 comparison tables. We used the template created during the curation of ORKG and followed steps 4 and 5 of the approach to create research contributions of papers related to "ontology learning from text" and "ontology learning from videos". The "knowledge use" step consists of creating comparison tables for the "ontology learning from videos" and "ontology learning from text" research problems. The overall links to all the resources presented in this Section are given as additional materials. The rest of this section presents how this approach was applied step by step to curate 21 papers related to epidemiological surveillance systems (Section 6.1), how this approach is currently used to curate papers in the domain of food information engineering (Section 6.2) and how we used it to curate the papers used to write the related work of this research (Section 6.3).
### Epidemiological surveillance systems
Epidemiological surveillance systems enable the collection, analysis, and interpretation of data, together with the dissemination of these data to public health practitioners, clinicians, decision makers and the general population for preventing and controlling diseases [19; 20; 21]. Such systems should support timely, efficient, flexible, scalable and interoperable data acquisition, analysis and dissemination. This information is essential to the planning, implementation and evaluation of public health practices [19; 22]. To design and implement epidemiological surveillance systems, it can be important to have an overview of existing systems. Thus, this section presents how the approach presented in Section 5 is used to acquire knowledge from papers related to epidemiological surveillance and build a comparison table.
#### 6.1.1 Step 1: Knowledge elicitation
To furnish relevant information to stakeholders, epidemiological surveillance systems should be designed and implemented so as to always correspond to the requirements. Thus, the current work is about the acquisition of key-insights on epidemiological surveillance design and implementation, with the goal of identifying the approaches, techniques and tools used for epidemiological surveillance and the limits of existing systems.
Given that epidemiological surveillance systems are primarily concerned with the collection, analysis, interpretation and dissemination of information to different stakeholders, we chose to classify the papers related to the "Epidemiological surveillance systems design and implementation" research problem in the domain of "information science".
Once the research problem and the domain were identified, we moved on to searching for and selecting the papers to be used. The well-known research repository Semantic Scholar was used to search for relevant research papers: (1) the search string "epidemiological surveillance system" was entered in the search bar of Semantic Scholar; (2) "Computer Science" was chosen as the field of study.
We found 44,600 papers. We used the paper titles, the short abstracts provided by Semantic Scholar and the abstracts provided by the authors to select relevant papers. Given the large number of papers retrieved, we decided to consider only the first page of results provided by Semantic Scholar. Thereafter, we went through the papers on the first page one by one, selecting those that seemed relevant to the research problem. Citation-based analysis was also used to search for relevant papers; this consists of identifying all the papers that a paper cites and those that cite it. Fortunately, these papers are extracted and automatically presented by Semantic Scholar. From the papers identified as relevant, a total of 21 were selected randomly and downloaded.
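Such a search can also be scripted against the public Semantic Scholar Graph API; the sketch below is indicative only, and the query parameters and field names should be verified against the current API documentation:

```python
import requests

# Public Semantic Scholar Graph API search endpoint (parameters assumed
# from the published documentation; verify before relying on them).
url = "https://api.semanticscholar.org/graph/v1/paper/search"
params = {
    "query": "epidemiological surveillance system",
    "fields": "title,abstract,citationCount",
    "limit": 20,  # roughly one results page
}
resp = requests.get(url, params=params, timeout=30)
resp.raise_for_status()

for paper in resp.json().get("data", []):
    print(paper.get("citationCount"), "-", paper.get("title"))
```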
Once the papers were selected, we divided them into two groups: a group of 4 papers for building the template, and the rest. Knowledge was acquired from these 4 papers by identifying important key terms in each selected paper. Thus, each paper was read line by line, and we identified from each of them the key-insights that may be of interest to researchers. After the elicitation phase, an exact and complete transcript of the extracted key-insights was made.
#### 6.1.2 Step 2: Knowledge analysis and interpretation
The knowledge saved in the transcript was reviewed and analyzed in order to identify the key pieces of knowledge, and their relationships, that represent the scientific information carried by each research paper. A deep analysis of the extracted elements was used to identify classes, properties and relations as described in Section 5. We were seeking the elements that are applicable to a considerable number of papers related to epidemiological surveillance systems.
#### 6.1.3 Step 3: Template creation
The main classes and properties identified during Step 2 were used to build a template for papers related to the "epidemiological surveillance systems design and implementation" research problem6. This template is available online and can be improved by other researchers.
#### 6.1.4 Step 4: Knowledge representation
During the knowledge representation step, the template built in the previous step was used to annotate the 4 papers used to build the template. Thereafter, knowledge was acquired from the rest of the papers and ingested into ORKG using the template. In total, 21 papers were ingested into ORKG.
#### 6.1.5 Step 5: Knowledge use
The contributions created using the template were used to build a comparison table7. The latter compares papers related to the "epidemiological surveillance system design and implementation" research problem.
Footnote 7: [https://www.orkg.org/orkg/comparison/R146851/](https://www.orkg.org/orkg/comparison/R146851/)
#### 6.1.6 Step 6: Knowledge verification and validation
Discussion with a collaborator who is an epidemiologist allowed us to validate the template. They found that the template, the comparison table and the contributions constitute helpful elements when putting an epidemiological surveillance system in place.
### Food Information Engineering
Food information engineering involves the acquisition, processing and diffusion of up-to-date food information to different stakeholders. This information is compiled from several data sources and used for a variety of purposes such as food recommendation, recipe substitution, food image recognition, etc. Many authors have proposed methodologies, methods and tools for the acquisition and processing of food information, its storage, diffusion, etc. However, these contributions are scattered across many scientific papers on the Internet and are difficult to exploit. The second use case we chose consists of documenting the "food information engineering" research problem so as to provide fellow researchers with methodologies, methods, tools, use cases, etc. It consists of documenting the following research question: "how is food information collected, processed, diffused and used?" To answer this research question, several research efforts on the acquisition of food knowledge, its storage, querying and diffusion to different stakeholders are being carried out worldwide. Our objective in this work is to document these solutions so as to provide the research community with a body of knowledge that will help fellow researchers shorten their research curve.
#### 6.2.1 Step 1: Knowledge elicitation
Our goal during the knowledge elicitation step was to identify several papers that would allow us to document the research question: "how is food information collected, processed, diffused and used?" We had prior knowledge of the organization of food information using Food Composition Tables, Food Ontologies and Food Knowledge Graphs. Thus, we positioned the food information engineering research problem in the Semantic Web research domain. Once the research problem and the research domain were determined, we moved on to searching for relevant papers, our goal being to build comparison tables for the following research problems:
* Food Composition Tables construction and description (5 papers processed, 4 comparison tables built),
* Food Ontology construction, description and integration (27 papers processed, 4 comparison tables built and one smart review written),
* Food Knowledge Graph construction, description and integration (11 papers processed, 4 comparison tables built and one smart review written),
* etc.
We used Google Search to search for relevant papers on these subjects, using the titles of the research problems as keywords. Only the first page (containing 10 results) of Google Search was considered. In the case of "Food Ontologies" and "Food Knowledge Graphs", we used the most recent review, published by Weiqing et al. [23], to identify the related research papers. Once the papers were retrieved, we chose some of them to identify elements that are comparable.
#### 6.2.2 Step 2: Knowledge analysis and interpretation
As we did with epidemiological surveillance systems, the knowledge identified from the downloaded papers and saved in a transcript was reviewed and analyzed in order to identify classes, properties and relations. We used the comparisons of "Food Ontologies" and "Food Knowledge Graphs" provided by Weiqing et al. [23] to find additional properties. These tables were imported into the ORKG system89. In this particular case, we started by taking and using all the properties used to compare papers in the review paper.
During the analysis of the papers, we found that many authors were providing Question Answering systems over Food Knowledge Graphs. Thus, we decided to document this research problem as well1011.
Footnote 10: [https://orkg.org/comparison/R239314](https://orkg.org/comparison/R239314)
Footnote 11: [https://orkg.org/comparison/R269002](https://orkg.org/comparison/R269002)
#### 6.2.3 Step 3: Templates creation
Once the papers were selected, they were used to build new templates (8 templates) and to update existing templates (two templates were updated). For instance, the "Ontology description" template created during the Intervention phase (Section 4.2) was updated with new properties; another template for the description of knowledge graphs, created by Natalia Chichkova, an ORKG user, was also updated. The following templates were created from scratch and are currently in use: food composition tables, Question Answering systems and Question Answering benchmarks.
#### 6.2.4 Step 4: Knowledge representation
During the knowledge representation step, the templates built in the previous step were used to annotate all the downloaded papers, considering the different research problems. Currently, more than 120 papers related to the domain of "Food Information Engineering" have been ingested into the ORKG platform. It should be noted that this is ongoing work, at the end of which we want to provide the research community with a systematic literature review of "food information engineering" research problems.
#### 6.2.5 Step 5: Knowledge use
The contributions created using the templates were used to build over 26 comparison tables. The comparison table of "food composition table" research papers allowed us to realize that food composition tables change over time while, unfortunately, the underlying databases do not. Moreover, the media used to distribute these data are scattered across the Internet in different formats. We also realized that up-to-date data can be found in scientific papers. Thus, we built a large-scale and up-to-date food composition table that is currently being annotated using Wikidata.
#### 6.2.6 Step 6: Knowledge verification and validation
Knowledge validation consists of presenting this work at challenges and conferences. Our work on "Food Composition Tables" was accepted at the SemTab challenge12 organized by the International Semantic Web Conference 202213. The overall work on food information engineering was accepted in the "New Faculty Highlights" program of the AAAI-2314 conference. We are currently adding more papers in order to maintain a state of the art of the "food information engineering" domain.
Footnote 12: [https://sem-tab-challenge.github.io/2022/](https://sem-tab-challenge.github.io/2022/)
Footnote 13: [https://iswc2022.semanticweb.org/](https://iswc2022.semanticweb.org/)
Footnote 14: [https://aaai.org/Conferences/AAAI-23/new-faculty-highlights-cfp/](https://aaai.org/Conferences/AAAI-23/new-faculty-highlights-cfp/)
### Knowledge extraction from scientific papers
The literature review process ranges from searching for scientific papers among the huge number of existing ones to analyzing paper content and extracting key-insights from it. Given the large number of scientific papers in all domains, this process is laborious, time consuming and cumbersome. To reduce this burden, knowledge extraction from scientific papers is of great interest to researchers. In recent years, this research problem has attracted many researchers, and methodologies, methods and tools have been proposed. Our goal in the third use case was to identify the different types of knowledge that are extracted from scientific papers and to document the datasets, methodologies, models and tools used for extracting this knowledge.
#### 6.3.1 Step 1: Knowledge elicitation
Given that the research problem we are documenting is "knowledge extraction", we classified it in the Semantic Web domain. As with the two previous use cases, our goal during this step was to identify several papers that cover the research question we want to document. Using the search keywords "knowledge extraction from scientific paper" on the Google Search engine, we found a comprehensive survey [4]. This is a 60-page survey of the datasets, methodologies, methods and tools used to extract different types of knowledge from scientific papers. It is organized in two main sections: (1) metadata extraction, (2) key-insights extraction. Each section describes the different types of knowledge that are extracted, the methods used to extract each type of knowledge and the evaluation of each method. We found this survey well suited for knowledge elicitation.
The survey paper was read line by line in order to identify elements that are comparable. The comparison tables provided by the authors were great resources for the identification of key-insights. Thus, we combined the knowledge extracted from these tables with the knowledge extracted from the full body text to obtain a set of candidate key-insights.
#### 6.3.2 Step 2: Knowledge analysis and interpretation
The key-insights identified from the tables and the text were analyzed one by one in order to select those that could be considered relevant. Duplicates were also identified and deleted.
#### 6.3.3 Step 3: Templates creation
The knowledge identified during the previous step was converted into properties, classes and relations. Thereafter, these classes, properties and relations were used to create templates. We found it necessary to create the following templates:
* Template for metadata dataset15: this template is used to describe the content of each metadata dataset. Footnote 15: [https://orkg.org/template/R277000](https://orkg.org/template/R277000)
* Templates for Key-Insight1617: two types of datasets describing key-insights were found: sentence-level key-insights and phrase-level key-insights. These templates are used to describe these datasets. Footnote 16: [https://orkg.org/template/R279223](https://orkg.org/template/R279223)
* Template of metadata system18: this is used to describe the different systems that are used for extracting the metadata from the scientific article. Footnote 17: [https://orkg.org/template/R280533](https://orkg.org/template/R280533)
* Template of key-insight system19: this is used to describe the different systems that are used for extracting key-insights from scientific papers. Footnote 18: [https://orkg.org/template/R280212](https://orkg.org/template/R280212)
In addition to these templates, we reused a template20 that we created during the work on "food information engineering" for evaluating each extraction system. We also used a template21 created by Jennifer D'Souza for the description of existing tools proposed for knowledge extraction from scientific papers.
Footnote 20: [https://orkg.org/template/R259041](https://orkg.org/template/R259041)
Footnote 21: [https://orkg.org/template/R166722](https://orkg.org/template/R166722)
#### 6.3.4 Step 4: Knowledge representation
During the knowledge representation step, the templates built in the previous step were used to annotate papers related to "information extraction from scientific papers".
#### 6.3.5 Step 5: Knowledge use
Currently, more than 50 papers related to "information extraction from scientific papers" have been ingested into ORKG. These papers are used to document the "information extraction from scientific papers" research problem. From these papers, more than 50 research contributions were extracted and used to build 11 comparison tables. These resources were used to write the related work of this research (see Section 7).
#### 6.3.6 Step 6: Knowledge verification and validation
The templates and the contributions provided in this research will be evaluated by the reviewers of this paper. Moreover, these resources can be evaluated, validated and improved by any researcher working on knowledge extraction from scientific papers.
## 7 Related work
As presented in the previous sections, scientific knowledge can be grouped into two categories: metadata and key-insights [4; 5].
In recent years, many researchers have contributed to the domain of metadata extraction from research papers. Zara et al. [4] and Abdul et al. [5] present a comprehensive state of the art on this subject. These works show that manual processing is generally used to annotate scientific papers in order to build datasets. Thereafter, these datasets are used to train models that are further used for metadata extraction. The models used for metadata extraction are rule-based, machine learning-based and Natural Language Processing-based. Rule-based models use text features and layouts to define instructions that specify how to extract the desired information from scientific papers. On the other hand, methods such as Hidden Markov Models (HMM), Conditional Random Fields (CRF), Support Vector Machines (SVM) and Neural Networks have also been proposed for metadata extraction from scientific papers. The approaches proposed for metadata extraction are very powerful: the evaluation of the best of them shows performance reaching an F-measure of 95%.
Key-insights acquisition consists of reading the scientific paper, identifying relevant knowledge and organizing it, or building models for its automatic extraction. In the rest of this section, we present the different types of key-insights in Section 7.1, existing key-insight datasets in Section 7.2, methods for key-insight extraction in Section 7.3, and tools for key-insight extraction in Section 7.4.
### Key-insights
Key-insights are presented in scientific papers in the form of text, figures and tables. The semi-structured organization of knowledge in tabular data makes it easy to extract key-insights from tables stored in scientific papers. For instance, Food Composition Tables can be extracted from scientific papers to assess the food people are eating and its nutritive values [24]. However, key-insights hidden in text are more difficult to identify and extract because it is difficult to guess which valuable information enclosed within a research paper's text could be beneficial for each researcher. Zara et al. [4] classified the key-insights hidden in paper text into sentence-level key-insights, phrase-level key-insights and relations:
* **Sentence-level key-insights:** this is predefined knowledge, in the form of keywords and key-phrases, hidden in the text of an article. For instance, "method", "problem", "objective", "result", etc. are included in almost all scientific papers.
* **Phrase-level key-insights:** These are phrases carrying potential information that are useful to researchers. For instance, "tool or library", "measures and measurements", "language resource product", "location", etc.
* **Relation:** a relation can express the application of a technique to solve a problem, the results generated against various evaluation measures, etc. Phrase-level key-insights can be extended to extract relations because, in many cases, relations are expressed between entities.
Key-insights acquisition from scientific papers can be done manually or automatically. We presented in Sections 3, 4, 5 and 6 how ORKG can be used as a computer-assisted tool for the semi-automatic acquisition of knowledge from scientific papers. To build models for the automatic acquisition (or extraction) of key-insights from scientific papers, annotated datasets are needed. In the next section, we present related work on key-insight datasets.
### Datasets
Based on the different types of key-insight that can be extracted from scientific papers, the datasets for extracting this knowledge can be classified into sentence-level key-insight and phrase-level key-insight datasets.
* **Sentence-level key-insight datasets:** These datasets contain scientific articles in which sentences are classified based on the insights they carry. We gathered the different properties that can be used to compare sentence-level key-insights and built an ORKG template. Thereafter, this template was used to compare sentence-level key-insight datasets published in the scientific literature.
* **Phrase-level key-insight datasets:** these datasets contain scientific papers in which phrases are annotated with entities corresponding to potential key-insights they may carry. The datasets for phrase-level key-insight extraction are difficult to build and scarce. As we did with sentence-level key-insights, we built an ORKG template of phrase-level key-insights and we used this template to compare phrase-level key-insights datasets.
The comparison of phrase-level and sentence-level key-insights shows that the majority of existing datasets belong to the domain of medical science. Moreover, these datasets are mainly based on the extraction of knowledge from abstracts only [4].
### Acquisition methods
Acquiring knowledge from scientific papers can be manual or automatic. Automatic knowledge acquisition relies on rules, Machine Learning, Deep Learning and Natural Language Processing techniques for the automatic identification and extraction of key-insights. Based on the datasets presented in Section 7.2, Zara et al. [4] classified these methods into sentence-level and phrase-level key-insight extraction methods.
* **Sentence-level key-insight extraction:** these methods focus on the classification of sentences into predefined categories based on the insights they carry.
* **Phrase-level key-insight extraction:** these methods focus on the extraction of phrases carrying potential information.
To extract sentence-level and phrase-level key-insights from scientific papers, rule-based, ML, DL and NLP techniques have been proposed. The main techniques are Bayesian classifiers, Conditional Random Fields, Support Vector Machines and Hidden Markov Models. To compare research work on this subject, we built a template and used it to compare several methods for sentence-level and phrase-level key-insight extraction. These methods are not as powerful as metadata extraction methods: very few works report methods whose F-measure reaches 85% for the extraction of each type of key-insight.
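To give a flavour of sentence-level key-insight extraction as a sentence classification task, here is a minimal, self-contained sketch using scikit-learn; the four training sentences and their labels are invented, and a real system would be trained on one of the datasets discussed in Section 7.2:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Invented toy corpus: each sentence is labeled with the insight it carries.
sentences = [
    "We propose a CRF-based model for entity extraction.",  # method
    "Our approach reaches an F-measure of 0.85.",           # result
    "Extracting relations from abstracts remains hard.",    # problem
    "We aim to build a gold-standard corpus.",              # objective
]
labels = ["method", "result", "problem", "objective"]

# TF-IDF features feeding a linear Support Vector Machine.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
clf.fit(sentences, labels)

print(clf.predict(["The system obtains 92% precision on the test set."]))
```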
### Tools for knowledge acquisition
Key-insights acquired from scientific papers are generally grouped into research contributions to make them comparable with other resources. To this end, hand-written notes can be used to organize and build comparison tables and figures. Tools used for knowledge acquisition from scientific papers aim to facilitate this work and make it less laborious, time consuming and cumbersome. They can be classified as computer-assisted tools, tools for automatic extraction of key-insights, digital research repositories and social tagging and bookmarking platforms.
#### 7.4.1 Computer-assisted tools
Computer-assisted tools aim to help researchers organize the key-insights extracted from scientific papers. Spreadsheet software such as Microsoft Excel, LibreOffice Calc or Google Sheets is generally used to organize, store and compare research contributions from several research papers. The main advantage of this software is that the data can be stored and reused whenever needed; it is also easy to build graphics from the data. However, these data are not harmonized, are isolated on researchers' computers or storage, and are difficult to merge with other research data. Thus, two researchers will make the same effort to extract the same knowledge from a set of scientific papers. These efforts can be saved if the knowledge is organized in computer-assisted software such as ORKG. In a recent work, Allard et al. [13] present a workflow designed to compare research contributions in ORKG. This paper shows the process of adding a paper and its key-insights to ORKG. However, it does not provide a complete methodology from the knowledge elicitation phase (using a template to create a conceptual model of the domain) to the knowledge use phase.
#### 7.4.2 Digital research repositories
Digital research repositories aim at providing researchers with basic filters to ease the search for scientific papers while querying through millions of research papers. To this end, metadata is used to provide various search facilities [4]. In addition, key-insights are used to augment keywords and provide short abstracts (e.g., Semantic Scholar) to guide researchers in identifying papers relevant to their research problem.
#### 7.4.3 Social tagging and bookmarking platforms
Social tagging and bookmarking platforms (e.g. CiteULike, Bibsonomy, Delicious) are online services for serving scientific communities [5]. The users of these tools can annotate research articles, bookmark their preferences, etc. This allows them to maintain their references on a web page with their own defined tags or keywords. However, it does not allow researchers to compare research contributions identified across several research papers.
Even if the knowledge of some digital research repositories and social tagging and bookmarking platforms is organized in knowledge graphs (e.g., Springer Nature SciGraph22, Microsoft Academic [25]), these tools do not permit researchers to structure hidden key-insights so as to help other researchers update them with more papers and insights.
Footnote 22: [https://www.springernature.com/gp/researchers/scigraph](https://www.springernature.com/gp/researchers/scigraph)
## 8 Summary and conclusion
Acquiring knowledge from scientific papers from scratch is costly in time and resources. Thus, we propose in this paper an approach using the Open Research Knowledge Graph as a computer-assisted tool for knowledge acquisition from scientific papers. It consists of six steps:
* Knowledge elicitation consists of determining the domain and the research problem to document, then using this information to search for relevant scientific papers and extract the elements one wants to compare.
* Knowledge analysis and interpretation consists of analyzing the pertinence of the elements extracted during knowledge elicitation and deleting duplicates.
* Template creation consists of using the elements obtained after knowledge analysis and interpretation to build a template that will further be used to organize extracted key-insights and research contributions.
* Knowledge representation consists of using existing templates to structure the extracted knowledge in a knowledge graph.
* Knowledge use consists of comparing research contributions in comparison tables, and using them to write reviews of the domain.
* Verification and validation consists of having the templates, the contributions, the comparisons of research contributions and the reviews validated by fellow researchers.
This approach is currently used to document the "ontology learning", "epidemiological surveillance systems design and implementation", "food information engineering", "Tabular data to Knowledge Graph Matching", "Question Answering", and "information extraction from scientific papers" research problems and the "Neuro-symbolic AI" domain. Thus, more than 200 papers have been ingested into ORKG. From these papers, more than 800 contributions have been documented, and these contributions have been used to build over 100 comparison tables. At the end of this work, we found that ORKG is a valuable tool that can reduce the workload of state-of-the-art research.
## Acknowledgement
We are grateful to the Open Research Knowledge Graph team for their support during the curation of ORKG. Our great thanks also go to all the curators; their remarks and questions were very helpful in this work.
|
2306.04497 | Data structures for photoabsorption within the ExoMol project | The ExoMol database currently provides comprehensive line lists for modelling
the spectroscopic properties of molecules in hot atmospheres. Extending the
spectral range of the data provided to ultraviolet (UV) wavelengths brings into
play three processes not currently accounted for in the ExoMol data structure,
namely photodissociation, which is an important chemical process in its own
right, the opacity contribution due to continuum absorption and predissociation
which can lead to significant and observable line broadening effects. Data
structures are proposed which will allow these processes to be correctly
captured and the (strong) temperature-dependent effects predicted for UV
molecular photoabsorption in general and photodissociation in particular to be
represented. | Jonathan Tennyson, Marco Pezzella, Jingxin Zhang, Sergei N. Yurchenko | 2023-06-07T15:05:44Z | http://arxiv.org/abs/2306.04497v1 | # Data structures for photoabsorption within the ExoMol project
###### Abstract
The ExoMol database currently provides comprehensive line lists for modelling the spectroscopic properties of molecules in hot atmospheres. Extending the spectral range of the data provided to ultraviolet (UV) wavelengths brings into play three processes not currently accounted for in the ExoMol data structure, namely photodissociation, which is an important chemical process in its own right, the opacity contribution due to continuum absorption and predissociation which can lead to significant and observable line broadening effects. Data structures are proposed which will allow these processes to be correctly captured and the (strong) temperature-dependent effects predicted for UV molecular photoabsorption in general and photodissociation in particular to be represented.
keywords: Data methods; photoabsorption; exoplanets
## 1 Introduction
The analysis of light as a function of wavelength provides a major window on the Universe. To interpret these signals requires appropriate laboratory data. The ExoMol project (Tennyson & Yurchenko, 2012) was established to provide molecular line lists for hot astronomical atmospheres. Up until recently the ExoMol project, in keeping with similar projects such as HITEMP (Rothman et al., 2010; Hargreaves et al., 2019), TheoReTS (Rey et al., 2016) and NASA Ames (Huang et al., 2021), has presented results as (large) lists of transitions or spectral lines. Implicitly this assumes that all lines are discrete transitions between bound states (bound - bound transitions) even though in some cases the upper states may actually lie in the continuum so that these transitions, while still appearing as lines, actually represent part of the bound - free spectrum. The ExoMol project uses a well-defined format for line lists (Tennyson et al., 2013) which has been further developed as part of its data releases (Tennyson et al., 2016, 2020).
However, a number of recent developments, discussed below, have extended the scope of the ExoMol project and therefore the data it provides. This has caused us to consider how to generalise the ExoMol data structure to accommodate both the increased range of data and also its different uses, as bound - free data are important not only for opacities and spectroscopic models but also, in that they represent a route to photodissociation, which is an important process for chemical models. Sharp transitions to states lying above the dissociation limit are already starting to be captured by ExoMol as part of standard line lists (Qu et al., 2021; Owens et al., 2022). However, once occupied, these above-dissociation states can decay either by emitting a photon, such as UV fluorescence which is an important astrophysical process (Lupu et al., 2011; Gerard et al., 2022), or they can dissociate. The above-dissociation region also contains a continuum component to the photoabsorption caused, for example, by excitation to dissociative electronically excited states. Currently neither ExoMol nor any of the databases cited above captures this component. At the same time we have started to consider the role of photodissociation (Pezzella et al., 2021, 2022) which itself usually comprises sharp lines sitting on top of a continuum. Photoabsorption into the continuum cannot be represented by the current ExoMol line format. We note that continuous opacities on a grid of temperatures and wavelengths for molecules were generated by Kurucz et al. (1987) and tabulated experimental vacuum ultraviolet (VUV) cross sections of molecules are collected in the MPI-Mainz UV/VIS Spectral Atlas (Keller-Rudek et al., 2013) as a continuous spectrum.
A third issue is the representation of so-called predissociation which occurs when transitions to excited electronic states which lie above the dissociation limit spontaneously undergo a further process (usually a curve crossing or tunneling) which leads to dissociation. The resulting lines are observed to be broadened, often significantly, due to the shorter lifetimes associated with predissociating states. So far, while the ExoMol project has included states in its line lists which are predissociative, it has ignored the important line broadening effects which result from the reduced lifetime associated with predissociative states. A recent study by Pavlenko et al. (2022) of the spectrum of AlH in the M-dwarf star Proxima Cen highlights problems with this approach. Pavlenko _et al._ used the ExoMol AlHambra line list for AlH (Yurchenko et al., 2018) to model this spectrum. For the majority of transitions, which do not show any effects due to predissociation, this line list worked well, but it proved to be less accurate for transitions to predissociating states. Importantly, only the lines which showed broadening due to predissociation, often by as much as 5 cm\({}^{-1}\), are not saturated in the stellar spectra, meaning that it was only by analysing these broadened lines that Pavlenko _et al._ were able to retrieve abundances of AlH. It would therefore clearly be advantageous to include consideration of predissociation effects in the ExoMol database.
In this research note we propose a generalisation of the current ExoMol format to allow for the various processes discussed above. At the same time we draw a clear distinction between the photoabsorption data needed for spectral and opacity models and the data needed for modelling the chemical consequences of photodissociation.
## 2 The present ExoMol data model
Figure 1 gives a simplified ExoMol data structure; a complete specification of the file types is given in Table 1. The master file exomol.all ([https://www.exomol.com/exomol.all](https://www.exomol.com/exomol.all)) gives an overview of the entire database and points towards the .def files which characterise the recommended line lists for each isotopologue for which data are available. The .def file contains the specification of the dataset in terms of what is available, for example uncertainties or lifetimes, the quantum numbers used in the states file and file sizes. It also gives a version number in yyyymmdd date format. The core of the database is the .states and .trans files which provide a compact form of the line list data.
Table 2 gives the specifications for the mandatory part of the .states file. These specifications include the optional components: uncertainties in the term values, the state lifetime and the Landé \(g\)-factor. The specification of term value uncertainties was introduced as part of the last data release (Tennyson et al., 2020) and the aim is to make their inclusion compulsory once the available datasets have all had uncertainties added. The lifetimes column has thus far contained radiative lifetimes computed using the Einstein A coefficients available in the transitions file (Tennyson et al., 2016); so far, lifetime effects due to other processes such as predissociation have not been considered. As discussed below, we propose changing this.
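The practical consequence of a (shortened) total lifetime is a natural Lorentzian line width; the standard lifetime-to-linewidth conversion is sketched below in Python, with lifetimes chosen purely for illustration:

```python
import math

C_CM_PER_S = 2.99792458e10  # speed of light in cm/s

def natural_fwhm_cm1(tau_s: float) -> float:
    """Lorentzian FWHM in cm^-1 implied by a total lifetime tau in s."""
    return 1.0 / (2.0 * math.pi * C_CM_PER_S * tau_s)

# A purely radiative lifetime of a few ns gives negligible broadening,
print(natural_fwhm_cm1(4.5e-9))    # ~1.2e-3 cm^-1
# while a predissociative lifetime of a few ps gives observable widths.
print(natural_fwhm_cm1(5.45e-12))  # ~1 cm^-1
```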
After the mandatory fields, the states file contains data on the quantum numbers and other meta-data used to specify each state. The number and format of these quantum numbers is specified by the .def file associated with that dataset. Other metadata associated with a level can also be included in this section. Table 3 shows the start of the states file for the recently updated AlHambra line list for \({}^{27}\)Al\({}^{16}\)O (Bowesman et al., 2021). Note that we have taken the opportunity to update our quantum number specifications to conform with the PyValem python package for parsing, validating and manipulating quantum state labels of atoms, ions and small molecules (Hill and Hanicinec, 2022; Hill et al., 2023). In general, this change only affects electronic state designations, for which X2SIGMA+, A2PI and so forth are updated to X(2SIGMA+), A(2PI), etc. This update adds two characters to the electronic state field but otherwise should be transparent to users; however, it means that all state labels can now be parsed using PyValem, which is important for some uses of the database (Owens et al., 2023).

Figure 1: Summary of the current ExoMol data structure.
Table 4 gives the specification of the simpler but generally much larger .trans file.
| Field | Fortran Format | C Format | Description |
| --- | --- | --- | --- |
| \(i\) | I12 | %12d | Upper state ID |
| \(f\) | I12 | %12d | Lower state ID |
| \(A\) | ES10.4 | %10.4e | Einstein \(A\) coefficient in s\({}^{-1}\) |
| \(\tilde{\nu}_{fi}\) | ES15.6 | %15.6e | Transition wavenumber in cm\({}^{-1}\) (optional) |

Fortran format: (I12,1x,I12,1x,ES10.4,1x,ES15.6)

Table 4: Specification of the transitions file.
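Since the fields are space-separated within this fixed layout, a .trans record can be read with a few lines of Python; the sketch below is minimal and the sample record is invented:

```python
def parse_trans_line(line: str):
    """Parse one .trans record: upper ID, lower ID, Einstein A, wavenumber."""
    fields = line.split()
    i, f = int(fields[0]), int(fields[1])  # upper and lower state IDs
    A = float(fields[2])                   # Einstein A coefficient in s^-1
    nu = float(fields[3]) if len(fields) > 3 else None  # cm^-1, optional
    return i, f, A, nu

# Invented sample record following (I12,1x,I12,1x,ES10.4,1x,ES15.6):
print(parse_trans_line("          12           7 3.1415e-02     5346.689545"))
```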
| Field | Fortran Format | C Format | Description |
| --- | --- | --- | --- |
| \(i\) | I12 | %12d | State ID |
| \(E\) | F12.6 | %12.6f | State energy in cm\({}^{-1}\) |
| \(g_{\rm tot}\) | I6 | %6d | State degeneracy |
| \(J\) | I7/F7.1 | %7d/%7.1f | \(J\)-quantum number (integer/half-integer) |
| (unc) | F12.6 | %12.6f | Uncertainty in state energy in cm\({}^{-1}\) (optional) |
| (\(\tau\)) | ES12.4 | %12.4e | Lifetime in s (optional) |
| (\(g\)) | F10.6 | %10.6f | Landé \(g\)-factor (optional) |

ID: state identifier, a non-negative integer index starting at 1; \(J\): total angular momentum quantum number, excluding nuclear spin.

Fortran format, \(J\) integer: (I12,1x,F12.6,1x,I6,I7,1x,ES12.4,1x,F10.6)
or \(J\) half-integer: (I12,1x,F12.6,1x,I6,F7.1,1x,ES12.4,1x,F10.6)

Table 2: Specification of the mandatory part of the states file with extra data options unc, \(\tau\) and \(g\).
\begin{table}
\begin{tabular}{l l l l l l l l l l l l l l} \hline \hline \(i\) & \(\tilde{E}\) (cm\({}^{-1}\)) & \(g\) & \(J\) & unc (cm\({}^{-1}\)) & \(\tau\) (s) & +/\(-\) & e/f & State & \(v\) & \(|\Lambda|\) & \(|\Sigma|\) & \(|\Omega|\) & Source Label \\ \hline
1 & 0.00000 & 12 & 0.5 & 0.00001 & inf & + & e & X(2SIGMA+) & 0 & 0 & 0.5 & 0.5 & EH \\
2 & 965.416878 & 12 & 0.5 & 0.01651 & 3.6193e+01 & + & e & X(2SIGMA+) & 1 & 0 & 0.5 & 0.5 & PS \\
3 & 1916.827286 & 12 & 0.5 & 0.016519 & 3.6193e+01 & + & e & X(2SIGMA+) & 2 & 0 & 0.5 & 0.5 & PS \\
4 & 2854.162814 & 12 & 0.5 & 0.022148 & 4.2147e+00 & + & e & X(2SIGMA+) & 3 & 0 & 0.5 & 0.5 & PS \\
5 & 3777.464572 & 12 & 0.5 & 0.019275 & 2.3874e+00 & + & e & X(2SIGMA+) & 4 & 0 & 0.5 & 0.5 & PS \\
6 & 4686.658929 & 12 & 0.5 & 0.016235 & 1.2041e+00 & + & e & X(2SIGMA+) & 5 & 0 & 0.5 & 0.5 & PS \\
7 & 5346.689545 & 12 & 0.5 & 0.009709 & 2.0591e-04 & + & e & A(2PI) & 0 & 1 & 0.5 & 0.5 & Ma \\
8 & 5581.988884 & 12 & 0.5 & 0.014747 & 4.3658e-02 & + & e & X(2SIGMA+) & 6 & 0 & 0.5 & 0.5 & PS \\ \hline \hline \end{tabular}
\(i\): State counting number;
\(\tilde{E}\): Term value (in cm\({}^{-1}\));
\(g\): Total state degeneracy;
\(J\): Total angular momentum quantum number;
unc: Estimated uncertainty of energy level (in cm\({}^{-1}\));
\(\tau\): Lifetime (in s);
\(+/-\): Total parity;
e/f: Rotationless parity;
State: Electronic state;
\(v\): Vibrational quantum number;
\(|\Lambda|\): Absolute value of the projection of the electronic angular momentum;
\(|\Sigma|\): Absolute value of the projection of the electronic spin;
\(|\Omega|\): Absolute value of the projection of the total angular momentum;
Source Label: Method used to generate term value (McKemmish et al., 2023).
\end{table}
Table 3: An excerpt from the recently updated states file for \({}^{27}\)Al\({}^{16}\)O, see Bowesman et al. (2021).
## 3 Proposed updated ExoMol data model
There are three new aspects that need to be included in an updated ExoMol data structure: predissociation, the continuum contribution to the opacity and photodissociation. Figure 2 illustrates the various photoabsorption processes for the case of a diatomic molecule.
### Predissociation
Figure 3 illustrates the main mechanisms leading to predissociation; for AlH it is caused by tunneling through a small barrier to dissociation, while for SH it is caused by couplings to dissociative states crossing the state to which the transition goes. The effect of predissociation can be included by a minor adjustment to the current ExoMol data structure. Predissociation manifests as a shortened lifetime, which leads to enhanced natural line-broadening of any transition to (or from) the state concerned. We therefore propose to generalise the definition of the lifetime, \(\tau\), given in the .states file. For most line lists, where predissociation is not important, \(\tau\) is defined as the natural lifetime due to radiative decay. In cases where predissociation is considered, \(\tau\) will represent the natural lifetime due to both radiative decay and predissociation. For example, the radiative lifetime of the \(v=1\) A \({}^{2}\Sigma^{+}\) state of SH was calculated by Gorman et al. (2019) as 5.13 ns, while the predissociative lifetime was measured as 5.45(24) ps (\(N=0\)) (Wheeler et al., 1997). A similar example of predissociation effects in the spectra of AlH is the predissociative lifetime of the \(J=23\), \(v=1\), A \({}^{1}\Pi\) state of \({}^{27}\)AlH, which was measured by Baltayan & Nedelec (1979) as 4.5 ns, while the
Figure 3: Left: Potential energy curves for AlH due to Yurchenko et al. (2018b) showing the predissociation region. The \(v=0\) vibrational state of the A \({}^{1}\Pi\) electronic state lies below the AlH dissociation limit: transitions to this state are sharp as they do not show effects due to predissociation. Conversely, states in the \(v=1\) level can predissociate by tunneling through the barrier to dissociation and show pronounced effects due to lifetime broadening. Right: The potential energy curves (bound and dissociative) and predissociated states of SH.
Figure 2: Schematic representation of photoabsorption for a diatomic molecule showing line and continuum spectra. Transitions within the ground electronic state (in black) lead to the rovibrational line spectrum (also in black), as do transitions to bound excited electronic states (in red); transitions to the repulsive electronic state produce continuum spectra (in green). The solid arrows denote sharp line transitions; the dot-interrupted arrow goes to states above the dissociation limit (\(D_{\rm e}\)), which can exhibit predissociative effects. The golden dotted arrow shows photoabsorption which we represent using bound-continuum cross sections. Photoabsorption above \(D_{\rm e}\) can contribute to photodissociation.
radiative lifetime is calculated to be 101.7 ns (Yurchenko et al., 2018). The reduced lifetime affects the line broadening of the corresponding transitions and is therefore an important factor in retrievals of AlH abundance from stellar spectra, as was recently demonstrated by Pavlenko et al. (2022).
Cases where predissociation effects are included in this lifetime will be marked by a new flag, prediss, included in the .def file; prediss will default to 0 (none) and be set to 1 when the effects are present.
In principle, the natural lifetime provides a contribution to the line profile which sits alongside the temperature-dependent effects of Doppler broadening and pressure-dependent collisional broadening. In practice, the natural lifetime usually makes a minimal contribution to the line profile and is thus neglected. For predissociating states this is no longer the case. However, including the effect of lifetime broadening within a standard Voigt profile is straightforward. Lifetime broadening leads to a Lorentzian line shape, like pressure broadening, where \(\gamma_{\tau}\), the half-width in cm\({}^{-1}\) due to lifetime broadening, is given by
\[\gamma_{\tau}=\frac{\hbar}{2\tau hc}. \tag{1}\]
This half-width can simply be added to the pressure-broadening half-width, \(\gamma_{p}\), to give the total Lorentzian component of the line profile. Given that a Voigt profile is already being used, this has little computational impact on a calculation, which suggests that routine use of \(\gamma_{p}+\gamma_{\tau}\) for the Lorentzian half-width would avoid the need to worry about whether predissociation needs to be considered or not. To this end, for states with non-negligible predissociative lifetimes, the radiative values of \(\tau_{\rm rad}\) in the ExoMol states files will be replaced by \(\tau_{\rm prediss}\). In the example of the \(J=23\), \(v=1\), A \({}^{1}\Pi\) state of \({}^{27}\)AlH, the ExoMol value \(\tau_{\rm rad}=1.0169\times 10^{-7}\) s can be replaced by the experimental value \(\tau_{\rm prediss}=4.5\times 10^{-9}\) s (Baltayan and Nedelec, 1979); otherwise calculated values will be used.
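For concreteness, Equation 1 is trivial to evaluate; the short sketch below (ours, not part of the database tooling) reproduces the contrast between the radiative and predissociative \({}^{27}\)AlH lifetimes quoted above:

```python
from scipy.constants import c, h, hbar

def lifetime_hwhm(tau):
    """Lorentzian half-width gamma_tau (cm^-1) from a lifetime tau (s),
    following Eq. (1): gamma_tau = hbar / (2 tau h c); the factor of
    100 converts the speed of light from m/s to cm/s."""
    return hbar / (2.0 * tau * h * (100.0 * c))

gamma_rad = lifetime_hwhm(1.0169e-7)   # ~2.6e-5 cm^-1: negligible broadening
gamma_pre = lifetime_hwhm(4.5e-9)      # ~5.9e-4 cm^-1: predissociation-enhanced
# The total Lorentzian component of the Voigt profile is then gamma_p + gamma_tau.
```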
### Continuum absorption
Both continuum absorption and photodissociation need to be represented as cross sections rather than lines as they are continuum processes. However, in our proposed data model continuum absorption will be represented as a function of wavenumber (cm\({}^{-1}\)) to retain consistency with the line spectra, while photodissociation cross sections will be stored as a function of wavelength (nm).
When considering photoabsorption by molecules to states lying above the dissociation limit, the spectrum can be thought of as dividing broadly into two classes: line spectra comprising what look like sharp bound-bound transitions, and absorption directly into the continuum. Predissociation spectra form an intermediate between these two cases and, as discussed above, will be treated as line spectra. The line spectra can be represented using the form of the standard line lists (line positions, Einstein A coefficients, lower/upper state energies and other state descriptions), as captured by the .states and .trans files. However, the bound-continuum photoabsorption is best represented as temperature-dependent photoabsorption cross sections. These continuum photoabsorption cross sections, whose data specification is given below, will be stored as part of the standard line list database as separate files for each isotopologue. The temperature-dependent photoabsorption spectrum is then obtained by adding the appropriate line and continuum cross sections using software such as ExoCross (Yurchenko et al., 2018) or PyExoCross (Zhang et al., 2023).
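The bookkeeping involved in combining the two contributions is simple: interpolate the tabulated continuum onto the wavenumber grid of the computed line cross sections and add. A schematic numpy sketch, with invented grids and stand-in cross section arrays:

```python
import numpy as np

nu_line = np.linspace(30000.0, 45000.0, 150001)       # fine line grid, cm^-1
sigma_line = np.zeros_like(nu_line)                   # stand-in line cross sections
nu_cont = np.arange(30000.0, 45001.0, 10.0)           # coarser .cont grid, cm^-1
sigma_cont = 1e-20 * np.exp(-(nu_cont - 40000.0)**2 / 2.0e6)  # stand-in continuum

# Interpolate the tabulated continuum onto the line grid and add the two.
sigma_total = sigma_line + np.interp(nu_line, nu_cont, sigma_cont)
```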
Continuum molecular absorption due to collision induced absorption (CIA) (Richard et al., 2012; Karman et al., 2019) is already routinely considered as part of astronomical models; it has also been recently recommended that absorption due to the so-called 'water continuum' be considered in model atmospheres for water-rich exoplanets (Anisman et al., 2022).
The data model we propose for including continuum absorption due simply to photoabsorption into continuum states is given in Table 5. Again these cross sections will be temperature dependent but, unlike CIA and the water continuum, it is probably safe to assume that this continuum is not strongly pressure dependent. The presence of a continuum absorption contribution to the photoabsorption will be indicated by a new flag, continuum, included in the .def file; continuum will default to 0 (none) and be set to 1 when the effects are present.
We note that our proposal involves providing photodissociation data on a wavelength grid (in nm) while continuum absorption cross sections will be provided on a wavenumber grid (in cm\({}^{-1}\)). This latter choice ties in closely with line lists, which already provide transition wavenumbers (in cm\({}^{-1}\)). The data structure of continuum absorption cross sections is presented in Table 5. The file names have the following structure: '<ISO-SLUG>__<DSNAME>__<RANGE>__T<TEMP>K__P<PRESSURE>bar__<STEP>.cont', where ISO-SLUG is the iso-slug molecule name (Tennyson et al., 2020), DSNAME is the name of the line list, RANGE is the wavenumber range in cm\({}^{-1}\), TEMP is the temperature in K, PRESSURE is the pressure in bar, and STEP is the wavenumber step in cm\({}^{-1}\).
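A small helper assembling a file name under this convention might look as follows (the function name and example metadata values are invented for illustration; the double-underscore separators follow the convention used in other ExoMol file names):

```python
def cont_filename(iso_slug, dsname, nu_min, nu_max, temp_K, p_bar, step):
    """Assemble a .cont file name following the convention described above."""
    return (f"{iso_slug}__{dsname}__{nu_min:.1f}-{nu_max:.1f}"
            f"__T{temp_K:g}K__P{p_bar:g}bar__{step:g}.cont")

print(cont_filename("27Al-16O", "ATP", 0.0, 35000.0, 1000, 0, 0.1))
# -> 27Al-16O__ATP__0.0-35000.0__T1000K__P0bar__0.1.cont
```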
\begin{table}
\begin{tabular}{l l l l} \hline \hline Field & Fortran Format & C Format & Description \\ \hline \(\tilde{\nu}_{i}\) & F12.6 & \%12.6f & Central bin wavenumber, cm\({}^{-1}\) \\ \(\sigma_{i}\) & ES14.8 & \%14.8e & Absorption cross section, cm\({}^{2}\) molec\({}^{-1}\) \\ \hline \multicolumn{4}{l}{Fortran format: (\texttt{F12.6,1x,ES14.8})} \\ \hline \end{tabular}
\end{table}
Table 5: Specification of the .cont continuum photoabsorption cross section file format.
### Photodissociation
Photodissociation cross sections are separated from the line list and form a distinct section in the ExoMol database. At present this section contains calculated cross sections for HCl and HF (Pezzella et al., 2022) and measured cross sections for CO, H\({}_{2}\)O, CO\({}_{2}\), SO\({}_{2}\), NH\({}_{3}\), H\({}_{2}\)CO and C\({}_{2}\)H\({}_{4}\) due to Fateev et al. (2023) and CO\({}_{2}\) due to Venot et al. (2018). The immediate plan is also to add temperature-dependent cross sections due to Qin and co-workers, who have performed photodissociation calculations on MgO (Bai et al., 2021), AlH (Qin et al., 2021), AlCl (Qin et al., 2021), AlF (Qin et al., 2022) and O\({}_{2}\) (Hu et al., 2023), as well as HF and HCl (Qin et al., 2022). In due course a structure of photodissociation .pdef files will be implemented to aid the navigation of this section of the database.
As the photodissociation cross sections form a distinct part of the ExoMol database, we have added a new photodissociation definition (.pdef) file to the data structure; the proposed format of this file is given in Table 6. This gives the necessary metadata to access and interpret the recommended photodissociation cross sections. The information sections mirror those given in the .def file for the same system. For completeness we have added two more flags to the master file, line and photo, which define whether the line spectra and photodissociation cross sections are present (=1) or not (=0). The default values are line=1 and photo=0, which aligns with the structure of previous master files, which assumed all data were in the form of a line list.
A file structure for photodissociation was already proposed by Pezzella et al. (2022); however, this is updated and extended here to align with the one proposed for VUV spectra in Tennyson et al. (2020); Table 7 gives the formal specification of the file structure. As a file naming convention we adopt the following:
'<ISO-SLUG>__<DSNAME>__<RANGE>__T<TEMP>K__P<PRESSURE>bar__<STEP>.photo', where ISO-SLUG is the iso-slug molecule name, DSNAME is the name of the line list, RANGE is the wavelength range in nm, TEMP is the temperature in K, PRESSURE is the pressure in bar, and STEP is the wavelength step in nm. For example, the photodissociation cross sections for HF at 200 K are contained in the file 1H-19F__PTY__90.0-400.1__T200K__P0bar__0.1.photo, see Table 8.
Pezzella et al. (2022) found that their cross sections depended strongly on the temperature of the molecule and proposed presenting these data for 34 temperatures between \(T=0\) and \(T=10000\) K. This data model implicitly assumes that the molecule is in local thermodynamic equilibrium (LTE). We discuss issues with treating non-LTE effects and other issues with this data model in the next section. These data are all implicitly at zero pressure as pressure broadening effects are neglected. Data from other sources will have different temperature and pressure grids.
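For users of these LTE grids, obtaining a cross section at an arbitrary temperature requires interpolating between the tabulated temperatures; a minimal sketch, assuming the two bracketing cross section arrays share one wavelength grid (linear interpolation in \(T\) is our illustrative choice here, not a database prescription):

```python
import numpy as np

def sigma_at_T(T, T_lo, sigma_lo, T_hi, sigma_hi):
    """Linearly interpolate photodissociation cross sections between the
    two tabulated temperatures bracketing T."""
    w = (T - T_lo) / (T_hi - T_lo)
    return (1.0 - w) * np.asarray(sigma_lo) + w * np.asarray(sigma_hi)

# e.g. sigma_250K = sigma_at_T(250.0, 200.0, sigma_200K, 300.0, sigma_300K)
```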
Experimental cross sections of molecules covering the VUV region have been curated by the MPI-Mainz UV/VIS Spectral Atlas (Keller-Rudek et al., 2013) using a similar wavelength (in nm) format. In Fig. 4, we illustrate photodissociation cross sections of HCl from ExoMol (Pezzella et al., 2022) and by Nee et al. (1986) at room temperature as provided by the MPI-Mainz UV/VIS Atlas.
\begin{table}
\begin{tabular}{l l l l} \hline \hline Field & Fortran Format & C Format & Description \\ \hline
\multicolumn{4}{l}{\textbf{Header Information}} \\ \hline
ID & A11 & \%11s & Always the ASCII string ``EXOMOL.pdef'' \\
IsoFormula & A27 & \%27s & Isotopologue chemical formula \\
Iso-slug & A160 & \%160s & Isotopologue identifier, see text for details \\
DSName & A10 & \%10s & Isotopologue dataset name \\
\(V\) & I8 & \%8d & Version number with format YYYYMMDD \\
MolKey & A27 & \%27s & Standard InChI key of the molecule \\
\(N_{\rm atom}\) & I4 & \%4d & Number of atoms \\ \hline
\multicolumn{4}{l}{\textbf{Atom definition} (The following 2 lines occur \(N_{\rm atom}\) times)} \\ \hline
\multicolumn{4}{l}{\textbf{Isotopologue Information}} \\ \hline
\(m\) Da, \(m\) kg & F12.6,1X,ES14.8 & \%12.6f \%14.8e & Isotopologue mass in Da and kg \\
\(I_{\rm sym}\) & A6 & \%6s & Molecular symmetry group (if \(N_{\rm atom}=2\) then C\({}_{\infty v}\) or D\({}_{\infty h}\)) \\
\(N_{\rm irrep}\) & I4 & \%4d & Number of irreducible representations \\ \hline
\multicolumn{4}{l}{\textbf{ExoMol Information}} \\ \hline
FileType & A27 & \%27s & Data source: ExoMol, Expt., etc. \\
\(T_{\rm max}\) & F8.2 & \%8.2f & Maximum temperature for cross sections \\
\(N_{\rm temp}\) & I3 & \%3d & No. of temperatures available \\
\(N_{\rm pres}\) & I3 & \%3d & No. of pressures available \\ \hline \hline \end{tabular}
\end{table}
Table 6: Photodissociation definition (.pdef) file format; each entry starts on a new line.
\begin{table}
\begin{tabular}{l l l l} \hline \hline Field & Fortran Format & C Format & Description \\ \hline \(\lambda_{i}\) & F10.3 & \%10.3f & Central bin wavelength, nm \\ \(\sigma_{i}\) & ES13.6 & \%13.6e & Photodissociation cross section, cm\({}^{2}\) molec\({}^{-1}\) \\ \hline \multicolumn{4}{l}{Fortran format: (\texttt{F10.3,1x,ES13.6})} \\ \hline \hline \end{tabular}
\end{table}
Table 7: Specification of the .photo photodissociation cross section file format.
## 4 Omissions from the updated data model
The assumption of LTE for a molecule undergoing photodissociation may not be valid in all cases. Our method of calculating these cross sections does indeed involve computing the initial/final-state dependent data which would be required for a non-LTE representation of photodissociation. However, given that even for diatomic molecules a large number of initial states have to be considered, even considering the vibronic states only, the volume of these data is large. As yet no-one has asked us for non-LTE photodissociation cross sections so at present we do not propose including them in the standard data distribution; if they are required they can be obtained from the present authors. Examples of the state-dependent non-LTE cross sections include the continuous opacities of CH and OH provided by Kurucz et al. (1987) as well as the CH data produced by Popa et al. (2022) and used in their non-LTE analysis of CH in metal-poor stellar atmospheres.
Another issue with our data model for photodissociation is that at present it provides no information on dissociation products. In comparison, the Leiden database (Heays et al., 2017), which provides low-temperature photodissociation cross sections for molecules of astrophysical interest, gives the dissociation products at the threshold to photodissociation but does not provide information on other possible photodissociation products as they may arise at shorter wavelengths. Although our methodology is capable of providing the partial cross sections (or branching ratios) associated with dissociation to different products, so far our models have not been constructed to produce these data. While the initial step in photodissociation generally involves a dipole-allowed transition to an electronically excited state, the subsequent dissociation step may involve crossings to states which cannot be reached by dipole transitions from the ground state, such as ones with different spin multiplicity. Allowing for these extra states represents a significant complication in the calculation and again, so far, no one has asked for these data. However, should these partial cross sections be needed it would be relatively simple to extend our proposed data model to accommodate
Figure 4: Photodissociation spectra of HCl from ExoMol (Pezzella et al., 2022) (theoretical, using natural abundance) and by Nee et al. (1986) (298 K) as provided by the MPI-Mainz UV/VIS Atlas.
\begin{table}
\begin{tabular}{c c} \hline \(\lambda_{i}\) & \(\sigma_{i}\) \\ \hline
90.00 & 4.48427452E-19 \\
90.10 & 4.50720456E-19 \\
90.20 & 4.52964490E-19 \\
90.30 & 4.55158741E-19 \\
90.40 & 4.57293420E-19 \\
90.50 & 4.59377114E-19 \\
90.60 & 4.62624329E-19 \\
90.70 & 1.68471832E-18 \\ \hline \end{tabular}
\(\lambda_{i}\): Central bin wavelength, nm; \(\sigma_{i}\): Photodissociation cross section, cm\({}^{2}\) molec\({}^{-1}\).
\end{table}
Table 8: An excerpt from the .photo photodissociation file for \({}^{1}\)H\({}^{19}\)F, see Pezzella et al. (2022).
them; we note that the International Atomic Energy Agency's CollisionDB database (Hill 2023; Hill et al. 2023a) already used PyValem to address this issue in collision cross sections.
## 5 Conclusion
The present research note sets out how we propose to extend the current ExoMol data model to accommodate photoabsorption processes which occur at shorter wavelengths, where the possibility of either direct or indirect absorption into the continuum arises. Broadly, these processes are classified as predissociation, continuum absorption and the photodissociation contribution to the opacity. While both predissociation and continuum absorption can be accommodated by generalising our current line list structure, a new branch starting from a photodissociation definition file has been added for the photodissociation cross sections.
Given that the various processes considered are not mutually exclusive as, for example, photodissociation also provides a contribution to the opacity, some care is required in defining data structures to facilitate the use of our extended data. We believe our proposals satisfy this requirement, but further expansion will be required to allow for pressure-dependent continuum absorption, photodissociation of molecules not in LTE, or to account for the possibility that photodissociation might result in the creation of a variety of different photodissociation products.
## Acknowledgements
We thank Ahmed Al-Refaie, Richard Freedman, Christian Hill, Roxana Lupu, Zhi Qin, Olivia Venot and Ingo Waldmann for helpful discussions on the topic of this work. This work was supported by the European Research Council under Advanced Investigator Project 883830.
## Data Availability
The data discussed in this article is available from the ExoMol database which can be accessed at www.exomol.com.
|
2310.19446 | spAbundance: An R package for single-species and multi-species spatially
explicit abundance models | Numerous modeling techniques exist to estimate abundance of plant and
wildlife species. These methods seek to estimate abundance while accounting for
multiple complexities found in ecological data, such as observational biases,
spatial autocorrelation, and species correlations. There is, however, a lack of
user-friendly and computationally efficient software to implement the various
models, particularly for large data sets. We developed the spAbundance R
package for fitting spatially-explicit Bayesian single-species and
multi-species hierarchical distance sampling models, N-mixture models, and
generalized linear mixed models. The models within the package can account for
spatial autocorrelation using Nearest Neighbor Gaussian Processes and
accommodate species correlations in multi-species models using a latent factor
approach, which enables model fitting for data sets with large numbers of sites
and/or species. We provide three vignettes and three case studies that
highlight spAbundance functionality. We used spatially-explicit multi-species
distance sampling models to estimate density of 16 bird species in Florida,
USA, an N-mixture model to estimate Black-throated Blue Warbler (Setophaga
caerulescens) abundance in New Hampshire, USA, and a spatial linear mixed model
to estimate forest aboveground biomass across the continental USA. spAbundance
provides a user-friendly, formula-based interface to fit a variety of
univariate and multivariate spatially-explicit abundance models. The package
serves as a useful tool for ecologists and conservation practitioners to
generate improved inference and predictions on the spatial drivers of
populations and communities. | Jeffrey W. Doser, Andrew O. Finley, Marc Kéry, Elise F. Zipkin | 2023-10-30T11:13:59Z | http://arxiv.org/abs/2310.19446v2 | # spAbundance: An R package for single-species and multi-species spatially-explicit abundance models
###### Abstract
1. Numerous modeling techniques exist to estimate abundance of plant and wildlife species. These methods seek to estimate abundance while accounting for multiple complexities found in ecological data, such as observational biases, spatial autocorrelation, and species correlations. There is, however, a lack of user-friendly and computationally efficient software to implement the various models, particularly for large data sets.
2. We developed the spAbundance R package for fitting spatially-explicit Bayesian single-species and multi-species hierarchical distance sampling models, N-mixture models, and generalized linear mixed models. The models within the package can account for spatial autocorrelation using Nearest Neighbor Gaussian Processes and accommodate species correlations in multi-species models using a latent factor approach, which enables model fitting for data sets with large numbers of sites and/or species.
3. We provide three vignettes and three case studies that highlight spAbundance functionality. We used spatially-explicit multi-species distance sampling models to estimate density of 16 bird species in Florida, USA, an N-mixture model to estimate Black-throated Blue Warbler (_Setophaga caerulescens_) abundance in New Hampshire, USA, and a spatial linear mixed model to estimate forest aboveground biomass across the continental USA.
4. spAbundance provides a user-friendly, formula-based interface to fit a variety of univariate and multivariate spatially-explicit abundance models. The package serves as a useful tool for ecologists and conservation practitioners to generate improved inference and predictions on the spatial drivers of populations and communities.
## 1 Introduction
Understanding how abundance of plant and animal populations varies across space and time is a central objective in ecology and conservation management. A variety of sampling and associated modeling techniques have been developed over the last 50 years to estimate abundance while accounting for imperfect detection (i.e., the failure to observe all individuals of a species that are present at a location during the sampling period), including distance sampling and repeated counts, among others (Nichols et al., 2009). In distance sampling, the probability of detecting an individual is assumed to decay with increasing distance to the observer, which allows for the explicit estimation of abundance/density while accommodating imperfect detection of individuals (Buckland et al., 2001). Hierarchical distance sampling (HDS; Royle et al. 2004) extends classical distance sampling to enable modeling abundance/density as a function of spatially-varying covariates. Royle (2004) introduced N-mixture models, which allow for estimation of abundance (and effects of spatially-varying covariates) while accounting for detection probability using replicated count data during some period where the population is assumed to be closed (i.e., no births/deaths or immigration/emigration). In addition to approaches that explicitly account for imperfect detection, generalized linear mixed models (GLMMs) that estimate relative abundance (i.e., ignoring imperfect detection) can be used to assess how environmental covariates influence relative changes in abundance across space and/or time (Barker et al., 2018; Goldstein and de Valpine, 2022). Multi-species (i.e., multivariate) extensions of HDS (Sollmann et al., 2016), N-mixture models (Yamaura et al., 2012), and GLMMs (e.g., Hui et al. 2015) use count data from multiple species to estimate species-specific patterns in abundance, which may also estimate correlations between species in a joint species distribution model (JSDM) framework (Warton et al., 2015).
When modeling abundance across large spatial domains and/or using a large number of observed locations, accommodating spatial autocorrelation becomes increasingly important (Guelat and Kery, 2018). Spatial autocorrelation can arise from a variety of ecological and/or biological processes, such as additional environmental drivers not
included as covariates in the model, dispersal, species interactions, and source-sink metapopulation dynamics (Chapter 9; Kery and Royle, 2021). Failing to account for residual spatial autocorrelation (i.e., remaining spatial autocorrelation after accounting for environmental covariates) can lead to overly precise estimates and inferior predictive performance. Modeling spatial dependence is commonly done via the addition of spatially structured random effects to point-referenced spatial regression models (i.e., spatially-explicit models). Gaussian process-based random effects provide a flexible non-parametric approach to capture spatial patterns, offer unparalleled process parameter and predictive inference, and yield probabilistic uncertainty quantification. The hierarchical Bayesian framework is the preferred inferential framework for models developed here and in the literature due to its increased flexibility to fit models that would be infeasible with classical methods (Banerjee et al., 2014). Such models are, however, notoriously computationally intensive (Banerjee and Fuentes, 2012), as computational complexity increases in cubic order with the number of spatial locations. These computational bottlenecks make fitting spatially-explicit models impractical for even moderately large data sets using Bayesian software packages such as Stan (Carpenter et al., 2017) and NIMBLE (de Valpine et al., 2017).
Many popular, formula-based R packages exist that can fit various combinations of distance sampling models, N-mixture models, and/or spatially-explicit GLMMs for assessing spatial patterns in abundance (Supplemental Information Table S1). The R package unmarked(Fiske and Chandler, 2011) is commonly used to fit single-species distance sampling and N-mixture models, but cannot accommodate spatial autocorrelation. The dsm package (Miller et al., 2013) can fit spatially-explicit distance sampling models using generalized additive models, the hSDM package (Vieilledent, 2019) can fit spatially-explicit N-mixture models with an intrinsic conditional autoregressive model (Ver Hoef et al., 2018), while the ubms package (Kellner et al., 2021) fits both spatially-explicit distance sampling and N-mixture models using restricted spatial regression (Hodges and Reich, 2010). These packages, however, cannot accommodate multiple species within a multivariate framework. A variety of R packages exist to fit spatially-explicit univariate
and multivariate GLMMs, such as spBayes(Finley et al., 2015), Hmsc(Tikhonov et al., 2020), and sdmTMB(Anderson et al., 2022). However, none of these packages can explicitly account for imperfect detection.
In this paper, we introduce the spAbundance R package for fitting Bayesian single-species and multi-species HDS models, N-mixture models, and GLMMs that may or may not include spatial autocorrelation in large data sets. We fit all spatially-explicit models with Nearest Neighbor Gaussian Processes (NNGPs), a computationally efficient approach that closely approximates a full Gaussian process while drastically reducing computational run times (Datta et al., 2016; Finley et al., 2019). We designed spAbundance syntax to closely follow the syntax of spOccupancy(Doser et al., 2022), an R package that fits a variety of spatially-explicit occupancy models, which together provide a user-friendly and computationally efficient set of tools to model occupancy and abundance while accounting for spatial autocorrelation and imperfect detection.
## 2 Overview of models in spAbundance
Below we give a brief overview of the models included in spAbundance. See Supplemental Information S1 for details on all prior distributions and their default values.
### Single-species hierarchical distance sampling models
The spAbundance functions DS and spDS fit non-spatial and spatial single-species HDS models, respectively. Let \(N(\mathbf{s}_{j})\) denote the true abundance of a species of interest at site \(j=1,\ldots,J\) with spatial coordinates \(\mathbf{s}_{j}\). We model \(N(\mathbf{s}_{j})\) using either a Poisson or negative binomial (NB) distribution following
\[\begin{split} N(\mathbf{s}_{j})&\sim\text{Poisson}(\mu(\mathbf{s}_{j})A(\mathbf{s}_{j})),\text{ or},\\ N(\mathbf{s}_{j})&\sim\text{NB}(\mu(\mathbf{s}_{j})A(\mathbf{s}_{j}),\kappa),\end{split} \tag{1}\]
where \(\mu(\mathbf{s}_{j})\) is the average abundance at site \(j\), \(A(\mathbf{s}_{j})\) is an offset, and \(\kappa\) is a positive dispersion parameter. Smaller values of \(\kappa\) indicate overdispersion in the latent abundance
values relative to a Poisson model, while higher values indicate minimal overdispersion in abundance. The offset term \(A(\mathbf{s}_{j})\) can be used to convert \(\mu(\mathbf{s}_{j})\) to units of density (i.e., abundance per unit area), while if \(A(\mathbf{s}_{j})=1\), \(\mu(\mathbf{s}_{j})\) is average abundance per site. We model \(\mu(\mathbf{s}_{j})\) using a log link function following
\[\log(\mu(\mathbf{s}_{j}))=\mathbf{x}(\mathbf{s}_{j})^{\top}\mathbf{\beta}+\mathrm{w}(\mathbf{s}_{j }), \tag{2}\]
where \(\mathbf{\beta}\) is a vector of regression coefficients for a set of covariates \(\mathbf{x}(\mathbf{s}_{j})\) (including an intercept), \(\mathrm{w}(\mathbf{s}_{j})\) is a zero-mean spatial random effect, and the \(\top\) denotes transposition of column vector \(\mathbf{x}(\mathbf{s}_{j})\). For non-spatial HDS models, \(\mathrm{w}(\mathbf{s}_{j})\) is removed from Equation 2. For spatially-explicit HDS, we model \(\mathbf{w}(\mathbf{s})\) using a NNGP as a computationally efficient alternative to using a full spatial GP. More specifically, we assume that
\[\mathbf{w}(\mathbf{s})\sim\mathrm{Normal}(\mathbf{0},\tilde{\mathbf{C}}(\mathbf{s},\mathbf{s}^{\prime},\mathbf{\theta})), \tag{3}\]
where \(\tilde{\mathbf{C}}(\mathbf{s},\mathbf{s}^{\prime},\mathbf{\theta})\) is a \(J\times J\) NNGP-derived spatial covariance matrix and \(\mathbf{\theta}\) is a vector of parameters governing the spatial process according to a spatial covariance function. spAbundance supports four spatial covariance models: exponential, spherical, Gaussian, and Matern (Banerjee et al., 2014). For the exponential, spherical, and Gaussian functions, \(\mathbf{\theta}=\{\sigma^{2},\phi\}\), where \(\sigma^{2}\) is a spatial variance parameter controlling the magnitude of the spatial random effects and \(\phi\) is a spatial decay parameter controlling the range of spatial autocorrelation, while the Matern function additionally includes a spatial smoothness parameter, \(\nu\). See Supplemental Information S1 for statistical details on the NNGP approximation.
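To make Equation 3 concrete (independently of the package internals), the exponential member of this family is \(C(\mathbf{s},\mathbf{s}^{\prime})=\sigma^{2}\exp(-\phi\,d)\) with \(d\) the distance between sites. The following sketch, with invented coordinates and parameter values, builds the dense covariance matrix that the NNGP approximation avoids ever forming:

```python
import numpy as np

rng = np.random.default_rng(0)

def exponential_cov(coords, sigma_sq, phi):
    """Dense exponential covariance matrix sigma^2 * exp(-phi * d) for a
    J x 2 array of site coordinates; working with the full J x J matrix
    costs O(J^3), which is the expense the NNGP approximation avoids."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    return sigma_sq * np.exp(-phi * d)

coords = rng.uniform(size=(50, 2))                 # 50 hypothetical sites
C = exponential_cov(coords, sigma_sq=1.5, phi=3.0)
w = rng.multivariate_normal(np.zeros(50), C)       # one draw of w(s), Eq. (3)
```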
Suppose observers count the number of individuals of the species of interest at each site \(j\). Our software implementation in spAbundance supports two types of "sites": line transects and point count surveys. In line transects, each site \(j\) is a line transect the observer walks along and records the distance of each observed individual to the line within a set of \(k=1,\ldots,K\) distance bands. In point count surveys, each site \(j\) is the center of an imaginary circle at which an observer stands and records the distance of
each observed individual to the center of the circle within \(k=1,\ldots,K\) circular distance bands. Note that sometimes continuous distances are recorded rather than distance bins, in which case the continuous distance measurements can then be split into \(K\) distance bins prior to analysis. Define \(\boldsymbol{y}(\boldsymbol{s}_{j})\) as a vector of \(K\) values indicating the number of individuals observed within each of the \(k\) distance bands at site \(j\). Similarly, let \(\boldsymbol{y}^{*}(\boldsymbol{s}_{j})\) be a vector of \(K+1\) values, where the first \(K\) values correspond to \(\boldsymbol{y}(\boldsymbol{s}_{j})\), and the last value is the number of unobserved individuals at that location (i.e., \(N(\boldsymbol{s}_{j})-\sum_{k=1}^{K}y_{k}(\boldsymbol{s}_{j})\)). Note the last value in \(\boldsymbol{y}^{*}(\boldsymbol{s}_{j})\) is not observed (i.e., since \(N(\boldsymbol{s}_{j})\) is not known). We model \(\boldsymbol{y}^{*}(\boldsymbol{s}_{j})\) according to
\[\boldsymbol{y}^{*}(\boldsymbol{s}_{j})\sim\text{Multinomial}(N(\boldsymbol{s} _{j}),\boldsymbol{\pi}_{j}^{*}), \tag{4}\]
where \(\boldsymbol{\pi}_{j}^{*}\) is a vector of cell-specific detection probabilities with the first \(K\) values denoted as \(\boldsymbol{\pi}_{j}\) and the final value \(\pi_{j,K+1}=1-\sum_{k=1}^{K}\pi_{j,k}\). More specifically, \(\pi_{j,k}\) is the probability of detecting an individual in the \(k\)th distance band at site \(j\). We define \(\pi_{j,k}\) as
\[\pi_{j,k}=\bar{p}_{j,k}\psi_{k}, \tag{5}\]
where \(\bar{p}_{j,k}\) is the probability of detecting an individual in distance band \(k\), given the individual occurs in distance band \(k\), and \(\psi_{k}\) is the probability an individual occurs in distance band \(k\). The definitions of \(\bar{p}_{j,k}\) and \(\psi_{k}\) are different depending on whether the distance bands are linear (as in line transects) or circular (as in point count surveys). Following the standard distance sampling assumption that animals are uniformly distributed in space, for line transects we have
\[\psi_{k}=\frac{b_{k+1}-b_{k}}{B}, \tag{6}\]
where \(b_{k+1}\) and \(b_{k}\) are the upper and lower distance limits for band \(k\), and \(B\) is the line transect half-width (i.e., the maximum distance within which individuals are counted). Further, for distance x we have
\[\bar{p}_{j,k}=\frac{1}{b_{k+1}-b_{k}}\int_{b_{k}}^{b_{k+1}}g(\text{x})d\text{ x}. \tag{7}\]
For point count surveys, we have
\[\psi_{k}=\frac{b_{k+1}^{2}-b_{k}^{2}}{B^{2}}, \tag{8}\]
where \(b_{k+1}\) and \(b_{k}\) are similarly the upper and lower distance limits for band \(k\), and \(B\) is the radius of the full point count circle. We then define \(\bar{p}_{j,k}\) as
\[\bar{p}_{j,k}=\frac{1}{b_{k+1}^{2}-b_{k}^{2}}\int_{b_{k}}^{b_{k+1}}g(\mathrm{x} )2\mathrm{x}d\mathrm{x}. \tag{9}\]
For both line transects and point count surveys, \(g(\mathrm{x})\) is some function of distance \(\mathrm{x}\) from the transect line/point count survey center. We approximate the integrals in Equation 7 and 9 using numerical integration. Our software implementation in spAbundance currently supports two detection functions: half-normal and negative exponential (see Supplemental Information S1 for their definitions). Both of these functions are governed by a scale parameter, \(\sigma_{j}\), which can be modeled as a function of covariates to allow detection probability to vary across sites. More specifically, we have
\[\log(\sigma_{j})=\mathbf{v}_{j}^{\top}\mathbf{\alpha}, \tag{10}\]
where \(\mathbf{\alpha}\) is a vector of regression coefficients for covariates \(\mathbf{v}\) (including an intercept).
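To make Equations 5-9 concrete, the following sketch (ours; spAbundance performs the analogous computation internally in C/C++) evaluates the cell probabilities \(\pi_{j,k}\) for a point count survey with a half-normal detection function \(g(\mathrm{x})=\exp(-\mathrm{x}^{2}/(2\sigma^{2}))\), approximating the integral in Equation 9 numerically:

```python
import numpy as np

def halfnormal_pi(sigma, breaks, B):
    """Cell detection probabilities pi_k (Eq. 5) for circular distance
    bands around a point count location; the integral in Eq. (9) is
    approximated with a midpoint rule."""
    pis = []
    for b_lo, b_hi in zip(breaks[:-1], breaks[1:]):
        edges = np.linspace(b_lo, b_hi, 201)
        x = 0.5 * (edges[:-1] + edges[1:])          # midpoints
        g = np.exp(-x**2 / (2.0 * sigma**2))        # half-normal g(x)
        p_bar = np.sum(g * 2.0 * x) * (edges[1] - edges[0]) / (b_hi**2 - b_lo**2)
        psi = (b_hi**2 - b_lo**2) / B**2            # Eq. (8)
        pis.append(p_bar * psi)
    return np.array(pis)

pi = halfnormal_pi(sigma=40.0, breaks=[0.0, 25.0, 50.0, 75.0, 100.0], B=100.0)
# 1 - pi.sum() is the probability an individual present at the site is never
# detected, i.e. the final cell of the multinomial in Eq. (4).
```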
### Multi-species hierarchical distance sampling models
Now consider the case where distance sampling data, \(\mathbf{y}_{i}(\mathbf{s}_{j})\), are collected for multiple species \(i=1,\ldots,I\) at each survey location \(j\) with coordinates \(\mathbf{s}_{j}\). We are now interested in estimating the abundance of each species \(i\) at each location \(j\), denoted as \(N_{i}(\mathbf{s}_{j})\). We model \(N_{i}(\mathbf{s}_{j})\) analogous to Equation 1, with expected abundance now varying by species and site according to
\[\log(\mu_{i}(\mathbf{s}_{j}))=\mathbf{x}(\mathbf{s}_{j})^{\top}\mathbf{\beta}_{i}+\mathrm{w}_{ i}^{*}(\mathbf{s}_{j}), \tag{11}\]
where \(\mathbf{\beta}_{i}\) are the species-specific effects of covariates \(\mathbf{x}(\mathbf{s}_{j})\) (including an intercept) and \(\mathrm{w}_{i}^{*}(\mathbf{s}_{j})\) is a species-specific random effect. When \(N_{i}(\mathbf{s}_{j})\) is modeled using a negative
binomial distribution, we estimate a separate dispersion parameter \(\kappa_{i}\) for each species. We model \(\mathbf{\beta}_{i}\) as random effects arising from a common, community-level normal distribution, which leads to increased precision of species-specific effects compared to single-species models (Sollmann et al., 2016). For example, the species-specific abundance intercept \(\beta_{0,i}\) is modeled according to
\[\beta_{0,i}\sim\text{Normal}(\mu_{\beta_{0}},\tau_{\beta_{0}}^{2}), \tag{12}\]
where \(\mu_{\beta_{0}}\) is the community-level abundance intercept, and \(\tau_{\beta_{0}}^{2}\) is the variance of the intercept across all \(I\) species. The observation portion of the multi-species distance sampling model is identical to the single-species model and follows Equations 4-10, with all parameters indexed by species, and the species-specific coefficients \(\mathbf{\alpha}_{i}\) modeled hierarchically analogous to the species-specific abundance coefficients \(\mathbf{\beta}_{i}\) (Equation 12).
spAbundance fits three types of multi-species models that differ in how they incorporate the species-specific random effect \(\text{w}_{i}^{*}(\mathbf{s}_{j})\) (if included). The function msDS fits the non-spatial multi-species distance sampling model of Sollmann et al. (2016) in which we remove the random effect \(\text{w}_{i}^{*}(\mathbf{s}_{j})\) from Equation 11. The function sfMsDS fits spatial multi-species distance sampling models using a spatial factor model (Hogan and Tchernis, 2004), which simultaneously accommodates spatial autocorrelation and residual species correlations in a spatial joint species distribution model framework. Briefly, we decompose \(\text{w}_{i}^{*}(\mathbf{s}_{j})\) into a linear combination of \(q\) latent variables (i.e., factors) and their associated species-specific coefficients (i.e., factor loadings). More specifically, we have
\[\text{w}_{i}^{*}(\mathbf{s}_{j})=\mathbf{\lambda}_{i}^{\top}\text{w}(\mathbf{s}_{j}), \tag{13}\]
where \(\mathbf{\lambda}_{i}^{\top}\) is the \(i\)th row of factor loadings from an \(I\times q\) loadings matrix \(\mathbf{\Lambda}\), and \(\text{w}(\mathbf{s}_{j})\) is a vector of length \(q\) of independent spatial factors at site \(j\). By setting \(q<<I\), we achieve dimension reduction to efficiently model communities with a large number of species (Taylor-Rodriguez et al., 2019; Doser et al., 2023). The approach accounts for residual species correlations via their species-specific responses to the \(q\) spatial factors, which results in a residual interspecies covariance matrix that can be derived from the
model as \(\mathbf{\Sigma}=\mathbf{\Lambda}\mathbf{\Lambda}^{\top}\). We model each spatial factor using an independent NNGP according to Equation 3, except we fix the spatial variance parameter to 1 to ensure identifiability (Lopes and West, 2004). As an alternative, the function lfMsDS models \(\mathrm{w}_{i}^{*}(\mathbf{s}_{j})\) identically to Equation 13, except it assumes each of the \(q\) factors in \(\mathbf{w}(\mathbf{s}_{j})\) arises from an independent standard normal distribution. This model does not account for spatial autocorrelation but does allow for the estimation of species correlations. The models fit by sfMsDS and lfMsDS can be thought of as abundance-based JSDMs that account for imperfect detection (Tobler et al., 2019; Chapter 8 in Kery and Royle, 2021).
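The dimension reduction in Equation 13 is easy to see schematically; in the sketch below, the species count, factor count, and loadings are invented, and the identifiability constraints placed on \(\mathbf{\Lambda}\) in practice are omitted:

```python
import numpy as np

rng = np.random.default_rng(1)
I, q, J = 16, 3, 200                     # species, latent factors, sites
Lambda = rng.normal(size=(I, q))         # factor loadings matrix Lambda
w = rng.normal(size=(q, J))              # q independent latent factors
w_star = Lambda @ w                      # I x J species-specific effects, Eq. (13)
Sigma = Lambda @ Lambda.T                # I x I residual interspecies covariance
```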
Our factor modeling approach to fitting spatially-explicit multi-species models in spAbundance implicitly assumes species are correlated through latent factors \(\mathbf{w}(\mathbf{s}_{j})\). If there is no interest in residual species correlations, we could imagine a multi-species model that includes a separate spatial process for each species. However, we do not include such models in spAbundance because they are computationally infeasible when working with even a moderate number of species (e.g., 10). Further, in the context of occupancy models, the spatial factor modeling approach has been shown to perform equally as well as a model that estimates a separate spatial random effect for each species even when there are no residual correlations between the species in the community (Doser et al., 2023).
### Single-species N-mixture models
The functions NMix and spNMix fit non-spatial and spatial N-mixture models in spAbundance. Following the N-mixture model structure of Royle (2004), we assume observers count the number of individuals of a target species at each site \(j\) over a set of multiple surveys \(k=1,\ldots,K_{j}\), denoted as \(y_{k}(\mathbf{s}_{j})\). Note the number of surveys can vary by site, but at least some sites must be surveyed more than once to ensure identifiability. We model \(y_{k}(\mathbf{s}_{j})\) conditional on the true abundance of the species at site \(j\), \(N(\mathbf{s}_{j})\), following
\[y_{k}(\mathbf{s}_{j})\sim\mathrm{Binomial}(N(\mathbf{s}_{j}),p_{j,k}), \tag{14}\]
where \(p_{j,k}\) is the probability of detecting an individual given it is present at the site.
We model \(p_{j,k}\) using a logit link function in which we can allow detection probability to vary over space and/or surveys. More specifically, we have
\[\text{logit}(p_{j,k})=\mathbf{v}_{j,k}^{\top}\mathbf{\alpha}, \tag{15}\]
where \(\mathbf{\alpha}\) is a vector of effects of covariates \(\mathbf{v}(\mathbf{s}_{j})\) (including an intercept). The model for abundance \(N(\mathbf{s}_{j})\) is identical to the single-species distance sampling model, which can include covariates and/or spatial random effects (Equations 1-3).
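The two-level structure of Equations 14-15 is summarised by the following generative sketch (a language-agnostic illustration with invented covariate values and coefficients; the package's simNMix function provides the full-featured equivalent):

```python
import numpy as np

rng = np.random.default_rng(42)
J, K = 100, 3                                   # sites, repeat surveys
x = rng.normal(size=J)                          # abundance covariate
v = rng.normal(size=(J, K))                     # detection covariate
beta0, beta1, alpha0, alpha1 = 0.5, 1.0, 0.0, -0.5

N = rng.poisson(np.exp(beta0 + beta1 * x))      # latent abundance, Eq. (1)
p = 1.0 / (1.0 + np.exp(-(alpha0 + alpha1 * v)))  # detection probability, Eq. (15)
y = rng.binomial(N[:, None], p)                 # observed counts, Eq. (14)
```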
### Multi-species N-mixture models
Analogous to HDS models, we can extend single-species N-mixture models to model abundance of a community of \(I\) total species (Yamaura et al., 2012). In multi-species N-mixture models, we estimate the abundance of species \(i\) at spatial location \(j\), \(N_{i}(\mathbf{s}_{j})\). Our model for \(N_{i}(\mathbf{s}_{j})\) follows that of multi-species HDS models, such that expected abundance can be modeled as a function of species-specific effects of spatially-varying covariates (Equation 11) and species-specific random effects that can accommodate residual species correlations and residual spatial autocorrelation using a factor modeling approach (Equation 13). Species-specific covariate effects are modeled hierarchically following Equation 12. The observation portion of the multi-species N-mixture model is identical to the single-species model, now with species-specific covariate effects modeled hierarchically analogous to the abundance coefficients. spAbundance provides functions to fit non-spatial multi-species N-mixture models with (lfMsNMix) and without (msNMix) residual species correlations, as well as spatial multi-species N-mixture models that account for residual species correlations and spatial autocorrelation (sfMsNMix).
### Single-species GLMMs
The functions abund and spAbund fit single-species (i.e., univariate) non-spatial and spatial GLMMs in spAbundance using abundance and related (e.g., biomass) data. As opposed to HDS and N-mixture models, GLMMs do not explicitly account for imperfect
detection via an additional hierarchical component to the model, and instead directly model the observed abundance at site \(j\), \(y(\mathbf{s}_{j})\) to provide inference on relative abundance (e.g., Chapter 1 in Kery and Royle 2021). Observed abundance \(y(\mathbf{s}_{j})\) is modeled using some probability distribution with mean \(\mu(\mathbf{s}_{j})\). spAbundance currently supports Poisson and negative binomial for use with count data and the Gaussian distribution for use with continuous abundance data (e.g., biomass). Mean relative abundance \(\mu(\mathbf{s}_{j})\) is modeled according to Equation 2 for the Poisson and negative binomial cases, while the log link function is removed for the Gaussian case. Note that variables thought to influence detection probability can be incorporated in the model for \(\mu(\mathbf{s}_{j})\) to improve estimates of relative abundance (e.g., random observer effects, Link and Sauer 2002).
### Multi-species GLMMs
Now consider the case where we have count data for multiple species \(I\) at each survey location \(j\), denoted \(y_{i}(\mathbf{s}_{j})\). We jointly model relative abundance of each species using a multivariate GLMM (e.g., Warton et al. 2015; Hui et al. 2015), in which expected abundance for each species \(i\) at site \(j\), \(\mu_{i}(\mathbf{s}_{j})\), is modeled analogous to Equations 11-13. Note the log link function is removed from Equation 11 when modeling abundance using a Gaussian distribution. As with HDS and N-mixture models, spAbundance provides functions to fit non-spatial multivariate GLMMs with (lfMsAbund) and without (msAbund) residual species correlations. Multivariate spatial GLMMs with residual species correlations are fit using the sfMsAbund function.
## 3 spAbundance functionality
Here we highlight the five main tasks performed by spAbundance (see Table 1 for function names).
_1. Data simulation._ The functions simDS, simMsDS, simNMix, simMsNMix, simAbund, and simMsAbund simulate data under the single-species and multi-species HDS, N-mixture, and GLMM frameworks for use in simulation studies or power analyses.
_2. Model fitting._ Model fitting functions were described previously (Section 2). All models are implemented in a Bayesian framework using custom Markov chain Monte Carlo (MCMC) algorithms written in C/C++ using R's foreign language interface. spAbundance uses standard R formula syntax to specify abundance and detection probability models, with options to include random intercepts and random slopes using lme4 syntax (Bates et al., 2015). Users can specify initial values for the MCMC algorithm as well as each parameter's prior distribution to yield vague or informative priors as desired (Supplemental Information S1).
_3. Model validation and comparison._ The function ppcAbund performs posterior predictive checks on spAbundance model objects to assess model goodness of fit. The function waicAbund computes the conditional version (Millar, 2018) of the Widely Applicable Information Criterion (WAIC; Watanabe 2010) for model selection and assessment.
_4. Posterior summaries._ We include summary functions for spAbundance model objects that display concise summaries of the posterior distributions for estimated parameters, as well as the potential scale reduction factor (\(\hat{R}\); Gelman and Rubin 1992) and effective sample size for convergence diagnostics. Simple plot functions allow for further convergence diagnostics via visual assessment of traceplots. The complete posterior samples are returned as coda::mcmc objects (Plummer et al., 2006).
_5. Prediction._ predict functions for all spAbundance model objects provide predictions of abundance across a user-specified set of locations (given covariate values and spatial coordinates). The resulting posterior predictive distributions can be used to generate abundance-based species distribution maps with associated uncertainty. Users can also predict detection probability for HDS and N-mixture models to yield insight on how detection probability varies across a user-specified range of covariate values.
## 4 Worked examples and online resources
We demonstrate spAbundance functionality with three worked examples and three vignettes. Complete details for all worked examples are provided in Supplemental Information S1, along with associated code and data available on GitHub ([https://github.com/doserjef/Doser_et_al_2023_spAbundance](https://github.com/doserjef/Doser_et_al_2023_spAbundance)). The vignettes are provided in Supplemental Information S2-S4 as well as on the package website ([https://www.jeffdoser.com/files/spabundance-web/](https://www.jeffdoser.com/files/spabundance-web/)). Here we provide a short overview of the worked examples and vignettes.
### Case study 1: Bird density in central Florida
This case study demonstrates spAbundance functionality to fit spatial and nonspatial multi-species HDS models. We estimated density of 16 bird species in 2018 in the Disney Wilderness Preserve in central Florida, USA. Distance sampling data were collected as part of the National Ecological Observatory Network landbird monitoring program (Barnett et al., 2019). We compared the performance of the three multi-species model variants in spAbundance using WAIC. The spatial model substantially outperformed the non-spatial model with species correlations (\(\Delta\text{WAIC}=99\)) and the non-spatial model without species correlations (\(\Delta\text{WAIC}=160\)). Effects of forest cover on species-specific density varied across the community (Figure 1A), resulting in clear spatial variation in density of the 16 species (Supplemental Information S1 Figure S1). Detection probability quickly decayed with increasing distance from the observer (Figure 1B).
### Case study 2: Black-throated Blue Warbler abundance in Hubbard Brook Experimental Forest
In this case study, we showcase how to fit spatial and nonspatial single-species N-mixture models. We estimated abundance of Black-throated Blue Warblers (_Setophaga caerulescens_) in the Hubbard Brook Experimental Forest in New Hampshire, USA using repeated count data from 2015 (Rodenhouse and Sillett, 2021; Supplemental Information S1). We found minimal support for overdispersion and residual spatial autocorrelation, with a non-spatial Poisson N-mixture model performing best according to WAIC among multiple candidate models. A strong negative quadratic relationship with elevation revealed that abundance peaked at mid-elevations in the forest (Supplemental Information S1
Figure S2).
### Case study 3: Forest biomass across the continental USA
Our final case study demonstrates how spAbundance can be used to fit models using "big data". We estimated forest aboveground biomass across the continental US using data from \(J=86,933\) forest inventory plots (Figure 2A) collected via the US Forest Service Forest Inventory and Analysis Program (Bechtold and Patterson, 2005). We fit a spatially-explicit univariate GLMM using a Gaussian distribution with an ecoregion-specific random slope of tree canopy cover to reflect potential spatial variation in the relationship between canopy cover and biomass across different forest types. We found an overall positive relationship between tree canopy cover and biomass (median = 0.54; 95% credible interval: 0.43-0.66), but clear variation in the magnitude of the effect across ecoregions (Figure 2B). Biomass predictions across the US aligned with expectations, with the highest biomass predicted in the Pacific Northwest (Figure 2C,D).
### Vignettes
The three package vignettes provide complete details and examples for fitting all single-species and multi-species model types for HDS models (Supplemental Information S2), N-mixture models (Supplemental Information S3), and GLMMs (Supplemental Information S4). We provide extensive details on the required data formats for implementing the models in spAbundance and all function arguments including their default values. We additionally provide code to manipulate resulting objects after fitting models to generate a variety of plot types and summary figures.
## 5 Conclusions and future directions
The aim in developing spAbundance is to provide ecologists and conservation practitioners with a user-friendly tool to better quantify and understand spatial variation in the abundance of plant and animal populations. This R package fits Bayesian spatially-explicit
single-species and multi-species versions of three of the most common modeling frameworks for "unmarked" data types: hierarchical distance sampling models, N-mixture models, and generalized linear mixed models. By using efficient statistical algorithms implemented in C/C++ via R's foreign language interface, spAbundance is capable of handling data sets with a large number of species (e.g., \(>\)100) and locations (e.g., 100,000). Our future aim is to add functionality for zero-inflated models and spatiotemporal models, including generalized HDS and N-mixture models (Chandler et al., 2011). Together, the package vignettes (Supplemental Information S2-S4), code to implement the three case studies ([https://github.com/doserjef/Doser_et_al_2023_spAbundance](https://github.com/doserjef/Doser_et_al_2023_spAbundance)), and the package website ([https://www.jeffdoser.com/files/spabundance-web/](https://www.jeffdoser.com/files/spabundance-web/)) provide full details and thorough exposition of spAbundance model objects.
## 6 Data availability statement
The package spAbundance is available on the Comprehensive R Archive Network (CRAN; [https://cran.r-project.org/web/packages/spAbundance/index.html](https://cran.r-project.org/web/packages/spAbundance/index.html)). Data and code used in the examples are available on GitHub ([https://github.com/doserjef/Doser_et_al_2023_spAbundance](https://github.com/doserjef/Doser_et_al_2023_spAbundance)) and will be posted on Zenodo upon acceptance.
## 7 Acknowledgements
We thank J. Andrew Royle for helpful comments on the hierarchical distance sampling functionality and vignette. This work was supported by: E.F.Z. NSF grants DBI-1954406 and DEB-2213565; A.O.F. NASA CMS grants Hayes (CMS 2020) and Cook (CMS 2018), NSF grant DMS-1916395, joint venture agreements with the USDA Forest Service Forest Inventory and Analysis, USDA Forest Service Region 9 Forest Health Protection Northern Research Station.
## 8 Author Contributions
J.W.D. developed the package with insights from A.O.F; J.W.D. wrote the package vignettes with insights from M.K.; J.W.D. performed analyses and led writing of the manuscript with critical insights from E.F.Z., M.K., and A.O.F. All authors gave final approval for publication.
## References
* Anderson et al. (2022) Anderson, S. C., Ward, E. J., English, P. A., and Barnett, L. A. (2022). sdmTMB: an R package for fast, flexible, and user-friendly generalized linear mixed effects models with spatial and spatiotemporal random fields. _bioRxiv_.
* Banerjee et al. (2014) Banerjee, S., Carlin, B. P., and Gelfand, A. E. (2014). _Hierarchical modeling and analysis for spatial data_. Chapman and Hall/CRC.
* Banerjee and Fuentes (2012) Banerjee, S. and Fuentes, M. (2012). Bayesian modeling for large spatial datasets. _WIREs Computational Statistics_, 4(1):59-66.
* Barker et al. (2018) Barker, R. J., Schofield, M. R., Link, W. A., and Sauer, J. R. (2018). On the reliability of N-mixture models for count data. _Biometrics_, 74(1):369-377.
* Barnett et al. (2019) Barnett, D. T., Duffy, P. A., Schimel, D. S., Krauss, R. E., Irvine, K. M., Davis, F. W., Gross, J. E., Azuaje, E. I., Thorpe, A. S., Gudex-Cross, D., et al. (2019). The terrestrial organism and biogeochemistry spatial sampling design for the National Ecological Observatory Network. _Ecosphere_, 10(2):e02540.
* Bates et al. (2015) Bates, D., Machler, M., Bolker, B., and Walker, S. (2015). Fitting linear mixed-effects models using lme4. _Journal of Statistical Software_, 67(1):1-48.
* Bechtold and Patterson (2005) Bechtold, W. A. and Patterson, P. L. (2005). _The enhanced forest inventory and analysis program-national sampling design and estimation procedures_. Number 80. USDA Forest Service, Southern Research Station.
* Buckland et al. (2001) Buckland, S. T., Anderson, D. R., Burnham, K. P., Laake, J. L., Borchers, D. L., Thomas, L., et al. (2001). _Introduction to distance sampling: estimating abundance of biological populations_. Oxford (United Kingdom): Oxford Univ. Press.
* Carpenter et al. (2017) Carpenter, B., Gelman, A., Hoffman, M. D., Lee, D., Goodrich, B., Betancourt, M., Brubaker, M., Guo, J., Li, P., and Riddell, A. (2017). Stan: A probabilistic programming language. _Journal of Statistical Software_, 76(1).
* Chandler et al. (2011) Chandler, R. B., Royle, J. A., and King, D. I. (2011). Inference about density and temporary emigration in unmarked populations. _Ecology_, 92(7):1429-1435.
* Datta et al. (2016) Datta, A., Banerjee, S., Finley, A. O., and Gelfand, A. E. (2016). Hierarchical nearest-neighbor Gaussian process models for large geostatistical datasets. _Journal of the American Statistical Association_, 111(514):800-812.
* de Valpine et al. (2017) de Valpine, P., Turek, D., Paciorek, C., Anderson-Bergman, C., Temple Lang, D., and Bodik, R. (2017). Programming with models: writing statistical algorithms for general model structures with NIMBLE. _Journal of Computational and Graphical Statistics_, 26:403-413.
* Doser et al. (2023) Doser, J. W., Finley, A. O., and Banerjee, S. (2023). Joint species distribution models with imperfect detection for high-dimensional spatial data. _Ecology_, 104(9):e4137.
* Doser et al. (2022) Doser, J. W., Finley, A. O., Kery, M., and Zipkin, E. F. (2022). spOccupancy: An R package for single-species, multi-species, and integrated spatial occupancy models. _Methods in Ecology and Evolution_, 13(8):1670-1678.
* Finley et al. (2015) Finley, A. O., Banerjee, S., and Gelfand, A. E. (2015). spBayes for Large Univariate and Multivariate Point-Referenced Spatio-Temporal Data Models. _Journal of Statistical Software_, 63(13):1-28.
* Finley et al. (2019) Finley, A. O., Datta, A., Cook, B. D., Douglas C. Morton, H. E. A., and Banerjee, S. (2019). Efficient algorithms for Bayesian Nearest Neighbor Gaussian Processes. _Journal of Computational and Graphical Statistics_, 28(2):401-414.
* Fiske and Chandler (2011) Fiske, I. and Chandler, R. (2011). unmarked: an R package for fitting hierarchical models of wildlife occurrence and abundance. _Journal of Statistical Software_, 43(10):1-23.
* Gelman and Rubin (1992) Gelman, A. and Rubin, D. B. (1992). Inference from iterative simulation using multiple sequences. _Statistical Science_, 7(4):457-472.
* Goldstein and de Valpine (2022) Goldstein, B. R. and de Valpine, P. (2022). Comparing N-mixture models and GLMMs for relative abundance estimation in a citizen science dataset. _Scientific Reports_, 12(1):12276.
* Guelat and Kery (2018) Guelat, J. and Kery, M. (2018). Effects of spatial autocorrelation and imperfect detection on species distribution models. _Methods in Ecology and Evolution_, 9(6):1614-1625.
* Hodges and Reich (2010) Hodges, J. S. and Reich, B. J. (2010). Adding spatially-correlated errors can mess up the fixed effect you love. _The American Statistician_, 64(4):325-334.
* Hogan and Tchernis (2004) Hogan, J. W. and Tchernis, R. (2004). Bayesian factor analysis for spatially correlated data, with application to summarizing area-level material deprivation from census data. _Journal of the American Statistical Association_, 99(466):314-324.
* Hui et al. (2015) Hui, F. K., Taskinen, S., Pledger, S., Foster, S. D., and Warton, D. I. (2015). Model-based approaches to unconstrained ordination. _Methods in Ecology and Evolution_, 6(4):399-411.
* Kellner et al. (2021) Kellner, K. F., Fowler, N. L., Petroelje, T. R., Kautz, T. M., Beyer Jr, D. E., and Belant, J. L. (2021). ubms: An R package for fitting hierarchical occupancy and N-mixture abundance models in a Bayesian framework. _Methods in Ecology and Evolution_, 13(3):577-584.
* Kery and Royle (2021) Kery, M. and Royle, J. A. (2021). _Applied hierarchical modeling in ecology: Analysis of distribution, abundance, and species richness in R and BUGS: Volume 2: Dynamic and advanced models_. Academic Press.
* Link and Sauer (2002) Link, W. A. and Sauer, J. R. (2002). A hierarchical analysis of population change with application to cerulean warblers. _Ecology_, 83(10):2832-2840.
* Lopes and West (2004) Lopes, H. F. and West, M. (2004). Bayesian model assessment in factor analysis. _Statistica Sinica_, pages 41-67.
* Millar (2018) Millar, R. B. (2018). Conditional vs marginal estimation of the predictive loss of hierarchical models using WAIC and cross-validation. _Statistics and Computing_, 28(2):375-385.
* Miller et al. (2013) Miller, D. L., Burt, M. L., Rexstad, E. A., and Thomas, L. (2013). Spatial models for distance sampling data: recent developments and future directions. _Methods in Ecology and Evolution_, 4(11):1001-1010.
* Nichols et al. (2009) Nichols, J. D., Thomas, L., and Conn, P. B. (2009). Inferences about landbird abundance from count data: recent advances and future directions. _Modeling demographic processes in marked populations_, pages 201-235.
* Plummer et al. (2006) Plummer, M., Best, N., Cowles, K., and Vines, K. (2006). CODA: Convergence Diagnosis and Output Analysis for MCMC. _R News_, 6(1):7-11.
* Rodenhouse and Sillett (2021) Rodenhouse, N. L. and Sillett, T. S. (2021). Valley-wide Bird Survey, Hubbard Brook Experimental Forest, 1999-2016 (ongoing). [https://doi.org/10.6073/pasta/faca2b2cf2db9d415c39b695cc7fc21](https://doi.org/10.6073/pasta/faca2b2cf2db9d415c39b695cc7fc21). Accessed: 2021-09-07.
* Royle (2004) Royle, J. A. (2004). N-mixture models for estimating population size from spatially replicated counts. _Biometrics_, 60(1):108-115.
* Royle et al. (2004) Royle, J. A., Dawson, D. K., and Bates, S. (2004). Modeling abundance effects in distance sampling. _Ecology_, 85(6):1591-1597.
* Sollmann et al. (2016) Sollmann, R., Gardner, B., Williams, K. A., Gilbert, A. T., and Veit, R. R. (2016). A hierarchical distance sampling model to estimate abundance and covariate associations of species and communities. _Methods in Ecology and Evolution_, 7(5):529-537.
* Taylor-Rodriguez et al. (2019) Taylor-Rodriguez, D., Finley, A. O., Datta, A., Babcock, C., Andersen, H.-E., Cook, B. D., Morton, D. C., and Banerjee, S. (2019). Spatial factor models for high-dimensional and large spatial data: An application in forest variable mapping. _Statistica Sinica_, 29:1155.
* Tikhonov et al. (2020) Tikhonov, G., Opedal, O. H., Abrego, N., Lehikoinen, A., de Jonge, M. M., Oksanen, J., and Ovaskainen, O. (2020). Joint species distribution modelling with the r-package Hmsc. _Methods in Ecology and Evolution_, 11(3):442-447.
* Tobler et al. (2019) Tobler, M. W., Kery, M., Hui, F. K., Guillera-Arroita, G., Knaus, P., and Sattler, T. (2019). Joint species distribution models with species correlations and imperfect detection. _Ecology_, 100(8):e02754.
* Ver Hoef et al. (2018) Ver Hoef, J. M., Peterson, E. E., Hooten, M. B., Hanks, E. M., and Fortin, M.-J. (2018). Spatial autoregressive models for statistical inference from ecological data. _Ecological Monographs_, 88(1):36-59.
* Vieilledent (2019) Vieilledent, G. (2019). _hSDM: Hierarchical Bayesian Species Distribution Models_. R package version 1.4.1.
* Warton et al. (2015) Warton, D. I., Blanchet, F. G., O'Hara, R. B., Ovaskainen, O., Taskinen, S., Walker, S. C., and Hui, F. K. (2015). So many variables: Joint modeling in community ecology. _Trends in Ecology & Evolution_, 30(12):766-779.
* Watanabe (2010) Watanabe, S. (2010). Asymptotic equivalence of Bayes cross validation and widely applicable information criterion in singular learning theory. _Journal of Machine Learning Research_, 11(12).
* Yamaura et al. (2012) Yamaura, Y., Royle, J. A., Shimada, N., Asanuma, S., Sato, T., Taki, H., and Makino, S. (2012). Biodiversity of man-made open habitats in an underused country: a class of multispecies abundance models for count data. _Biodiversity and Conservation_, 21:1365-1380.
## Tables and Figures
Figure 1: Species-specific effects of forest cover on density (A) and relationship between detection probability and distance from the observer (B) in the central Florida bird case study. Panel (A) shows the estimated mean (dark line), 50% credible interval (box), and 95% credible interval (whiskers) for the effect of forest cover on the overall community (COMM) and 16 individual species. In Panel (B), lines represent posterior mean detection probabilities for each species. The black line represents the average across the community (i.e., the community-level effect), and the grey region is the associated 95% credible interval. See Supplemental Information S1 for species code definitions.
Figure 2: Data and predictions from the forest biomass case study. Panel A shows the observed locations of the 86,933 Forest Inventory and Analysis plots. Note these are the publicly available perturbed locations in which FIA adds a small amount of random noise to the true plot locations. Panel B shows the estimated random effect of tree canopy cover on forest biomass within distinct ecoregions. Panel C shows predicted biomass (posterior median) across the continental USA (tons per acre), with associated uncertainty (95% credible interval width) in Panel D.
Supplemental Information S1 for spAbundance: An R package for single-species and multi-species spatially-explicit abundance models
## 1 Additional statistical details
### Nearest Neighbor Gaussian Process
Let \(\mathcal{L}=\{\mathbf{s}_{1},\mathbf{s}_{2},\ldots,\mathbf{s}_{J}\}\) be the set of sampled spatial locations, and define \(\mathbf{w}\) as a \(J\times 1\) vector of spatial random effects. We envision \(\mathrm{w}(\mathbf{s}_{j})\) as a realization of a smooth latent surface \(\{\mathrm{w}(\mathbf{s})\mid\mathbf{s}\in\mathcal{D}\}\), where \(\mathcal{D}\) is the geographical domain of interest. First, suppose Gaussian Processes (GPs) are used to model \(\mathbf{w}\), as is common throughout the spatial statistics literature (e.g., Banerjee et al.2014). By definition, a GP model for \(\{\mathrm{w}(\mathbf{s})\}\) implies that for any finite set of locations \(\mathcal{L}\), the vector of random effects \(\mathbf{w}\) follows a zero-mean multivariate Gaussian distribution with a \(J\times J\) covariance matrix \(\mathbf{C}(\mathbf{s},\mathbf{s}^{\prime},\mathbf{\theta})\) that is a function of the distances between any pair of site coordinates \(\mathbf{s}\) and \(\mathbf{s}^{\prime}\) and a set of parameters (\(\mathbf{\theta}\)) that govern the spatial process according to a parametric covariance
function. \(\mathbf{\theta}\) consists of a spatial variance (\(\sigma^{2}\)) and spatial decay (\(\phi\)) parameter for the exponential, spherical, and Gaussian covariance functions, and additionally includes a spatial smoothness parameter \(\nu\) for the Matern covariance function.
Both frequentist and Bayesian estimation of spatial models using GPs requires taking the inverse and determinant of a dense \(J\times J\) covariance matrix (i.e., \(\mathbf{C}(\mathbf{s},\mathbf{s}^{\prime},\mathbf{\theta})\)) that involves \(O(J^{3})\) computations (floating point operations or FLOPs), which quickly renders such an approach impractical for even moderately sized data sets (i.e., hundreds of spatial locations). In spAbundance, we replace the GP prior for the spatial random effects with a Nearest Neighbor Gaussian Process (NNGP) prior (Datta et al., 2016). The NNGP is a valid GP that is based on writing the full multivariate Gaussian distribution for \(\mathbf{w}\) as a product of conditional densities, such that
\[p(\mathbf{w})=p(\mathrm{w}(\mathbf{s}_{1}))\cdot p(\mathrm{w}(\mathbf{s}_{2})\mid \mathrm{w}(\mathbf{s}_{1}))\cdots p(\mathrm{w}(\mathbf{s}_{J})\mid\mathrm{w}(\mathbf{s}_{J- 1}),\ldots,\mathrm{w}(\mathbf{s}_{1})),\] (S1)
where \(p(\cdot)\) denotes a probability density function. The NNGP prior achieves computational efficiency by replacing the conditioning sets on the right-hand side of (S1) with a set of new conditioning sets, whose maximum size is determined by a pre-specified number of neighbors, \(m\), where \(m<<J\). Datta et al. (2016) showed that \(m=15\) provides nearly identical inference to the full GP under a variety of scenarios. Let \(n(\mathbf{s}_{j})\) denote the set of at most \(m\) neighbors for location \(\mathbf{s}_{j}\). Following Vecchia (1988), we set \(n(\mathbf{s}_{j})\) to be the set of at most \(m\) nearest neighbors of \(\mathbf{s}_{j}\) from \(\{\mathbf{s}_{1},\mathbf{s}_{2},\ldots,\mathbf{s}_{j-1}\}\) with respect to Euclidean distance. Note, this requires the set of \(\mathcal{L}\) locations to have some prespecified ordering. In spAbundance, we order the coordinates along the horizontal axis.
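To make the neighbor-set construction concrete, here is a minimal R sketch (not package code) that orders locations along the horizontal axis and collects the at most \(m\) nearest previously-ordered locations; the function and variable names are illustrative.

```r
# Build NNGP neighbor sets: order along the x-axis, then take the at most m
# nearest earlier locations (Euclidean distance) for each site.
build.neighbors <- function(coords, m = 15) {
  ord <- order(coords[, 1])              # order locations along the horizontal axis
  coords <- coords[ord, , drop = FALSE]
  J <- nrow(coords)
  neighbors <- vector("list", J)
  neighbors[[1]] <- integer(0)           # s_1 has an empty conditioning set
  for (j in 2:J) {
    # Euclidean distances from s_j to all previously ordered locations
    d <- sqrt(rowSums((coords[1:(j - 1), , drop = FALSE] -
                       matrix(coords[j, ], j - 1, 2, byrow = TRUE))^2))
    neighbors[[j]] <- order(d)[1:min(m, j - 1)]
  }
  neighbors
}

nbrs <- build.neighbors(cbind(runif(100), runif(100)), m = 15)
```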
Through careful construction of the neighbor sets and set of spatial locations as a directed acyclic graph, Gaussian distribution theory reveals the NNGP prior yields a new joint density for \(\mathbf{w}\), denoted \(\tilde{p}(\mathbf{w})\). Let \(\mathbf{w}(n(\mathbf{s}_{j}))\) denote the at most \(m\) realizations of the NNGP at the locations in the neighbor set \(n(\mathbf{s}_{j})\). Let \(C(\cdot,\mathbf{\theta})\) denote the covariance function of the original Gaussian Process (GP) from which the NNGP is derived. For any two sets \(A_{1}\) and \(A_{2}\), define \(\mathrm{C}_{A_{1},A_{2}}(\mathbf{\theta})\) as the covariance matrix between the observations
in \(A_{1}\) and \(A_{2}\). Our NNGP prior for \(\mathbf{w}\) thus takes the form
\[\tilde{p}(\mathbf{w})=\prod_{j=1}^{J}\mathrm{Normal}(\mathrm{w}(\mathbf{s}_{j})\mid \mathbf{w}(n(\mathbf{s}_{j}))\mathbf{b}(\mathbf{s}_{j}),\mathrm{f}(\mathbf{s}_{j})),\] (S2)
where \(\mathbf{b}(\mathbf{s}_{j})\) is defined as
\[\mathbf{b}(\mathbf{s}_{j})=\mathbf{C}_{\mathbf{s}_{j},n(\mathbf{s}_{j})}(\mathbf{\theta}) \mathbf{C}_{n(\mathbf{s}_{j}),n(\mathbf{s}_{j})}^{-1}(\mathbf{\theta}),\] (S3)
with \(\mathbf{b}(\mathbf{s}_{1})=\mathbf{0}\), and \(\mathrm{f}(\mathbf{s}_{j})\) is defined as
\[\mathrm{f}(\mathbf{s}_{j})=\mathbf{C}_{\mathbf{s}_{j},\mathbf{s}_{j}}(\mathbf{\theta})- \mathbf{C}_{\mathbf{s}_{j},n(\mathbf{s}_{j})}(\mathbf{\theta})\mathbf{C}_{n(\mathbf{s}_{j}),n (\mathbf{s}_{j})}^{-1}(\mathbf{\theta})\mathbf{C}_{n(\mathbf{s}_{j}),\mathbf{s}_{j}}(\mathbf{ \theta}).\] (S4)
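Equations (S3) and (S4) translate directly into code. The following R sketch uses an exponential covariance function, with `coords` and `neighbors` as in the previous sketch; the parameter values are illustrative.

```r
# Conditional mean weights b(s_j) and variance f(s_j) for an NNGP derived from
# an exponential covariance C(h) = sigma2 * exp(-phi * h).
exp.cov <- function(D, sigma2, phi) sigma2 * exp(-phi * D)

nngp.terms <- function(j, coords, neighbors, sigma2 = 1, phi = 3) {
  nb <- neighbors[[j]]
  if (length(nb) == 0) return(list(b = numeric(0), f = sigma2))
  d.nn <- as.matrix(dist(coords[nb, , drop = FALSE]))           # neighbor-neighbor distances
  d.jn <- sqrt(colSums((t(coords[nb, , drop = FALSE]) - coords[j, ])^2))
  C.nn <- exp.cov(d.nn, sigma2, phi)
  C.jn <- exp.cov(d.jn, sigma2, phi)
  b <- drop(C.jn %*% solve(C.nn))                               # Eq. (S3)
  f <- sigma2 - drop(C.jn %*% solve(C.nn, C.jn))                # Eq. (S4)
  list(b = b, f = f)
}
```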
### Detection functions in hierarchical distance sampling models
spAbundance currently supports two detection functions for single-species and multi-species hierarchical distance sampling models: the half-normal and negative exponential. Both functions are controlled by a scale parameter, \(\sigma_{j}\), which controls the rate of distance-dependent decay in detection probability. For distance x, the half-normal detection function takes the form
\[g(\mathrm{x})=\exp(-\frac{\mathrm{x}^{2}}{2\sigma_{j}^{2}}).\] (S5)
The negative exponential detection function takes the form
\[g(\mathrm{x})=\exp(-\frac{x}{\sigma_{j}}).\] (S6)
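Both detection functions are one-liners; a quick R sketch with an illustrative scale \(\sigma_{j}=100\) m and the 250 m range used in the first case study:

```r
# The two detection functions in (S5)-(S6), written directly from the formulas.
half.normal <- function(x, sigma) exp(-x^2 / (2 * sigma^2))
neg.expo    <- function(x, sigma) exp(-x / sigma)

curve(half.normal(x, sigma = 100), from = 0, to = 250, ylab = "g(x)")
curve(neg.expo(x, sigma = 100), add = TRUE, lty = 2)
```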
### Prior distributions in spAbundance
The prior distributions in spAbundance are sufficiently flexible to allow for vague or weakly informative priors, as well as informative priors if such information is available. spAbundance assigns default values for all hyperparameters in the prior distributions, which result in weakly informative priors. The priors argument in all model fitting functions allows users to explicitly specify the hyperparameter values for all prior distributions. When prior distributions are not explicitly specified by the user, the default hyperparameter values will be reported to the screen. Below we describe the prior distribution for each individual parameter, as well as our choice for the default hyperparameter values for the priors in spAbundance.
### Single-species models
We assign Gaussian (i.e., normal) priors to the abundance (\(\mathbf{\beta}\)) and detection (\(\mathbf{\alpha}\)) regression coefficients (for HDS and N-mixture models). Gaussian priors are a common choice for prior distributions on regression coefficients (Gelman et al., 2014). Our default prior on the abundance regression coefficients (\(\mathbf{\beta}\)) is a normal distribution with a mean of 0 and variance of 100, which results in a vague prior distribution that has a small impact on the resulting posterior estimate. The same prior is also used for detection regression coefficients (\(\mathbf{\alpha}\)) in hierarchical distance sampling models. For the detection regression coefficients in N-mixture models, spAbundance will by default set the mean to 0 and variance to 2.72. This results in a relatively flat prior across the probability scale (e.g., Northrup and Gerber, 2018).
For the spatial parameters in all spatially-explicit models, priors must be at least weakly informative for the model to identify the spatial decay (\(\phi\)) and spatial variance parameters (\(\sigma^{2}\); Banerjee et al., 2014). Accordingly, we follow standard recommendations from the spatial statistics literature for placing priors on the spatial variance (\(\sigma^{2}\)), spatial decay (\(\phi\)), and spatial smoothness (\(\nu\)) parameter, where the spatial smoothness parameter only applies if using a Matern covariance function (Banerjee et al., 2014). Specifically, we place an inverse-Gamma prior on the spatial variance and uniform priors on the spatial decay (and smoothness parameter if applicable). For the spatial variance parameter, we suggest following recommendations from Banerjee et al. 2014 and setting the shape parameter to 2 and the scale parameter equal to our best guess of the spatial variance. By default, spAbundance will set the shape parameter to 2 and the scale parameter to 1.
This weakly informative prior suggests a prior mean of 1 for the spatial variance. For the spatial decay parameter, we assume a uniform prior, again following Banerjee et al. 2014. By default, we set the lower and upper bounds of the spatial decay parameter based on the minimum and maximum distances between sites in the data. More specifically, the default prior values set the lower bound to 3/max and the upper bound to 3/min, where min and max are the minimum and maximum distances between sites in the data set, respectively. This equates to a vague prior that states that spatial autocorrelation in the spatial random effects could only be between sites that are very close to each other, or could span across the entire observed study area. If additional information is known on the extent of spatial autocorrelation in the data, the user may place more restrictive bounds on the uniform prior, which would reduce the amount of time needed for adequate mixing and convergence of the MCMC chains. We do not set default bounds on the spatial smoothness parameter, \(\nu\), and rather require the user to specify these bounds if a Matern correlation function is used.
We assign a uniform distribution for the negative binomial dispersion parameter \(\kappa\) when fitting models in spAbundance with a negative binomial distribution. By default, we set the lower bound of the uniform distribution to 0 and the upper bound of the uniform distribution to 100. Recall that smaller values of \(\kappa\) indicate overdispersion in the abundance values relative to a Poisson model, while higher values indicate minimal overdispersion in abundance. In particular, as \(\kappa\rightarrow\infty\), the negative binomial distribution becomes the Poisson distribution. When there is little support for overdispersion in abundance relative to the Poisson distribution, the estimates of \(\kappa\) will likely approach the upper bound of the uniform prior distribution. When the upper bound of the uniform distribution is substantially high (e.g., the default value of 100), this is a good indication that there is little support for overdispersion, and that model selection criteria (e.g., WAIC) will likely favor the simpler Poisson distribution.
When fitting GLMMs with a Gaussian distribution, we assign an inverse-Gamma prior to the residual variance parameter \(\tau^{2}\). Our default hyperparameter values are 0.01 and 0.01 for both the shape and scale parameters, which corresponds to a vague prior such
that the estimated value is almost entirely informed by the data alone.
When fitting models with random intercepts and/or slopes in spAbundance, we use inverse-Gamma priors for any random effect variances on abundance (\(\sigma_{\mu}^{2}\)) and/or detection (\(\sigma_{p}^{2}\)). Following the recommendations of Lunn et al. (2013), by default we set the scale and shape hyperparameters to 0.1, which results in a weakly informative prior on the random effect variances. This prior can result in shrinkage of the random effect values towards zero under certain circumstances (Gelman, 2006), although this primarily occurs when the number of levels in the random effect is small (e.g., less than 10). Future versions of the package will also allow for half-Cauchy priors on the random effect variance parameters, which have been shown to perform better than the inverse-Gamma prior in situations where the random effect has very few levels (Gelman, 2006).
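As a concrete illustration, a priors list collecting the single-species defaults described above might look as follows; the list tag names (e.g., `beta.normal`, `phi.unif`) follow spOccupancy-style conventions and are assumptions to be checked against the spAbundance documentation.

```r
# Hedged sketch of explicitly setting priors via the `priors` argument;
# assumes a matrix of site coordinates `coords` exists.
site.dists <- dist(coords)                          # inter-site distances
priors <- list(
  beta.normal = list(mean = 0, var = 100),          # abundance coefficients
  alpha.normal = list(mean = 0, var = 2.72),        # N-mixture detection coefficients
  sigma.sq.ig = c(2, 1),                            # spatial variance: IG(shape = 2, scale = 1)
  phi.unif = c(3 / max(site.dists), 3 / min(site.dists)),  # spatial decay bounds
  kappa.unif = c(0, 100),                           # NB dispersion
  tau.sq.ig = c(0.01, 0.01),                        # Gaussian residual variance
  sigma.sq.mu.ig = c(0.1, 0.1)                      # random effect variances
)
```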
### Multi-species models
We assign Gaussian priors to the community-level abundance (\(\mathbf{\mu}_{\beta}\)) and detection (\(\mathbf{\mu}_{\alpha}\)) mean regression parameters. Following the same logic as presented in Section 1.4 for single-species models, we set the prior mean to 0 and the prior variance to 100 for the abundance parameters, and the prior variance to 2.72 for the detection parameters.
For the community-level abundance (\(\mathbf{\tau}_{\beta}^{2}\)) and detection (\(\mathbf{\tau}_{\alpha}^{2}\)) variance parameters, we assign inverse-Gamma priors. As described for the random effect variances in the single-species models in Section 1.4, we by default set the scale and shape parameters to 0.1.
Uniform priors are specified for the species-specific negative binomial dispersion parameters using the same default hyperparameter values as the single-species case. Inverse-Gamma priors are again used for the species-specific Gaussian residual variance parameters, with default shape and scale hyperparameters set to 0.01. Inverse-Gamma priors are likewise used for any additional random effect variances included in the multi-species models, with shape and scale hyperparameters by default set to 0.1.
For models accounting for residual species autocorrelation using a factor modeling approach, we require additional constraints to ensure identifiability of the species-specific factor loadings from the latent factors (Papastamoulis and Ntzoufras, 2022). Following
Aguilar and West (2000) and Taylor-Rodriguez et al. (2019), we set all elements in the upper triangle of the \(I\times q\) factor loadings matrix \(\mathbf{\Lambda}\) equal to 0 and its diagonal elements equal to 1. We assign standard normal prior distributions (i.e., a normal prior distribution with mean 0 and variance 1) to all elements in \(\mathbf{\Lambda}\) below the upper diagonal. For spatially-explicit multi-species models, we assign a uniform prior to each of the \(q\) spatial decay parameters in \(\mathbf{\phi}\). We use the same default hyperparameter values as those described for the single-species models.
## 2 Case Study 1: Bird density in south-central Florida
In our first case study, we estimated density of 16 bird species in 2018 in the Disney Wilderness Preserve in central Florida, USA using a spatial multi-species distance sampling model. These data were collected as part of the National Ecological Observatory Network landbird monitoring program (Barnett et al., 2019). Observers recorded the number of all bird species detected during a six-minute, unlimited radius point count survey at 90 sites. The distance of each individual bird to the observer was recorded using a laser rangefinder. We only used observations within 250m and subsequently binned the distance measurements into four distance bands (0-25m, 25-50m, 50-100m, 100-250m). We removed all species with less than 10 observations, resulting in a total of 16 species. We modeled abundance as a function of forest cover and grassland cover, and modeled detection probability as a function of wind. Forest cover and grassland cover were calculated from the USGS EROS (Earth Resources Observation and Science) Center, which produces high-resolution (250m) annual land cover maps across the continental US that are backcasted to 1938 (Sohl et al., 2016). We calculated the proportion of grassland and forest cover within 1km of each point count survey location. All covariates were standardized to have a mean of 0 and standard deviation of 1. We compared the performance of three multi-species distance sampling models: (1) a non-spatial model without species correlations using msDS; (2) a non-spatial model with species correlations using lfMsDS; and (3) a spatial model with species correlations using sfMsDS. We used
two latent factors for the models that included species correlations, as exploratory data analysis revealed that models with more factors did not converge due to the large number of rare species in the data set. We used an NNGP with 15 neighbors for the spatially-explicit model (as recommended in Datta et al. 2016). We ran all models for three chains of 100,000 iterations with a burn-in period of 50,000 iterations and a thinning rate of 50, resulting in a total of 3000 samples from the posterior distribution. Convergence was assessed via visual inspection of trace plots, the potential scale reduction factor (\(\hat{\text{R}}\)), and effective sample size. Running the three chains in sequence, each with 100,000 MCMC samples, took 29 minutes for the non-spatial model, 39 minutes for the non-spatial model with species correlations, and 56 minutes for the spatial model. Models were fit using a single CPU on a computer running a Linux operating system with an Intel 3.8 GHz i7-7700HQ 4-core processor and 16GB RAM.
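For reference, the spatial factor model from this case study could be fit with a call along these lines. The function name sfMsDS appears above, while the argument names, data structure, and MCMC controls are written in the style of the package vignettes and should be treated as a sketch rather than verbatim package syntax.

```r
# Hedged sketch of the spatial multi-species HDS fit (model 3 above); the
# arguments and the structure of `data.list` are assumptions.
library(spAbundance)
out.sf <- sfMsDS(
  abund.formula = ~ forest + grass,
  det.formula = ~ wind,
  data = data.list,                   # counts by distance band, covariates, coords
  n.factors = 2,                      # two latent factors
  NNGP = TRUE, n.neighbors = 15,      # NNGP with 15 neighbors
  cov.model = "exponential",
  n.batch = 4000, batch.length = 25,  # 100,000 MCMC samples per chain
  n.burn = 50000, n.thin = 50, n.chains = 3
)
summary(out.sf)                       # Rhat and ESS for convergence checks
```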
The spatial multi-species HDS model substantially outperformed both the non-spatial HDS model with species correlations (\(\Delta\text{WAIC}=99\)) and the non-spatial HDS model without species correlations (\(\Delta\text{WAIC}=160\)). Maps of density predicted across the preserve show substantial spatial variation in density both within and across species (Figure S1). We found small yet variable effects of forest cover on density across the 16 species, which resulted in a community-level average effect at essentially 0 (Figure 1A main text). Detection probability quickly decayed with increasing distance from the observer (Figure 1B main text).
Species codes shown in Figure 1 in the main manuscript are: AMCR (American Crow), BACS (Bachman's Sparrow), CARW (Carolina Wren), COGD (Common Ground-Dove), CONI (Common Nighthawk), COYE (Common Yellowthroat), EABL (Eastern Bluebird), EAME (Eastern Meadowlark), EATO (Eastern Towhee), GCFL (Great-crested Flycatcher), MODO (Mourning Dove), NOCA (Northern Cardinal), NOMO (Northern Mockingbird), RBWO (Red-bellied Woodpecker), RHWO (Red-headed Woodpecker), WEVI (White-eyed Vireo).
Figure S1: Density estimates (posterior means) and corresponding uncertainty (95% credible interval widths) for (A, C) Carolina Wren (_Thryothorus ludovicanus_; CARW) and (B, D) Eastern Meadowlark (_Sturnella magna_; EAME) across the Disney Wilderness Preserve using a spatial multi-species distance sampling model that accounts for residual species correlations.
## 3 Case Study 2: Black-throated Blue Warbler abundance in Hubbard Brook Experimental Forest
In this case study, we estimated abundance of Black-throated Blue Warblers (_Setophaga caerulescens_) in the Hubbard Brook Experimental Forest in New Hampshire, USA (Rodenhouse and Sillett, 2021) using non-spatial and spatial N-mixture models. Data were collected using standard point count surveys at 373 sites three times during the breeding season in 2015. Some sites were not sampled for all three surveys, resulting in an imbalanced data set. During each survey, observers recorded the number of individuals of all bird species within a 50m radius circle. We included time of day (linear) and day of year (linear and quadratic) as fixed effects in the detection portion of the model, and specified linear and quadratic effects of elevation as abundance predictors. We compared the performance of four models: (1) a non-spatial Poisson N-mixture model; (2) a non-spatial negative binomial N-mixture model; (3) a spatial Poisson N-mixture model; and (4) a spatial negative binomial N-mixture model. Model performance was compared using WAIC. We ran all four candidate models for three chains of 125,000 MCMC samples, discarding 85,000 samples as burn-in and thinning by 20, for a total of 6000 posterior samples. After model fitting, we predicted abundance across the Hubbard Brook Experimental Forest using the top-performing model to generate a map of Black-throated Blue Warbler abundance across the forest. Running all three chains in sequence on a single CPU took 7.9 minutes for the non-spatial Poisson model, 10.6 minutes for the non-spatial negative binomial model, 18.8 minutes for the spatial Poisson model, and 20.3 minutes for the spatial negative binomial model. Models were fit on a computer running a Linux operating system with an Intel 3.8 GHz i7-7700HQ 4-core processor and 16GB RAM.
We found little support for overdispersion or residual spatial autocorrelation in model estimates of Black-throated Blue Warbler abundance. The non-spatial and spatial Poisson N-mixture models had nearly identical WAIC values (WAIC = 2204), while the Poisson N-mixture models slightly outperformed both the non-spatial (WAIC = 2207) and spatial
(WAIC = 2208) negative binomial N-mixture models. We present model results for the non-spatial Poisson N-mixture model. Abundance had a strong negative quadratic relationship with elevation (Table S2), indicating Black-throated Blue Warbler abundance peaks at mid-elevations in the region. Detection probability showed a positive, linear relationship with the day of year, indicating higher detection probability of individuals later in the breeding season. Predictions of abundance across Hubbard Brook showcase the strong relationship between abundance and elevation (Figure S2).
## 4 Case Study 3: Forest biomass across the continental USA
In our final case study, we showcase spAbundance functionality for fitting spatial linear mixed models with a "big" spatial data set. Specifically, we estimated forest aboveground biomass (AGB) across the continental US with data from \(J=86,933\) forest inventory plots. Biomass is a sensible measure of abundance that is widely used for modeling plant distributions, especially in forestry applications across large spatial scales. These data come from the Forest Inventory and Analysis monitoring program of the US Forest Service (Bechtold and Patterson, 2005). Permanent field plot locations were established based on an equal probability sampling design (Bechtold and Patterson, 2005), with each field plot visited on a five-year cycle for the eastern US and a ten-year cycle for the western US. Field crews record stem measurements for all trees with diameter at breast height of 12.7cm or greater, and well-established allometric equations were used to estimate forest biomass for each plot. Here we extracted plot-level biomass (tons per acre) from the most recent cycle across the continental US. For this illustrative analysis, we used the publicly available perturbed plot coordinates (i.e., FIA adds a small amount of random noise to plot locations to protect ownership privacy and ensure ecological integrity). AGB takes positive, continuous values, and so we square root transformed
biomass values for use as our response variable to ensure positive support of biomass values after back transformation and to meet basic linear model assumptions. We modeled square-root transformed biomass using a GLMM with a Gaussian distribution (i.e., a linear mixed model). Mean square-root transformed biomass was modeled as a function of three covariates: elevation (linear), 30-year maximum temperature climate normal (linear and quadratic), and tree canopy cover in 2021 (linear). Elevation data were accessed from Terrain Tiles on August 17, 2022 from [https://registry.opendata.aws/terrain-tiles](https://registry.opendata.aws/terrain-tiles). We obtained climate normals from TerraClimate (Abatzoglou et al., 2018) and tree canopy cover from NLCD (Coulston et al., 2012; Dewitz, 2023). We compared four candidate models: (1) a linear mixed model with a random intercept of ecoregion as a simple approach to accommodate spatial variation in biomass across environmentally distinct regions; (2) a linear mixed model with an ecoregion random intercept and an ecoregion-specific random slope of tree canopy cover to reflect potential spatial variation in the relationship across different forest types; (3) a spatial linear mixed model using an NNGP with 5 neighbors; and (4) a spatial linear mixed model using an NNGP with 5 neighbors and an ecoregion-specific random slope for tree canopy cover. We ran the four candidate models for three chains of 250,000 MCMC samples, discarding 190,000 samples as burn-in and thinning by 20, for a total of 9000 posterior samples. Each of the three chains was run in parallel using 5 CPUs on a Linux workstation with an Intel(R) Xeon(R) CPU E5-2699 v3 @ 2.30GHz processor (36 CPUs) and 500 GB of RAM. Model run times for each chain were 2.3 hours for the non-spatial model without random slopes, 3.3 hours for the non-spatial model with random slopes, 12.9 hours for the spatial model without random slopes, and 15.4 hours for the spatial model with random slopes.
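As a sketch, the top-performing biomass model could be specified roughly as follows, assuming the package's GLMM fitting function is spAbund, that it accepts a family argument, and that random slopes use lme4-style formula syntax; all of these should be verified against the GLMM vignette (Supplemental Information S4).

```r
# Hedged sketch of model (4): spatial Gaussian GLMM, NNGP with 5 neighbors,
# and an ecoregion-specific random slope for tree canopy cover.
out.bio <- spAbund(
  formula = ~ scale(elev) + scale(tmax) + I(scale(tmax)^2) +
    scale(tcc) + (scale(tcc) | ecoregion),
  data = fia.list,                      # sqrt(biomass), covariates, plot coords
  family = "Gaussian",
  NNGP = TRUE, n.neighbors = 5,
  cov.model = "exponential",
  n.batch = 10000, batch.length = 25,   # 250,000 MCMC samples per chain
  n.burn = 190000, n.thin = 20, n.chains = 3
)
```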
The spatially-explicit model with ecoregion-specific random slopes for tree canopy cover substantially outperformed the spatial model without random slopes (\(\Delta\text{WAIC}=1,287\)), the non-spatial model with random slopes (\(\Delta\text{WAIC}=8,177\)), and the non-spatial model without random slopes (\(\Delta\text{WAIC}=11,390\)). The top-performing model showed an overall positive relationship between tree canopy cover and biomass (median = 0.54, 95% credible interval 0.43-0.66), but clear variation in the magnitude of the effect across ecoregions,
with larger magnitude values in the western US compared to the eastern US (Figure 2B main text). Biomass predictions across the US aligned with expectations, with highest biomass predicted in the Pacific Northwest (Figure 2C,D main text). |
2302.08982 | (S)GD over Diagonal Linear Networks: Implicit Regularisation, Large Stepsizes and Edge of Stability | In this paper, we investigate the impact of stochasticity and large stepsizes on the implicit regularisation of gradient descent (GD) and stochastic gradient descent (SGD) over diagonal linear networks. We prove the convergence of GD and SGD with macroscopic stepsizes in an overparametrised regression setting and characterise their solutions through an implicit regularisation problem. Our crisp characterisation leads to qualitative insights about the impact of stochasticity and stepsizes on the recovered solution. Specifically, we show that large stepsizes consistently benefit SGD for sparse regression problems, while they can hinder the recovery of sparse solutions for GD. These effects are magnified for stepsizes in a tight window just below the divergence threshold, in the "edge of stability" regime. Our findings are supported by experimental results. | Mathieu Even, Scott Pesme, Suriya Gunasekar, Nicolas Flammarion | 2023-02-17T16:37:08Z | http://arxiv.org/abs/2302.08982v2 | # (S)GD over Diagonal Linear Networks: Implicit Regularisation, Large Stepsizes and Edge of Stability
###### Abstract.
In this paper, we investigate the impact of stochasticity and large stepsizes on the implicit regularisation of gradient descent (GD) and stochastic gradient descent (SGD) over diagonal linear networks. We prove the convergence of GD and SGD with macroscopic stepsizes in an overparametrised regression setting and characterise their solutions through an implicit regularisation problem. Our crisp characterisation leads to qualitative insights about the impact of stochasticity and stepsizes on the recovered solution. Specifically, we show that large stepsizes consistently benefit SGD for sparse regression problems, while they can hinder the recovery of sparse solutions for GD. These effects are magnified for stepsizes in a tight window just below the divergence threshold, in the "edge of stability" regime. Our findings are supported by experimental results.
## 1. Introduction
The stochastic gradient descent algorithm (SGD) [Robbins and Monro, 1951] is the foundational algorithm for almost all neural network training. Though a remarkably simple algorithm, it has led to many impressive empirical results and is a key driver of deep learning. However, the performance of SGD is quite puzzling from a theoretical point of view as (1) its convergence is highly non-trivial and (2) there exist many global minima which generalise very poorly [Zhang et al., 2017].
To explain this second point, the concept of implicit regularisation has emerged: if overfitting is harmless in many real-world prediction tasks, it must be because the optimisation process is _implicitly favoring_ solutions that have good generalisation properties for the task. The canonical example is overparametrised linear regression with more trainable parameters than number of samples: although there are infinitely many solutions that fit the samples, GD and SGD explore only a small subspace of all the possible parameters. As a result, they implicitly converge to the closest solution in terms of the \(\ell_{2}\) distance, and this without explicit regularisation [Zhang et al., 2017, Gunasekar et al., 2018a].
Currently, most theoretical works on implicit regularisation have primarily focused on continuous time approximations of (S)GD where the impact of crucial hyperparameters such as the stepsize and the minibatch size are ignored. One such common simplification is to analyse gradient flow, which is a continuous time limit of GD and minibatch SGD with an infinitesimal stepsize. By definition, this analysis cannot capture the effect of stepsize or stochasticity. Another approach is to approximate SGD by a stochastic gradient flow [Wojtowytsch, 2021, Pesme et al., 2021], which tries to capture the noise and the stepsize using an appropriate stochastic differential equation. However, there are no theoretical guarantees that these results can be transferred to minibatch SGD. This is problematic since the performances of most deep learning models are highly sensitive to the choice of stepsize and minibatch size--their importance is common knowledge in practice and has also been systematically established in controlled experiments [Keskar et al., 2017a, Masters and Luschi, 2018, Geiping et al., 2022].
In this work, we aim to address the gaps in our understanding of the impact of stochasticity and stepsizes by analysing the (S)GD trajectory in 2-layer diagonal networks (DLNs). We can already see in Fig. 1 the importance of these parameters in a noiseless sparse recovery problem which we detail later: the solutions recovered by SGD and GD have very different generalisation performances.
The 2-layer diagonal linear network which we consider is a simplified neural network that has received significant attention lately [Woodworth et al., 2020, Vaskevicius et al., 2019, HaoChen et al., 2021, Pillaud-Vivien et al., 2022]. Despite its simplicity, it surprisingly reveals training characteristics which are observed in much more complex architectures. It therefore serves as an
ideal proxy model for gaining a deeper understanding of complex phenomena such as the roles of initialisation, stochasticity and stepsize.
### Main results and paper organisation
Overparametrised regression and diagonal linear networks are introduced in Section 2. We formulate our main theorem (Theorem 1) in Section 3, and state it informally here.
**Theorem** (Informal).: _For macroscopic stepsizes, gradient descent and stochastic gradient descent over \(2\)-layer diagonal linear networks converge to a specific zero-training loss solution \(\beta^{\star}_{\infty}\) explicitly characterised through an implicit regularisation problem. Furthermore, the generalisation properties of \(\beta^{\star}_{\infty}\) are fully characterised by the sum of the squared (stochastic) gradients along the iterates' trajectory._
Contrary to previous works, we show the convergence of the iterates and establish the associated regularisation problem **without having to assume vanishing stepsizes**.
Then in Sections 4 and 5 we interpret our main result to understand the effect of the stepsize and of stochasticity on the recovered solution. As clearly illustrated in Fig. 1, in the sparse regression setting, there is a stark difference between the generalisation performances of GD and SGD. While using a large stepsize leads to a sparser solution in the case of SGD and is highly beneficial, the exact opposite holds for GD for which the use of a large stepsize is in fact detrimental. We explain this phenomenon by showing that GD tends to recover a low **weighted**-\(\ell_{1}\)-norm solution which prevents the recovery of the sparse signal. For SGD, the story is different and thanks to noise the recovered solution enjoys a low \(\ell_{1}\)-norm. We show that the previous observations are amplified as we push the stepsize towards a threshold value \(\bar{\gamma}_{\max}\), corresponding to the value above which the iterates do not converge towards a global solution anymore. The range of stepsizes just below this threshold corresponds to a brittle regime in which the iterates "oscillate" and where the loss converges very slowly. This regime is often denoted as the _Edge of Stability_ regime and we explain why the sparse recovery performances of SGD are highly improved in this regime, while it is the opposite for GD.
### Related works

**Implicit bias.** The concept of implicit bias in neural networks has been studied recently, starting with the seminal work of Soudry et al. (2018) on max-margin linear classification. This line of research has been further extended to multiplicative parametrisations (Gunasekar et al., 2018), linear networks (Ji and Telgarsky, 2019), and homogeneous networks (Ji and Telgarsky, 2020; Chizat et al., 2019). For diagonal linear networks, Woodworth et al. (2020) demonstrate that the scale of the initialisation determines the type of solution obtained, with large initialisations yielding minimum \(\ell_{2}\)-norm solutions--the neural tangent kernel regime (Jacot et al., 2018)--and small initialisations resulting in minimum \(\ell_{1}\)-norm solutions--the _rich regime_ (Chizat et al., 2019). The analysis relies on the link between gradient descent and mirror descent established by Ghai et al. (2020) and further explored by Vaskevicius et al. (2020), Wu and Rebeschini (2020). These works focus on full-batch gradient descent, and most of them study the infinitesimal stepsize limit (gradient flow), leading to general insights and results that do not take into account the effect of stochasticity and large stepsizes.

Figure 1: Noiseless sparse regression with a \(2\)-layer diagonal linear network for stepsizes such that the iterates converge to a global solution: the solutions recovered by SGD and GD have very different generalisation properties. The dashed vertical lines correspond to the maximum stepsize that can be used before the iterates stop converging. See the last paragraph of Section 2 for the precise experimental setting.
**The effect of stochasticity in SGD on generalisation.** The relationship between stochasticity in SGD and generalisation has been extensively studied in various works (Mandt et al., 2016; Hoffer et al., 2017; Chaudhari and Soatto, 2018; Kleinberg et al., 2018; Wu et al., 2018). Empirically, models generated by SGD exhibit better generalisation performance than those generated by GD (Keskar et al., 2017; Jastrzebski et al., 2017; He et al., 2019). Explanations related to the flatness of the minima picked by SGD have been proposed (Hochreiter and Schmidhuber, 1997). Label noise has been shown to influence the implicit bias of SGD (HaoChen et al., 2021; Blanc et al., 2020; Damian et al., 2021; Pillaud-Vivien et al., 2022) by implicitly regularising the sharp minimisers. Recently, studying a _stochastic gradient flow_ that models the noise of SGD in continuous time with Brownian diffusion, Pesme et al. (2021) characterised for diagonal linear networks the limit of their stochastic process as the solution of an implicit regularisation problem. However similar explicit characterisation of the implicit bias remains unclear for SGD with large stepsizes.
**The effect of stepsizes in GD and SGD.** Recent efforts to understand how the choice of stepsizes affects the learning process and the properties of the recovered solution suggest that larger stepsizes lead to the minimisation of some notion of flatness of the loss function (Smith and Le, 2018; Keskar et al., 2017; Nacson et al., 2022; Jastrzebski et al., 2018; Wu et al., 2018; Mulayoff et al., 2021), backed by empirical evidences or stability analyses. Larger stepsizes have also been proven to be beneficial for specific architectures or problems: two-layer network (Li et al., 2019), regression (Wu et al., 2021), kernel regression (Beugnot et al., 2022) or matrix factorisation (Wang et al., 2022). For large stepsizes, it has been observed that GD enters an _Edge of Stability (EoS)_ regime (Cohen et al., 2021), in which the iterates and the train loss oscillate before converging to a zero-training error solution; this phenomenon has then been studied on simple toy models (Ahn et al., 2022; Zhu et al., 2023; Chen and Bruna, 2022; Damian et al., 2023) for GD. Recently, Andriushchenko et al. (2022) presented empirical evidence that large stepsizes can lead to loss stabilisation and towards simpler predictors.
## 2 Setup and preliminaries
**Overparametrised linear regression.** We consider a linear regression over inputs \((x_{1},\ldots,x_{n})\in(\mathbb{R}^{d})^{n}\) and outputs \((y_{1},\ldots,y_{n})\in\mathbb{R}^{n}\). We consider _overparametrised_ problems where input dimension \(d\) is (much) larger than the number of samples \(n\). In this case, there exists infinitely many linear predictors \(\beta^{\star}\in\mathbb{R}^{d}\) which perfectly fit the training set, _i.e._, \(y_{i}=\langle\beta^{\star},x_{i}\rangle\) for all \(1\leqslant i\leqslant n\). We call such vectors _interpolating predictors_ or _interpolators_ and we denote by \(\mathcal{S}\) the set of all interpolators \(\mathcal{S}=\{\beta^{\star}\in\mathbb{R}^{d}\text{ s.t. }\langle\beta^{\star},x_{i} \rangle=y_{i},\forall i\in[n]\}\). Note that \(\mathcal{S}\) is an affine space of dimension greater than \(d-n\) and equal to \(\beta^{\star}+\text{span}(x_{1},\ldots,x_{n})^{\perp}\) for any \(\beta^{\star}\in\mathcal{S}\). We consider the following quadratic loss:
\[\mathcal{L}(\beta)=\frac{1}{2n}\sum_{i=1}^{n}(\langle\beta,x_{i}\rangle-y_{i} )^{2}\,.\]
**2-layer linear diagonal network.** We parametrise regression vectors \(\beta\) as functions \(\beta_{w}\) of trainable parameters \(w\in\mathbb{R}^{p}\). Although the final prediction function \(x\mapsto\langle\beta_{w},x\rangle\) is linear in the input \(x\), the choice of the parametrisation drastically changes the solution recovered by the optimisation algorithm (Gunasekar et al., 2018). In the case of the linear parametrisation \(\beta_{w}=w\) many first-order methods (SGD, GD, with or without momentum) converge towards the same solution and the choice of stepsize does not impact the recovered solution beyond convergence. In an effort to better understand the effects of stochasticity and large stepsize, we consider a toy neural network, a 2-layer diagonal linear neural network given by:
\[\beta_{w}=u\odot v\text{ where }w=(u,v)\in\mathbb{R}^{2d}\,. \tag{1}\]
This parametrisation can be viewed as a simple neural network \(x\mapsto\langle u,\sigma(\operatorname{diag}(v)x)\rangle\) where the output weights are represented by \(u\), the inner weights is the diagonal matrix \(\operatorname{diag}(v)\), and the activation \(\sigma\) is the identity function. We refer to \(w=(u,v)\in\mathbb{R}^{2d}\) as the _neurons_ and to \(\beta\coloneqq u\odot v\in\mathbb{R}^{d}\) as the _prediction parameter_. With the parametrisation (1), the loss function \(F\) over parameters \(w=(u,v)\in\mathbb{R}^{2d}\) is given by:
\[F(w)\coloneqq\mathcal{L}(u\odot v)=\frac{1}{2n}\sum_{i=1}^{n}(y_{i}-\langle u \odot v,x_{i}\rangle)^{2}\,. \tag{2}\]
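To fix ideas, here is a minimal R sketch of the loss (2) and its layer-wise gradients, using the chain rule \(\nabla_{u}F(w)=v\odot\nabla\mathcal{L}(u\odot v)\) and \(\nabla_{v}F(w)=u\odot\nabla\mathcal{L}(u\odot v)\); `X` denotes the \(n\times d\) input matrix and `y` the length-\(n\) output vector, both assumed given.

```r
# Loss (2) over the neurons w = (u, v) and its gradient via the chain rule.
loss.F <- function(u, v, X, y) {
  r <- y - drop(X %*% (u * v))                 # residuals y_i - <u o v, x_i>
  sum(r^2) / (2 * length(y))
}
grad.beta <- function(beta, X, y) {
  -drop(t(X) %*% (y - drop(X %*% beta))) / length(y)   # gradient of L at beta
}
grad.F <- function(u, v, X, y) {
  g <- grad.beta(u * v, X, y)
  list(u = v * g, v = u * g)                   # grad_u F = v o grad L, grad_v F = u o grad L
}
```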
It is worth noting that despite the simplicity of the parametrisation, the corresponding optimisation problem is non-convex and is challenging to analyse.
#### Mini-batch Stochastic Gradient Descent.
We minimise \(F\) using mini-batch SGD:
\[w_{0}=(u_{0},v_{0})\,,\quad w_{k+1}=w_{k}-\gamma_{k}\nabla F_{\mathcal{B}_{k}} (w_{k})\,, \tag{3}\]
where \(\gamma_{k}\) are stepsizes, \(\mathcal{B}_{k}\subset[n]\) are mini-batches of \(b\in[n]\) distinct samples sampled uniformly and independently, and \(\nabla F_{\mathcal{B}_{k}}(w_{k})\) are minibatch gradients of partial loss over \(\mathcal{B}_{k}\)
\[F_{\mathcal{B}_{k}}(w)\coloneqq\mathcal{L}_{\mathcal{B}_{k}}(u\odot v) \coloneqq\frac{1}{2b}\sum_{i\in\mathcal{B}_{k}}(y_{i}-\langle u\odot v,x_{i} \rangle)^{2}\,.\]
We emphasise that our analysis holds for any batch size \(b\in[n]\) and stepsizes \(\{\gamma_{k}\}_{k}\). Classical stochastic gradient descent and full-batch gradient descent are special cases with \(b=1\) and \(b=n\), respectively. For \(k\geqslant 0\), we consider the successive prediction parameters \(\beta_{k}\coloneqq u_{k}\odot v_{k}\) built from the neurons \(w_{k}=(u_{k},v_{k})\).
#### Initialisation.
We analyse SGD initialised at \(u_{0}=\sqrt{2}\boldsymbol{\alpha}\in\mathbb{R}_{>0}^{d}\) and \(v_{0}=\boldsymbol{0}\in\mathbb{R}^{d}\), resulting in \(\beta_{0}=\boldsymbol{0}\in\mathbb{R}^{d}\) independently of the chosen neuron initialisation \(\boldsymbol{\alpha}\)1.
Footnote 1: In Appendix C, we show that the (S)GD trajectory with this initialisation exactly matches that of another common parametrisation \(\beta_{w}=w_{+}^{2}-w_{-}^{2}\) with initialisation \(w_{+,0}=w_{-,0}=\boldsymbol{\alpha}\)
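Putting the pieces together, a minimal R sketch of recursion (3) with this initialisation follows; the stepsize and iteration budget are illustrative choices.

```r
# Mini-batch SGD (3) on the diagonal linear network, initialised at
# u_0 = sqrt(2) * alpha, v_0 = 0; returns the prediction parameter beta.
sgd.dln <- function(X, y, alpha, gamma, b = 1, iters = 5000) {
  n <- nrow(X)
  u <- sqrt(2) * alpha
  v <- rep(0, ncol(X))
  for (k in seq_len(iters)) {
    batch <- sample(n, b)                                    # uniform mini-batch B_k
    Xb <- X[batch, , drop = FALSE]
    g <- -drop(t(Xb) %*% (y[batch] - Xb %*% (u * v))) / b    # grad L_{B_k}(beta_k)
    u.new <- u - gamma * (v * g)                             # simultaneous update of (u, v)
    v <- v - gamma * (u * g)
    u <- u.new
  }
  u * v                                                      # beta = u o v
}
```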
#### Notations.
We denote by \(y=\frac{1}{\sqrt{n}}(y_{1},\ldots,y_{n})\in\mathbb{R}^{n}\) the normalised output vector and by \(X\in\mathbb{R}^{n\times d}\) the input matrix whose \(i^{th}\) row is the normalised input \(\frac{1}{\sqrt{n}}x_{i}\in\mathbb{R}^{d}\). \([n]\) denotes the set of all integers from \(1\) to \(n\). Let \(H\coloneqq\frac{1}{n}\sum_{i}x_{i}x_{i}^{\top}\) denote the Hessian of \(\mathcal{L}\). For a batch \(\mathcal{B}\subset[n]\) we define the batch loss as \(\mathcal{L}_{\mathcal{B}}(\beta)=\frac{1}{|\mathcal{B}|}\sum_{i\in\mathcal{B}} (\langle x_{i},\beta\rangle-y_{i})^{2}\) and its Hessian as \(H_{\mathcal{B}}\coloneqq\frac{1}{|\mathcal{B}|}\sum_{i\in\mathcal{B}}x_{i}x_{ i}^{\top}\). Let \(L\) denote the "smoothness" such that \(\forall\beta\), \(\|H_{\mathcal{B}}\beta\|_{2}\leqslant L\|\beta\|_{2}\), \(\|H_{\mathcal{B}}\beta\|_{\infty}\leqslant L\|\beta\|_{\infty}\) for all batches \(\mathcal{B}\subset[n]\) of size \(b\). All expectations in the paper are with respect to the uniform sampling of the batches \((\mathcal{B}_{k})_{k}\). A real function (e.g, \(\log,\exp\)) applied to a vector must be understood as element-wise application, and for vectors \(u,v\in\mathbb{R}^{d}\), \(u^{2}=(u_{i}^{2})_{i\in[d]}\) and \(u\odot v=(u_{i}v_{i})_{i\in[d]}\). We write \(\boldsymbol{1}\), \(\boldsymbol{0}\) for the constant vectors with coordinates \(1\) and \(0\) respectively. For vectors \(u,v\in\mathbb{R}^{d}\), we write \(u\leqslant v\) for \(\forall i\in[d]\), \(u_{i}\leqslant v_{i}\) and for symmetric matrices \(A,B\in\mathbb{R}^{d\times d}\) we write \(A\succeq B\) for \(B-A\) positive semi-definite.
#### Experimental details.
We consider the noiseless sparse regression setting where \((x_{i})_{i\in[n]}\sim\mathcal{N}(0,I_{d})\) and \(y_{i}=\langle\beta_{\ell_{1}}^{*},x_{i}\rangle\) for some \(s\)-sparse vector \(\beta_{\ell_{1}}^{*}\). We perform (S)GD over the DLN with a uniform initialisation \(\boldsymbol{\alpha}=\alpha\boldsymbol{1}\in\mathbb{R}^{d}\) where \(\alpha>0\). Fig. 1 and Fig. 2 (left) correspond to the setup \((n,d,s,\alpha)=(20,30,3,0.1)\) and Fig. 2 (right) and Fig. 3 to the setup \((n,d,s,\alpha)=(50,100,2,0.1)\).
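The data-generating process just described is a few lines of R; the nonzero values of the \(s\)-sparse signal, the seed, and the stepsize below are illustrative. This sketch reuses `sgd.dln` from the previous sketch.

```r
# Noiseless sparse regression data with (n, d, s, alpha) = (20, 30, 3, 0.1).
set.seed(1)
n <- 20; d <- 30; s <- 3
X <- matrix(rnorm(n * d), n, d)                # x_i ~ N(0, I_d)
beta.l1 <- c(rep(1, s), rep(0, d - s))         # an s-sparse ground truth (values illustrative)
y <- drop(X %*% beta.l1)                       # y_i = <beta*, x_i>, no noise
alpha <- rep(0.1, d)                           # uniform initialisation scale
beta.hat <- sgd.dln(X, y, alpha, gamma = 0.1, b = 1, iters = 20000)
```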
## 3 Implicit bias of SGD and GD
#### Warmup: gradient flow.
We first review prior findings on gradient flow on diagonal linear neural networks. Woodworth et al. (2020) show that the limit \(\beta_{\boldsymbol{\alpha}}^{*}\) of the _gradient flow_\(dw_{t}=-\nabla F(w_{t})\mathrm{d}t\) initialised at \((u_{0},v_{0})=(\sqrt{2}\boldsymbol{\alpha},\boldsymbol{0})\) is the solution of the minimal interpolation problem:
\[\beta_{\boldsymbol{\alpha}}^{*}=\operatorname*{argmin}_{\beta^{*}\in\mathcal{S} }\,\psi_{\boldsymbol{\alpha}}(\beta^{*})\,, \tag{4}\]
where \(\psi_{\boldsymbol{\alpha}}\) is the hyperbolic entropy function (Ghai et al., 2020) defined as:
\[\psi_{\boldsymbol{\alpha}}(\beta)=\frac{1}{2}\sum_{i=1}^{d}\Big{(}\beta_{i} \mathrm{arcsinh}(\frac{\beta_{i}}{\alpha_{i}^{2}})-\sqrt{\beta_{i}^{2}+\alpha_{ i}^{4}}+\alpha_{i}^{2}\Big{)}. \tag{5}\]
The key characteristic of the hyperbolic entropy is its ability to interpolate between the \(\ell_{1}\) and \(\ell_{2}\) norms as the scale of the uniform initialisation \(\mathbf{\alpha}=\alpha\mathbf{1}\in\mathbb{R}^{d}\) approaches zero and infinity respectively (Woodworth et al., 2020, Theorem 2).
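The hyperbolic entropy (5) is straightforward to evaluate coordinate-wise; a one-function R sketch:

```r
# Hyperbolic entropy psi_alpha from (5), vectorised over coordinates.
psi.alpha <- function(beta, alpha) {
  0.5 * sum(beta * asinh(beta / alpha^2) - sqrt(beta^2 + alpha^4) + alpha^2)
}
psi.alpha(c(1, -2, 0), alpha = rep(0.1, 3))
# As alpha -> 0 the function is proportional to the l1 norm (up to scaling);
# as alpha -> infinity it behaves like a rescaled squared l2 norm.
```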
The implicit regularisation characterisation in (4) highlights that the scale of the initial weights \(\alpha>0\) controls the shape of the solution recovered by the algorithm. Small initialisation results in solutions with low \(\ell_{1}\)-norm known to induce sparse recovery guarantees (Candes et al., 2006). This setting is often referred to as the "rich" regime (Woodworth et al., 2020). In contrast, using large initial weights leads to solutions with low \(\ell_{2}\)-norm, a setting known as the "kernel" or "lazy" regime. The weights of the neurons make only small adjustments to fit the data, resulting in dynamics similar to kernel linear regression. Overall, to retrieve the minimum \(\ell_{1}\)-norm solution \(\beta^{\star}_{\ell_{1}}:=\operatorname*{argmin}_{\beta^{\star}\in\mathcal{S}} \|\beta^{\star}\|_{1}\), it is recommended to use the smallest possible initialisation scale \(\alpha\). However, with \(\mathbf{\alpha}=0\), \(w_{0}=(0,0)\) is a saddle point of \(F\), which makes the training longer as \(\mathbf{\alpha}\to 0\).
In addition to the scale of \(\mathbf{\alpha}\), a lesser studied aspect of initialisation is its "shape", which is a term we use to refer to the relative distribution of \(\{\alpha_{i}\}\) along the \(d\) coordinates. For uniform initialisation \(\mathbf{\alpha}=\alpha\mathbf{1}\), we know that the limit of \(\alpha\to 0\) is \(\psi_{\mathbf{\alpha}}\propto\|.\|_{1}\). However, if each entry of \(\mathbf{\alpha}\)_does not go to zero "uniformly"_, the limit \(\lim_{\mathbf{\alpha}\to\mathbf{0}}\psi_{\mathbf{\alpha}}\) becomes a **weighted**\(\ell_{1}\)-norm instead of the standard \(\ell_{1}\)-norm, which can negatively affect sparse recovery (Example 1).
### Main result: convergence and implicit bias
In Theorem 1, we prove that for an initialisation \(\mathbf{\alpha}\in\mathbb{R}^{d}\) and for macroscopic stepsizes, minibatch stochastic gradient descent on \(F\) converges almost surely to an interpolator, which we denote \(\beta^{\star}_{\infty}\). Moreover, as for gradient flow, this interpolator minimises the hyperbolic entropy (5), but for a trajectory-dependent _effective_ initialisation \(\mathbf{\alpha}_{\infty}\in\mathbb{R}^{d}\) which is component-wise strictly smaller than \(\mathbf{\alpha}\).
**Theorem 1**.: _Let \((u_{k},v_{k})_{k\geqslant 0}\) follow the mini-batch SGD recursion (3) initialised at \(u_{0}=\sqrt{2}\mathbf{\alpha}\in\mathbb{R}^{d}_{>0}\) and \(v_{0}=\mathbf{0}\), and let \((\beta_{k})_{k\geqslant 0}=(u_{k}\odot v_{k})_{k\geqslant 0}\). There exists \(B>0\) and a numerical constant \(c>0\) such that for stepsizes satisfying \(\gamma_{k}\leqslant\frac{c}{LB}\), the iterates satisfy \(\left\|\gamma_{k}\nabla\mathcal{L}_{\mathcal{B}_{k}}(\beta_{k})\right\|_{\infty}\leqslant 1\) and \(\left\|\beta_{k}\right\|_{\infty}\leqslant B\) for all \(k\), and:_
1. \((\beta_{k})_{k\geqslant 0}\) _converges almost surely to some_ \(\beta^{\star}_{\infty}\in\mathcal{S}\)_, that satisfies:_ \[\beta^{\star}_{\infty}=\operatorname*{argmin}_{\beta^{\star}\in\mathcal{S}} \,D_{\psi_{\mathbf{\alpha}_{\infty}}}(\beta^{\star},\tilde{\beta}_{0})\,,\] (6) _where_ \(\mathbf{\alpha}_{\infty}\in\mathbb{R}^{d}_{\geqslant 0}\) _and_ \(\tilde{\beta}_{0}\in\mathbb{R}^{d}\)_._
2. \(\mathbf{\alpha}_{\infty}\in\mathbb{R}^{d}\) _satisfies_ \(\mathbf{\alpha}_{\infty}\leqslant\mathbf{\alpha}\) _and is equal to:_ \[\mathbf{\alpha}_{\infty}^{2}=\mathbf{\alpha}^{2}\odot\exp\left(-\sum_{k=0}^{\infty}q \big{(}\gamma_{k}\nabla\mathcal{L}_{\mathcal{B}_{k}}(\beta_{k})\big{)}\right)\,,\] (7) _where_ \(q(x)=-\frac{1}{2}\ln((1-x^{2})^{2})\geqslant 0\) _for_ \(|x|\leqslant\sqrt{2}\)_, and_ \(\tilde{\beta}_{0}\) _is a perturbation term equal to:_ \[\tilde{\beta}_{0}=\frac{1}{2}\big{(}\mathbf{\alpha}_{+}^{2}-\mathbf{\alpha}_{-}^{2} \big{)},\] _where,_ \(q_{\pm}(x)=\mp 2x-\ln((1\mp x)^{2})\)_, and_ \(\mathbf{\alpha}_{\pm}^{2}=\mathbf{\alpha}^{2}\odot\exp\left(-\sum_{k=0}^{\infty}q_{\pm} (\gamma_{k}\nabla\mathcal{L}_{\mathcal{B}_{k}}(\beta_{k}))\right)\)_._
**Full characterisation of the recovered solution.** The regularisation problem (6) fully characterises the interpolator selected by the algorithm. It is worth noting that the assumption on the stepsize is only required to show the convergence of the iterates. Consequently, **regardless of the stepsize sequence chosen**, as long as the iterates \(\beta_{k}\) converge to zero training error, the implicit regularisation characterisation in Eq. (6) holds true.
This remark even holds for adaptive stepsize schedules which keep the stepsize scalar such as AdaDelta (Zeiler, 2012). To our knowledge, this is the first complete characterisation of the implicit bias of gradient methods with practical stepsizes. Thus our result extends beyond the classical continuous-time framework where all previous results were derived (Woodworth et al., 2020, Pesme et al., 2021). Note that for \(\gamma_{k}\to 0\) we have \(\mathbf{\alpha}_{\infty}\to\mathbf{\alpha}\) and \(\tilde{\beta}_{0}\to\mathbf{0}\) (Proposition 13), recovering previous results for gradient flow (4).
\(\tilde{\beta}_{0}\) **can be ignored.** We show in Proposition 12 that the magnitude of \(\tilde{\beta}_{0}\) is negligible compared with the magnitudes of \(\beta^{\star}\in\mathcal{S}\). Hence, one can roughly ignore the term \(\tilde{\beta}_{0}\), and the implicit regularisation Eq. (6) simplifies to \(\beta^{\star}_{\infty}\approx\operatorname*{argmin}_{\beta^{\star}\in\mathcal{S}}\psi_{\boldsymbol{\alpha}_{\infty}}(\beta^{\star})\).
**Effective initialisation \(\boldsymbol{\alpha}_{\infty}\) (with a twist).** Considering that \(\tilde{\beta}_{0}\approx 0\), the solution \(\beta^{\star}_{\infty}\) obtained with minibatch SGD or GD minimises the same potential function as the solution of gradient flow Eq. (4), but with an effective initialisation of \(\boldsymbol{\alpha}_{\infty}\), which is elementwise strictly smaller than \(\boldsymbol{\alpha}\). Thus, based on the properties of the hyperbolic entropy, we expect our effective implicit regulariser \(\psi_{\boldsymbol{\alpha}_{\infty}}\) to be closer to the \(\ell_{1}\)-norm than \(\psi_{\boldsymbol{\alpha}}\), and we would expect the smaller scale of \(\boldsymbol{\alpha}_{\infty}\) to always help in the recovery of low \(\ell_{1}\)-norm solutions.
However, as clearly seen in Fig. 1, this is not always the case. This is because the recovery of low \(\ell_{1}\)-norm solutions does not just depend on the scale \(\|\boldsymbol{\alpha}_{\infty}\|_{1}\), but also on how uniform the "shape" of \(\boldsymbol{\alpha}_{\infty}\) is. The full picture of how \(\boldsymbol{\alpha}_{\infty}\) impacts the recovery of the minimum \(\ell_{1}\)-norm solution is more nuanced, and we discuss it in more detail in the following sections. In particular, we show that the shape of our effective initialisation \(\boldsymbol{\alpha}_{\infty}\) is affected by various factors including the batch size and the input data \(X\). We see that, unlike GD, the stochasticity in SGD leads to a more uniform \(\boldsymbol{\alpha}_{\infty}\), and larger stepsizes (up to divergence) highly benefit the recovery of low \(\ell_{1}\)-norm solutions, as seen in Fig. 1.
**Effect of the stepsizes.** The impact of the macroscopic stepsizes is reflected in the effective initialisation \(\boldsymbol{\alpha}_{\infty}\). The difference with gradient flow is directly associated with the quantity \(\sum_{k}q(\gamma_{k}g_{k})\): the larger this sum, the more the recovered solution differs from that of gradient flow. Also, since the (stochastic) gradients \(g_{k}\) converge to \(0\) and \(q(x)\overset{x\to 0}{\sim}x^{2}\), one should think of this sum as roughly being \(\sum_{k}\gamma_{k}^{2}g_{k}^{2}\). While it is clear that the stepsize influences the effective initialisation \(\boldsymbol{\alpha}_{\infty}\), it is not straightforward to predict the exact impact it has and whether it aids in recovering a low \(\ell_{1}\)-norm solution. Our analysis in the next section shows that the improvement highly depends on the batch size and on whether the inputs are centered or not.
### Sketch of proof and time-varying mirror descent
Since the loss \(F\) is non-convex, it is non-trivial that the iterates \((u_{k},v_{k})\) converge towards a global minimum. To prove this result, we consider the iterates \(\beta_{k}=u_{k}\odot v_{k}\) and show that they follow a mirror descent recursion with time-varying potentials \((h_{k})_{k\geqslant 0}\) (Orabona et al., 2015) on the convex loss \(\mathcal{L}(\beta)\). The potentials are defined just below and are closely related to the hyperbolic entropy (5).
**Proposition 1**.: \((\beta_{k}=u_{k}\odot v_{k})_{k\geqslant 0}\) _satisfy the Stochastic Mirror Descent recursion with varying potentials \((h_{k})_{k}\):_
\[\nabla h_{k+1}(\beta_{k+1})=\nabla h_{k}(\beta_{k})-\gamma_{k}\nabla\mathcal{ L}_{\mathcal{B}_{k}}(\beta_{k})\,, \tag{8}\]
_where \(h_{k}:\mathbb{R}^{d}\to\mathbb{R}\) for \(k\geqslant 0\) are strictly convex functions._
By suitably modifying classical convex optimization techniques to account for time-varying potentials (Proposition 8, which can also be of independent interest), we can prove the convergence of the iterates towards an interpolator \(\beta^{\star}_{\infty}\) along with that of all the relevant quantities which appear in Theorem 1. The implicit regularisation problem then directly follows from: (1) the limit condition \(\nabla h_{\infty}(\beta_{\infty})\in\operatorname{Span}(x_{1},\ldots,x_{n})\) as seen from Eq. (8) and (2) the interpolation condition \(X\beta^{\star}_{\infty}=y\). Indeed, these two conditions exactly correspond to the KKT conditions of the convex problem Eq. (6). Furthermore, while we perform our analysis for minibatch SGD, Proposition 1 holds for any other stochastic gradient method (_e.g._, label-noise SGD).
We now make the reference functions \((h_{k})_{k\geqslant 0}\) explicit; they are related to the hyperbolic entropy of scale \(\alpha_{k}\) with a translation \(\phi_{k}\), where \(\alpha_{k}\) and \(\phi_{k}\) are defined as follows. Let \(q,q_{\pm}:\mathbb{R}\to\mathbb{R}\cup\{\infty\}\) be defined as:
\[q_{\pm}(x)=\mp 2x-\ln\left((1\mp x)^{2}\right),\] \[q(x)=\frac{1}{2}(q_{+}(x)+q_{-}(x))=-\frac{1}{2}\ln\left((1-x^{2}) ^{2}\right),\]
with the convention that \(q(1)=\infty\). Notice that \(q(x)\geqslant 0\) for \(|x|\leqslant\sqrt{2}\) and \(q(x)<0\) otherwise. For the iterates \(\beta_{k}=u_{k}\odot v_{k}\in\mathbb{R}^{d}\), we define the following quantities:
\[\alpha_{\pm,k}^{2} =\alpha^{2}\exp(-\sum_{\ell=0}^{k-1}q_{\pm}(\gamma_{\ell}\nabla\mathcal{L}_{\mathcal{B}_{\ell}}(\beta_{\ell})))\in\mathbb{R}^{d}\,,\] \[\alpha_{k}^{2} =\alpha_{+,k}\odot\alpha_{-,k}\,,\] \[\phi_{k} =\frac{1}{2}\operatorname{arcsinh}\big{(}\frac{\alpha_{+,k}^{2}-\alpha_{-,k}^{2}}{2\alpha_{k}^{2}}\big{)}\in\mathbb{R}^{d}\,.\]
Finally, for \(k\geqslant 0\), the reference functions \((h_{k}:\mathbb{R}^{d}\to\mathbb{R})_{k\geqslant 0}\) in Proposition 1 have the following expression:
\[h_{k}(\beta)=\psi_{\alpha_{k}}(\beta)-\left\langle\phi_{k},\beta\right\rangle, \tag{9}\]
where \(\psi_{\alpha_{k}}\) is the hyperbolic entropy (5) of scale \(\alpha_{k}\). A natural and crucial consequence of Proposition 1 is the following corollary, which characterises the limit and holds irrespective of the structure of the stochastic gradients, as long as \(g_{k}\in\operatorname{Span}(x_{1},\ldots,x_{n})\) holds for all \(k\) (as is the case for minibatch SGD, GD, or label-noise SGD).
**Corollary 1**.: _Assume that the iterates converge to some interpolator \(\beta_{\infty}^{*}\) and that there exist \(\alpha_{\infty},\phi_{\infty}\in\mathbb{R}^{d}\) such that \(\alpha_{k}\to\alpha_{\infty}\) and \(\phi_{k}\to\phi_{\infty}\). Then, \(\beta_{\infty}^{*}\) is uniquely defined by the following implicit regularization problem:_
\[\beta_{\infty}^{*}=\operatorname*{argmin}_{\beta^{*}\in S}\,D_{\psi_{\alpha_ {\infty}}}(\beta^{*},\tilde{\beta}_{0})\,,\]
_where \(\tilde{\beta}_{0}=2\alpha_{\infty}^{2}\sinh\big{(}2\phi_{\infty}\big{)}\)._
This result is directly obtained by noticing that for all \(k\geqslant 0\), we have \(\nabla h_{k}(\beta_{k})\in\operatorname{Span}(x_{1},\ldots,x_{n})\), the convergence of the iterates and of the reference functions being a consequence of the assumptions made in the corollary.
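To make Proposition 1 concrete, the following self-contained sketch (with toy dimensions and a stepsize of our own choosing) runs minibatch SGD on the weights \((u,v)\) and checks numerically that \(\nabla h_{k}(\beta_{k})=-\sum_{\ell<k}\gamma_{\ell}\nabla\mathcal{L}_{\mathcal{B}_{\ell}}(\beta_{\ell})\), which is the summed form of the recursion (8). The accumulators below use the identities \(\exp(-q_{\pm}(x))=e^{\pm 2x}(1\mp x)^{2}\) and \(-q(x)=\frac{1}{2}[\ln((1-x)^{2})+\ln((1+x)^{2})]\).

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, s, alpha, gamma, b = 20, 30, 3, 0.1, 0.02, 4
X = rng.standard_normal((n, d))
beta_true = np.zeros(d)
beta_true[:s] = 1.0
y = X @ beta_true

u, v = np.full(d, np.sqrt(2) * alpha), np.zeros(d)
S = np.zeros(d)     # running sum of g_l = gamma_l * grad L_{B_l}(beta_l)
la = np.zeros(d)    # running sum of ln (1 - g_l)^2
lb = np.zeros(d)    # running sum of ln (1 + g_l)^2

for k in range(2000):
    beta = u * v
    batch = rng.choice(n, size=b, replace=False)
    g = gamma * 2.0 * X[batch].T @ (X[batch] @ beta - y[batch]) / b
    u, v = u - g * v, v - g * u
    S += g
    la += np.log((1 - g) ** 2)
    lb += np.log((1 + g) ** 2)

# alpha_k^2 = alpha^2 exp(-sum q(g_l))  and  phi_k, rewritten via la, lb, S:
alpha_k2 = alpha**2 * np.exp(0.5 * (la + lb))
phi_k = S + 0.25 * (la - lb)
grad_h = 0.5 * np.arcsinh(u * v / alpha_k2) - phi_k   # nabla h_k(beta_k)
print(np.max(np.abs(grad_h + S)))   # ~0 up to float error: equals -sum of gradients
```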
## 4 Analysis of the implicit bias and of the effective initialisation: Stochasticity and stepsize
In this section, we analyse the effects of large stepsizes and of stochasticity on the implicit bias of minibatch SGD, and in particular on the effective initialisation \(\mathbf{\alpha}_{\infty}\) appearing in Theorem 1. Two major factors influence the recovered interpolator \(\beta_{\infty}^{*}\):
1. **Scale** of \(\mathbf{\alpha}_{\infty}\): for a homogeneous vector \(\mathbf{\alpha}=\alpha\mathbf{1}\), as the scale \(\alpha\) decreases, the function \(\psi_{\mathbf{\alpha}}\) becomes increasingly similar to the \(\ell_{1}\)-norm and the sparse recovery guarantees of \(\beta_{\mathbf{\alpha}}^{*}\) Eq. (4) improve. See Fig. 6 in Appendix D and Theorem 2 of Woodworth et al. (2020) for a precise characterisation.
2. **Shape** of \(\mathbf{\alpha}_{\infty}\): For \(\mathbf{\alpha}\in\mathbb{R}_{\geqslant 0}^{d}\), we can show that (see Appendix D), \(\psi_{\mathbf{\alpha}}(\beta)\stackrel{{\mathbf{\alpha}\to 0}}{{\sim}}\sum_{i=1}^{d}\ln( \frac{1}{\alpha_{i}})|\beta_{i}|\). Thus, a heterogeneous vector \(\ln(1/\mathbf{\alpha}_{\infty})\) with entries of differing magnitude results in minimising a **weighted**\(\ell_{1}\)-norm at small scales. This phenomenon can lead to solutions with vastly different sparsity structure than the minimum \(\ell_{1}\)-norm interpolator. See Fig. 6 in Appendix D for an intuitive illustration.
From Eq. (7), we see that the scale and the shape of \(\mathbf{\alpha}_{\infty}\) are controlled by \(\sum_{k}q(\gamma_{k}\nabla\mathcal{L}_{\mathcal{B}_{k}}(\beta_{k}))\). We henceforth call this quantity the _gain vector_. For simplicity, from now on, we consider constant stepsize \(\gamma_{k}=\gamma\) for all \(k\geqslant 0\) and a uniform initialisation of the neurons \(\mathbf{\alpha}=\alpha\mathbf{1}\) where \(\alpha>0\) is the initialisation scale. We can then write the gain vector:
\[\operatorname{Gain}_{\gamma}\coloneqq\ln\left(\frac{\mathbf{\alpha}^{2}}{\mathbf{ \alpha}_{\infty}^{2}}\right)=\sum_{k}q(\gamma\nabla\mathcal{L}_{\mathcal{B}_{k }}(\beta_{k}))\in\mathbb{R}^{d}\,.\]
Following our discussion on the scale and the shape of \(\mathbf{\alpha}_{\infty}\),
1. The **magnitude** of \(\|\operatorname{Gain}_{\gamma}\|_{1}\) indicates how much the implicit bias of (S)GD differs from that of gradient flow: \(\|\operatorname{Gain}_{\gamma}\|_{1}\sim 0\) implies that \(\mathbf{\alpha}_{\infty}\sim\mathbf{\alpha}\) and therefore the recovered solution is close to that of gradient flow. On the contrary, \(\|\operatorname{Gain}_{\gamma}\|_{1}\gg\ln(1/\alpha)\) implies that \(\mathbf{\alpha}_{\infty}\) has an effective scale much smaller than \(\mathbf{\alpha}\), thereby changing the implicit regularisation Eq. (6).
2. The **shape** of \(\text{Gain}_{\gamma}\) indicates which coordinates of \(\beta\) in the associated minimum weighted \(\ell_{1}\) problem are most penalised. Since \[\psi_{\mathbf{\alpha}_{\infty}}(\beta)\sim\ln(\frac{1}{\alpha})\|\beta\|_{1}+\sum_{i=1}^{d}\text{Gain}_{\gamma}(i)\,|\beta_{i}|, \tag{10}\] the coordinates of \(\beta\) corresponding to the largest entries of \(\text{Gain}_{\gamma}\) are less likely to be recovered (a numerical illustration follows below).
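The following sketch (an LP formulation with dimensions and weights chosen by us for illustration) makes this concrete: minimising a weighted \(\ell_{1}\)-norm whose weights are large on the support of the sparse vector typically returns a very different interpolator than the uniform \(\ell_{1}\)-norm.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
n, d, s = 20, 30, 3
X = rng.standard_normal((n, d))
beta_sparse = np.zeros(d)
beta_sparse[:s] = 1.0
y = X @ beta_sparse

def weighted_l1_interpolator(w):
    # min_beta sum_i w_i |beta_i|  s.t.  X beta = y,
    # via the standard split beta = p - q with p, q >= 0.
    c = np.concatenate([w, w])
    res = linprog(c, A_eq=np.hstack([X, -X]), b_eq=y,
                  bounds=[(0, None)] * (2 * d))
    return res.x[:d] - res.x[d:]

uniform = weighted_l1_interpolator(np.ones(d))
w = np.ones(d)
w[:s] = 100.0                                 # heavily penalise the true support
skewed = weighted_l1_interpolator(w)
print(np.linalg.norm(uniform - beta_sparse))  # typically ~0: exact recovery
print(np.linalg.norm(skewed - beta_sparse))   # large: the support is avoided
```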
### Scale of \(\text{Gain}_{\gamma}\)
We start by introducing some data-dependent constants.
**Definition 1**.: _Recall that for a batch \(\mathcal{B}\subset[n]\) of size \(b\), \(H_{\mathcal{B}}=\frac{1}{b}\sum_{i\in\mathcal{B}}x_{i}x_{i}^{\top}\) and \(H=\frac{1}{n}\sum_{i}x_{i}x_{i}^{\top}\). Let \(\lambda_{b}>0\) be the largest and \(\Lambda_{b}>0\) the smallest values such that:_
\[\lambda_{b}H\preceq\mathbb{E}_{\mathcal{B}}\big{[}H_{\mathcal{B}}^{2}\big{]} \preceq\Lambda_{b}H.\]
For all \(b\), \((\lambda_{b},\Lambda_{b})\) only depend on \(X\). For \(b=n\), we have \((\lambda_{n},\Lambda_{n})=(\lambda_{\min}^{+}(H),\lambda_{\max}(H))\) where \(\lambda_{\min}^{+}(H)\) is the smallest non-zero eigenvalue of \(H\). For \(b=1\), we have \(\min_{i}\|x_{i}\|_{2}^{2}\leqslant\lambda_{1}\leqslant\Lambda_{1}\leqslant\max_{i}\|x_{i}\|_{2}^{2}\). The following proposition highlights how the scale of the gain \(\|\text{Gain}_{\gamma}\|_{1}\) depends on the various problem constants.
**Proposition 2**.: _For any stepsize \(\gamma>0\), initialisation \(\alpha\mathbf{1}\) and batch size \(b\in[n]\), the magnitude of the gain satisfies:_
\[\lambda_{b}\gamma^{2}\sum_{k}\mathcal{L}(\beta_{k})\leqslant\mathbb{E}\left[ \|\text{Gain}_{\gamma}\|_{1}\right]\leqslant\Lambda_{b}\gamma^{2}\sum_{k} \mathcal{L}(\beta_{k})\,, \tag{11}\]
_where the expectation is over uniform and independent sampling of the batches \((\mathcal{B}_{k})_{k\geqslant 0}\) at each iteration. Furthermore, for stepsize \(0<\gamma\leqslant\gamma_{\max}=\frac{c}{BL}\), we have that:_
\[\sum_{k}\gamma^{2}\mathcal{L}(\beta_{k})=\Theta\left(\gamma\ln\left(\frac{1}{ \alpha}\right)\left\|\beta_{\ell_{1}}^{*}\right\|_{1}\right)\,. \tag{12}\]
**The slower the training, the larger the gain.** Eq. (11) shows that the slower the training loss converges to \(0\), the larger the sum of the loss, leading to a larger scale of \(\text{Gain}_{\gamma}\). It extends observations previously made for stochastic gradient flow (Pesme et al., 2021) to SGD and GD.
**Impact of the stepsize.** The effect of the stepsize on the magnitude of the gain is not directly visible in Eq. (11) because a larger stepsize \(\gamma\) tends to speed up training. However, Eq. (12) clearly shows that increasing the stepsize **boosts** the magnitude \(\|\text{Gain}_{\gamma}\|_{1}\) up until the limit of \(\gamma_{\max}\). Therefore, the larger the stepsize, the smaller the effective scale of \(\mathbf{\alpha}_{\infty}\), which in turn, if the gap is significant, leads to a large deviation of (S)GD from gradient flow.
**Impact of stochasticity.** In the following corollary, we demonstrate the effect of stochasticity (through the batch size \(b\)) on the magnitude of the gain. It requires some probabilistic assumptions on the data distribution (see footnote 2) which enable the control of the values of \(\lambda_{b}\) and \(\Lambda_{b}\).
Footnote 2: The Gaussian assumption can be generalized to non-isotropic sub-Gaussian random variables using more refined concentration bounds. The zero-mean assumption can be relaxed at the cost of an additional \(\|\mu\|^{2}\).
**Corollary 2**.: _Assume that the inputs are sampled from \(\mathcal{N}(0,\sigma^{2}I_{d})\) for \(\sigma^{2}>0\). Then, we have \(\lambda_{b}=\Theta\Big{(}\frac{\sigma^{2}d}{b}\Big{)}\) and \(\Lambda_{b}=\Theta\Big{(}\frac{\sigma^{2}d}{b}\Big{)}\) with probability \(1-Cne^{-cd}\) over the dataset and thus, with stepsize \(0<\gamma\leqslant\gamma_{\max}=\frac{c}{BL}\), we have that:_
\[\mathbb{E}\left[\|\mathrm{Gain}_{\gamma}\|_{1}\right]=\Theta\Big{(}\gamma \frac{\sigma^{2}d}{b}\ln\big{(}\frac{1}{\alpha}\big{)}\|\beta_{\ell_{1}}^{*} \|_{1}\Big{)}\,. \tag{13}\]
Corollary 2 directly shows that the scale of \(\mathrm{Gain}_{\gamma}\) decreases with the batch size and that there is a factor of \(n\) between that of SGD and that of GD. In Fig. 1, this explains why for \(\gamma\leqslant\gamma_{\max}\) there is a difference between the solutions recovered by SGD and gradient flow, but not between those of GD and gradient flow.
**Linear scaling rule.** Notice from Corollary 2 that the magnitude of the \(\mathrm{Gain}_{\gamma}\) depends on the value \(\frac{\gamma}{b}\). This is reminiscent of the linear scaling rule, which is a standard practice in deep learning (Goyal et al., 2017): SGD with \(b=1\) and stepsize \(\gamma\) is expected to behave similarly to SGD with batch-size \(b^{\prime}\) but rescaled stepsize \(\gamma^{\prime}=\gamma\times b^{\prime}\).
**Loose analysis and Edge of Stability.** Our results hold for stepsizes such that \(\gamma\leqslant\gamma_{\max}=\frac{c}{LB}\) where \(c\) is some numerical constant. While this bound is accurate in terms of its dependencies on the problem constants, it tends to be conservative in terms of the value of \(c\): empirically, the loss still converges for larger stepsizes. Let then \(\tilde{\gamma}_{\max}\) be the largest stepsize one can use before the iterates do not converge, _i.e._, \(\tilde{\gamma}_{\max}=\sup_{\gamma\geqslant 0}\{\gamma\text{ s.t. }\forall\gamma^{\prime}\leqslant\gamma,\ \sum_{k}\mathcal{L}(\beta_{k}^{\gamma^{\prime}})<\infty\}\). We directly have that \(\gamma_{\max}\leqslant\tilde{\gamma}_{\max}\); for \(\gamma\to\tilde{\gamma}_{\max}\), (12) cannot hold and the sum \(\sum_{k}\mathcal{L}(\beta_{k})\) diverges (see footnote 3), which is clearly observed in Fig. 2 (left). Also, notice that the value \(\tilde{\gamma}_{\max}\) depends on whether we consider GD or SGD and that, as expected, one can use larger stepsizes for gradient descent, even though the stepsize regime \(\gamma\in[\gamma_{\max},\tilde{\gamma}_{\max}]\) is not captured in classical analyses.
Footnote 3: note that we observe this experimentally, however we do not show it and leave it as future work.
For such very large stepsizes, the iterates of gradient descent tend to "bounce" and those of stochastic gradient descent to "fluctuate": this regime is commonly referred to as the _Edge of Stability_. As the convergence of the loss can be made arbitrarily slow due to these bouncing effects, the magnitude of \(\mathrm{Gain}_{\gamma}\) can be made arbitrarily large, and the recovered solution differs heavily from that of gradient flow, as seen in Fig. 1.
By analysing the magnitude \(\|\mathrm{Gain}_{\gamma}\|_{1}\), we have explained how (S)GD with large stepsizes behaves differently from gradient flow. However, our analysis so far does not show a qualitatively different behaviour between SGD and GD beyond the linear stepsize scaling rule. In contrast, Fig. 1 shows fundamentally different behaviours of SGD and GD: to explain this, we need to understand the shape of \(\mathrm{Gain}_{\gamma}\).
### Shape of \(\mathrm{Gain}_{\gamma}\).
In this section, we restrict our analysis to single batch SGD (\(b=1\)) and full batch GD (\(b=n\)). We also focus on tractable sparse recovery settings, wherein the minimum \(\ell_{1}\)-norm interpolator is also the sparsest interpolator (Candes et al., 2006). Precise assumptions are stated below.
We first visualise in Fig. 2 (right) a typical shape of \(\mathrm{Gain}_{\gamma}\) for SGD and GD trained with large stepsizes. We see that GD and SGD indeed lead to different shapes of \(\mathrm{Gain}_{\gamma}\). Importantly, for GD, the magnitude of \(\mathrm{Gain}_{\gamma}\) is higher for coordinates in the support of \(\beta_{\ell_{1}}^{*}\). This is undesirable: as discussed above, the coordinates with high gain magnitude are adversely weighted in the asymptotic limit of \(\psi_{\mathbf{\alpha}_{\infty}}\). This explains the observation that GD in this regime has bad sparse recovery guarantees in spite of the small scale of \(\mathbf{\alpha}_{\infty}\).
The **shape** of \(\text{Gain}_{\gamma}\) is determined by the sum of the squared gradients \(\sum_{k}\nabla\mathcal{L}_{\mathcal{B}_{k}}(\beta_{k})^{2}\), and in particular by the degree of heterogeneity among the coordinates of this sum. Precisely analysing this sum over the whole trajectory of the iterates \((\beta_{k})_{k}\) is technically out of reach. However, we empirically observe for the trajectories shown in Fig. 2 that the shape is largely determined within the first few iterates and that the shape of the whole sum is close to that of \(\mathbb{E}[\nabla\mathcal{L}_{\mathcal{B}_{0}}(\beta_{0})^{2}]\). We formalise this observation below.
**Observation 1**.: \(\sum_{k}\nabla\mathcal{L}_{\mathcal{B}_{k}}(\beta_{k})^{2}\overset{\infty}{\propto}\mathbb{E}[\nabla\mathcal{L}_{\mathcal{B}_{0}}(\beta_{0})^{2}]\,.\)__
Having made this observation, and to further understand the effects of stochasticity and of the stepsize on the shape of \(\text{Gain}_{\gamma}\), we analyse a noiseless sparse recovery problem under the standard Assumption 1 (Candes et al., 2006) and, as is common in the sparse recovery literature, under Assumption 2 on the inputs.
**Assumption 1**.: _There exists an \(s\)-sparse ground truth vector \(\beta^{\star}_{\text{sparse}}\) where \(s\) verifies \(n=\Omega(s\ln(d))\), such that \(y_{i}=\langle\beta^{\star}_{\text{sparse}},x_{i}\rangle\) for all \(i\in[n]\)._
**Assumption 2**.: _There exists \(\delta,c_{1},c_{2}>0\) such that for all \(s\)-sparse vectors \(\beta\), there exists \(\varepsilon\in\mathbb{R}^{d}\) such that \((X^{\top}X)\beta=\beta+\varepsilon\) where \(\|\varepsilon\|_{\infty}\leqslant\delta\|\beta\|_{2}\) and \(c_{1}\|\beta\|_{2}^{2}\mathbf{1}\leqslant\frac{1}{n}\sum_{i}x_{i}^{2}\langle x _{i},\beta\rangle^{2}\leqslant c_{2}\|\beta\|_{2}^{2}\mathbf{1}\)._
The first part of Assumption 2 closely resembles the classical restricted isometry property (RIP) and is relevant for GD, while the second part is relevant for SGD. Such an assumption is not restrictive and holds with high probability for Gaussian inputs \(\mathcal{N}(0,\sigma^{2}I_{d})\) (see Lemma 9 in Appendix). Based on Observation 1, we analyse the shape of the (stochastic) gradient at initialisation. For GD and SGD respectively, it writes as follows, where \(g_{0}=\nabla\mathcal{L}_{i_{0}}(\beta_{0})^{2}\) and \(i_{0}\sim\mathrm{Unif}([n])\):
\[\nabla\mathcal{L}(\beta_{0})^{2}=[X^{\top}X\beta^{\star}]^{2}\,,\ \ \mathbb{E}_{i_{0}}[g_{0}]=\frac{1}{n}\sum_{i}x_{i}^{2}\langle x_{i},\beta^{ \star}\rangle^{2}.\]
The following proposition then shows that while the initial stochastic gradients of SGD are homogeneous, this is not the case for GD.
**Proposition 3**.: _Under Assumption 2, the squared full-batch gradient and the expected stochastic gradient at initialisation satisfy, for some \(\varepsilon\) verifying \(\|\varepsilon\|_{\infty}\ll\left\|\beta^{\star}_{\text{sparse}}\right\|_{\infty}^{2}\):_
\[\nabla\mathcal{L}(\beta_{0})^{2}=(\beta^{\star}_{\text{sparse}})^{2}+\varepsilon\,, \tag{14}\] \[\mathbb{E}_{i_{0}}[\nabla\mathcal{L}_{i_{0}}(\beta_{0})^{2}]=\Theta\Big{(}\|\beta^{\star}_{\text{sparse}}\|_{2}^{2}\,\mathbf{1}\Big{)}\,. \tag{15}\]
**The gradient of GD is heterogeneous.** Since \(\beta^{\star}_{\text{sparse}}\) is sparse by definition, from Eq. (14) we deduce that \(\nabla\mathcal{L}(\beta_{0})\) is heterogeneous and takes larger values on the support of \(\beta^{\star}_{\text{sparse}}\). Along with Observation 1, this means that \(\text{Gain}_{\gamma}\) **has much larger values on the support of \(\beta^{\star}_{\text{sparse}}\)**. The corresponding weighted \(\ell_{1}\)-norm therefore has bigger weights penalising the coordinates which belong to the support of \(\beta^{\star}_{\text{sparse}}\), which is harmful for the recovery of \(\beta^{\star}_{\text{sparse}}\) (as explained in Example 1, Appendix D).
**The stochastic gradient of SGD is homogeneous.** On the contrary, from Eq. (15), we have that the initial stochastic gradients are homogeneous, leading to a weighted \(\ell_{1}\)-norm where the weights are roughly balanced. The corresponding weighted \(\ell_{1}\)-norm is therefore close to the uniform \(\ell_{1}\)-norm and the classical \(\ell_{1}\) recovery guarantees are expected.
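Both statements can be checked with a small simulation (Gaussian data, dimensions of our own choosing), comparing on-support and off-support entries of the two quantities in Proposition 3:

```python
import numpy as np

rng = np.random.default_rng(2)
n, d, s = 50, 100, 2
A = rng.standard_normal((n, d))          # raw inputs x_i as rows
beta = np.zeros(d)
beta[:s] = 1.0                           # sparse ground truth

gd_sq = ((A.T @ (A @ beta)) / n) ** 2    # [H beta]^2: shape of the squared GD gradient
sgd_sq = np.mean(A**2 * ((A @ beta) ** 2)[:, None], axis=0)  # E_i[x_i^2 <x_i, beta>^2]

on, off = beta != 0, beta == 0
print(gd_sq[on].mean() / gd_sq[off].mean())    # >> 1: heterogeneous (GD)
print(sgd_sq[on].mean() / sgd_sq[off].mean())  # O(1): roughly homogeneous (SGD)
```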
### Uncentered data
When the data is uncentered, the discussion and the conclusion for GD are somewhat different. This paragraph is motivated by the observation of Nacson et al. (2022), who notice that GD with large stepsizes helps to recover low \(\ell_{1}\)-norm solutions for uncentered data (Fig. 4). We make the following assumption on the uncentered inputs.
**Assumption 3**.: _There exist \(\mu\in\mathbb{R}^{d}\) and \(\delta,c_{0},c_{1},c_{2}>0\) such that for all \(s\)-sparse vectors \(\beta\) verifying \(\langle\mu,\beta\rangle\geqslant c_{0}\|\beta\|_{\infty}\|\mu\|_{\infty}\), there exists \(\varepsilon\in\mathbb{R}^{d}\) such that \((X^{\top}X)\beta=\langle\beta,\mu\rangle\mu+\varepsilon\) where \(\|\varepsilon\|_{2}\leqslant\delta\|\beta\|_{2}\) and \(c_{1}\langle\beta,\mu\rangle^{2}\mu^{2}\leqslant\frac{1}{n}\sum_{i}x_{i}^{2} \langle x_{i},\beta\rangle^{2}\leqslant c_{2}\langle\beta,\mu\rangle^{2}\mu^{2}\)._
Assumption 3 is not restrictive and holds with high probability for \(\mathcal{N}(\mu\mathbf{1},\sigma^{2}I_{d})\) inputs when \(\mu\gg\sigma\mathbf{1}\) (see Lemma 8 in Appendix). The following proposition characterises the initial shape of the SGD and GD gradients for uncentered data.
**Proposition 4** (Shape of the (stochastic) gradient at initialisation).: _Under Assumption 3 and if \(\left\langle\mu,\beta^{\star}_{\text{sparse}}\right\rangle\geqslant c_{0}\|\beta^{\star}_{\text{sparse}}\|_{\infty}\|\mu\|_{\infty}\), the squared full-batch gradient and the expected stochastic gradient at initialisation satisfy, for some \(\varepsilon\) satisfying \(\|\varepsilon\|_{\infty}\ll\|\beta^{\star}_{\text{sparse}}\|_{2}\):_
\[\nabla\mathcal{L}(\beta_{0})^{2}=\left\langle\beta^{\star}_{\text{sparse}},\mu\right\rangle^{2}\mu^{2}+\varepsilon\,, \tag{16}\]
\[\mathbb{E}_{i\sim\mathrm{Unif}([n])}[\nabla\mathcal{L}_{i}(\beta_{0})^{2}]= \Theta\Big{(}\langle\beta^{\star}_{\text{sparse}},\mu\rangle^{2}\mu^{2}\Big{)}\:. \tag{17}\]
In this case the initial gradients of SGD and of GD **are both homogeneous**, explaining the behaviour of gradient descent in Fig. 4 (App. A): large stepsizes help in the recovery of the sparse solution in the presence of uncentered data, as opposed to centered data. Note that for uncentered data with a mean \(\mu\in\mathbb{R}^{d}\) orthogonal to \(\beta^{\star}_{\text{sparse}}\), decentering has no effect on the recovered solution. If the support of \(\mu\) is the same as that of \(\beta^{\star}_{\text{sparse}}\), the effect is detrimental and the same discussion as in the centered-data case applies.
## 5 Edge of Stability: The Neural Point of View
In recent years it has been noticed that when training neural networks with 'large' stepsizes at the limit of divergence, GD and SGD enter the _Edge of Stability (EoS)_ regime. In this regime, as seen in Fig. 3, the iterates of GD 'oscillate' while the iterates of SGD 'fluctuate'. In this section we come back to the point of view of the neurons \(w_{k}=(u_{k},v_{k})\in\mathbb{R}^{2d}\) and make the connection between our previous results and the common understanding of the _EoS_ phenomenon for gradient descent. The question we seek to answer is: _in which case does GD enter the EoS regime, and if so, what are the consequences on the trajectory?_ We emphasise that this section aims to provide insights rather than formal statements.
We consider a small initialisation \(\alpha\) such that gradient flow converges close to the sparse interpolator \(\beta^{\star}_{\text{sparse}}=\beta_{w^{\star}_{\text{sparse}}}\). The trajectory of GD as seen in Fig. 3 (left) can be decomposed into up to 3 phases.
1. **First phase: gradient flow.** The stepsize is appropriate for the local curvature and the iterates of GD remain close to the trajectory of gradient flow. If \(\gamma<\frac{2}{\lambda_{\text{max}}(\nabla^{2}F(w^{\star}_{\text{sparse}}))}\), then the stepsize is compatible with the local curvature and the GD iterates converge; in this case GF and GD converge to the same point, as seen in Fig. 1 for small stepsizes. For larger \(\gamma>\frac{2}{\lambda_{\text{max}}(\nabla^{2}F(w^{\star}_{\text{sparse}}))}\), the iterates cannot converge and enter the oscillating phase.
2. **Second phase: oscillations.** The iterates start oscillating. The gradient of \(F\) in the vicinity of \(w^{\star}_{\text{sparse}}\) writes \(\nabla_{(u,v)}F(w)\sim(\nabla\mathcal{L}(\beta)\odot v,\nabla\mathcal{L}( \beta)\odot u)\), therefore for \(w\sim w^{\star}_{\text{sparse}}\) we have that \(\nabla_{u}F(w)_{i}\sim\nabla_{v}F(w)_{i}\sim 0\) for \(i\notin\mathrm{supp}(\beta^{\star}_{\text{sparse}})\) and the gradients roughly belong to \(\mathrm{Span}(e_{i},e_{i+d})_{i\in\mathrm{supp}(\beta^{\star}_{\text{sparse}})}\). This means that only the coordinates of the neurons \((u_{i},v_{i})\) for \(i\in\mathrm{supp}(\beta^{\star}_{\text{sparse}})\) can oscillate and similarly for \((\beta_{i})_{i\in\mathrm{supp}(\beta^{\star}_{\text{sparse}})}\).
Figure 3: **(S)GD at the _EoS_.** _Left:_ For GD, the coordinates on the support of \(\beta^{\star}_{\text{sparse}}\) oscillate and drift towards 0. _Right:_ For SGD, all the coordinates fluctuate and the iterates converge towards \(\beta^{\star}_{\text{sparse}}\).
3. **Last phase: convergence.** Due to the oscillations, the iterates gradually drift towards a region of lower curvature, where they may eventually converge. Theorem 1 enables us to understand where they converge: the coordinates of \(\beta_{k}\) that have oscillated significantly along the trajectory belong to the support of \(\beta_{\text{sparse}}^{\star}\), and therefore \(\text{Gain}_{\gamma}(i)\) becomes much larger for \(i\in\text{supp}(\beta_{\text{sparse}}^{\star})\) than for the other coordinates. Therefore, the coordinates of the solution recovered in the _EoS_ regime are heavily penalised on the support of the sparse solution. This is observed in Fig. 3 (left): the oscillations of \((\beta_{i})_{i\in\text{supp}(\beta_{\text{sparse}}^{\star})}\) lead to a gradual shift of these coordinates towards 0, hindering an accurate recovery of the solution \(\beta_{\text{sparse}}^{\star}\).
**SGD in the _EoS_ regime.** In Fig. 3 (right), for stepsizes in the _EoS_ regime, just below the non-convergence threshold, the behaviour of SGD differs from that of GD: the fluctuations occur evenly over all coordinates, leading to a uniform \(\boldsymbol{\alpha}_{\infty}\). These homogeneous fluctuations are reminiscent of label-noise SGD (Andriushchenko et al., 2022), and Pillaud-Vivien et al. (2022) showed that label-noise SGD can recover the sparse interpolator in DLNs.
**Flat minima and generalisation.** It is common knowledge that flatter minima are beneficial for generalisation, and that larger stepsizes lead to flatter minima (Nacson et al., 2022; Hochreiter and Schmidhuber, 1997). However, in our DLN case, while larger stepsizes indeed drive the solution towards a flatter minimum (Fig. 5 (left), Appendix A), as seen previously this leads to bad generalisation, illustrating the fact that generalisation is more complex than a story of flatness at the optimum.
## Conclusion
We study the effect of stochasticity along with large stepsizes when training DLNs with (S)GD. We prove convergence of the iterates and explicitly characterise the recovered solution by exhibiting an implicit regularisation problem which depends on the iterates' trajectory. We show that large stepsizes substantially change the solution recovered by (S)GD with respect to gradient flow. Surprisingly, however, the generalisation properties of GD with large stepsizes differ markedly from those of SGD: without stochasticity, the use of large stepsizes can prevent the recovery of the sparse interpolator. We also provide insights on the link between the _Edge of Stability_ regime and our results.
**Acknowledgements.** M. Even deeply thanks Laurent Massoulie for making it possible to visit Microsoft Research and the Washington state during an internship supervised by Suriya Gunasekar, the MSR Machine Learning Foundations group for hosting him, and Martin Jaggi for inviting him for a week in Lausanne at EPFL, making it possible to meet and discuss with Scott Pesme and Nicolas Flammarion. |
2305.03870 | Knowledge Transfer from Teachers to Learners in Growing-Batch
Reinforcement Learning | Standard approaches to sequential decision-making exploit an agent's ability
to continually interact with its environment and improve its control policy.
However, due to safety, ethical, and practicality constraints, this type of
trial-and-error experimentation is often infeasible in many real-world domains
such as healthcare and robotics. Instead, control policies in these domains are
typically trained offline from previously logged data or in a growing-batch
manner. In this setting a fixed policy is deployed to the environment and used
to gather an entire batch of new data before being aggregated with past batches
and used to update the policy. This improvement cycle can then be repeated
multiple times. While a limited number of such cycles is feasible in real-world
domains, the quality and diversity of the resulting data are much lower than in
the standard continually-interacting approach. However, data collection in
these domains is often performed in conjunction with human experts, who are
able to label or annotate the collected data. In this paper, we first explore
the trade-offs present in this growing-batch setting, and then investigate how
information provided by a teacher (i.e., demonstrations, expert actions, and
gradient information) can be leveraged at training time to mitigate the sample
complexity and coverage requirements for actor-critic methods. We validate our
contributions on tasks from the DeepMind Control Suite. | Patrick Emedom-Nnamdi, Abram L. Friesen, Bobak Shahriari, Nando de Freitas, Matt W. Hoffman | 2023-05-05T22:55:34Z | http://arxiv.org/abs/2305.03870v2 | # Knowledge Transfer from Teachers to
###### Abstract
Standard approaches to sequential decision-making exploit an agent's ability to continually interact with its environment and improve its control policy. However, due to safety, ethical, and practicality constraints, this type of trial-and-error experimentation is often infeasible in many real-world domains such as healthcare and robotics. Instead, control policies in these domains are typically trained offline from previously logged data or in a _growing-batch_ manner. In this setting a fixed policy is deployed to the environment and used to gather an entire batch of new data before being aggregated with past batches and used to update the policy. This improvement cycle can then be repeated multiple times. While a limited number of such cycles is feasible in real-world domains, the quality and diversity of the resulting data are much lower than in the standard continually-interacting approach. However, data collection in these domains is often performed in conjunction with human experts, who are able to label or _annotate_ the collected data. In this paper, we first explore the trade-offs present in this growing-batch setting, and then investigate how information provided by a teacher (i.e., demonstrations, expert actions, and gradient information--differentiated with respect to actions) can be leveraged at training time to mitigate the sample complexity and coverage requirements for actor-critic methods. We validate our contributions on tasks from the DeepMind Control Suite.
## 1 Introduction
Safe and reliable policy optimization is important for real-world deployments of reinforcement learning (RL). However, standard approaches to RL leverage consistent trial-and-error experimentation, where policies are continuously updated as new data is aggregated from environment interactions (Mnih et al., 2013, 2015; Silver et al., 2016; Van Hasselt et al., 2015). In this setting, agents intermittently act poorly (Ostrovski et al., 2021), often choosing to explore under-observed actions or act under substandard policies. In real-world settings, it is crucial due to ethical and safety reasons that an agent behaves above an acceptable standard during deployments. As such, recent work has focused on learning control policies either offline from previously gathered data or in a growing-batch manner, where at each deployment a fixed policy is used to gather experiential data and is updated using data aggregated from current and previous deployments (Lange et al., 2012; Agarwal et al., 2019; Levine et al., 2020; Gulcehre et al., 2021). Decoupling policy optimization from environment interaction in this fashion affords practitioners the ability to rigorously evaluate the performance and risks associated with the current policy between subsequent deployments (Gottesman et al., 2019; Thomas and Brunskill, 2016).
While learning in an offline or growing-batch manner allows for extensive pre-deployment evaluation, it is also known to hinder the agent performance (Ostrovski et al., 2021; Levine et al., 2020).
By deploying to the environment less frequently, these agents are often unable to eliminate the overestimation bias that exists with respect to out-of-distribution samples. As such, in comparison to agents trained via consistent online learning, offline and growing-batch agents are often deluded, exploring actions with overestimated value estimates (Fujimoto et al., 2019; Kumar et al., 2019; Siegel et al., 2020). Several remedies have been proposed to prevent this behavior, including preventing the value function from evaluating unseen actions (Peng et al., 2019; Wang et al., 2020; Kumar et al., 2020) or constraining the learned policy to remain close to the offline data (Nair et al., 2020; Fujimoto et al., 2019; Kumar et al., 2019). However, the performance of such agents is often limited by the quality and availability of batch data. While opportunities for additional data collection within the growing-batch setting can help alleviate these issues, such agents still tend to perform worse than their fully online counterparts due to limited coverage of the state-action space (Haarnoja et al., 2018; Liu and Brunskill, 2018). To mitigate these coverage requirements, pre-training has been extensively studied as a mechanism for obtaining good initial policies.
In this work, rather than training agents entirely from scratch, we first initialize policies in a supervised fashion via behavior cloning on teacher-provided demonstrations. Doing so naively, however, can result in a drop in performance when transitioning from pre-training to the growing batch setting, often due to poorly-initialized value functions (Uchendu et al., 2022; Kostrikov et al., 2021; Agarwal et al., 2022). To avoid this we also investigate policy regularization to promote monotonic policy improvement across growing-batch cycles. This regularization aims to keep the current learned policy close to its pre-trained counterpart or to estimated policies from previous deployments during early periods of the agent's learning process. Additionally, in real-world domains, deployments are typically done in conjunction with human experts, who are able to annotate the collected data with additional information that can be leveraged at training-time to further improve policy optimization. These annotations can come in the form of demonstrations, alternative actions, or (weak) gradient information that are differentiated with respect to actions (and not parameters which are impractical to provide from an external system). In this work, we train agents in a growing-batch setting and investigate the benefits of incorporating realistic external information provided by _teachers_ (e.g., another RL agent, a human, or a well-defined program) during policy improvement (see Figure 1). We investigate the use of transition-specific annotations from teachers, specifically considering (1) teacher-provided actions, where, for select transitions chosen via a value-based filter, we constrain the learned policy to remain close to the teacher's suggested action, and provide direct corrective information in the form of (2) teacher-provided critic gradients (differentiated with respect to actions), where the learned policy is nudged toward learning actions that maximize the teacher's internal (presumably unknown) representation of the value function.
Figure 1: Growing-batch RL with teacher annotations. The policy and (optionally) critic networks are first initialized from offline data (Sections 3.2-3.3); in our work we use an offline dataset of teacher demonstrations. Within each cycle, a fixed policy \(\pi_{\theta_{k-1}}\) is deployed within the environment and used to gather data in the replay buffer \(\mathcal{D}_{k}\). Data from previous cycles are then aggregated (i.e., \(\cup_{i=1}^{k}\mathcal{D}_{i}\)) along with per-transition teacher annotations, and used to update the policy and critic networks, \(\pi_{\theta_{k}}\) and \(Q_{\phi_{k}}\), respectively. The forms of annotations are discussed in Section 3.4.
We evaluate our proposed approach on a set of continuous control tasks selected from the DeepMind Control Suite using actor-critic agents trained under distributional deterministic policy gradients. Our results suggest that effective policy regularization paired with teacher-provided annotations offers a mechanism to improve the sample efficiency of growing-batch agents, while ensuring safe and reliable policy improvement between cycles.
## 2 Background & Growing-batch Reinforcement Learning
Interactions between an agent and the environment can be modeled as an infinite-horizon Markov decision process (MDP) \((\mathcal{S},\mathcal{A},\mathbb{P},R,\gamma)\) where at each time step \(t\) the agent observes the state of the environment \(s_{t}\in\mathcal{S}\), executes an action \(a_{t}\in\mathcal{A}\) according to a deterministic policy \(\pi\), and receives a reward \(r_{t}=R(s_{t},a_{t})\). The goal of RL is to learn an optimal policy \(\pi^{*}\) that maximizes the expected future discounted reward, or return, \(G_{\pi^{*}}=\max_{\pi}G_{\pi}=\max_{\pi}\mathbb{E}\left[\sum_{t=0}^{\infty}\gamma^{t}r_{t}|\pi\right]\) that it receives from the environment, where \(\gamma\in[0,1]\) is a discount factor.
We focus here on off-policy RL algorithms as these can learn from policies different than the current behavior policy, which is necessary when learning from older deployment cycles and expert data. We consider _growing-batch_ settings where data generation is decoupled from policy improvement; that is, scenarios where updates to the policy are made only after large batches of experiential data are collected from the environment. We will also focus on actor-critic algorithms, where \(\Psi_{k}=\{\phi_{k},\theta_{k}\}\) represents the parameters of the agent's critic and policy networks, respectively. Under the growing-batch setup, agent interaction with the environment and parameter updates are performed within structures we call cycles. Within a given cycle \(k\), an agent interacts with the environment using the fixed policy \(\pi_{\theta_{k-1}}\) and stores each transition within a replay buffer we denote as \(\mathcal{D}_{k}\). Transitions gathered from previous cycles \(\cup_{i=1}^{k-1}\mathcal{D}_{i}\) are then aggregated with \(\mathcal{D}_{k}\) and used to generate an updated \(\Psi_{k}\), i.e., an update of the agent's model parameters learned via an off-policy learning algorithm. This process is generally repeated for a fixed number of cycles \(N\) or until an optimal decision-making policy is retrieved.
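The following sketch shows this cycle structure schematically; the one-dimensional environment, the initial policy, and the placeholder learner are all stand-ins of ours, with the learner slot to be filled by off-policy actor-critic updates such as those sketched later in this section.

```python
import numpy as np

rng = np.random.default_rng(0)

def env_step(state, action):
    """Stub 1-D environment: the reward favours actions that cancel the state."""
    reward = -float((action + state) ** 2)
    next_state = float(np.clip(state + action + 0.1 * rng.standard_normal(), -3, 3))
    return reward, next_state

def deploy(policy, num_steps):
    """Gather a whole batch of transitions under a *fixed* policy."""
    buffer, state = [], 0.0
    for _ in range(num_steps):
        action = policy(state)
        reward, next_state = env_step(state, action)
        buffer.append((state, action, reward, next_state))
        state = next_state
    return buffer

def update(policy, data):
    """Placeholder learner: in practice, run off-policy actor-critic updates here."""
    return policy

policy = lambda s: float(np.clip(-0.5 * s, -1.0, 1.0))  # e.g. pi_0 from BC
replay = []                                             # union of all cycles
for k in range(1, 4):                                   # N = 3 cycles
    replay += deploy(policy, num_steps=1000)            # D_k, gathered under pi_{k-1}
    policy = update(policy, replay)                     # pi_k from aggregated data
```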
While several parallels between online and growing batch RL can be drawn, our growing-batch experimental setup differs in two key aspects. Specifically, (1) the number of pre-specified cycles we explore is small, while (2) the size of each newly gathered batch dataset is large. This distinction is important when considering domains or areas of application such as clinical trials or deployments of self-driving vehicles, where performing nearly real-time continual parameter updates after collecting only a few transitions is impractical or, perhaps, infeasible due to safety, resource, and/or implementation constraints.
Under the actor-critic approach that we focus on in this work, we estimate a parametric policy \(\pi_{\theta}\) by maximizing the expected return \(\mathcal{J}(\theta)=\mathbb{E}_{(s,a)\sim\mathcal{D}}\left[Q^{\pi_{\theta}}(s,a)\right],\) where \(Q^{\pi_{\theta}}\) is the associated value function. For continuous control tasks, \(\mathcal{J}(\theta)\) can be directly optimized by performing parameter updates on \(\theta\) with respect to the deterministic policy gradient:
\[\nabla\mathcal{J}(\theta)=\mathbb{E}_{s\sim\mathcal{D}}\left[\nabla_{\theta} \pi_{\theta}\nabla_{a}Q_{\phi}(s,a)\big{|}_{a=\pi_{\theta}(s)}\right]; \tag{1}\]
see (Silver et al., 2014) for further details on computing this gradient in practice.
As is commonly done, we update the critic \(Q_{\phi}\) by minimizing the squared Bellman error, represented under the following loss:
\[\mathcal{L}(\phi)=\mathbb{E}_{(s,a)\sim\mathcal{D}}\left[\left(Q_{\phi}(s,a)- \left(\mathcal{T}_{\pi_{\theta^{\prime}}}Q_{\phi^{\prime}}\right)(s,a)\right) ^{2}\right], \tag{2}\]
where we use separate target policy and value networks (i.e., represented under \(\theta^{\prime}\) and \(\phi^{\prime}\)) to stabilize learning. For our set of experiments, we also make use of \(n\)-step returns in the TD-error from equation 2 and, rather than directly learning the value function, we use a distributional value function \(Z_{\phi}(s,a)\) whose expected value \(Q_{\phi}(s,a)=\mathbb{E}\left[Z_{\phi}(s,a)\right]\) forms our value estimate. For further details see (Barth-Maron et al., 2018), however alternative policy optimization methods could be used within our growing batch framework.
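As a simplified sketch of this critic update (a scalar one-step TD target rather than the distributional, \(n\)-step variant, with network sizes and stand-in replay data of our own choosing):

```python
import torch
import torch.nn as nn

obs_dim, act_dim, gamma = 8, 2, 0.99
critic = nn.Sequential(nn.Linear(obs_dim + act_dim, 64), nn.ReLU(), nn.Linear(64, 1))
target_critic = nn.Sequential(nn.Linear(obs_dim + act_dim, 64), nn.ReLU(), nn.Linear(64, 1))
target_policy = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(),
                              nn.Linear(64, act_dim), nn.Tanh())
target_critic.load_state_dict(critic.state_dict())
opt = torch.optim.Adam(critic.parameters(), lr=1e-3)

# Stand-in replay batch (s, a, r, s').
s, a = torch.randn(32, obs_dim), torch.randn(32, act_dim)
r, s_next = torch.randn(32, 1), torch.randn(32, obs_dim)

with torch.no_grad():                           # bootstrapped TD target
    a_next = target_policy(s_next)
    td_target = r + gamma * target_critic(torch.cat([s_next, a_next], dim=-1))
q = critic(torch.cat([s, a], dim=-1))
loss = ((q - td_target) ** 2).mean()            # squared Bellman error, eq. (2)
opt.zero_grad(); loss.backward(); opt.step()
```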
## 3 Teacher-Guided Growing-batch RL
### Estimating Safe, Reliable, and Sample-efficient Policies
Real-world applications of RL typically have significant safety and ethical requirements that highlight the need for both (1) a good initialization of \(\pi_{0}\) and (2) safe and, ideally, monotonic improvement of successive policies \(\pi_{k}\) for each growing-batch cycle \(k\geq 1\). Unfortunately, this is difficult to achieve when naively using value-based RL methods. For instance, while \(\pi_{0}\) can be initialized from a batch of expert demonstrations using imitation learning techniques such as behavioral cloning (BC), a poorly-initialized state-action value function \(Q^{\pi_{0}}\) can lead to a significant drop in performance when training a subsequent policy \(\pi_{1}\), regardless of the initial quality of \(\pi_{0}\) (Uchendu et al., 2022). While proper policy initialization can help reduce this drop, agents continually learning in a trial-and-error manner nevertheless risk encountering substandard intermittent policies (i.e., \(G_{\pi_{k}}\leq G_{\pi_{k-1}}\)).
To address these challenges, we make use of techniques that mitigate the risk of policy deterioration between cycles and hasten the learning process of conventional RL agents by leveraging external information at initialization and training time. Our approach takes advantage of queryable embodiments of knowledge we refer to as _teachers_. Teachers can provide well-informed knowledge of the task at hand in the form of the following:
1. demonstrations of the given RL process, or
2. training-time annotations (i.e., teacher-provided actions and gradient information--differentiated with respect to actions) of agent-generated transitions.
In what follows, we explore how agents can leverage these forms of knowledge for sample-efficient learning of policies within a stable and monotone training process.
### Policy Initialization
Rather than training agents using a randomly-initialized policy, we leverage demonstrations gathered from teachers, as is common in real-world RL. We assume transitions gathered by a teacher are of the format \(\{(s,a,r,s^{\prime})\}\) and are stored within the dataset \(\mathcal{D}_{\text{T}}\). Using batches of these transitions, we train an initial policy \(\pi_{0}\) via behavioral cloning (BC)
\[\pi_{0}=\arg\min_{\theta}\Bigg{(}\ \ \frac{1}{|\mathcal{D}_{\text{T}}|}\sum_{( s,a)\sim\mathcal{D}_{\text{T}}}\|\pi_{\theta}(s)-a\|_{2}^{2}\Bigg{)}.\]
We then initialize the value function \(Q^{\pi_{0}}\) by performing policy evaluation with respect to \(\pi_{0}\) (i.e., minimize \(\mathcal{L}(\phi)\) in equation 2 using \(\pi_{0}\)).
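A minimal sketch of this initialisation step, assuming a stand-in teacher dataset in place of real demonstrations:

```python
import torch
import torch.nn as nn

obs_dim, act_dim = 8, 2
policy = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(),
                       nn.Linear(64, act_dim), nn.Tanh())
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

# Stand-in teacher dataset D_T of (state, action) pairs.
states = torch.randn(512, obs_dim)
teacher_actions = torch.rand(512, act_dim) * 2 - 1

for epoch in range(100):                       # behavioral cloning on D_T
    bc_loss = ((policy(states) - teacher_actions) ** 2).sum(-1).mean()
    opt.zero_grad(); bc_loss.backward(); opt.step()
```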
Policies initialized using BC obtain baseline performance comparable to the teacher's for observations that closely resemble those gathered within the batch dataset \(\mathcal{D}_{T}\). However, for out-of-distribution observations, BC-initialized policies tend to perform poorly due to compounding errors from the selection of sub-optimal actions (Ross et al., 2010). Furthermore, due to over-estimation bias for out-of-distribution observation-action pairs, optimizing the BC-initialized \(\pi_{0}\) by taking a few policy gradient steps according to equation 1 can essentially erase the performance gain from BC. This phenomenon motivates the need for effective policy regularization procedures.
### Policy Regularization
To avoid forgetting the performance gain of the initialized policy during each subsequent policy optimization step, we explore augmenting the deterministic policy gradient loss with a _regularizer_, \(\mathbb{E}_{s\sim\rho^{*}}\|\pi_{0}(s)-\pi_{\theta_{1}}(s)\|_{2}^{2}\). We generalize this for all cycles and obtain the following policy loss:
\[\mathcal{J}(\theta_{k})=\mathcal{J}_{\text{D4PG}}(\theta_{k})+\ \lambda\underbrace{\mathbb{E}_{s\sim\rho^{*}}\|\pi_{0}(s)-\pi_{\theta_{k}}(s)\|_{2}^{2}}_{\text{BC regularizer}}, \tag{3}\]
where \(\lambda\) is a regularization parameter. As each subsequent policy \(\pi_{\theta_{k}}\) is trained, the BC-regularizer ensures that the learned policy remains close to the BC-initialized policy \(\pi_{0}\) according to the strength of the regularization parameter \(\lambda\). Due to the deterministic, continuous output of our policy we base this regularizer on the Euclidean distance between policy outputs; however, for policies with stochastic outputs it would also be possible to make use of the Kullback-Leibler (KL) divergence \(D_{\text{KL}}(\pi_{0}\,||\,\pi_{\theta_{k}})\) between \(\pi_{0}\) and \(\pi_{\theta_{k}}\).
However, while the BC-initialized policy serves as a good starting point, staying too close to it prevents the agent from improving on the expert behavior. We thus decay the strength of the regularizer over subsequent deployments by incorporating an exponential decay weight \(\alpha\in(0,1)\) into the objective function in equation 3:
\[\mathcal{J}(\theta_{k})=(1-\alpha)\ \mathcal{J}_{\text{D4PG}}(\theta_{k})+\ \alpha\ \mathbb{E}_{s\sim\rho^{*}}\|\pi_{0}(s)-\pi_{\theta_{k}}(s)\|_{2}^{2}. \tag{4}\]
By treating \(\alpha\) as a function of the total number of stochastic gradient (SGD) steps taken, we introduce a learning process that initially constrains the learned policy to remain close to the BC-initialized policy and gradually transitions to solely learning from the D4PG loss component. By choosing an appropriate rate parameter for \(\alpha\), this form of regularization allows the learned policy to supersede the performance of \(\pi_{0}\) as more data is gathered.
We also explore an alternative between-cycle regularizer that ensures that the learned policy \(\pi_{\theta_{k}}\) stays close to the previously policy \(\pi_{\theta_{k-1}}\), which allows the policy to adapt but slows the rate at which it does so:
\[\mathcal{J}(\theta_{k})=(1-\alpha)\ \mathcal{J}_{\text{D4PG}}(\theta_{k})+\ \alpha\ \mathbb{E}_{s\sim\rho^{*}}\|\pi_{\theta_{k-1}}(s)-\pi_{\theta_{k}}(s)\|_{2}^{2}. \tag{5}\]
As previously stated, our regularizer takes the form of the Euclidean distance between successive policies, but it could instead be represented using the KL divergence for continuous, stochastic policies, as done in Maximum a posteriori Policy Optimization (MPO) (Abdolmaleki et al., 2018).
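A sketch of the decayed regularized update of equation 4 follows (swap in \(\pi_{\theta_{k-1}}\) for \(\pi_{0}\) to obtain the between-cycle variant of equation 5); the decay rate, network sizes, and random batches are illustrative choices of ours:

```python
import math
import torch
import torch.nn as nn

obs_dim, act_dim = 8, 2
def mlp():
    return nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(),
                         nn.Linear(64, act_dim), nn.Tanh())

policy, pi_0 = mlp(), mlp()        # pi_0: frozen BC-initialized policy
critic = nn.Sequential(nn.Linear(obs_dim + act_dim, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(policy.parameters(), lr=1e-4)
for p in pi_0.parameters():
    p.requires_grad_(False)

for n in range(1000):
    alpha = math.exp(-1e-3 * n)    # exponential decay in learner steps (cf. Fig. 2)
    s = torch.randn(32, obs_dim)   # stand-in for states from the replay buffer
    a = policy(s)
    j_d4pg = -critic(torch.cat([s, a], dim=-1)).mean()     # maximize Q(s, pi(s))
    reg = ((pi_0(s) - a) ** 2).sum(-1).mean()              # BC anchor, eq. (4)
    loss = (1 - alpha) * j_d4pg + alpha * reg
    opt.zero_grad(); loss.backward(); opt.step()
```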
### Teacher-Provided Annotations
While BC can provide good a starting policy on observed transitions, agents still run the risk of learning sub-optimal policies due to insufficient state-action coverage. We investigate the use of teachers to provide transition-specific annotations to improve sample complexity and facilitate optimization of the current policy \(\pi_{\theta_{k}}\). The forms of possible annotations depend on the representation and accessibility of the teacher (i.e., our ability to query the teacher for advice). In our experiments, we represent our teacher as an RL agent with a deterministic policy that obtains either sub-optimal or optimal performance on a given task. The forms of annotations we consider include teacher-provided actions (i.e., \(a^{*}\sim\pi^{*}(s)\)) and critic gradients (i.e., \(\nabla_{a}Q^{*}(s,a)\)). Generally, teacher-provided actions function as expert demonstrations, while teacher-provided gradients serve as direct corrections for decisions made by the current policy. The practicality and accessibility of each of these forms of annotations are heavily dependent on the overarching use case and intended application at hand.
**Teacher-Action Annotations.** We first consider an annotation mechanism similar to DAgger (Ross et al., 2010). In imitation learning, DAgger introduces a mechanism for querying expert-suggested actions. These actions are used both at acting and training time to reduce the risk of compounding errors due to an induced distributional shift. Unlike DAgger, we only query the teacher's action during training time to provide transition-specific annotations for optimizing a reinforcement learning-based objective.
Specifically, we explore augmenting our agent's policy loss \(\mathcal{J}_{\text{DAPG}}\) to include an \(\ell_{2}\)-loss component that evaluates the difference between the student's policy \(\pi_{\theta_{k}}\) and the teacher-suggested action \(a^{*}\) for each transition in \(\mathcal{D}\). This constrains the agent's policy to remain close to the suggestions provided by the teacher. Furthermore, to encourage monotonic policy improvement between successive cycles of data aggregation and policy optimization, we utilize a between-cycle policy regularizer:
\[\mathcal{J}(\theta_{k})=(1-\alpha)\ \mathcal{J}_{\text{D4PG}}(\theta_{k})+\ \mathbb{E}_{s\sim\rho^{*}}\Big{[}\beta_{k}\|\pi_{\theta_{k-1}}(s)-\pi_{\theta_{k}}(s)\|_{2}^{2}+\alpha\|a^{*}-\pi_{\theta_{k}}(s)\|_{2}^{2}\Big{]}. \tag{6}\]
Figure 2: Regularization parameter \(\alpha_{n}\in(0,1)\) as a function of \(n\) learner steps taken, evaluated for various exponential-decay rates.
Under this objective, we also incorporate the exponential weight parameter \(\alpha\in(0,1)\) previously introduced in equation 4. Thus, as \(\alpha\) decreases, our agent transitions from learning a policy that stays close to the teacher-suggested action to one that solely relies on the agent's conventional policy loss. Choosing an appropriate rate parameter for \(\alpha\) enables the learned policy \(\pi_{\theta_{k}}\) to down-weight its reliance on the teacher's suggestions as more data is gathered.
In scenarios where evaluation is costly, choosing an appropriate parameter for \(\alpha\) may be difficult. As such, we explore an adaptive procedure for selecting which policy loss component (i.e., D4PG vs. DAgger-like) to minimize on a per-transition basis. To do so, we introduce a Q-filter \(\delta(s)\) and re-construct the deterministic policy gradient as
\[\nabla\mathcal{J}(\theta_{k})=\mathbb{E}_{s\sim\rho^{*}}\Big{[}\underbrace{\big{[}1-\delta(s)\big{]}\,\nabla_{\theta_{k}}\pi_{\theta_{k}}\nabla_{a}Q_{\phi_{k}}(s,a)\big{|}_{a=\pi_{\theta_{k}}(s)}}_{\text{D4PG component}}+\ \underbrace{\delta(s)\,\nabla_{\theta_{k}}\|a^{*}-\pi_{\theta_{k}}(s)\|_{2}^{2}}_{\text{Teacher-action component}}\Big{]}, \tag{7}\]
where \(\delta(s)=1\big{[}Q_{\phi_{k}}(s,a^{*})\geq Q_{\phi_{k}}(s,\pi_{\theta_{k}}(s ))\big{]}\) is an indicator function. Under this filtered approach, the agent's policy is optimized using the DAgger-like component only for transitions where the teacher-suggested actions obtain a value \(Q_{\phi_{k}}\) that is larger than the value of the current policy \(\pi_{\theta_{k}}\). This approach bears a resemblance to the optimization procedure introduced in the critic regularized regression (CRR) algorithm where advantage-weights are used to filter out actions that significantly deviate from the training distribution.
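A per-transition version of this filtered update can be sketched as follows in PyTorch; `policy` and `critic` are hypothetical module handles, and in practice the critic parameters would be frozen when taking this policy step.

```python
import torch

def q_filtered_policy_loss(policy, critic, states, teacher_actions):
    """Eq. (7): switch per transition between the deterministic policy-gradient
    term and the DAgger-like imitation term via the Q-filter delta(s)."""
    actions = policy(states)
    q_agent = critic(states, actions).squeeze(-1)
    with torch.no_grad():
        q_teacher = critic(states, teacher_actions).squeeze(-1)
    # delta(s) = 1[Q(s, a*) >= Q(s, pi(s))]; no gradient flows through the mask.
    use_teacher = (q_teacher >= q_agent.detach()).float()
    dpg_term = -q_agent                                   # minimizing -Q ascends Q
    imitation = ((teacher_actions - actions) ** 2).sum(dim=-1)
    return ((1 - use_teacher) * dpg_term + use_teacher * imitation).mean()
```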
Teacher-Gradient Annotations.Additionally, for continuous control tasks, we consider directly incorporating teacher-provided gradient information \(G_{a}(s,a)\) differentiated with respect to actions \(a\). These gradients can be interpreted as the directions in which the agent should adjust their actions to enhance their current policy. This approach differs from using parameter-based gradients as corrective feedback since such information is challenging for an external system or teacher to supply. We envision several examples for using human feedback to estimate action-specific gradients. Some include:
1. A reward model that is learned from expert human feedback and differentiated with respect to actions, and
2. Aggregated human experiences that each provide (1) a preferential ordering suggesting directions each action should be moved towards and (2) a magnitude, indicating how much to move in the preferred direction.
In both examples, the gradient information prioritizes actions that aim to maximize the teacher's internal notion of a reward or value function (e.g., \(G_{a}(s,a)=\nabla_{a}Q^{*}(s,a)\), where \(Q^{*}\) is the teacher's value function). In general, we hypothesize that relying on teacher-provided gradients during the early periods of an agent's learning process can circumvent risks associated with learning from a poorly initialized or potentially over-estimated value function.
As a first step toward leveraging teacher-provided gradients, we augment the deterministic policy gradient introduced in equation 1 to include a component \(G_{a}(s,a)\nabla_{\theta_{k}}\pi_{\theta_{k}}(s)\) that weights the current policy gradient according to gradient information provided by the teacher. As
Figure 3: The DeepMind Control Suite environments used in our experiments.
such, we represent the augmented deterministic policy gradient as
\[\nabla\mathcal{J}(\theta_{k})=\mathbb{E}_{s\sim\mathcal{D}}\Big{[}(1-\alpha)\, \nabla_{\theta_{k}}\pi_{\theta_{k}}\nabla_{a}Q_{\phi_{k}}(s,a)\big{|}_{a=\pi_{ \theta_{k}}(s)}+\alpha\underbrace{G_{a}(s,\pi_{\theta_{k}}(s))\nabla_{\theta_{ k}}\pi_{\theta_{k}}(s)}_{\text{Teacher-provided}}\Big{]} \tag{8}\]
and incorporate the exponential decay weight \(\alpha\) that decreases as a function of the number of gradient steps taken. Thus, the agent leverages gradient information from the teacher primarily during the early stages of its learning process.
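A minimal PyTorch sketch of the augmented update in equation 8 is given below, assuming the teacher exposes a differentiable value function `teacher_q` (an assumption for illustration; any source of action gradients \(G_{a}\) would serve).

```python
import torch

def gradient_annotated_policy_loss(policy, critic, teacher_q, states, alpha):
    """Eq. (8): mix the agent's own deterministic policy gradient with the
    teacher's action gradient G_a(s, a) = grad_a Q*(s, a)."""
    actions = policy(states)
    agent_term = -(1 - alpha) * critic(states, actions).mean()
    # Evaluate G_a at the current policy's actions, treating it as a constant
    # weight so the resulting gradient is alpha * G_a * grad_theta pi(s).
    a_eval = actions.detach().requires_grad_(True)
    g = torch.autograd.grad(teacher_q(states, a_eval).sum(), a_eval)[0]
    teacher_term = -alpha * (g * actions).sum(dim=-1).mean()
    return agent_term + teacher_term
```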
## 4 Experiments
We evaluate our series of methods on a set of continuous control tasks with observable states in the growing-batch setting. A number of these tasks also involve multidimensional action spaces. Our results suggest that effective policy regularization paired with teacher-provided annotations works well in these challenging domains and serves to improve the sample efficiency of deterministic policy gradient algorithms, while encouraging monotonic policy improvement between cycles. Across all experiments, the total numbers of actor steps and learner steps are identical and fixed at specific values. Additionally, the number of cycles within each experiment can vary, with more cycles adding to the diversity of data collected. Further details on our experimental setup are provided in Appendix A.2.
### Environments
The DeepMind Control Suite (DCS) is a set of continuous control tasks used as a benchmark to assess the performance of continuous control algorithms. We consider the following six MuJoCo environments from the DCS: _cartpole-balance_, _cartpole-swingup_, _finger-spin_, _cheetah-run_, _walker-run_, and _pendulum-swingup_. An illustration of the environments is given in Figure 3. For our set of experiments, the dimensionality of the action space is low (i.e., \(\leq 6\) degrees of freedom). Additionally, a feature-based observation space is considered (i.e., no pixel-based observations under partial observability).
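For reference, the snippet below rolls out a random policy on one of these tasks using the publicly available `dm_control` package; the domain/task naming follows that package's conventions.

```python
import numpy as np
from dm_control import suite

env = suite.load(domain_name="cartpole", task_name="swingup")
spec = env.action_spec()
time_step = env.reset()
episode_return = 0.0
while not time_step.last():
    # Uniform random actions within the bounded action spec.
    action = np.random.uniform(spec.minimum, spec.maximum, size=spec.shape)
    time_step = env.step(action)
    episode_return += time_step.reward
print(f"random-policy episode return: {episode_return:.1f}")
```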
### Investigated Approaches
We evaluate agents on the set of control tasks mentioned in section 4.1 and highlight the performance benefit of our proposed approaches (i.e., policy regularization and training-time annotations) against naively pre-training with behavioral cloning. Demonstrations from teachers performing each task were aggregated into datasets intended for pre-training. Each dataset contains 1M transitions from 1K episodes generated entirely by teachers. Additionally, annotations are queried from this same set of teachers at training time. Teachers were snapshots of RL agents that obtained either expert-level or mid-tier performance for each task. Expert-level snapshots achieved a per-episode return
Figure 4: Episode returns averaged over all tasks (using 5 random seeds per task) for a varying number of cycles (under a fixed number of actor steps) comparing BC initialization only vs. BC-policy regularizer. The shaded regions represent the standard deviation across all tasks and random seeds. The dashed line represents the expert teacher’s average baseline performance across all tasks.
of roughly \(900\) (averaged over all tasks), and mid-tier snapshots achieved a per-episode return of roughly \(600\). To account for stochasticity in the training process, we repeat each training run \(5\) times with different random seeds.
During training, the agent's policy is evaluated every \(100\) learner steps in an identical environment under a different initialization. These results are then averaged across tasks and are displayed within each figure representing our main results. Note that pre-training with expert data does not guarantee an expert-level policy because key transitions that helped the expert agent achieve a high-reward state may be under-observed within the demonstration dataset (e.g., in cart-pole swing-up, an expert agent is able to immediately swing the pole up and then keep the pole upright for the vast majority of the episode).
Policy Regularization.In the growing-batch setting, we compare BC pre-training with and without policy regularization. As shown in Figure 4, BC pre-training helps the agent achieve decent baseline performance across the environments on average. However, as the agent proceeds to learn according to the D4PG objective, we observe a stark initial decline in performance, as anticipated in Section 3.2. Policy regularization helps mitigate this issue by constraining the learned policy to remain close to the BC-initialized policy. Here a regularization strength of 0.5 was used for the BC regularizer. While policy regularization performs well, we notice that (1) the drop in performance after pre-training is not completely eliminated and that (2) too much regularization prevents the agent from surpassing baseline performance as the number of cycles (and thus the number of times the agent is able to interact with the environment) increases.
To address these challenges, we examine the benefit of using exponentially decreasing regularization weights. Under this approach, we set \(\alpha=1\) in equation 4 and gradually decrease it following an exponential decay as the number of learner steps increases. The parameter \(\alpha\) eventually reaches \(0\) once the allotted number of learner steps has been taken. In Figure 5, we notice that the dip in performance after pre-training is no longer present. As such, we observe that constraining the agent's policy to remain close to the BC initialized policy for the early stages of its learning process can help ensure a monotonic performance increase after pre-training. Additionally, as the number of cycles increases, the agent's policy is able to surpass the baseline performance for faster decay rates. While this approach works well, it requires hyper-parameter tuning to find a suitable value for the exponential decay rate.
Teacher-Action Annotations.We incorporate teacher-action annotations provided at training-time (paired with policy regularization) to examine how additional external information serves to further improve policy improvement between cycles. For the teacher-action policy loss in equation 6, the between-cycle regularization parameter \(\beta_{k}\) was set to 5, while the decay rate for \(\alpha\) was separately chosen to be 1 and 5. Figure 6(a) highlights that, throughout the learning process, using teacher-action annotations out-performs solely relying on policy regularization after pre-training. This insight is observed for both choices of \(\alpha\). However, the teacher-action policy loss exhibits
Figure 5: Episode return averaged over all tasks (using 5 random seeds per task) for a varying number of cycles (under a fixed number of actor steps) comparing the baseline with BC initialization only vs. the BC-policy regularizer with \(\text{decay rate}=1\) and \(\text{decay rate}=5\). The shaded regions represent the standard deviation across all tasks and random seeds. The dashed line represents the expert teacher’s average baseline performance across all tasks.
noticeable sensitivity to the regularization parameter \(\alpha\) that is used. Specifically, we notice that faster decay rates can lead to a severe decline of the initial performance gain achieved when prioritizing learning from teacher-provided actions. Conversely, a slower decay rate appears to work well in this set of environments and allows the agent to obtain expert-level performance within 4 cycles. A possible explanation for this phenomenon is that mimicking the teacher's behaviors reduces the need for exploration, which in turn allows the RL agent to (in the background) iteratively improve its critic network. This allows the agent to avoid taking actions that are enforced by an ill-informed (or perhaps overly-optimistic) critic.
To eliminate the dependency on regularization parameters, we examine the performance of utilizing the Q-filter for teacher-provided annotations as expressed in equation 7. Recall that the Q-filter adaptively determines whether to learn to mimic the teacher's actions or, rather, to optimize the agent's own selected action. While utilizing the Q-filter initially under-performs in comparison to incorporating exponentially decreasing weights, it eventually reaches expert performance in a monotonic fashion without the need for hyper-parameter tuning.
**Teacher-Gradient Annotations.** As previously mentioned, we hypothesize that relying on teacher-provided gradients during early periods of policy optimization can circumvent risks associated with learning from a poorly initialized and/or over-estimated value function. Here, we compare policy regularization and teacher-action annotations with using teacher-provided gradient information during policy optimization. We set the gradient information to be \(G_{a}(s,a)=\nabla_{a}Q^{*}(s,a)\), i.e., differentiated snapshots of the teacher's internal value function evaluated with respect to the growing-batch agent's current policy. We chose this form due to its ease of use and simplicity. The decay rate for \(\alpha\) was set to 1 and 5, respectively. In Figure 6(b), we observe that, irrespective of the choice of \(\alpha\), utilizing gradient information from an expert teacher performs better than solely relying on BC-regularization, but is unable to reach the superior performance gain achieved when leveraging teacher-provided actions. Furthermore, we highlight that, even without policy regularization, leveraging teacher-provided gradients is able to circumvent the initial drop in performance that is observed after pre-training using BC. This insight further supports previously studied hypotheses attributing this anticipated drop in performance to poorly-initialized value functions.
**Sub-optimal Teachers.** In Figure 7, we adapt the previously mentioned approaches by employing a sub-optimal teacher (i.e., an RL agent with mid-tier performance in each task) for both BC initialization and within-cycle training using annotations. Although the growing-batch agent does not reach the same level of performance as when guided by an expert teacher, it surpasses the sub-optimal teacher's average baseline performance in almost all experiments. Importantly, our main conclusions remain consistent despite these changes:
Figure 6: Episode return averaged over all tasks (using 5 random seeds per task) under teacher-provided annotations. The shaded regions represent the standard deviation across all tasks and random seeds. The dashed line represents the expert teacher’s average baseline performance across all tasks.
1. Employing BC with policy regularization prevents a drastic decline in performance after initialization.
2. Teacher-action annotations, when used with a Q-filter, eventually surpass all other examined methods without requiring hyper-parameter tuning.
3. Utilizing teacher-gradients (differentiated with respect to actions) results in better performance than relying solely on BC-regularization. Additionally, it helps mitigate the initial drop in performance observed after pre-training, regardless of the chosen decay rate.
## 5 Conclusion
In this paper, we present methods to leverage external information from teachers to improve the sample efficiency of growing-batch RL agents, while encouraging safe and monotonic policy improvement. Traditionally, RL agents are trained in an online manner, continuously updating their policy as new data is gathered from environmental interaction. While such approaches have achieved success in low-risk domains such as games, real-world applications of RL require extensive evaluation of learned policies before subsequent deployment. As such, training agents in the growing-batch setting operationalizes these desires, while providing a realistic framework for incorporating external information from human experts that serves to improve the sample complexity and coverage requirements of conventional RL methodologies.
We demonstrate that while pre-training policies via behavioral cloning can lead to good starting policies, safe optimization using new data is challenging due to the overestimation bias present within the critic network. Policy regularization can be used to improve this but can also cause the learned policy to stay overly close to the behavioral policy (and data) and limit the overall performance of the agent. We investigate the incorporation of exponentially decaying regularization weights to mitigate the agent's reliance on the behavioral policy as new experience is gathered, improving performance in our experiments. We further illustrate that external information can also be used during an agent's within-cycle training process in the form of transition-specific annotations. In our experiments, we observed that providing expert actions and gradients serves to notably improve the sample efficiency of our agents and encourage monotonic improvement across cycles. Since both types of annotation are practical and feasible, our work provides a suitable framework for further experiments on real-world problems.
Figure 7: Episode return averaged over all tasks (using 5 random seeds per task) under teacher-provided annotations. The shaded regions represent the standard deviation across all tasks and random seeds. The dashed line represents the _sub-optimal_ teacher’s average baseline performance across all tasks. |
2310.16775 | From Heisenberg to Hubbard: An initial state for the shallow quantum
simulation of correlated electrons | The widespread use of the noninteracting ground state as the initial state
for the digital quantum simulation of the Fermi-Hubbard model is largely due to
the scarcity of alternative easy-to-prepare approximations to the exact ground
state in the literature. Exploiting the fact that the spin-$\frac{1}{2}$
Heisenberg model is the effective low-energy theory of the Fermi-Hubbard model
at half-filling in the strongly interacting limit, here we propose a three-step
deterministic quantum routine to prepare an educated guess of the ground state
of the Fermi-Hubbard model through a shallow circuit suitable for near-term
quantum hardware. First, the ground state of the Heisenberg model is
initialized via a hybrid variational method using an ansatz that explores only
the correct symmetry subspace. Second, a general method is devised to convert a
multi-spin-$\frac{1}{2}$ wave function into its fermionic version. Third,
taking inspiration from the Baeriswyl ansatz, a constant-depth single-parameter
layer that adds doublon-holon pairs is applied to this fermionic state.
Numerical simulations on chains and ladders with up to 12 sites confirm the
improvement over the noninteracting ground state of the overlap with the exact
ground state for the intermediate values of the interaction strength at which
quantum simulation is bound to be most relevant. | Bruno Murta, Joaquín Fernández-Rossier | 2023-10-25T17:05:50Z | http://arxiv.org/abs/2310.16775v1 | # From Heisenberg to Hubbard: An initial state for the shallow quantum simulation of correlated electrons
###### Abstract
The widespread use of the noninteracting ground state as the initial state for the digital quantum simulation of the Fermi-Hubbard model is largely due to the scarcity of alternative easy-to-prepare approximations to the exact ground state in the literature. Exploiting the fact that the spin-\(\frac{1}{2}\) Heisenberg model is the effective low-energy theory of the Fermi-Hubbard model at half-filling in the strongly interacting limit, here we propose a three-step deterministic quantum routine to prepare an educated guess of the ground state of the Fermi-Hubbard model through a shallow circuit suitable for near-term quantum hardware. First, the ground state of the Heisenberg model is initialized via a hybrid variational method using an ansatz that explores only the correct symmetry subspace. Second, a general method is devised to convert a multi-spin-\(\frac{1}{2}\) wave function into its fermionic version. Third, taking inspiration from the Baeriswyl ansatz, a constant-depth single-parameter layer that adds doublon-holon pairs is applied to this fermionic state. Numerical simulations on chains and ladders with up to 12 sites confirm the improvement over the noninteracting ground state of the overlap with the exact ground state for the intermediate values of the interaction strength at which quantum simulation is bound to be most relevant.
Digital quantum simulation [1; 2] is expected to become a leading method to study correlated electrons [3]. By exploiting the principle of superposition and the natural encoding of entanglement, quantum computers can represent the full wave function of quantum many-body systems in a scalable way, which may make it possible to probe properties that defy state-of-the-art numerical methods on conventional hardware [4; 5]. A problem that offers the prospect of achieving such quantum advantage [6; 7] even with noisy intermediate-scale quantum (NISQ) processors [8] is the determination of the phase diagram of the Fermi-Hubbard model [9; 10; 11] by preparing the exact ground state of the second-quantized Hamiltonian
\[\hat{\mathcal{H}}=-t\sum_{i,\tau}\sum_{\sigma=\uparrow,\downarrow}(\hat{c}^{ \dagger}_{i,\sigma}\hat{c}_{i+\tau,\sigma}+\text{H.c.})+U\sum_{i}\hat{n}_{i, \uparrow}\hat{n}_{i,\downarrow}, \tag{1}\]
where the sum over \(\tau\) includes the nearest neighbors of site \(i\), and \(t>0\) and \(U>0\) define the interaction strength \(\frac{U}{t}\). The most challenging and relevant regime [12] of the Fermi-Hubbard model occurs when the two competing energy scales are comparable -- i.e., the Hubbard parameter \(U\) is of the order of the bandwidth \(W\) of the underlying tight-binding model (e.g., \(W=4t\) in one dimension, \(W=8t\) for the square lattice).
A key requirement for any ground state preparation method is an initial state with non-negligible overlap with the target state. In the case of the Fermi-Hubbard model, the standard choice is the noninteracting ground state [6; 7], but its vanishing fidelity relative to the exact ground state [13] for the intermediate range \(U\sim W\) calls for a more educated guess. Mean-field states [14] face the same issue, with the additional drawback of often breaking symmetries of the Hamiltonian. The Gutzwiller wave function [9] does produce a substantially greater overlap with the exact ground state at intermediate and large \(\frac{U}{t}\), but the NISQ-friendly schemes proposed to initialize it [13; 15] require, on average, a number of repetitions to succeed that becomes prohibitively large for a lattice of sufficiently great size due to their probabilistic nature.
In this Letter we introduce a deterministic quantum routine that is suitable for NISQ hardware to prepare a better approximation than the noninteracting ground state of the exact ground state of the Fermi-Hubbard model at half-filling with intermediate or large \(\frac{U}{t}\). This scheme makes use of the fact that, in the strongly interacting limit \(\frac{U}{t}\rightarrow\infty\), the charge degrees of freedom are frozen and the Fermi-Hubbard model is reduced [3] to the antiferromagnetic spin-\(\frac{1}{2}\) Heisenberg model [16]
\[\hat{\mathcal{H}}=J\sum_{i,\tau}\left(\hat{S}^{x}_{i}\hat{S}^{x}_{i+\tau}+ \hat{S}^{y}_{i}\hat{S}^{y}_{i+\tau}+\hat{S}^{z}_{i}\hat{S}^{z}_{i+\tau} \right), \tag{2}\]
with \(J=\frac{4t^{2}}{U}\). This result is valid for any lattice at half-filling and may be extended to hopping terms beyond nearest neighbors. Although determining the ground state of the Heisenberg model is generally nontrivial, we can benefit from the smaller size of the Hilbert space relative to the full-blown fermionic model to mitigate some of the most cumbersome issues faced in quantum simulation that arise from the exponential wall problem [17], namely the orthogonality catastrophe [18] and the barren plateaus [19] in hybrid variational methods [20].
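To make the strong-coupling mapping concrete, the following NumPy sketch (our own illustration, not part of the quantum routine) exactly diagonalizes the two-site Hubbard model at half-filling in the \(S_{z}=0\) sector and compares the singlet-triplet gap with the Heisenberg exchange \(J=\frac{4t^{2}}{U}\). The basis ordering and hopping signs below are one common convention; the spectrum is convention-independent.

```python
import numpy as np

def two_site_hubbard(t, U):
    # S_z = 0 basis: |up, dn>, |dn, up>, |updn, 0>, |0, updn>.
    return np.array([[0.0, 0.0, -t,  -t ],
                     [0.0, 0.0, +t,  +t ],
                     [-t,  +t,  U,   0.0],
                     [-t,  +t,  0.0, U  ]])

t = 1.0
for U in (8.0, 16.0, 32.0):
    E = np.linalg.eigvalsh(two_site_hubbard(t, U))
    # The lowest state is the singlet; the S_z = 0 triplet component sits at
    # E = 0, so the singlet-triplet gap is -E[0]. It approaches J = 4t^2/U.
    print(f"U/t = {U:5.1f}:  gap = {-E[0]:.4f},  4t^2/U = {4 * t**2 / U:.4f}")
```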
The quantum scheme herein put forth comprises three parts that can be identified in the circuit scheme shown |
2302.02511 | An explicit formula for high-order sideband polarization by extreme
tailoring of Feynman path integrals | High-order sideband generation (HSG), as an analogue of the interband
processes in high-harmonic generation (HHG) in solids, is a nonperturbative
nonlinear optical phenomenon in semiconductors that are simultaneously driven
by a relatively weak near-infrared (NIR) laser and a sufficiently strong
terahertz (THz) field. We derive an explicit formula for sideband polarization
vectors in a prototypical two-band model based on the saddle-point method. Our
formula connects the sideband amplitudes with the laser-field parameters,
electronic structures, and nonequilibrium dephasing rates in a highly
nontrivial manner. Our results indicate the possibility of extracting
information on band structures and dephasing rates from high-order sideband
generation experiments with simple algebraic calculations. We also expect our
approach to be useful on the quantitative understanding of the interband HHG. | Qile Wu, Mark S. Sherwin | 2023-02-06T00:35:06Z | http://arxiv.org/abs/2302.02511v3 | An explicit formula for high-order sideband polarization by extreme tailoring of Feynman path integrals
###### Abstract
High-order sideband generation (HSG), as an analogue of the interband processes in high-harmonic generation (HHG) in solids, is a nonperturbative nonlinear optical phenomenon in semiconductors that are simultaneously driven by a relatively weak near-infrared (NIR) laser and a sufficiently strong terahertz (THz) field. We derive an explicit formula for sideband polarization vectors in a prototypical two-band model based on the saddle-point method. Our formula connects the sideband amplitudes with the laser-field parameters, electronic structures, and nonequilibrium dephasing rates in a highly nontrivial manner. Our results indicate the possibility of extracting information on band structures and dephasing rates from high-order sideband generation experiments with simple algebraic calculations. We also expect our approach to be useful for the quantitative understanding of the interband HHG.
## I Introduction
The recent development of strong laser fields has enabled extensive study of nonperturbative optical responses of crystalline solids in highly nonlinear and nonequilibrium regimes. One celebrated example is high-harmonic generation (HHG), which has been observed in conventional metals [1] and semiconductors [2; 3; 4; 5] and serves as an important way to obtain ultraviolet light sources [6; 7]. The realization of HHG in solid crystals has led to a method to probe electronic properties including band structures [8; 9; 10; 11], Berry curvatures [12], topological phases [13; 14; 15; 16; 17; 18; 19; 20], and nonequilibrium dephasing rates of electron-hole coherences [21; 22]. Investigation of HHG in correlated electron systems has also been initiated [23; 24; 25]. In semiconductors, HHG contains contributions from intraband and interband processes, which are in general coupled with each other [26; 27; 28]. The interband process can be understood in a three-step model similar to HHG in atoms [29]. In the first step, an electron-hole pair is created by a strong laser field. In the second step, the electron and hole are accelerated in their respective bands by the same laser field. In the third step, recombination of the electron and hole results in radiation with integer multiples of the fundamental frequencies. The intraband contribution comes from the intraband accelerations of the electron and hole through a nonlinear current [2]. We will only discuss the interband processes.
As an analogue of the interband HHG, high-order sideband generation (HSG) [30; 31] has also received considerable interest since the last decade [32; 33; 34; 35; 36; 37; 38; 39; 40]. HSG occurs in semiconductors when an electron-hole pair is created by a relatively weak near-infrared (NIR) laser with a photon energy \(\hbar\Omega\) close to the bandgap \(E_{\rm g}\) and then accelerated by a strong terahertz (THz) field with a photon energy \(\hbar\omega\ll E_{\rm g}\). Upon recollisions and recombinations of the electron-hole pair, sideband photons of energy \(\hbar\Omega+n\hbar\omega\) are emitted, where the sideband index \(n\) is an integer [30; 31]. In contrast to HHG in semiconductors, intraband and interband processes in HSG are disentangled and separately controlled by two different laser fields. Such simplification has led to a reconstruction of low-energy Bloch wavefunctions of holes in bulk GaAs through a simple algebraic equation [39]. Frequency combs of sidebands with orders \(n>100\) (66 sidebands) have been produced from HSG [36]. HSG has also played a role in probing Berry curvatures [35], band structures [38], and electron correlations [40].
Theoretical approaches based on or equivalent to the semiconductor Bloch equations (SBEs) [41] have been widely used in the numerical analyses of both the intraband and interband HHG [3; 4; 10; 11; 15; 19; 26; 27; 28; 29; 3; 40; 41; 42; 43; 44; 45; 46; 47; 48; 49; 50; 51; 52; 53; 54; 55; 56; 57; 58; 59]. The scattering terms in the SBEs are mostly approximated through a dephasing constant for the interband polarization [3; 4; 10; 11; 15; 19; 26; 27; 28; 29; 42; 43; 44; 45; 46; 47; 48; 49; 50; 51; 52; 53; 54; 55; 56; 57; 58; 59]. More details on the scattering effects have also been investigated through the coupling between the density matrix elements and four-point correlations [37; 38; 40]. In the simplest case, the SBEs are solved in the single-electron limit, where the Coulomb interaction between the charge carriers is neglected [3; 4; 11; 15; 19; 26; 27; 43; 44; 45; 46; 47; 48; 49; 50; 51; 52; 53; 54; 55; 56; 57; 58; 59]. Another important aspect is the global gauge symmetry, which has long been ignored in the study of HHG and has received attention only recently [15; 19; 28; 53; 54; 55; 56; 57; 58; 59]. In fact, to explore the effects of Berry curvatures in HSG, dynamical equations equivalent to the SBEs in the limit of negligible carrier densities and Coulomb interaction have already been used in the forms obeying the global gauge symmetry [53; 60; 61; 62]. A gauge-invariant density-matrix formalism has also been applied in a discussion on the detection of the macroscopic Berry curvature [63]. Theoretical frameworks other than the SBEs in the study of interband HHG include the time-dependent density-functional theory [49; 50; 51; 52; 54; 56; 57; 58; 59] and the single-particle time-dependent Schrodinger equation [11; 76; 77; 78; 79; 80; 81; 82; 83].
that the interband polarization can be written in a compact form of Feynman path integrals, which can then be analyzed through the well-established saddle-point method [10; 21; 43; 44; 47; 52; 80; 83; 84; 85]. The three-step model in interband HHG has been extended to include the effects from nonzero Berry curvatures [52; 54] and imperfect recollisions [52; 54; 80; 83; 85]. A four-step model was also proposed [82]. While qualitative understandings of interband HHG have been reached in various aspects, quantitative understandings based on the saddle-point method were initiated just recently [85].
The theoretical analyses of HSG were mostly based on either a time-dependent Schrodinger equation [30; 33; 60; 61; 62; 63; 62; 86; 87], or a dynamical equation of the interband density matrix elements in the single-electron limit [88; 35; 89]. Both of these equations are equivalent to the SBEs with negligible carrier occupations and phenomenological dephasing rates. While numerical solutions of SBEs have provided insights on effects from Coulomb interactions in HSG from systems involving strongly bound excitons [34; 37; 38; 40], analyses in the single-electron limit serve as an important middle stage for investigating more complicated systems and have already led to predictions of many nontrivial emergent phenomena such as dynamical birefringence [35]. Similar to the interband HHG, the sideband amplitudes in the single-electron limit were represented by Feynman path integrals, which were analyzed with the saddle-point method [30; 35; 60; 61; 62; 86; 87]. Remarkably, agreement between the saddle-point approximation and the full evaluation of the Feynman path integrals can be achieved not only qualitatively but also quantitatively [60; 61; 62; 86; 87]. However, from the numerical saddle-point solutions, it is still not fully clear how the laser-field parameters, electronic structures, and nonequilibrium dephasing rates are coded in the sideband amplitudes.
In this paper, we derive an explicit formula for sideband polarization vectors in a prototypical two-band model based on the saddle-point method. To tailor the Feynman path integrals into an explicit algebraic function of the laser-field and material parameters, we notice that, in classical electron-hole recollisions under a linearly-polarized THz field, when the kinetic energy gain of an electron-hole pair is much smaller than their ponderomotive energy in the THz field, the time intervals for the shortest recollision paths lie around the nodes of the THz field, where the THz field is almost linear in time. Our derivation is based on the idea that, for sufficiently large ponderomotive energy in the presence of sufficiently strong dephasing, the shortest recollision paths will dominate such that the THz field can be approximated as near-linear in time in the saddle-point analysis. We call this the linear-in-time (LIT) approximation. Our formula connects the sideband amplitudes with the laser-field parameters, electronic structures, and nonequilibrium dephasing rates in a highly nontrivial manner. Our results also indicate the possibility of extracting information about band structures and dephasing rates from HSG experiments with simple algebraic calculations. Owing to the similarity between the interband HHG and HSG, we expect that our approach will shed new light on the quantitative understanding of HSG in more complicated systems, as well as interband HHG.
## II Saddle-point analysis
We start with a saddle-point analysis taking into account only the shortest recollision pathways associated with each sideband in the presence of sufficiently strong dephasing. For simplicity, we convey the idea of the linear-in-time approximation in a prototypical two-band model with zero Berry curvatures and a parabolic energy difference between the conduction and valence bands, \(E_{\rm cv}(\mathbf{k})=E_{\rm g}+\hbar^{2}k^{2}/(2\mu)\), where \(E_{\rm g}\) is the bandgap, \(\hbar\) is the reduced Planck constant, and \(\mu\) is the reduced mass of the electron-hole pairs. Under the approximation of free electrons and holes [30; 60; 61; 62], the \(n\)th-order sideband polarization vector produced by continuous-wave NIR and THz laser fields can be written as [35]
\[\mathbb{P}_{n}= \frac{i}{\hbar}\frac{1}{T_{\rm THz}}\int_{0}^{T_{\rm THz}}dte^{ i(\Omega+n\omega)t}\int\frac{d^{D}\mathbf{P}}{(2\pi)^{D}}\int_{-\infty}^{t}dt^{ \prime}\mathbf{d}^{*}\] \[\exp\{-\frac{i}{\hbar}\int_{t^{\prime}}^{t}dt^{\prime\prime}(E_{ \rm cv}[\mathbf{k}(t^{\prime\prime})]-i\Gamma)\}\mathbf{d}\cdot\mathbf{E}_{ \rm NIR}(t^{\prime}), \tag{1}\]
which describes a three-step process in HSG as follows. In the first step, an electron-hole pair is created at time \(t^{\prime}\) through the coupling between the interband dipole vector \(\mathbf{d}\) and the electric field of the NIR laser \(\mathbf{E}_{\rm NIR}(t^{\prime})=\mathbf{F}_{\rm NIR}e^{-i\Omega t^{\prime}}\) with frequency \(\Omega\), where the rotating wave approximation is used. In the second step, from time \(t^{\prime}\) to \(t\), the electron-hole pair is accelerated by the THz field and accumulates a dynamic phase \((-1/\hbar)\int_{t^{\prime}}^{t}dt^{\prime\prime}E_{\rm cv}[\mathbf{k}(t^{\prime\prime})]\), where \(\hbar\mathbf{k}(t)=\hbar\mathbf{P}+e\mathbf{A}(t)\) is the kinetic momentum with \(\hbar\mathbf{P}\) being the canonical momentum, \(e\) the elementary charge, and \(\mathbf{A}(t)\) the vector potential of the THz field. We take the THz field as linearly polarized along the x-axis in the form \(\mathbf{F}_{\rm THz}(t)=-\dot{\mathbf{A}}(t)=\hat{x}F_{\rm max}\cos(\omega t)\) with frequency \(\omega\), and \(\mathbf{A}(t)=-\hat{x}(F_{\rm max}/\omega)\sin(\omega t)\). The constant \(\Gamma\) quantifies the dephasing in this step phenomenologically. In the third step, the electron and hole recombine at time \(t\) and a sideband with frequency \(\Omega+n\omega\) is emitted. Here, \(T_{\rm THz}=2\pi/\omega\) is the period of the THz field and \(D\) is the dimension of the momentum space. The sideband amplitudes are zero for odd sideband index \(n\) because of the inversion symmetry in this two-band model. The sideband polarization vector can be written in the form of Feynman path integrals,
\[\mathbb{P}_{n}= \mathbf{d}^{*}\mathbf{d}\cdot\mathbf{F}_{\rm NIR}\frac{i\omega}{ \pi\hbar}\int_{0}^{T_{\rm THz}/2}dt\int\frac{d^{D}\mathbf{P}}{(2\pi)^{D}}\] \[\int_{0}^{+\infty}d\tau\exp[\frac{i}{\hbar}S_{n}(\mathbf{P},t, \tau)], \tag{2}\]
where we have introduced a time-duration variable \(\tau=t-t^{\prime}\), and an action
\[S_{n}(\mathbf{P},t,\tau) =n\hbar\omega t-\int_{t-\tau}^{t}dt^{\prime\prime}\frac{\hbar^{2}}{ 2\mu}[\mathbf{P}+\frac{e}{\hbar}\mathbf{A}(t^{\prime\prime})]^{2}\] \[+i(\Gamma-i\Delta)\tau, \tag{3}\]
with \(\Delta=\hbar\Omega-E_{\mathrm{g}}\) being the detuning of the NIR laser. The integral with respect to the recombination time \(t\) has been folded to be over half a period of the THz field.
To tailor the Feynman path integrals, we apply the saddle-point method [30; 62; 86; 87] by Taylor-expanding the action \(S_{n}(\mathbf{P},t,\tau)\) around the saddle points up to the second-order terms and extending the limits of the integrals to infinities to form Gaussian integrals. In the presence of sufficiently strong dephasing, the amplitude of each sideband is dominantly determined by one shortest recollision pathway within half a period of the THz field. Including only the saddle point \((\mathbf{P}_{n},t_{n},\tau_{n})\) for the \(n\)th-order sideband that corresponds to the shortest recollision pathway, we obtain the approximate expression (see Appendix A for the derivation),
\[\mathbb{P}_{n}\approx 2\mathbf{C}\exp[\frac{i}{\hbar}S_{\mathrm{sc}}^{(t,\tau)}(t_{n},\tau_{n})]\frac{e^{-(i/2)\{D\arg(\tau_{n})+\arg[\partial_{(\omega t_{n})}^{2}S_{\mathrm{sc}}^{(t,\tau)}]+\arg[\partial_{(\omega\tau_{n})}^{2}S_{\mathrm{sc}}^{(\tau)}]\}}}{\sqrt{|(\omega\tau_{n})^{D}[\partial_{(\omega t_{n})}^{2}S_{\mathrm{sc}}^{(t,\tau)}/\hbar][\partial_{(\omega\tau_{n})}^{2}S_{\mathrm{sc}}^{(\tau)}/\hbar]|}}, \tag{4}\]
which contains a constant vector
\[\mathbf{C}=\frac{-1}{\hbar\omega}e^{-i\pi D/4}(\frac{\mu\omega}{2\pi\hbar})^{ D/2}\mathbf{d}^{\ast}\mathbf{d}\cdot\mathbf{F}_{\mathrm{NIR}}, \tag{5}\]
a semiclassical action,
\[S_{\mathrm{sc}}^{(t,\tau)}(t_{n},\tau_{n})\] \[= \hbar\omega t_{n}+[i\Gamma+\Delta+U_{\mathrm{p}}(\gamma^{2}( \omega\tau_{n})-1)]\tau_{n}\] \[+U_{\mathrm{p}}\tau_{n}\alpha(\omega\tau_{n})\gamma(\omega\tau_{ n})\cos[\omega(\tau_{n}-2t_{n})], \tag{6}\]
and two second-order derivatives,
\[\frac{1}{\hbar}\frac{\partial^{2}S_{\mathrm{sc}}^{(t,\tau)}}{\partial(\omega t _{n})^{2}}= 2n\cot[\omega(\tau_{n}-2t_{n})], \tag{7}\]
\[\frac{1}{\hbar}\frac{\partial^{2}S_{\mathrm{sc}}^{(\tau)}}{\partial( \omega\tau_{n})^{2}}= \frac{n}{2}[\frac{\alpha^{2}(\omega\tau_{n})+\beta^{2}(\omega\tau _{n})}{\omega\tau_{n}\alpha(\omega\tau_{n})\beta(\omega\tau_{n})}+1]\cot[ \omega(\tau_{n}-2t_{n})]\] \[+\frac{n}{2}[\frac{\alpha^{2}(\omega\tau_{n})-\beta^{2}(\omega \tau_{n})}{2\alpha(\omega\tau_{n})\beta(\omega\tau_{n})}]^{2}\tan[\omega(2t _{n}-\tau_{n})]\] \[+\frac{U_{\mathrm{p}}}{\hbar\omega}\frac{\alpha^{2}(\omega\tau_{n })-\beta^{2}(\omega\tau_{n})}{\omega\tau_{n}}. \tag{8}\]
The semiclassical action \(S_{\mathrm{sc}}^{(t,\tau)}(t_{n},\tau_{n})\) is given by evaluating the action \(S_{n}(\mathbf{P},t,\tau)\) at the saddle point \((\mathbf{P}_{n},t_{n},\tau_{n})\), while the second line in Eq. 4 arises from the Gaussian quantum fluctuations around the saddle point. Here, \(U_{\mathrm{p}}\equiv e^{2}F_{\mathrm{max}}^{2}/(4\mu\omega^{2})\) is the ponderomotive energy, and we have introduced the functions \(\alpha(x)=\cos(x/2)-\gamma(x)\) and \(\gamma(x)=\beta(x)/(x/2)\) with \(\beta(x)=\sin(x/2)\). Different from the approximate expressions for sideband amplitudes in Ref. [86], [62], and [87], Eq. 4 does not contain square roots of complex numbers, which are not single-valued. The values of \(\mathbf{P}_{n}\), \(t_{n}\), and \(\tau_{n}\) satisfy the saddle-point equations,
\[\int_{t_{n}-\tau_{n}}^{t_{n}}dt^{\prime\prime}\frac{\hbar\mathbf{k}_{n}(t^{ \prime\prime})}{\mu}=\mathbf{0}, \tag{9}\]
\[E_{\mathrm{eh}}[k_{n}(t_{n})]-E_{\mathrm{eh}}[k_{n}(t_{n}-\tau_{n})]=n\hbar\omega, \tag{10}\]
\[E_{\mathrm{eh}}[k_{n}(t_{n}-\tau_{n})]=i\Gamma+\Delta, \tag{11}\]
where \(E_{\mathrm{eh}}(k)=\hbar^{2}k^{2}/(2\mu)\) is the kinetic energy from the relative motion of the electron-hole pairs, and \(\hbar\mathbf{k}_{n}(t^{\prime\prime})=\hbar\mathbf{P}_{n}+e\mathbf{A}(t^{ \prime\prime})\) is the time-dependent kinetic momentum associated with the saddle point. The first saddle-point equation corresponds to the condition that an electron and a hole recombine at the site where they are created. The second and third saddle-point equations are related to energy conservation for the cases with zero dephasing (\(\Gamma=0\)) and nonnegative detunings (\(\Delta\geq 0\)) upon creation and recombination of the electron-hole pairs, respectively. For the cases with zero dephasing (\(\Gamma=0\)) and negative detunings (\(\Delta<0\)), Eq. 11 describes creation of electron-hole pairs through quantum tunneling with a pure imaginary energy [87]. Nonzero dephasing (\(\Gamma\neq 0\)) makes the kinetic energy \(E_{\mathrm{eh}}[k_{n}(t^{\prime\prime})]\) complex in general during the recollision events. As we will see later in this section, nonzero detunings do not introduce extra obstacles in tailoring the Feynman path integrals, since the sideband polarization vector \(\mathbb{P}_{n}\) depends on the detuning \(\Delta\) through an analytic function of the complex variable \(i\Gamma+\Delta\). Thus we set \(\Delta=0\) in the numerical calculations from here on and postpone the discussion of the effects from nonzero detunings until Section IV.
Using the approximate expression, Eq. 4, one can write the sideband polarization vector \(\mathbb{P}_{n}\) as an explicit function of the laser-field and material parameters on the premise that the explicit forms of \(t_{n}\) and \(\tau_{n}\) are known. However, the saddle-point equations are transcendental in general. To find clues for further approximation, we investigate the semiclassical recollision pictures provided by the saddle-point equations in the special cases where the sideband photon energies are much smaller than the ponderomotive energy (\(n\hbar\omega\ll U_{\rm p}\)). Fig. 1 shows the time paths of recollisions, electron-hole separation \(x_{\mathrm{eh}}(t^{\prime\prime})=\int_{t_{n}-\tau_{n}}^{t^{\prime\prime}}ds\,\hbar k_{n}(s)/\mu\) and kinetic energy \(E_{\mathrm{eh}}[k_{n}(t^{\prime\prime})]\) for the 10th-order sideband. The ponderomotive energy \(U_{\rm p}\) is chosen as \(2\times 10^{3}\hbar\omega\), which is a typical value in existing HSG experiments [39]. Fig. 1 (a), (d) and (g) show three time paths corresponding to the shortest recollision pathways within half a period of the THz field (green curves) for the cases with zero detuning and dephasing constants \(\Gamma=0,\,\hbar\omega,\,5\hbar\omega\), respectively.
We denote \(t_{n}^{\prime}=t_{n}-\tau_{n}\) for the creation time of the electron-hole pairs. Since the kinetic energy \(E_{\rm eh}[k_{n}(t^{\prime\prime})]\) and the relative velocity \(v_{\rm eh}(t^{\prime\prime})=\hbar k_{n}(t^{\prime\prime})/\mu\) are both analytic functions of time, any time path in the complex time plane connecting two fixed time points gives the same dynamic phase and electron-hole separation. For the zero-dephasing case (\(\Gamma=0\)), the time path can always be chosen as lying on the real-time axis (black line segment in Fig. 1(a)). This choice corresponds to a classical recollision picture with a real electron-hole separation \(x_{\rm eh}\) (Fig. 1(b)) and a real kinetic energy \(E_{\rm eh}\) (Fig. 1(c)). Remarkably, along such a time path, the THz field is almost linear in time. This approximate linearity remains in the presence of relatively weak dephasing. As shown in Fig. 1 (d) and (g), although the creation time \(t_{n}^{\prime}\) and recollision time \(t_{n}\) become complex, the time path can still be chosen as lying around the origin of the complex time plane. We also see that the creation time \(t_{n}^{\prime}\) and recollision time \(t_{n}\) are further away from the real-time axis for stronger dephasing. For the weaker-dephasing case (\(\Gamma=\hbar\omega\)), an imaginary part of the electron-hole separation arises, while the real part resembles the zero-dephasing case (Fig. 1 (e)). As the dephasing gets stronger, the electron-hole separation contains a more significant imaginary part and a real part more distorted from the classical counterpart (Fig. 1 (h)). A similar trend in the kinetic energy is shown in Fig. 1 (f) and (i). As energy conservation is imposed by the saddle-point equations, Eq. 10 and 11, in each of the cases, the real part of the kinetic energy goes from zero to the sideband offset energy \(10\hbar\omega\), while the imaginary part starts and ends at the value of the dephasing constant \(\Gamma\).
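The classical picture in Fig. 1 (a)-(c) can be reproduced numerically: with \(\Gamma=\Delta=0\) the pair is created at rest, so \(\hbar k_{n}(t)=(eF_{\rm max}/\omega)[\sin(\omega t_{n}^{\prime})-\sin(\omega t)]\), and Eqs. 9 and 10 reduce to two real equations for the creation and recollision times. The sketch below (our own illustration; energies in units of \(\hbar\omega\)) solves them with a standard root finder, seeded near the field node.

```python
import numpy as np
from scipy.optimize import fsolve

def classical_times(n, Up):
    """Classical saddle times for Gamma = Delta = 0 under the full THz field.
    Unknowns: omega*t' (creation) and omega*t (recollision); Up is the
    ponderomotive energy in units of hbar*omega."""
    def eqs(v):
        wtp, wt = v
        ret = np.sin(wtp) * (wt - wtp) + np.cos(wt) - np.cos(wtp)  # Eq. (9): return
        en = 2 * Up * (np.sin(wtp) - np.sin(wt)) ** 2 - n          # Eq. (10): E_eh = n
        return [ret, en]
    # Seed near the node omega*t = pi/2 (shortest path), cf. Fig. 1(a).
    return fsolve(eqs, [np.pi / 2 - 0.2, np.pi / 2 + 0.4])

wtp, wt = classical_times(n=10, Up=2e3)
print(f"omega*t' = {wtp:.4f},  omega*t = {wt:.4f}  (node at {np.pi/2:.4f})")
```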
From the above analysis of the semiclassical recollision pictures, we see that the linear-in-time approximation might be appropriate in solving the saddle-point equations for relatively small sideband index and not too strong dephasing. A more precise statement can be inferred from the saddle-point equations with the canonical momentum \(\hbar{\bf P}_{n}\) eliminated (see Appendix A),
\[\sin[\omega(\tau_{n}-2t_{n})]=\frac{n\hbar\omega}{4U_{\rm p}\alpha(\omega\tau _{n})\beta(\omega\tau_{n})}, \tag{12}\]
\[\cos[\omega(\tau_{n}-2t_{n})]=\frac{\alpha^{2}(\omega\tau_{n})+\beta^{2}( \omega\tau_{n})-\xi}{\alpha^{2}(\omega\tau_{n})-\beta^{2}(\omega\tau_{n})}, \tag{13}\]
where \(\xi=[i\Gamma+\Delta+(n/2)\hbar\omega]/U_{\rm p}\). If the creation time \(t_{n}^{\prime}=t_{n}-\tau_{n}\) and recollision time \(t_{n}\) are located around the node of the THz field such that \(|\omega(2t_{n}-\tau_{n})-\pi|,|\omega\tau_{n}|\ll 1\), there must be \(n\hbar\omega/U_{\rm p}\approx|(\omega\tau_{n})^{4}[\omega(2t_{n}-\tau_{n})- \pi]/12|\ll 1\), and \(|\xi|\approx|(\omega\tau_{n})^{2}[\omega(2t_{n}-\tau_{n})-\pi]^{2}/8|\ll 1\). In other words, a sufficient condition for the linear-in-time approximation to be valid is that the dephasing constant \(\Gamma\), the detuning \(\Delta\), and the sideband offset energy \(n\hbar\omega\) are all small with respect to the ponderomotive energy \(U_{\rm p}\). We will focus on the accuracy of the linear-in-time approximation under this condition.
Before exploring the linear-in-time approximation, it is important to know first the accuracy of the saddle-point approximation. To this end, we compare the dimensionless sideband amplitudes \(Q_{n}\equiv\mathbb{P}_{n}\cdot{\bf C}/|{\bf C}|^{2}\) calculated through the saddle-point approximation with the results from numerical integration of the exact expression (see Appendix B),
\[Q_{n}= i^{n/2-1}\int_{0}^{+\infty}\frac{d(\omega\tau)}{(\omega\tau)^{D/2}}J_{n/2}[\frac{U_{\rm p}}{\hbar\omega}\omega\tau\gamma(\omega\tau)\alpha(\omega\tau)]\exp\{i[\mathbb{S}^{(\tau)}(\omega\tau)+n/2]\omega\tau\}, \tag{14}\]
where \(\mathbb{S}^{(\tau)}(\omega\tau)=(i\Gamma+\Delta)/(\hbar\omega)+[U_{\rm p}/( \hbar\omega)][\gamma^{2}(\omega\tau)-1]\) and \(J_{n}\) is the \(n\)th-order Bessel function of the first kind. We will present numerical results in the main text only for the one-dimensional case (\(D=1\)). The results are
Figure 1: Semiclassical pictures of electron-hole recollisions for the 10th-order sideband. (a) The creation time \(t_{n}^{\prime}\) and recollision time \(t_{n}\) (both real, red dots) in half a period of the THz field \(F_{\rm THz}\) (dark green curve) for the zero-dephasing case (\(\Gamma=0\)). The THz field is almost linear in time from \(t_{n}^{\prime}\) to \(t_{n}\) (black arrows). (b) The separation \(x_{\rm eh}\) and (c) the kinetic energy \(E_{\rm eh}\) (in units of the THz photon energy \(\hbar\omega\)) of an electron-hole pair along the real time-path from \(t_{n}^{\prime}\) to \(t_{n}\) (black straight line-segment in (a)). (d) The creation time \(t_{n}^{\prime}\) and recollision time \(t_{n}\) (both complex, red dots) for the case with dephasing constant \(\Gamma=\hbar\omega\). (e) The separation \(x_{\rm eh}\) and (f) the kinetic energy \(E_{\rm eh}\) of an electron-hole pair along the time path in the complex-time plane, \(t_{n}^{\prime}\rightarrow{\rm Re}(t_{n}^{\prime})\rightarrow{\rm Re}(t_{n})\to t_{n}\) (two red straight line-segments parallel to the imaginary-time axis and a black straight line-segment in (d)). Both \(x_{\rm eh}\) and \(E_{\rm eh}\) are complex (magenta and blue curves respectively for the real and imaginary parts). The shaded areas indicate the region where the time is complex. (g), (h), and (i) show results corresponding to (d), (e), and (f), respectively, for the case with a dephasing constant \(\Gamma=5\hbar\omega\). In the calculation, we use ponderomotive energy \(U_{\rm p}=2\times 10^{3}\hbar\omega\) and \(U_{\rm p}/(eF_{\rm THz})\)=800 nm. The detuning is set as zero except for the dashed lines in (h) and (i) showing \(x_{\rm eh}\) and \(E_{\rm eh}\) in the case with dephasing constant \(\Gamma=5\hbar\omega\) and detuning \(\Delta=-2\hbar\omega\), where the creation time \(t_{n}^{\prime}\) and recollision time \(t_{n}\) are slightly different from those in (g).
similar for the two- and three-dimensional cases (\(D=2,3\)) with a linearly-polarized THz field (see Fig. 14-17 in Appendix E for example results regarding the accuracy of the linear-in-time approximation). Fig. 2 shows a comparison for sideband indices from \(10\) to \(40\). The ponderomotive energy is chosen as \(U_{\rm p}=2\times 10^{3}\hbar\omega\), the same typical value in HSG experiments [39] as in Fig. 1, and the dephasing constant is set as \(\Gamma=5\hbar\omega\). As shown in Fig. 2 (a) and (b), the saddle-point approximation agrees well with the numerical integration for both the absolute values and phases of the sideband amplitudes. We also see that the variations of the dimensionless sideband amplitudes \(Q_{n}\) with respect to the sideband index \(n\) closely follow those of the semiclassical propagator, \(\exp[(i/\hbar)S_{\rm sc}^{(t,\tau)}(t_{n},\tau_{n})]\). However, the absolute values of the semiclassical propagator are off by about two orders of magnitude from the numerical integration results (Fig. 2 (a)), while the phases are off by around \(100\) degrees (Fig. 2 (b)). Therefore, the Gaussian quantum fluctuations are important in determining the sideband amplitudes. To quantify the accuracy of the saddle-point approximation, we compute the relative errors in the absolute values of \(Q_{n}\) and absolute errors in the phases of \(Q_{n}\) with respect to the numerical integration results. As shown in Fig. 2 (c) and (d), within the considered sideband window, the relative errors in the absolute values of \(Q_{n}\) stay around \(5\%\), and the absolute phase errors go from about \(3.5\) to \(5\) degrees.
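For readers who wish to reproduce the numerical-integration baseline, the following sketch evaluates Eq. 14 for \(D=1\) with standard SciPy routines; the upper cutoff and subdivision limit are set by the \(e^{-\Gamma\omega\tau/(\hbar\omega)}\) damping of the integrand and may need tuning for weaker dephasing.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import jv

def Q_n(n, Up, Gamma, Delta=0.0):
    """Dimensionless sideband amplitude, Eq. (14) with D = 1. Even n only
    (odd sidebands vanish); Up, Gamma, Delta in units of hbar*omega."""
    beta = lambda x: np.sin(x / 2)
    gamma = lambda x: beta(x) / (x / 2)
    alpha = lambda x: np.cos(x / 2) - gamma(x)

    def f(x):  # x = omega * tau
        S = (1j * Gamma + Delta) + Up * (gamma(x) ** 2 - 1)
        return x ** -0.5 * jv(n / 2, Up * x * gamma(x) * alpha(x)) \
               * np.exp(1j * (S + n / 2) * x)

    re, _ = quad(lambda x: f(x).real, 1e-8, 50.0, limit=5000)
    im, _ = quad(lambda x: f(x).imag, 1e-8, 50.0, limit=5000)
    return 1j ** (n / 2 - 1) * (re + 1j * im)

print(abs(Q_n(n=10, Up=2e3, Gamma=5.0)))  # cf. the blue curve in Fig. 2(a)
```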
To have a more systematic view of how the accuracy of the saddle-point approximation varies with the laser-field and material parameters, we first notice that, apart from the sideband index \(n\), each dimensionless sideband amplitude \(Q_{n}\) is solely determined by two quantities, a combination of the dephasing constant and detuning, \((i\Gamma+\Delta)/(\hbar\omega)\), and the ponderomotive energy \(U_{\rm p}/(\hbar\omega)\), both in units of the THz photon energy \(\hbar\omega\). This statement is clear from the exact expression, Eq. 14, and is also valid under the saddle-point approximation (see Eq. 4, 6, 7, 8,12, and 13). Thus we compute the errors in the dimensionless sideband amplitudes \(Q_{n}\) for sideband indices \(n=10\) and \(n=40\) over a wide range of dephasing constants and ponderomotive energies around the experimentally accessible values in units of the THz photon energy. Fig. 3 (a) and (b) show respectively the relative errors in the absolute values of \(Q_{n}\) and the absolute errors in the phases of \(Q_{n}\) as functions of the dephasing constant \(\Gamma\) with the ponderomotive energy \(U_{\rm p}\) fixed at \(2\times 10^{2}\hbar\omega\) (blue curves), \(2\times 10^{3}\hbar\omega\) (red curves), and \(2\times 10^{4}\hbar\omega\) (black curves). As a general trend, the relative errors in \(|Q_{n}|\) and the phase errors decrease as the dephasing gets stronger, except for some nonmonotonic behaviors in the cases with relatively small ponderomotive energy (e.g., blue curves in Fig. 3 (a) and (b)). Fig. 3 (c) and (d) show respectively the relative errors in \(|Q_{n}|\)
Figure 2: The saddle-point approximation for the dimensionless sideband amplitudes \(Q_{n}\) at relatively low orders of sidebands. (a) and (b) compare respectively the absolute values and phases of \(Q_{n}\) calculated by numerical integration (blue curves) with the results from the saddle-point approximation (red triangles). The magenta dots represent the results solely from the semiclassical propagator \(\exp[(i/\hbar)S_{\rm sc}^{(t,\tau)}(t_{n},\tau_{n})]\). The black curves in (c) and (d) show respectively the relative errors in \(|Q_{n}|\) and the absolute errors in the phases of \(Q_{n}\) in the saddle-point approximation. In the calculation, we use detuning \(\Delta=0\), dephasing constant \(\Gamma=5\hbar\omega\), and ponderomotive energy \(U_{\rm p}=2\times 10^{3}\hbar\omega\).
Figure 3: The accuracy of the saddle-point approximation for the dimensionless sideband amplitudes \(Q_{n}\). (a) and (b) show respectively the relative errors in \(|Q_{n}|\) and the absolute errors in the phases of \(Q_{n}\) as functions of the dephasing constant \(\Gamma\) with the ponderomotive energy \(U_{\rm p}\) fixed at \(2\times 10^{2}\hbar\omega\) (blue curves), \(2\times 10^{3}\hbar\omega\) (red curves), and \(2\times 10^{4}\hbar\omega\) (black curves). (c) and (d) show the corresponding errors as functions of the ponderomotive energy \(U_{\rm p}\) with the dephasing constant \(\Gamma\) fixed at \(\hbar\omega\) (blue curves), \(5\hbar\omega\) (red curves), and \(20\hbar\omega\) (black curves). The results for sideband indices \(n=10\) and \(n=40\) are plotted as solid and dash-dotted curves, respectively. Zero detunings are used for all cases.
and the absolute errors in the phases of \(Q_{n}\) as functions of the ponderomotive energy \(U_{\rm p}\) with the dephasing constant \(\Gamma\) fixed at \(\hbar\omega\) (blue curves), \(5\hbar\omega\) (red curves), and \(20\hbar\omega\) (black curves). For larger ponderomotive energy, the errors are mostly larger in the three selected dephasing cases with the exception of the phase errors in the cases with \(\Gamma=\hbar\omega\) (blue curves in Fig. 3 (d)). Nonmonotonic variations of the errors with increasing ponderomotive energy are also seen for the relatively low-order sideband in the strong-dephasing cases (e.g., solid curves in Fig. 3 (a) and (b)). As for the dependences on the sideband indices, the relative errors in \(|Q_{n}|\) are smaller for higher-order sidebands except for the weak-dephasing cases (blue curves in Fig. 3 (c)), while the phase errors are smaller for smaller sideband indices in the three selected cases with weak to moderate dephasing (Fig. 3 (d)). Over the whole parameter space investigated, the relative errors in \(|Q_{n}|\) are mostly below 10% and the phase errors are mostly less than 10 degrees.
The results of the accuracy analysis shown in Fig. 2 and 3 can be appreciated by considering the wave nature of the electron-hole pairs in HSG. The electrons and holes are generally not point particles but quantum mechanical objects with wavefunctions of finite widths. As has been discussed in Ref. [88], the centers of an electron and a hole wave packets do not even need to coincide with each other to recombine and generate sidebands. Intuitively, one expects that the recollision processes in HSG can be described by the semiclassical trajectories given by the saddle-point method if the maximum separations of the electron-hole pairs are much larger than the widths of their wavefunctions in real space. The maximum separations are larger for higher sideband indices in the limit of classical recollisions, while a direct calculation of the momentum distributions of the electron-hole wavefunctions indicates that the electron-hole wavefunctions tend to be broader in real space for weaker dephasing and lower-order sidebands. This is consistent with the enhanced accuracy of the saddle-point approximation in Fig. 2 by including the Gaussian fluctuations, and the trends shown in Fig. 3 (a) and (b) that the saddle-point approximation tends to be more accurate for relatively higher-order sidebands and relatively strong dephasing. The lower accuracy for larger ponderomotive energy shown in most curves in Fig. 3 (c) and (d) could also be attributed to the broader electron-hole wavefunctions in real space. See Appendix C for more details.
## III Linear-in-time approximation
Based on the saddle-point analysis, we now continue tailoring the Feynman path integrals using the linear-in-time approximation. The first task is to obtain explicit forms of the creation time \(t^{\prime}_{n}=t_{n}-\tau_{n}\) and the recollision time \(t_{n}\) from the saddle-point equations. Under the linear-in-time approximation, the THz field strength is approximated by the first-order Taylor polynomial at the node \(\omega t=\pi/2\), \(F_{\rm THz}(t)=-F_{\rm max}(\omega t-\pi/2)\). To make the mathematics simpler, we define time variables with a tilde to indicate a translation of half a period of the THz field, e.g., \(\omega\tilde{t}=\omega t-\pi/2\). The kinetic momentum \(\hbar k_{n}(t)\) satisfies the Newtonian equation of motion
\[\hbar\dot{k}_{n}(t)=-eF_{\rm THz}(t)=eF_{\rm max}\omega\tilde{t}, \tag{15}\]
whose solution can be written as
\[k_{n}(t)=k_{n}(t^{\prime}_{n})+\frac{eF_{\rm max}}{2\hbar\omega}[(\omega\tilde {t})^{2}-(\omega\tilde{t}^{\prime}_{n})^{2}]. \tag{16}\]
Putting this solution into the first saddle-point equation, Eq. 9, yields
\[\omega^{2}(\tilde{t}_{n}+2\tilde{t}^{\prime}_{n})(\tilde{t}_{n}-\tilde{t}^{ \prime}_{n})=-\frac{6\hbar\omega}{eF_{\rm max}}k_{n}(t^{\prime}_{n}), \tag{17}\]
which provides a relation connecting the time variables \(t^{\prime}_{n}\) and \(t_{n}\) with the kinetic momenta \(\hbar k_{n}(t)\) at \(t^{\prime}_{n}\) and \(t_{n}\). The solution of \(k_{n}(t)\) at \(t_{n}\) provides another such relation,
\[\omega^{2}(\tilde{t}_{n}+\tilde{t}^{\prime}_{n})(\tilde{t}_{n}-\tilde{t}^{ \prime}_{n})=\frac{2\hbar\omega}{eF_{\rm max}}[k_{n}(t_{n})-k_{n}(t^{\prime}_{ n})]. \tag{18}\]
The saddle-point equations concerning the energy conservation, Eq. 10 and 11, are not affected by the linear-in-time approximation, giving the kinetic momenta \(\hbar k_{n}(t)\) at the creation time \(t^{\prime}_{n}\) and recollision time \(t_{n}\) through the following equations,
\[\frac{\hbar\omega}{eF_{\rm max}}k_{n}(t^{\prime}_{n})=\pm\sqrt{\frac{i\Gamma+ \Delta}{2U_{\rm p}}}\equiv\pm\zeta_{0}\sqrt{\frac{\hbar\omega}{2U_{\rm p}}}, \tag{19}\]
\[\frac{\hbar\omega}{eF_{\rm max}}k_{n}(t_{n})=\sqrt{\frac{i\Gamma+\Delta+n \hbar\omega}{2U_{\rm p}}}\equiv\zeta_{n}\sqrt{\frac{\hbar\omega}{2U_{\rm p}}}, \tag{20}\]
where \(\zeta_{n}\equiv\sqrt{(i\Gamma+\Delta)/(\hbar\omega)+n}\). We have fixed the sign of \(\hbar k_{n}(t_{n})\) to make it continuously connect with the kinetic momentum in the limit of classical recollisions (\(\Gamma=\Delta=0\)) at the recollision time. In this paper, a square root of a complex number is defined to have a nonnegative real part. From Eq. 17, 18, 19 and 20, the creation time \(t^{\prime}_{n}\) and the recollision time \(t_{n}\) can be easily solved as
\[\omega\tilde{t}^{\prime}_{n}=(\frac{2\hbar\omega}{9U_{\rm p}})^{1/4}\frac{2 \zeta_{0}-\zeta_{n}}{\sqrt{\zeta_{n}-\zeta_{0}}}, \tag{21}\]
\[\omega\tilde{t}_{n}=(\frac{2\hbar\omega}{9U_{\rm p}})^{1/4}\frac{2\zeta_{n}- \zeta_{0}}{\sqrt{\zeta_{n}-\zeta_{0}}}, \tag{22}\]
which correspond to a time duration with a positive real part,
\[\omega\tau_{n}=(\frac{18\hbar\omega}{U_{\rm p}})^{1/4}\sqrt{\zeta_{n}-\zeta_{0}}. \tag{23}\]
To make the imaginary part of \(\tau_{n}\) nonpositive, as required for the convergence of the Gaussian integrals in the saddle-point approximation (see Appendix A), we have chosen the kinetic momentum \(\hbar k_{n}(t^{\prime}_{n})\) to have a nonpositive real part. These solutions are consistent with the sufficient condition for the validity of the linear-in-time approximation discussed in the last section, namely that the dephasing constant \(\Gamma\), the detuning \(\Delta\), and the sideband offset energy \(n\hbar\omega\) should all be small relative to the ponderomotive energy \(U_{\rm p}\). In the limit of classical recollisions (\(\Gamma=\Delta=0\)), the creation time \(t^{\prime}_{n}\) and the recollision time \(t_{n}\) satisfy \(\tilde{t}_{n}=-2\tilde{t}^{\prime}_{n}\), consistent with the numerical results in Fig. 1 (a).
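The explicit solutions, Eqs. 21-23, are straightforward to evaluate. The following minimal Python sketch (our illustration, not part of the original derivation) assumes that \(\Gamma\), \(\Delta\), and \(U_{\rm p}\) are all measured in units of the THz photon energy \(\hbar\omega\); numpy's principal square root has a nonnegative real part, matching the branch convention above, and the function names are ours.

```python
import numpy as np

def zeta(n, Gamma, Delta):
    # zeta_n = sqrt((i*Gamma + Delta)/(hbar*omega) + n), with Gamma and Delta
    # in units of hbar*omega; numpy's principal sqrt has Re >= 0 as required
    return np.sqrt(1j * Gamma + Delta + n + 0j)

def lit_saddle_times(n, Gamma, Delta, Up):
    # Eqs. 21-23: dimensionless creation time omega*t'_n and recollision time
    # omega*t_n (both measured from the node omega*t = pi/2), and the duration
    # omega*tau_n; Up is the ponderomotive energy in units of hbar*omega
    z0, zn = zeta(0, Gamma, Delta), zeta(n, Gamma, Delta)
    root = np.sqrt(zn - z0)
    t_create = (2.0 / (9.0 * Up)) ** 0.25 * (2.0 * z0 - zn) / root
    t_recollide = (2.0 / (9.0 * Up)) ** 0.25 * (2.0 * zn - z0) / root
    tau = (18.0 / Up) ** 0.25 * root
    return t_create, t_recollide, tau

# in the limit of classical recollisions (Gamma = Delta = 0),
# t_n = -2 t'_n and tau_n is real, as stated in the text
tc, tr, tau = lit_saddle_times(20, 0.0, 0.0, 2e3)
assert abs(tr + 2.0 * tc) < 1e-12 and abs(tau.imag) < 1e-12
```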
One can arrive at explicit forms of the sideband amplitudes as functions of the laser-field and material parameters by putting the explicit solutions of \(t_{n}\) and \(\tau_{n}\) into the approximate expression from the saddle-point approximation, Eq. 4. However, the dependences of the sideband amplitudes on the laser-field and material parameters are still far from transparent in such forms. To go further, we expand respectively the semiclassical action \(S^{(t,\tau)}_{\rm sc}(t_{n},\tau_{n})\) and the two second-order derivatives in Eq. 4 into Taylor series up to the terms of the lowest order in \(1/U_{\rm p}\),
\[\frac{1}{\hbar}S^{(t,\tau)}_{\rm sc}(t_{n},\tau_{n})=n\omega t_{n}+\frac{i\Gamma+\Delta}{\hbar\omega}\omega\tau_{n}\] \[-\frac{U_{\rm p}}{24\hbar\omega}(\omega\tau_{n})^{3}[\frac{(\omega\tau_{n})^{2}}{15}+(\omega\tilde{t}_{n}^{\prime}+\omega\tilde{t}_{n})^{2}], \tag{24}\]
\[\frac{1}{\hbar}\frac{\partial^{2}S^{(t,\tau)}_{\rm sc}}{\partial(\omega t_{n} )^{2}}=-\frac{1}{3}(\omega\tau_{n})^{3}\frac{U_{\rm p}}{\hbar\omega}, \tag{25}\]
\[\frac{1}{\hbar}\frac{\partial^{2}S^{(\tau)}_{\rm sc}}{\partial(\omega\tau_{n} )^{2}}=\frac{U_{\rm p}}{2\hbar\omega}(\omega\tau_{n})[(\omega\tilde{t}_{n}^{ \prime}+\omega\tilde{t}_{n})^{2}-\frac{1}{9}(\omega\tau_{n})^{2}], \tag{26}\]
which lead to a compact algebraic form for the sideband polarization vectors,
\[\mathbb{P}_{n}\approx 2i^{n}\mathbf{C}\exp\{i[q_{1/4}(n,\frac{i\Gamma+\Delta}{\hbar\omega})(\frac{\hbar\omega}{U_{\rm p}})^{1/4}]\}\] \[(\frac{U_{\rm p}}{\hbar\omega})^{\frac{D-2}{8}}\frac{\exp[-i\arg[q_{0}(n,\frac{i\Gamma+\Delta}{\hbar\omega})]/2]}{\sqrt{|q_{0}(n,\frac{i\Gamma+\Delta}{\hbar\omega})|}}, \tag{27}\]
where
\[q_{1/4}(n,\frac{i\Gamma+\Delta}{\hbar\omega})= (\frac{2}{9})^{1/4}\frac{4\sqrt{\zeta_{n}-\zeta_{0}}}{5}\] \[(2\zeta_{0}^{2}+\zeta_{0}\zeta_{n}+2\zeta_{n}^{2}), \tag{28}\]
\[q_{0}(n,\frac{i\Gamma+\Delta}{\hbar\omega})=-\sqrt{32(3\sqrt{2})^{D}}\zeta_{0 }\zeta_{n}(\zeta_{n}-\zeta_{0})^{\frac{D+2}{2}}. \tag{29}\]
The factor \(i^{n}\) is related to the initial phase of the THz field. As can be easily seen from Eq. 1, a phase shift of \(\varphi\) in the THz field will result in a phase shift of \(n\varphi\) in the \(n\)th-order sideband. Fig. 4 shows a comparison of the dimensionless sideband amplitudes \(Q_{n}=\mathbb{P}_{n}\cdot\mathbf{C}/|\mathbf{C}|^{2}\) calculated from the algebraic form, Eq. 27, with the results from numerical integration of Eq. 14. We use the same parameters as in Fig. 2. As shown in Fig. 4 (a) and (b), the algebraic form agrees well with the numerical integration for both the absolute values and phases of the sideband amplitudes. The relative errors in the absolute values of \(Q_{n}\) stay below 9% (Fig. 4 (c)), and the absolute errors in the phases are less than 4 degrees (Fig. 4 (d)). The dip in the phase errors at \(n=30\) arises from a sign change in the phase differences.
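For readers who wish to reproduce such comparisons, the compact form, Eqs. 27-29, can be coded directly. The sketch below (ours) reuses the `zeta` helper and the \(\hbar\omega=1\) units introduced earlier; it assumes \(\Gamma>0\) or \(\Delta\neq 0\), since \(q_{0}\), and with it the expression, vanishes when \(\zeta_{0}=0\).

```python
def Q_n_leading(n, Gamma, Delta, Up, D=3):
    # leading-order algebraic form for Q_n = P_n . C/|C|^2, Eqs. 27-29;
    # not usable in the strictly classical limit Gamma = Delta = 0 (q_0 -> 0)
    z0, zn = zeta(0, Gamma, Delta), zeta(n, Gamma, Delta)
    q14 = ((2.0 / 9.0) ** 0.25 * 4.0 * np.sqrt(zn - z0) / 5.0
           * (2.0 * z0**2 + z0 * zn + 2.0 * zn**2))
    q0 = (-np.sqrt(32.0 * (3.0 * np.sqrt(2.0)) ** D)
          * z0 * zn * (zn - z0) ** ((D + 2) / 2))
    return (2.0 * 1j**n * np.exp(1j * q14 * Up ** -0.25)
            * Up ** ((D - 2) / 8)
            * np.exp(-0.5j * np.angle(q0)) / np.sqrt(abs(q0)))
```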
To see whether the accuracy of the linear-in-time approximation remains high for a wide range of dephasing constants and ponderomotive energies, we compute the errors in the dimensionless sideband amplitudes \(Q_{n}\) within the same parameter space as in the accuracy analysis of the saddle-point approximation shown in Fig. 3. Fig. 5 (a) and (b) show respectively the relative errors in the absolute values of \(Q_{n}\) and the absolute errors in the phases of \(Q_{n}\) as functions of the dephasing constant \(\Gamma\) with the ponderomotive energy \(U_{\rm p}\) fixed at \(2\times 10^{2}\hbar\omega\) (blue curves), \(2\times 10^{3}\hbar\omega\) (red curves), and \(2\times 10^{4}\hbar\omega\) (black curves). For the cases with the smallest ponderomotive energy, \(U_{\rm p}=2\times 10^{2}\hbar\omega\), the relative errors in \(|Q_{n}|\) mostly stay above 10% (blue curves in Fig. 5 (a)), and the phase errors can go up to around 200 degrees (blue curves in Fig. 5 (b)). For the cases with \(U_{\rm p}=2\times 10^{3}\hbar\omega\), the relative errors in \(|Q_{n}|\) are also mostly above 10% for the 40th-order sideband (red dash-dotted curve in Fig. 5 (a)), and the phase errors can get to about 40 degrees for
Figure 4: The linear-in-time approximation for the dimensionless sideband amplitude \(Q_{n}\). (a) and (b) compare respectively the absolute values and phases of \(Q_{n}\) calculated by numerical integration (blue curves) to the results from the linear-in-time approximation. The black curves in (c) and (d) show respectively relative errors in \(|Q_{n}|\) and absolute errors in the phases of \(Q_{n}\) in the linear-in-time approximation. In the calculation, we use detuning \(\Delta=0\), dephasing constant \(\Gamma=5\hbar\omega\), and ponderomotive energy \(U_{\rm p}=2\times 10^{3}\hbar\omega\).
the 10th-order sideband (red solid curve in Fig. 5 (b)). In contrast to the results in Fig. 3 (a) and (b), which concern the accuracy of the saddle-point approximation, large ponderomotive energy is favored for achieving high accuracy in the linear-in-time approximation. Fig. 5 (c) and (d) show respectively the relative errors in \(|Q_{n}|\) and the absolute errors in the phases of \(Q_{n}\) as functions of the ponderomotive energy \(U_{\rm p}\) with the dephasing constant \(\Gamma\) fixed at \(\hbar\omega\) (blue curves), \(5\hbar\omega\) (red curves), and \(20\hbar\omega\) (black curves). In the limit of large ponderomotive energy, both the relative errors in \(|Q_{n}|\) and the phase errors match the results in Fig. 3 (c) and (d). As the ponderomotive energy gets smaller, the accuracy of the linear-in-time approximation for the cases with relatively high sideband indices and strong dephasing gradually becomes lower than the limits set by the saddle-point approximation. Several dips corresponding to sign changes in the differences are also seen in Fig. 5 (a), (b), and (d).
In order to obtain an algebraic form with higher accuracy, we introduce corrections up to the order of \((\hbar\omega/U_{\rm p})^{3/4}\) to the creation time \(t^{\prime}_{n}\), the recollision time \(t_{n}\), and the time duration \(\tau_{n}\), which read (see Appendix D for the derivation)
\[\omega\tilde{t}^{\prime}_{n}= (\frac{2\hbar\omega}{9U_{\rm p}})^{1/4}\frac{2\zeta_{0}-\zeta_{ n}}{\sqrt{\zeta_{n}-\zeta_{0}}}+(\frac{2\hbar\omega}{9U_{\rm p}})^{3/4}\] \[\frac{23\zeta_{0}^{2}(2\zeta_{0}-3\zeta_{n})+\zeta_{n}^{2}(30 \zeta_{0}-17\zeta_{n})}{120(\zeta_{n}-\zeta_{0})^{3/2}}, \tag{30}\]
\[\omega\tilde{t}_{n}= (\frac{2\hbar\omega}{9U_{\rm p}})^{1/4}\frac{2\zeta_{n}-\zeta_{ 0}}{\sqrt{\zeta_{n}-\zeta_{0}}}-(\frac{2\hbar\omega}{9U_{\rm p}})^{3/4}\] \[\frac{\zeta_{0}^{2}(17\zeta_{0}-30\zeta_{n})+23\zeta_{n}^{2}(3 \zeta_{0}-2\zeta_{n})}{120(\zeta_{n}-\zeta_{0})^{3/2}}, \tag{31}\]
\[\omega\tau_{n}= (\frac{18\hbar\omega}{U_{\rm p}})^{1/4}\sqrt{\zeta_{n}-\zeta_{0}}\] \[+(\frac{18\hbar\omega}{U_{\rm p}})^{3/4}\frac{7(\zeta_{0}^{2}+ \zeta_{n}^{2})-4\zeta_{0}\zeta_{n}}{360\sqrt{\zeta_{n}-\zeta_{0}}}. \tag{32}\]
Including a corresponding correction to the semiclassical action \(S^{(t,\tau)}_{\rm sc}(t_{n},\tau_{n})\), we arrive at a new algebraic form,
\[\mathbb{P}_{n}\approx 2i^{n}\mathbf{C}\exp\{i[q_{1/4}(n,\frac{i\Gamma+\Delta}{\hbar \omega})(\frac{\hbar\omega}{U_{\rm p}})^{1/4}\] \[+q_{3/4}(n,\frac{i\Gamma+\Delta}{\hbar\omega})(\frac{\hbar \omega}{U_{\rm p}})^{3/4}]\}\] \[(\frac{U_{\rm p}}{\hbar\omega})^{\frac{D-2}{8}}\frac{\exp[-i\arg[ q_{0}(n,\frac{i\Gamma+\Delta}{\hbar\omega})]/2]}{\sqrt{|q_{0}(n,\frac{i\Gamma+ \Delta}{\hbar\omega})|}}, \tag{33}\]
which contains a new function,
\[q_{3/4}(n,\frac{i\Gamma+\Delta}{\hbar\omega})= (\frac{1}{18})^{1/4}\frac{1}{1260\sqrt{\zeta_{n}-\zeta_{0}}}[103( \zeta_{n}^{2}-\zeta_{0}^{2})^{2}\] \[+232\zeta_{0}\zeta_{n}(\zeta_{0}^{2}+\zeta_{n}^{2})-184\zeta_{0}^ {2}\zeta_{n}^{2}]. \tag{34}\]
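Continuing the numerical sketch from before (same \(\hbar\omega=1\) units, same helpers, our function names), the corrected form, Eqs. 33-34, adds only the \(q_{3/4}\) term to the phase:

```python
def Q_n(n, Gamma, Delta, Up, D=3):
    # algebraic form with the (hbar*omega/Up)^(3/4) correction, Eqs. 33-34
    z0, zn = zeta(0, Gamma, Delta), zeta(n, Gamma, Delta)
    q14 = ((2.0 / 9.0) ** 0.25 * 4.0 * np.sqrt(zn - z0) / 5.0
           * (2.0 * z0**2 + z0 * zn + 2.0 * zn**2))
    q34 = ((1.0 / 18.0) ** 0.25 / (1260.0 * np.sqrt(zn - z0))
           * (103.0 * (zn**2 - z0**2) ** 2
              + 232.0 * z0 * zn * (z0**2 + zn**2)
              - 184.0 * z0**2 * zn**2))
    q0 = (-np.sqrt(32.0 * (3.0 * np.sqrt(2.0)) ** D)
          * z0 * zn * (zn - z0) ** ((D + 2) / 2))
    phase = q14 * Up ** -0.25 + q34 * Up ** -0.75
    return (2.0 * 1j**n * np.exp(1j * phase) * Up ** ((D - 2) / 8)
            * np.exp(-0.5j * np.angle(q0)) / np.sqrt(abs(q0)))
```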
In parallel with the accuracy analysis shown in Fig. 3 and Fig. 5, we compute the errors in the dimensionless sideband amplitudes \(Q_{n}\) using the new algebraic form, Eq. 33. As shown in Fig. 6 (a), the relative errors in the absolute values of \(Q_{n}\) for the cases with relatively small dephasing are close to the limits set by the saddle-point approximation. For sufficiently strong dephasing, the relative errors in \(|Q_{n}|\) stay below 10% except for the case with sideband index \(n=10\) and ponderomotive energy \(U_{\rm p}=2\times 10^{2}\hbar\omega\) (solid blue curve in Fig. 6 (a)). The absolute errors in the phases of \(Q_{n}\) are mostly less than 5 degrees for the three selected values of ponderomotive energy (Fig. 6 (b)). Even for the cases with \(U_{\rm p}=2\times 10^{2}\hbar\omega\), the phase errors stay below 20 degrees (blue curves in Fig. 6 (b)). As shown in Fig. 6 (c) and (d), both the relative errors in \(|Q_{n}|\) and the phase errors approach the results in Fig. 3 (c) and (d) for a wide range of relatively large ponderomotive energies. The relative errors in \(|Q_{n}|\) are mostly below 10% for the selected cases with moderate dephasing (red and black curves in Fig. 6 (c)), while
Figure 5: The accuracy of the linear-in-time approximation for the dimensionless sideband amplitude \(Q_{n}\). (a) and (b) show respectively the relative errors in \(|Q_{n}|\) and absolute errors in the phases as functions of the dephasing constant \(\Gamma\) with ponderomotive energy \(U_{\rm p}\) fixed at \(2\times 10^{2}\hbar\omega\) (blue curves), \(2\times 10^{3}\hbar\omega\) (red curves), and \(2\times 10^{4}\hbar\omega\) (black curves). (c) and (d) show respectively the relative errors in \(|Q_{n}|\) and absolute errors in the phases as functions of the ponderomotive energy \(U_{\rm p}\) with the dephasing constant \(\Gamma\) fixed at \(\hbar\omega\) (blue curves), \(5\hbar\omega\) (red curves), and \(20\hbar\omega\) (black curves). The results for sideband indices \(n=10\) and \(n=40\) are plotted as solid and dash-dotted curves, respectively. Zero detunings are used for all cases.
the phase errors are less than 10 degrees for all three selected dephasing cases (Fig. 6 (d)). This remarkable suppression of the errors by the correction term ends our derivation of the algebraic forms for the sideband polarization vectors.
## IV Nonzero detunings
To finalize our tailoring of the Feynman path integrals, we discuss the effects of nonzero detunings in this section. From the saddle-point equations, we see that the solution of the saddle points depends on the detuning through the kinetic energy \(E_{\rm eh}[k_{n}(t^{\prime\prime})]\) at the creation time \(t^{\prime}_{n}\) and the recollision time \(t_{n}\). An example of the semiclassical recollision pictures associated with a dephasing constant \(\Gamma=5\hbar\omega\) and a negative detuning \(\Delta=-2\hbar\omega\) is shown in Fig. 1 (h) and (i) (dashed curves). The nonzero detuning further distorts the curves representing the complex electron-hole separation. As a new feature of the complex kinetic energy, the real part starts from the value of the detuning \(\Delta\) and ends at the sideband offset energy plus the detuning, \(n\hbar\omega+\Delta\). For the derivation of the two algebraic forms, Eq. 27 and 33, we have seen from previous discussions that the role of the detuning \(\Delta\) has no essential difference from that of the dephasing constant \(\Gamma\), since the sideband amplitudes depend on \(\Gamma\) and \(\Delta\) through analytic functions of the complex variable \(i\Gamma+\Delta\). However, the question remains how the accuracy of the linear-in-time approximation depends on the detuning.
To quantify the dependence of the accuracy of the linear-in-time approximation on the detuning, we compute the errors in the dimensionless sideband amplitudes \(Q_{n}\) as functions of the dephasing constant \(\Gamma\in[1,40]\hbar\omega\) and the detuning \(\Delta\in[-20,20]\hbar\omega\), with the ponderomotive energy \(U_{\rm p}\) fixed at three representative values, \(2\times 10^{2}\hbar\omega\), \(2\times 10^{3}\hbar\omega\), and \(2\times 10^{4}\hbar\omega\). Fig. 7 and 8 show respectively the relative errors in the absolute values of \(Q_{n}\) and the absolute errors in the phases of \(Q_{n}\) for sideband index \(n=40\) (the results for \(n=20,30\) are shown in Fig. 10-13 in Appendix E). In each of the two figures, the errors in \(Q_{n}\) calculated by using Eq. 27 (Eq. 33) are presented in the left (right) column. As shown in Fig. 7 (a), for the cases with \(U_{\rm p}=2\times 10^{2}\hbar\omega\), the relative errors in \(|Q_{n}|\) calculated by using Eq. 27 are greater than 50% in more than half of the parameter space investigated. As the ponderomotive energy increases to \(2\times 10^{3}\hbar\omega\), the relative errors in \(|Q_{n}|\) are mostly less than 20% (Fig. 7 (b)). For the cases with \(U_{\rm p}=2\times 10^{4}\hbar\omega\), the relative errors in \(|Q_{n}|\) stay below 10% and can go even below 5% in most of the parameter space (Fig. 7 (c)). The correction term in Eq. 33 greatly suppresses the relative errors in \(|Q_{n}|\), as shown in Fig. 7 (d), (e), and (f). The relative errors in \(|Q_{n}|\) calculated by using Eq. 33 can already go below 5%
Figure 7: The accuracy of the linear-in-time approximation for the absolute values of the dimensionless sideband amplitudes \(Q_{40}\) with varying dephasing and detuning. Left (Right) column: the relative errors in \(|Q_{40}|\) without (with) a higher-order correction. The values of the ponderomotive energy \(U_{\rm p}\) are chosen as \(2\times 10^{2}\hbar\omega\) ((a) and (d)), \(2\times 10^{3}\hbar\omega\) ((b) and (e)), and \(2\times 10^{4}\hbar\omega\) ((c) and (f)).
Figure 6: The accuracy of the linear-in-time approximation with a higher-order correction for the dimensionless sideband amplitude \(Q_{n}\). (a) and (b) show respectively the relative errors in \(|Q_{n}|\) and absolute errors in the phases as functions of the dephasing constant \(\Gamma\) with ponderomotive energy \(U_{\rm p}\) fixed at \(2\times 10^{2}\hbar\omega\) (blue curves), \(2\times 10^{3}\hbar\omega\) (red curves), and \(2\times 10^{4}\hbar\omega\) (black curves). (c) and (d) show respectively the relative errors in \(|Q_{n}|\) and absolute errors in the phases as functions of the ponderomotive energy \(U_{\rm p}\) with the dephasing constant \(\Gamma\) fixed at \(\hbar\omega\) (blue curves), \(5\hbar\omega\) (red curves), and \(20\hbar\omega\) (black curves). The results for sideband indices \(n=10\) and \(n=40\) are plotted as solid and dash-dotted curves, respectively. Zero detunings are used for all cases.
in a wide range of dephasing constants and detunings for the cases with \(U_{\rm p}=2\times 10^{2}\hbar\omega\) (Fig. 7 (d)). For the cases with the other two selected larger ponderomotive energies, the relative errors in \(|Q_{n}|\) stay below \(5\%\) in almost the whole parameter space (Fig. 7 (e) and (f)). As shown in Fig. 8, the suppression of the phase errors of \(Q_{n}\) by the correction term in Eq. 33 is also remarkable. For the cases with \(U_{\rm p}=2\times 10^{2}\hbar\omega\), the phase errors calculated by using Eq. 27 range from below 20 degrees to as large as 140 degrees in the parameter space investigated (Fig. 8 (a)). For the cases with \(U_{\rm p}=2\times 10^{3}\hbar\omega\), the phase errors are mostly below 10 degrees (Fig. 8 (b)). As the ponderomotive energy increases to \(2\times 10^{4}\hbar\omega\), the phase errors are mostly less than 5 degrees (Fig. 8 (c)). In contrast, the phase errors calculated by using Eq. 33 stay below 15 degrees in almost the whole parameter space shown in Fig. 8 (d) for the cases with \(U_{\rm p}=2\times 10^{2}\hbar\omega\). For the cases with the other two selected larger ponderomotive energies, the phase errors are mostly less than 2.5 degrees, as shown in Fig. 8 (e) and (f). The results are similar for two- and three-dimensional cases (\(D=2,3\)) (see Fig. 14-17 in Appendix E). We thus see that the algebraic form, Eq. 33, is suitable for describing relatively low orders of sidebands in a wide range of parameters that are experimentally accessible.
## V Feynman-path interferometer
A straightforward application of our algebraic forms is to guide the control of the sideband amplitudes. The pump NIR laser does not need to be monochromatic. For instance, one can build up an interferometer using a NIR laser field with two central frequencies separated by an even multiple of the THz frequency \(\omega\), \(\mathbf{E}_{\rm NIR}(t^{\prime})=\mathbf{F}_{\rm NIR}[1+\rho_{21}e^{-i(2N\omega t^{\prime}-\varphi_{21})}]e^{-i\Omega t^{\prime}}\), where \(N\) is an integer, and the real parameters \(\rho_{21}\) and \(\varphi_{21}\) control respectively the relative strength and phase delay between the two frequency components. The two sets of sidebands produced respectively by the two frequency components of the NIR laser are located at the same frequencies, and thus interference occurs at each of the sideband frequencies. When the linear-in-time approximation is valid, as discussed earlier, each sideband amplitude generated by a monochromatic NIR laser is dominated by a shortest electron-hole recollision pathway within half a period of the THz field. Therefore, this interference can also be considered as the interference between two electron-hole recollision pathways. By using the algebraic form, Eq. 33, the resulting sideband polarization vector at frequency \(\Omega+n\omega\) (\(n\) is an even integer) can be written as
\[\mathbb{P}(\Omega+n\omega) \approx\mathbf{C}[Q_{n}(\frac{i\Gamma+\Delta}{\hbar\omega},\frac{ U_{\rm p}}{\hbar\omega})\] \[+\rho_{21}e^{i\varphi_{21}}Q_{n-2N}(\frac{i\Gamma+\Delta}{\hbar \omega}+2N,\frac{U_{\rm p}}{\hbar\omega})], \tag{35}\]
which contains the detuning \(\Delta=\hbar\Omega-E_{\rm g}\), and the dimensionless sideband amplitude in the form,
\[Q_{n}(\frac{i\Gamma+\Delta}{\hbar\omega}, \frac{U_{\rm p}}{\hbar\omega})=2i^{n}\exp\{i[q_{1/4}(n,\frac{i \Gamma+\Delta}{\hbar\omega})(\frac{\hbar\omega}{U_{\rm p}})^{1/4}\] \[+q_{3/4}(n,\frac{i\Gamma+\Delta}{\hbar\omega})(\frac{\hbar\omega }{U_{\rm p}})^{3/4}]\}\] \[(\frac{U_{\rm p}}{\hbar\omega})^{\frac{D-2}{8}}\frac{\exp[-i\arg[q _{0}(n,\frac{i\Gamma+\Delta}{\hbar\omega})]/2]}{\sqrt{|q_{0}(n,\frac{i\Gamma+ \Delta}{\hbar\omega})|}}. \tag{36}\]
By varying the phase delay \(\varphi_{21}\), the intensity of the sideband can be tuned between the values
\[I_{n,\pm}=I_{n,0}[1\pm\rho_{21}\frac{|Q_{n-2N}(\frac{i\Gamma+\Delta}{\hbar \omega}+2N,\frac{U_{\rm p}}{\hbar\omega})|}{|Q_{n}(\frac{i\Gamma+\Delta}{ \hbar\omega},\frac{U_{\rm p}}{\hbar\omega})|}]^{2}, \tag{37}\]
where \(I_{n,0}\) is the sideband intensity when the second frequency component is switched off (\(\rho_{21}=0\)). The maximal sideband intensity is obtained when the two recollision pathways are in phase such that
\[\arg[Q_{n-2N}(\frac{i\Gamma+\Delta}{\hbar\omega}+2N,\frac{U_{\rm p }}{\hbar\omega})]+\varphi_{21}\] \[= \arg[Q_{n}(\frac{i\Gamma+\Delta}{\hbar\omega},\frac{U_{\rm p}}{ \hbar\omega})]\,(\,\mathrm{mod}\,2\pi). \tag{38}\]
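As a concrete illustration (ours, using the `Q_n` sketch above and the same \(\hbar\omega=1\) units), the two-pathway interference of Eqs. 35-38 can be scanned over the phase delay \(\varphi_{21}\):

```python
def interferometer_intensity(n, N, rho21, phi21, Gamma, Delta, Up, D=3):
    # Eq. 35: interference of the two recollision pathways created by the two
    # NIR frequency components separated by 2*N*omega (n, N integers, n even);
    # returns the dimensionless sideband intensity
    A = Q_n(n, Gamma, Delta, Up, D)
    B = Q_n(n - 2 * N, Gamma, Delta + 2 * N, Up, D)
    return abs(A + rho21 * np.exp(1j * phi21) * B) ** 2

def optimal_phase_delay(n, N, Gamma, Delta, Up, D=3):
    # the in-phase condition, Eq. 38, which maximizes the sideband intensity
    A = Q_n(n, Gamma, Delta, Up, D)
    B = Q_n(n - 2 * N, Gamma, Delta + 2 * N, Up, D)
    return np.mod(np.angle(A) - np.angle(B), 2.0 * np.pi)
```

Note that the argument shift \((i\Gamma+\Delta)/(\hbar\omega)\rightarrow(i\Gamma+\Delta)/(\hbar\omega)+2N\) in Eq. 35 is implemented above as a detuning shifted by \(2N\hbar\omega\).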
Such an interferometer can be used to extract the dephasing constant \(\Gamma\), the bandgap \(E_{\rm g}\) in the detuning \(\Delta\)
Figure 8: The accuracy of the linear-in-time approximation for the phases of the dimensionless sideband amplitudes \(Q_{40}\) with varying dephasing and detuning. Left (Right) column: the absolute errors in the phases of \(Q_{40}\) without (with) a higher-order correction. The values of the ponderomotive energy \(U_{\rm p}\) are chosen as \(2\times 10^{2}\hbar\omega\) ((a) and (d)), \(2\times 10^{3}\hbar\omega\) ((b) and (e)), and \(2\times 10^{4}\hbar\omega\) ((c) and (f)).
and the reduced mass \(\mu\) in the ponderomotive energy \(U_{\rm p}\). By measuring the maximal and minimal relative sideband intensities \(I_{n,\pm}/I_{n,0}\) and the corresponding phase delays \(\varphi_{21}\), two algebraic relations between the parameters \(i\Gamma+\Delta\) and \(U_{\rm p}\) follow from Eq. 37 and 38. To determine the three real parameters, \(\Gamma\), \(\Delta\), and \(U_{\rm p}\), at least one additional equation is required, which can be obtained by adding a third frequency component to the NIR laser field. Although the absolute sideband intensity \(I_{n,0}\) also contains information on the parameters \(i\Gamma+\Delta\) and \(U_{\rm p}\), determining \(I_{n,0}\) involves additional complexities such as modeling the propagation of the NIR laser and sideband fields through the optical setup. The absolute sideband intensity might also include a significant enhancement factor from the electron-hole Coulomb interaction [86], which is outside the scope of this paper.
## VI Extracting material parameters by varying the THz field strength
The dependence of the sideband intensities on the THz field strength [36] provides a simpler way of extracting the dephasing constant \(\Gamma\) and the reduced mass \(\mu\) with a monochromatic NIR laser field. In cases where the algebraic form, Eq. 33, is valid, measuring intensities \(I_{n}^{F_{1}}\) and \(I_{n}^{F_{2}}\) of the \(n\)th-order sideband respectively for two THz field strengths \(F_{\rm max,1}\) and \(F_{\rm max,2}=\lambda F_{\rm max,1}\) yields an algebraic equation for the parameters \(i\Gamma+\Delta\) and \(U_{\rm p}\),
\[\sqrt{\frac{I_{n}^{F_{2}}}{I_{n}^{F_{1}}}}=\frac{|Q_{n}^{F_{2}}|}{|Q_{n}^{F_{ 1}}|}= \lambda^{\frac{D-2}{4}}\exp[(1-\lambda^{-\frac{1}{2}})x_{1/4}\] \[+(1-\lambda^{-\frac{3}{2}})x_{3/4}], \tag{39}\]
where we denote \(Q_{n}^{F_{s}}\equiv Q_{n}((i\Gamma+\Delta)/(\hbar\omega),U_{\rm p}^{F_{s}}/(\hbar\omega))\) (\(s=1,2\)) and \(x_{l}\equiv{\rm Im}[q_{l}(n,(i\Gamma+\Delta)/(\hbar\omega))](\hbar\omega/U_{\rm p}^{F_{1}})^{l}\) (\(l=1/4,3/4\)), with \(U_{\rm p}^{F_{s}}\equiv e^{2}F_{\rm max,s}^{2}/(4\mu\omega^{2})\) being the ponderomotive energy corresponding to the THz field strength \(F_{\rm max,s}\). Taking the logarithm on both sides of the equation, we obtain an equation linear in the variables \(x_{1/4}\) and \(x_{3/4}\),
\[(1-\lambda^{-\frac{1}{2}})x_{1/4}+(1-\lambda^{-\frac{3}{2}})x_{3/4}\] \[= \frac{1}{2}\ln\frac{I_{n}^{F_{2}}}{I_{n}^{F_{1}}}-\frac{D-2}{4} \ln\lambda. \tag{40}\]
Measuring the sideband intensities for three different THz field strengths produces two such equations, which can be easily solved for \(x_{1/4}\) and \(x_{3/4}\). The reduced mass can then be calculated as
\[\mu=\frac{e^{2}F_{\rm max,1}^{2}}{4\hbar\omega^{3}}\frac{x_{1/4}^{4}}{\{{\rm Im }[q_{1/4}(n,\frac{i\Gamma+\Delta}{\hbar\omega})]\}^{4}}, \tag{41}\]
where the parameter \(i\Gamma+\Delta\) satisfies the algebraic equation
\[\frac{x_{1/4}^{3}}{x_{3/4}}=\frac{\{{\rm Im}[q_{1/4}(n,\frac{i\Gamma+\Delta}{ \hbar\omega})]\}^{3}}{{\rm Im}[q_{3/4}(n,\frac{i\Gamma+\Delta}{\hbar\omega})]}. \tag{42}\]
If the detuning \(\Delta\) is known, one can easily extract the dephasing constant \(\Gamma\) from Eq. 42 and then calculate the reduced mass \(\mu\) using Eq. 41. The whole extraction procedure can still be applied even if the dephasing constant \(\Gamma\) depends on the sideband index \(n\). The applicability of the procedure relies on the premise that the theory agrees with experiments. Depending on the complexities in real experiments, modifications of our theory might be necessary. For example, in the presence of multiple dephasing mechanisms, a theory with a dephasing constant might not be able to explain the experimentally observed fall-offs of sideband intensities [33]. A possible modification is to replace the dephasing factor \(\Gamma\tau\) in the action \(S_{n}({\bf P},t,\tau)\) in Eq. 3 by an integral \(\int_{t-\tau}^{t}dt^{\prime\prime}\Gamma[{\bf k}(t^{\prime\prime})]\) with \(\Gamma\) becoming a function of the kinetic momentum \({\bf k}\). Whether the saddle-point analysis in this paper still applies after such a modification is an interesting question to be explored in future works.
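In code, the extraction procedure amounts to one small linear solve plus one scalar root search. A sketch (ours; the root bracket is an assumption, and the `zeta` helper from earlier is reused):

```python
import numpy as np
from scipy.optimize import brentq

def solve_x(I21, I31, lam2, lam3, D=3):
    # Eq. 40 for two strength ratios lam2 = F_max,2/F_max,1 and
    # lam3 = F_max,3/F_max,1: a 2x2 linear system for x_{1/4} and x_{3/4};
    # I21 = I_n^{F2}/I_n^{F1} and I31 = I_n^{F3}/I_n^{F1} are intensity ratios
    A = np.array([[1.0 - lam2 ** -0.5, 1.0 - lam2 ** -1.5],
                  [1.0 - lam3 ** -0.5, 1.0 - lam3 ** -1.5]])
    b = np.array([0.5 * np.log(I21) - 0.25 * (D - 2) * np.log(lam2),
                  0.5 * np.log(I31) - 0.25 * (D - 2) * np.log(lam3)])
    return np.linalg.solve(A, b)

def extract_gamma(x14, x34, n, Delta, bracket=(1e-3, 50.0)):
    # Eq. 42 with the detuning known: root-find Gamma (in units of
    # hbar*omega); the bracket must enclose the physical root, and Eq. 41
    # then yields the reduced mass from x14 and the recovered Gamma
    def residual(G):
        z0, zn = zeta(0, G, Delta), zeta(n, G, Delta)
        q14 = ((2.0 / 9.0) ** 0.25 * 4.0 * np.sqrt(zn - z0) / 5.0
               * (2.0 * z0**2 + z0 * zn + 2.0 * zn**2))
        q34 = ((1.0 / 18.0) ** 0.25 / (1260.0 * np.sqrt(zn - z0))
               * (103.0 * (zn**2 - z0**2) ** 2
                  + 232.0 * z0 * zn * (z0**2 + zn**2)
                  - 184.0 * z0**2 * zn**2))
        return q14.imag ** 3 / q34.imag - x14 ** 3 / x34
    return brentq(residual, *bracket)
```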
For a multi-band system with more than one species of electron-hole pairs, interference of recollision pathways associated with different species of electron-hole pairs might provide extra equations to extract the bandgap \(E_{\rm g}\). Such interference can be investigated systematically through the dynamical Jones matrices [35], each of which maps the electric field of the NIR laser into a sideband polarization vector. In the basis of circular polarizations, \(\sigma_{\pm}\) with helicity \(\pm 1\) (\(\sigma_{\pm}=\pm(\hat{x}\pm i\hat{y})/\sqrt{2}\) for light fields propagating along the z-axis), we can reorganize Eq. 1 into the form,
\[\begin{pmatrix}P_{+,n}^{\rm HSG}\\ P_{-,n}^{\rm HSG}\end{pmatrix}=\mathcal{T}_{n}\begin{pmatrix}F_{+}^{\rm NIR}\\ F_{-}^{\rm NIR}\end{pmatrix}, \tag{43}\]
where \(P_{\pm,n}^{\rm HSG}\) and \(F_{\pm}^{\rm NIR}\) denote respectively the \(\sigma_{\pm}\) components of the sideband polarization vector \(\mathbb{P}_{n}\) and the vector \(\mathbf{F}_{\rm NIR}\) in the electric field of the NIR laser, and the dynamical Jones matrix \(\mathcal{T}_{n}\) is a two-by-two matrix. For a general constant dipole vector \(\mathbf{d}=d_{+}\sigma_{+}+d_{-}\sigma_{-}\), the dynamical Jones matrix \(\mathcal{T}_{n}\) can be written as
\[\mathcal{T}_{n}=\bar{C}\mu^{D/2}Q_{n}\begin{pmatrix}|d_{-}|^{2}&d_{-}^{*}d_{+} \\ d_{-}d_{+}^{*}&|d_{+}|^{2}\end{pmatrix}, \tag{44}\]
which includes the dimensionless sideband amplitude \(Q_{n}((i\Gamma+\Delta)/(\hbar\omega),U_{\rm p}/(\hbar\omega))\) and a constant
\[\bar{C}=\frac{-1}{\hbar\omega}e^{-i\pi D/4}(\frac{\omega}{2\pi\hbar})^{D/2}. \tag{45}\]
Due to time-reversal symmetry, each electron-hole pair is usually accompanied by another pair with a complex conjugate dipole vector. As a result, the dynamical Jones matrix in Eq. 44 is modified as
\[\mathcal{T}_{n}=\bar{C}\mu^{D/2}Q_{n}\begin{pmatrix}|{\bf d}|^{2}&2d_{-}^{*}d_{+ }\\ 2d_{-}d_{+}^{*}&|{\bf d}|^{2}\end{pmatrix}. \tag{46}\]
The dynamical Jones matrix for a simplest extension, where two species of electron-hole pairs move independently in their respective bands, can then be written as
\[\mathcal{T}_{n}=\bar{C}\sum_{j=1,2}\mu_{j}^{D/2}Q_{n}^{(j)}\begin{pmatrix}|\mathbf{ d}_{j}|^{2}&2d_{j,-}^{*}d_{j,+}\\ 2d_{j,-}d_{j,+}^{*}&|\mathbf{d}_{j}|^{2}\end{pmatrix}, \tag{47}\]
which explicitly shows how the recollision pathways associated with the two species of electron-hole pairs interfere with each other. We have labeled the two species of electron-hole pairs by \(j=1,2\), and denoted \(Q_{n}^{(j)}\equiv Q_{n}((i\Gamma_{j,n}+\Delta_{j})/(\hbar\omega),U_{p,j}/(\hbar\omega))\). Each species of electron-hole pair is assigned a reduced mass \(\mu_{j}\), a dephasing constant \(\Gamma_{j,n}\) depending on the sideband index \(n\), a detuning \(\Delta_{j}\), a ponderomotive energy \(U_{p,j}\equiv e^{2}F_{\mathrm{max}}^{2}/(4\mu_{j}\omega^{2})\), and a dipole vector \(\mathbf{d}_{j}=d_{j,+}\sigma_{+}+d_{j,-}\sigma_{-}\). Recent development of sideband polarimetry has enabled the determination of each dynamical Jones matrix up to a constant factor [35; 39]. The first row of the dynamical Jones matrix \(\mathcal{T}_{n}\) provides two linear equations with respect to the quantities \(\mu_{j}^{D/2}Q_{n}^{(j)}\) (\(j=1,2\)) associated with the two species of electron-hole pairs. The two linear equations have a unique solution if the dipole vectors \(\mathbf{d}_{j}\) (\(j=1,2\)) satisfy the condition of linear independence,
\[\frac{d_{1,-}^{*}d_{1,+}}{|\mathbf{d}_{1}|^{2}}\neq\frac{d_{2,-}^{*}d_{2,+}}{| \mathbf{d}_{2}|^{2}}. \tag{48}\]
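Numerically, inverting the first row of Eq. 47 is a two-by-two linear solve; the system is well-conditioned exactly when the linear-independence condition, Eq. 48, holds. A minimal sketch (ours, with hypothetical argument names):

```python
def species_amplitudes(T00, T01, d1, d2):
    # solve the first row of Eq. 47 for w_j = C_bar * mu_j^(D/2) * Q_n^(j);
    # d1 = (d_{1,+}, d_{1,-}) and d2 = (d_{2,+}, d_{2,-}) are the circular
    # dipole components, and T00, T01 are the measured first-row entries
    M = np.array([
        [abs(d1[0])**2 + abs(d1[1])**2, abs(d2[0])**2 + abs(d2[1])**2],
        [2.0 * np.conj(d1[1]) * d1[0], 2.0 * np.conj(d2[1]) * d2[0]],
    ], dtype=complex)
    return np.linalg.solve(M, np.array([T00, T01], dtype=complex))
```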
According to the discussion at the beginning of this section, with the absolute value of the quantity \(\mu_{j}^{D/2}Q_{n}^{(j)}\) determined up to a constant factor for three different THz field strengths, the algebraic form, Eq. 33, can be used to determine the reduced mass \(\mu_{j}\) and dephasing constant \(\Gamma_{j,n}\) as functions of the detuning \(\Delta_{j}\) (\(j=1,2\)). For a fixed THz field strength, taking the ratio \(\mu_{1}^{D/2}Q_{n}^{(1)}/(\mu_{2}^{D/2}Q_{n}^{(2)})\) yields a complex equation for the parameters \(i\Gamma_{j,n}+\Delta_{j}\) and \(U_{\mathrm{p,j}}\) (\(j=1,2\)),
\[\frac{\mu_{1}^{D/2}Q_{n}^{(1)}}{\mu_{2}^{D/2}Q_{n}^{(2)}}= (\frac{\mu_{1}}{\mu_{2}})^{\frac{3D+2}{8}}\sqrt{\frac{|q_{0}^{(2)}|}{|q_{0}^{(1)}|}}\exp\{i\frac{\arg[q_{0}^{(2)}]-\arg[q_{0}^{(1)}]}{2}\}\] \[\exp\{i[q_{1/4}^{(1)}(\frac{\hbar\omega}{U_{\mathrm{p,1}}})^{1/4}+q_{3/4}^{(1)}(\frac{\hbar\omega}{U_{\mathrm{p,1}}})^{3/4}\] \[-q_{1/4}^{(2)}(\frac{\hbar\omega}{U_{\mathrm{p,2}}})^{1/4}-q_{3/4}^{(2)}(\frac{\hbar\omega}{U_{\mathrm{p,2}}})^{3/4}]\}, \tag{49}\]
where we denote \(q_{l}^{(j)}\equiv q_{l}(n,(i\Gamma_{j,n}+\Delta_{j})/(\hbar\omega))\) with \(j=1,2\) and \(l=0,1/4,3/4\). By treating the reduced mass \(\mu_{j}\) and dephasing constant \(\Gamma_{j,n}\) as functions of the detuning \(\Delta_{j}\) determined for each species of the electron-hole pairs, Eq. 49 represents an algebraic relation between the two detunings \(\Delta_{1}\) and \(\Delta_{2}\). With the ratio \(\mu_{1}^{D/2}Q_{n}^{(1)}/(\mu_{2}^{D/2}Q_{n}^{(2)})\) for another THz field strength, we expect that the detunings, and thus the bandgap \(E_{\rm g}\), might be fully determined. We leave the question of the uniqueness of the solution from this procedure for future discussion.
## VII Discussion
### Connection with existing HSG experiments
Experimental observation of high-order sideband generation (HSG) has been reported in two classes of materials. The first class includes bulk gallium arsenide (GaAs) [32; 39] and GaAs-based quantum wells (QWs) [31; 33; 35; 36]. The second class includes bulk and monolayer tungsten diselenide (WSe\({}_{2}\)) [34; 37; 38; 40]. Our two-band model is appropriate for describing HSG in direct-gap materials such as narrow GaAs QWs [35] and monolayer WSe\({}_{2}\) [38], which have isolated parabolic bands near the bandgaps. Recent sideband-polarimetry experiments have also indicated that, when the NIR laser is near-resonant with the bandgap, HSG in bulk GaAs can be approximated as resulting from the interference of two electron-hole species that move independently in the THz field [39]. This means that our results can also be applied to describe HSG in bulk GaAs for the cases of near-resonant excitation by the NIR laser.
The large ponderomotive energies \(U_{\rm p}\) (in units of the THz photon energy \(\hbar\omega\)) required for the validity of our formulae have already been achieved for both classes of materials. In a recent HSG experiment in bulk GaAs [39], a THz field with a frequency \(f=\omega/(2\pi)=0.447\) THz and a field strength \(F_{\rm max}=70\) kV/cm is used, corresponding to values of \(U_{\rm p}/\hbar\omega\) of around 2500 and 3900, respectively, for the two species of electron-hole pairs associated with the two species of holes. The reduced masses for the two species of electron-hole pairs are taken respectively to be in the ranges \([0.057,0.061]m_{0}\) and \([0.037,0.038]m_{0}\) in the \(k_{x}\)-\(k_{y}\) plane, where \(m_{0}\) is the electron rest mass [90]. In a report of HSG in monolayer WSe\({}_{2}\) [38], a driving field with a frequency \(f=27\) THz and a field strength as high as 19 MV/cm is applied, corresponding to \(U_{\rm p}/\hbar\omega=291\) if the reduced mass is chosen as \(\mu=0.17m_{0}\) [91]. Therefore, according to the discussion in Section VI, experimental conditions are ready for testing our method of extracting the dephasing constant and reduced mass in monolayer WSe\({}_{2}\), and extracting the dephasing constants, the bandgap and reduced masses in bulk GaAs.
We expect our method can be used to extract dephasing constants and reduced masses in various direct-gap semiconducting and insulating materials that have isolated parabolic bands near the bandgaps. For direct-gap multi-band systems such as bulk GaAs, where two species of electron-hole pairs can be created and move independently in their respective bands, the bandgaps can also be extracted through our approach if the dipole vectors associated with the two electron-hole species satisfy Eq. 48.
### Hints for more complicated systems
In a general multi-band system, different electron-hole species can couple with each other while they are accelerated by the linearly-polarized THz field. In the limit of negligible carrier occupations, the sideband polarization vectors can still be expressed as Feynman path integrals under the approximation of free electrons and holes [35]. However, the coupling between different electron-hole species results in the presence of non-Abelian Berry curvatures, which makes the analysis of the Feynman path integrals with the saddle-point method very complicated [35]. It is still not clear whether HSG in such systems can be described quantitatively by the saddle-point method. If the saddle-point approximation still applies, then for sufficiently strong dephasing and sufficiently small kinetic energy gain, we expect that the semiclassical trajectories dominantly contributing to the sideband emission should still occur near the nodes of the THz field in order to achieve effective overlap between the electron and hole wavepackets, at least along the direction of the THz field. If this is true, one might be able to use the linear-in-time approximation to greatly simplify the analysis and reveal simple laws within the intricate HSG in multi-band systems with non-Abelian Berry curvature.
### Connection with HHG
Due to the similarity between HSG and the interband processes in high-harmonic generation (HHG), our results can also be useful in the analysis of HHG if the interband processes dominate. For the readers who are familiar with the semiconductor Bloch equations (SBEs) [41] but not the integral form of sideband polarization vectors, Eq. 1, we would like to mention that Eq. 1 results from a summation of the microscopic polarization \(p_{\mathbf{k}(t)}\) in the SBEs followed by a Fourier transform,
\[\mathbb{P}_{n}=\frac{1}{T_{\mathrm{THz}}}\int_{0}^{T_{\mathrm{ THz}}}dte^{i(\Omega+n\omega)t}\int\frac{d^{D}\mathbf{P}}{(2\pi)^{D}}\mathbf{d}^{* }p_{\mathbf{k}(t)}, \tag{50}\]
The microscopic polarization \(p_{\mathbf{k}(t)}\) has the form,
\[p_{\mathbf{k}(t)}= \frac{i}{\hbar}\int_{-\infty}^{t}dt^{\prime}\mathbf{d}\cdot \mathbf{E}_{\mathrm{NIR}}(t^{\prime})\] \[\exp\{-\frac{i}{\hbar}\int_{t^{\prime}}^{t}dt^{\prime\prime}(E_{ \mathrm{cv}}[\mathbf{k}(t^{\prime\prime})]-i\Gamma)\}, \tag{51}\]
which satisfies one of the SBEs in the limit of negligible carrier occupations,
\[i\hbar\frac{d}{dt}p_{\mathbf{k}(t)} = i\hbar\frac{\partial}{\partial t}p_{\mathbf{k}(t)}+i\hbar \dot{\mathbf{k}}(t)\cdot\frac{\partial}{\partial\mathbf{k}}p_{\mathbf{k}(t)} \tag{52}\] \[= (E_{\mathrm{cv}}[\mathbf{k}(t)]-i\Gamma)p_{\mathbf{k}(t)}- \mathbf{d}\cdot\mathbf{E}_{\mathrm{NIR}}(t),\]
where the Coulomb interaction is ignored and the scattering effects are described phenomenologically by the dephasing constant \(\Gamma\). In HSG, the kinetic momentum \(\hbar\mathbf{k}\) satisfies the equation of motion, \(\hbar\dot{\mathbf{k}}(t)=-e\mathbf{F}_{\mathrm{THz}}(t)\). By substituting the THz and NIR laser fields with a single laser field, Eq. 52 can also be used to describe the interband HHG in cases where the limit of negligible carrier occupations and the approximation of free electrons and holes still apply. For such cases, the interband polarization vectors are of the form
\[\mathbb{P}_{n}^{\mathrm{HHG}} = \frac{i}{\hbar}\frac{1}{T_{0}}\int_{0}^{T_{0}}dte^{i(n+1)\omega_{0}t}\int\frac{d^{D}\mathbf{P}}{(2\pi)^{D}}\int_{-\infty}^{t}dt^{\prime}\mathbf{d}^{*} \tag{53}\] \[\exp\{-\frac{i}{\hbar}\int_{t^{\prime}}^{t}dt^{\prime\prime}(E_{\mathrm{cv}}[\mathbf{k}(t^{\prime\prime})]-i\Gamma)\}\mathbf{d}\cdot\mathbf{F}_{0}(t^{\prime}),\]
where \(n\) is an even integer, and \(T_{0}=2\pi/\omega_{0}\) is the period of the driving laser field \(\mathbf{F}_{0}\). For a driving field of the form \(\mathbf{F}_{0}(t)=\hat{x}F_{\mathrm{max}}\cos(\omega_{0}t)\), the interband polarization \(\mathbb{P}_{n}^{\mathrm{HHG}}\) contains two terms corresponding to the sideband polarization vector \(\mathbb{P}_{n}\) in Eq. 1 with the substitutions, \(\mathbf{F}_{\mathrm{NIR}}\rightarrow\hat{x}F_{\mathrm{max}}/2\), \(\Omega\rightarrow\pm\omega_{0}\), \(\omega\rightarrow\omega_{0}\), \(n\rightarrow(n+1)\mp 1\) on the right-hand side of the equation. Therefore, our algebraic formulae for the sideband polarization vector \(\mathbb{P}_{n}\), Eq. 27, 28, 29, 33 and 34, can be directly applied in the analysis of the interband HHG under the aforementioned assumptions.
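To illustrate the substitution rule, a schematic sketch (ours, reusing the `Q_n` helper from above); whether the algebraic form remains quantitatively accurate at the large below-gap detunings that arise here, as well as the overall normalization, are assumptions to be checked against the accuracy analysis of the previous sections:

```python
def Pn_interband_hhg(n, Gamma, Eg, Up, D=3):
    # sketch of the substitutions below Eq. 53, with n even and all energies
    # in units of the driving photon energy hbar*omega0: the two terms come
    # from Omega -> +omega0 (n -> n) and Omega -> -omega0 (n -> n + 2), each
    # weighted by 1/2 from F_NIR -> F_max/2; the detunings Delta = +/-1 - Eg
    # are typically large and negative (below the gap)
    return 0.5 * (Q_n(n, Gamma, 1.0 - Eg, Up, D)
                  + Q_n(n + 2, Gamma, -1.0 - Eg, Up, D))
```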
## VIII Conclusion
In summary, we have introduced a linear-in-time approximation and derived an explicit formula for electron-hole recollisions in a prototypical two-band model by tailoring Feynman path integrals. Our formula connects the sideband amplitudes with the laser-field and material parameters in a highly nontrivial manner. Over a wide range of dephasing constants, detunings, and ponderomotive energies, we show that both the absolute values and phases of the sideband polarization vectors can be quantitatively described by our algebraic formula with high accuracy. We demonstrate a way to control the sideband amplitudes by building up a Feynman-path interferometer that can be used to extract the dephasing constant, the bandgap, and the reduced mass. We also propose a method of extracting the dephasing constant and the reduced mass by simple algebraic calculation from sideband intensities measured at three THz field strengths. For a multi-band system such as bulk GaAs near-resonantly excited by the NIR laser, we show the possibility of extracting the dephasing constants, the bandgap, and the reduced masses through algebraic calculations. We have also discussed how our approach can be useful for analyses of HSG in more complicated systems, as well as HHG when interband processes dominate.
## Acknowledgment
We thank J. B. Costello and S. D. O'Hara for stimulating discussions. This work is funded by NSF-DMR 2004995.
## Appendix A Saddle-point method
In this appendix, we illustrate the details of using the saddle-point method to calculate the sideband polarization vectors from Eq. 2. We will discuss the case where there is only one saddle point associated with each sideband.
We first expand the action \(S_{n}(\mathbf{P},t,\tau)\) into a Taylor series up to the second order in the variables, \(\mathbf{P}\), \(t\), and \(\tau\), around the saddle point \((\mathbf{P}_{n},t_{n},\tau_{n})\) for the \(n\)th-order sideband, \(S_{n}\approx S_{\text{sc}}(P_{n},t_{n},\tau_{n})+\delta^{2}S_{n}/2\), with a semiclassical action
\[S_{\text{sc}}(P_{n},t_{n},\tau_{n})= n\hbar\omega t_{n}-\int_{t_{n}-\tau_{n}}^{t_{n}}dt^{\prime\prime} \frac{\hbar^{2}}{2\mu}[P_{n}+\frac{e}{\hbar}A(t^{\prime\prime})]^{2}\] \[+i(\Gamma-i\Delta)\tau_{n}, \tag{10}\]
and a second-order term,
\[\delta^{2}S_{n}= -\frac{\hbar^{2}\tau_{n}}{\mu}(\mathbf{P}-P_{x}\hat{x})^{2}\] \[+\frac{\partial^{2}S_{\text{sc}}}{\partial P_{n}^{2}}\delta P^{2 }+2\delta\tau\frac{\partial^{2}S_{\text{sc}}}{\partial\tau_{n}\partial P_{n} }\delta P+2\delta t\frac{\partial^{2}S_{\text{sc}}}{\partial t_{n}\partial P_{ n}}\delta P\] \[+\frac{\partial^{2}S_{\text{sc}}}{\partial t_{n}^{2}}\delta t^{2 }+2\delta\tau\frac{\partial^{2}S_{\text{sc}}}{\partial\tau_{n}\partial t_{n} }\delta t+\frac{\partial^{2}S_{\text{sc}}}{\partial\tau_{n}^{2}}\delta\tau^{2}. \tag{11}\]
where \(\delta P=P_{x}-P_{n}\), \(\delta t=t-t_{n}\), and \(\delta\tau=\tau-\tau_{n}\). Note that the momentum \(\hbar\mathbf{P}_{n}\) is along the x-axis, as is obvious from the first saddle-point equation, Eq. 9. Extending the limits of the integrals to infinities, we obtain the following Gaussian integrals,
\[\mathbb{P}_{n}\approx \mathbf{d}^{*}\mathbf{d}\cdot\mathbf{F}_{\text{NIR}}\frac{i\omega }{\pi\hbar}\exp[\frac{i}{\hbar}S_{\text{sc}}(P_{n},t_{n},\tau_{n})]\int_{- \infty}^{+\infty}d\delta\tau\] \[\int_{-\infty}^{+\infty}d\delta t\int_{-\infty}^{+\infty}\frac{d ^{D}\mathbf{P}}{(2\pi)^{D}}\exp[\frac{i}{2\hbar}\delta^{2}S_{n}]. \tag{12}\]
To do the integrals, we first make the quadratic form \(\delta^{2}S\) diagonal. Introducing the variable
\[\bar{P}=\delta P-\frac{\partial f_{P_{n}}}{\partial t_{n}}\delta t-\frac{ \partial f_{P_{n}}}{\partial\tau_{n}}\delta\tau, \tag{13}\]
where \(f_{P_{n}}(t_{n},\tau_{n})\) is the solution of \(P_{n}\) from the saddle-point equation \(\partial_{P_{n}}S_{\text{sc}}(P_{n},t_{n},\tau_{n})=0\), we can write the second-order term \(\delta^{2}S\) in the form
\[\delta^{2}S_{n}= -\frac{\hbar^{2}\tau_{n}}{\mu}(\mathbf{P}-P_{x}\hat{x})^{2}+\frac{\partial^{2}S_{\text{sc}}}{\partial P_{n}^{2}}\bar{P}^{2}+\frac{\partial^{2}S_{\text{sc}}^{(t,\tau)}}{\partial t_{n}^{2}}\delta t^{2}\] \[+2\delta\tau\frac{\partial^{2}S_{\text{sc}}^{(t,\tau)}}{\partial\tau_{n}\partial t_{n}}\delta t+\frac{\partial^{2}S_{\text{sc}}^{(t,\tau)}}{\partial\tau_{n}^{2}}\delta\tau^{2}, \tag{14}\]
where \(S_{\text{sc}}^{(t,\tau)}(t_{n},\tau_{n})=S_{\text{sc}}(f_{P_{n}}(t_{n},\tau_{n}),t_{n},\tau_{n})\). Through a second change of variables, \(\bar{t}=\delta t-\partial_{\tau_{n}}f_{t_{n}}\delta\tau\), with \(f_{t_{n}}(\tau_{n})\) being the solution of \(t_{n}\) from \(\partial_{t_{n}}S_{\text{sc}}^{(t,\tau)}(t_{n},\tau_{n})=0\), we obtain the diagonal form
\[\delta^{2}S_{n}= -\frac{\hbar^{2}\tau_{n}}{\mu}(\mathbf{P}-P_{x}\hat{x})^{2}\] \[+\frac{\partial^{2}S_{\text{sc}}}{\partial P_{n}^{2}}\bar{P}^{2 }+\frac{\partial^{2}S_{\text{sc}}^{(t,\tau)}}{\partial t_{n}^{2}}\bar{t}^{2}+ \frac{\partial^{2}S_{\text{sc}}^{(\tau)}}{\partial\tau_{n}^{2}}\delta\tau^{2}, \tag{15}\]
where \(S_{\text{sc}}^{(\tau)}(\tau_{n})=S_{\text{sc}}^{(t,\tau)}(f_{t_{n}}(\tau_{n}), \tau_{n})\). The Gaussian integrals converge if \(\partial_{P_{n}}^{2}S_{\text{sc}}=-\hbar^{2}\tau_{n}/\mu\), \(\partial_{t_{n}}^{2}S_{\text{sc}}^{(t,\tau)}\), and \(\partial_{\tau_{n}}^{2}S_{\text{sc}}^{(\tau)}\) are all nonzero and their imaginary parts are all non-negative. Under these conditions, carrying out the Gaussian integrals yields
\[\mathbb{P}_{n}\approx 2\mathbf{C}\exp[\frac{i}{\hbar}S_{\text{sc}}^{(t,\tau)}(t_{n},\tau_{n})]\] \[\frac{e^{-(i/2)[D\arg(\tau_{n})+\arg(\partial_{(\omega t_{n})}^{2}S_{\text{sc}}^{(t,\tau)}/\hbar)+\arg(\partial_{(\omega\tau_{n})}^{2}S_{\text{sc}}^{(\tau)}/\hbar)]}}{\sqrt{|(\omega\tau_{n})^{D}[\partial_{(\omega t_{n})}^{2}S_{\text{sc}}^{(t,\tau)}/\hbar][\partial_{(\omega\tau_{n})}^{2}S_{\text{sc}}^{(\tau)}/\hbar]|}}, \tag{16}\]
which includes a constant vector
\[\mathbf{C}=\frac{-1}{\hbar\omega}e^{-i\pi D/4}(\frac{\mu\omega}{2\pi\hbar})^{D/ 2}\mathbf{d}^{*}\mathbf{d}\cdot\mathbf{F}_{\text{NIR}}. \tag{17}\]
We have eliminated \(P_{n}\) in the action \(S_{\text{sc}}(P_{n},t_{n},\tau_{n})\) using the solution of the saddle-point equation \(\partial_{P_{n}}S_{\text{sc}}(P_{n},t_{n},\tau_{n})=0\),
\[P_{n}=f_{P_{n}}(t_{n},\tau_{n})=-\frac{e}{\hbar\tau_{n}}\int_{t_{n}-\tau_{n}}^{t_{n}}dt^{\prime\prime}A(t^{\prime\prime}). \tag{18}\]
The explicit form of \(S_{\text{sc}}^{(t,\tau)}(t_{n},\tau_{n})\) reads
\[S_{\text{sc}}^{(t,\tau)}(t_{n},\tau_{n})\] \[= n\hbar\omega t_{n}+[i\Gamma+\Delta+U_{\text{p}}(\gamma^{2}(\omega \tau_{n})-1)]\tau_{n}\] \[+U_{\text{p}}\tau_{n}\alpha(\omega\tau_{n})\gamma(\omega\tau_{n}) \cos[\omega(\tau_{n}-2t_{n})], \tag{19}\]
where we have introduced the functions \(\alpha(x)=\cos(x/2)-\gamma(x)\) and \(\gamma(x)=\beta(x)/(x/2)\) with \(\beta(x)=\sin(x/2)\). The second saddle-point equation, \(\partial_{t_{n}}S_{\text{sc}}^{(t,\tau)}(t_{n},\tau_{n})=\partial_{t_{n}}S_{ \text{sc}}(P_{n},t_{n},\tau_{n})=0\), gives an implicit form of the function \(f_{t_{n}}(\tau_{n})\),
\[\sin[\omega(\tau_{n}-2f_{t_{n}})]=\frac{n\hbar\omega}{4U_{\text{p}}\alpha( \omega\tau_{n})\beta(\omega\tau_{n})}, \tag{20}\]
from which we can calculate the explicit forms of the derivatives \(\partial_{(\omega t_{n})}^{2}S_{\text{sc}}^{(t,\tau)}/\hbar\) and \(\partial_{(\omega\tau_{n})}^{2}S_{\text{sc}}^{(\tau)}/\hbar\) as
\[\frac{1}{\hbar}\frac{\partial^{2}S_{\text{sc}}^{(t,\tau)}}{\partial(\omega t_{n})^{2}}=2n\cot[\omega(\tau_{n}-2t_{n})].\]
To determine \(t_{n}\) and \(\tau_{n}\), one can use the implicit relation above, together with the third saddle-point equation \(\partial_{\tau_{n}}S^{(t,\tau)}_{\rm sc}(t_{n},\tau_{n})=\partial_{\tau_{n}}S_{\rm sc}(P_{n},t_{n},\tau_{n})=0\), which can be written as
\[\cos[\omega(\tau_{n}-2t_{n})]=\frac{\alpha^{2}(\omega\tau_{n})+\beta^{2}( \omega\tau_{n})-\xi}{\alpha^{2}(\omega\tau_{n})-\beta^{2}(\omega\tau_{n})}, \tag{14}\]
where \(\xi=[i\Gamma+\Delta+(n/2)\hbar\omega]/U_{\rm p}\).
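For reference, the full saddle-point equations can also be solved numerically with a standard root finder, and the linear-in-time solution of Sec. III provides a natural initial guess. A sketch (ours; it splits each complex unknown into real and imaginary parts, with energies in units of \(\hbar\omega\)):

```python
import numpy as np
from scipy.optimize import fsolve

def full_saddle_point(n, Gamma, Delta, Up, guess):
    # numerically solve the sin and cos saddle-point equations above for the
    # complex omega*t_n and omega*tau_n;
    # guess = [Re(omega*t_n), Im(omega*t_n), Re(omega*tau_n), Im(omega*tau_n)]
    beta = lambda x: np.sin(x / 2.0)
    gamma_f = lambda x: beta(x) / (x / 2.0)
    alpha = lambda x: np.cos(x / 2.0) - gamma_f(x)
    xi = (1j * Gamma + Delta + 0.5 * n) / Up
    def residuals(v):
        t, tau = v[0] + 1j * v[1], v[2] + 1j * v[3]
        a, b = alpha(tau), beta(tau)
        e1 = np.sin(tau - 2.0 * t) - n / (4.0 * Up * a * b)
        e2 = np.cos(tau - 2.0 * t) - (a**2 + b**2 - xi) / (a**2 - b**2)
        return [e1.real, e1.imag, e2.real, e2.imag]
    sol = fsolve(residuals, guess)
    return sol[0] + 1j * sol[1], sol[2] + 1j * sol[3]

# example seed: the linear-in-time solution from the lit_saddle_times sketch
# (times there are measured from the node omega*t = pi/2, hence the shift)
tc, tr, tau = lit_saddle_times(20, 5.0, 0.0, 2e3)
guess = [tr.real + np.pi / 2.0, tr.imag, tau.real, tau.imag]
t_n, tau_n = full_saddle_point(20, 5.0, 0.0, 2e3, guess)
```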
## Appendix B Analytic calculations
In this appendix, we perform analytic calculations to simplify the expression of the sideband polarization vectors, Eq. 1, into an integral over a single variable.
We consider a general polarization state for the THz field with a vector potential
\[\mathbf{A}(t)=-\Lambda\frac{F_{\rm max}}{\omega}[\cos\phi\sin( \omega t)\hat{x}+\sin\phi\sin(\omega t+\varphi)\hat{y}], \tag{15}\]
where \(\Lambda=\sqrt{2/(1+\sqrt{\kappa})}\) with \(\kappa=\cos^{2}\varphi+\cos^{2}(2\phi)\sin^{2}\varphi\) and \(\phi\in[0,\pi/2]\). Integrating out all canonical momentum components except for the one along the x- and y-axis, we write the sideband polarization vector in the form,
\[\mathbb{P}_{n}= \mathbf{C}\frac{2\pi\hbar}{\mu}\int_{0}^{T_{\rm THz}}\frac{dt}{T _{\rm THz}}e^{i(\Omega+n\omega)t}\int\frac{dP_{x}dP_{y}}{(2\pi)^{2}}\] \[\int_{0}^{+\infty}\frac{d\tau}{(\omega\tau)^{(D-2)/2}}\exp[\frac{ i}{\hbar}\mathbb{S}(P_{x},P_{y},t,\tau)], \tag{16}\]
where the action \(\mathbb{S}(P_{x},P_{y},t,\tau)\) is quadratic in both \(P_{x}\) and \(P_{y}\),
\[\mathbb{S}= -\hbar\Omega t-[\frac{\hbar^{2}(P_{x}^{2}+P_{y}^{2})}{2\mu}-i\Gamma-\Delta+\Lambda^{2}U_{\rm p}]\tau\] \[+\frac{2\Lambda\hbar eF_{\rm max}}{\mu\omega^{2}}\{P_{x}\cos\phi\sin\frac{\omega\tau}{2}\sin[\omega(\frac{\tau}{2}-t)]\] \[+P_{y}\sin\phi\sin\frac{\omega\tau}{2}\sin[\omega(\frac{\tau}{2}-t)-\varphi]\}\] \[+\frac{\Lambda^{2}U_{\rm p}}{\omega}\{\cos^{2}\phi\sin(\omega\tau)\cos[\omega(\tau-2t)]\] \[+\sin^{2}\phi\sin(\omega\tau)\cos[\omega(\tau-2t)-2\varphi]\}. \tag{17}\]
Integrating out \(P_{x}\) and \(P_{y}\) gives
\[\mathbb{P}_{n}= \mathbf{C}\frac{\omega}{i}\int_{0}^{T_{\rm THz}}\frac{dt}{T_{\rm THz }}e^{i(\Omega+n\omega)t}\] \[\int_{0}^{+\infty}\frac{d\tau}{(\omega\tau)^{D/2}}\exp[\frac{i}{ \hbar}\mathbb{S}^{(t,\tau)}(t,\tau)], \tag{18}\]
where
\[\mathbb{S}^{(t,\tau)}(t,\tau)= \sqrt{\kappa}\Lambda^{2}U_{\rm p}\tau\gamma(\omega\tau)\alpha( \omega\tau)\cos[\omega(\tau-2t)-\varphi+\eta]\] \[-\hbar\Omega t+\mathbb{S}^{(\tau)}(\omega\tau)\hbar\omega\tau, \tag{19}\]
with the functions \(\alpha\) and \(\gamma\) defined in Appendix A, \(\mathbb{S}^{(\tau)}(\omega\tau)\equiv(i\Gamma+\Delta)/(\hbar\omega)+\Lambda^{2}[U_{\rm p}/(\hbar\omega)][\gamma^{2}(\omega\tau)-1]\), and a constant \(\eta\) defined by \(\cos\eta=\cos\varphi/\sqrt{\kappa}\) and \(\sin\eta=\cos 2\phi\sin\varphi/\sqrt{\kappa}\). Using the identity with the Bessel functions of the first kind, \(J_{m}\),
\[e^{iz\cos\theta}=\sum_{m=-\infty}^{+\infty}J_{m}(z)i^{m}e^{im\theta}, \tag{20}\]
we arrive at a Fourier series,
\[\exp[\frac{i}{\hbar}\mathbb{S}^{(t,\tau)}(t,\tau)]\] \[= \sum_{m}e^{-i(\Omega+2m\omega)t}i^{m}J_{m}[\sqrt{\kappa}\Lambda^{ 2}\frac{U_{\rm p}}{\hbar}\tau\gamma(\omega\tau)\alpha(\omega\tau)]\] \[e^{im(\eta-\varphi)}\exp\{i[\mathbb{S}^{(\tau)}(\omega\tau)+m] \omega\tau\}, \tag{21}\]
from which we can immediately see that the sideband amplitudes are identically zero for odd sideband indices, while for even sideband indices, we obtain the following integral form,
\[\mathbb{P}_{n}= \mathbf{C}i^{n/2-1}\int_{0}^{+\infty}\frac{d(\omega\tau)}{(\omega\tau)^{D/2}}J_{n/2}[\sqrt{\kappa}\Lambda^{2}\frac{U_{\rm p}}{\hbar\omega}\omega\tau\gamma(\omega\tau)\alpha(\omega\tau)]\] \[e^{i(n/2)(\eta-\varphi)}\exp\{i[\mathbb{S}^{(\tau)}(\omega\tau)+n/2]\omega\tau\}. \tag{22}\]
For circularly polarized THz fields, we have \(\phi=\pi/4\) and \(\varphi=\pm\pi/2\) so that \(\kappa=0\), which implies that the sideband amplitudes are identically zero since the Bessel functions of nonzero integer orders satisfy \(J_{n}(0)=0\).
For a linearly polarized THz field with vector potential \(\mathbf{A}=-(F_{\rm max}/\omega)\sin(\omega t)\hat{x}\), we have \(\eta=\varphi=0\) and \(\kappa=\Lambda=1\); thus Eq. 22 can be simplified as
\[\mathbb{P}_{n}= \mathbf{C}i^{n/2-1}\int_{0}^{+\infty}\frac{d(\omega\tau)}{(\omega \tau)^{D/2}}J_{n/2}[\frac{U_{\rm p}}{\hbar\omega}\omega\tau\gamma(\omega\tau) \alpha(\omega\tau)]\] \[\exp\{i[\mathbb{S}^{(\tau)}(\omega\tau)+n/2]\omega\tau\}, \tag{23}\]
with \(\mathbb{S}^{(\tau)}(\omega\tau)=(i\Gamma+\Delta)/(\hbar\omega)+[U_{\rm p}/( \hbar\omega)][\gamma^{2}(\omega\tau)-1]\).
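This single-variable integral is what the comparisons in the main text evaluate numerically for \(Q_{n}=\mathbb{P}_{n}\cdot\mathbf{C}/|\mathbf{C}|^{2}\). A transparent (if not highly efficient) Python sketch, assuming \(n\) even, \(\Gamma>0\), and energies in units of \(\hbar\omega\); the cutoff and quadrature settings are assumptions, and a production implementation would use a dedicated oscillatory-quadrature routine:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import jv

def Q_n_numeric(n, Gamma, Delta, Up, D=3, cutoff=300.0):
    # direct numerical integration of the last equation above for a linearly
    # polarized THz field; the factor exp(-Gamma * omega*tau) from the
    # dephasing makes the finite upper cutoff harmless
    gamma_f = lambda x: np.sin(x / 2.0) / (x / 2.0)
    alpha = lambda x: np.cos(x / 2.0) - gamma_f(x)
    def integrand(x):
        S_tau = 1j * Gamma + Delta + Up * (gamma_f(x) ** 2 - 1.0)
        return (jv(n // 2, Up * x * gamma_f(x) * alpha(x))
                * np.exp(1j * (S_tau + n / 2.0) * x) / x ** (D / 2.0))
    re = quad(lambda x: integrand(x).real, 1e-8, cutoff, limit=5000)[0]
    im = quad(lambda x: integrand(x).imag, 1e-8, cutoff, limit=5000)[0]
    return 1j ** (n // 2 - 1) * (re + 1j * im)
```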
## Appendix C Maximum electron-hole separations and electron-hole wavefunction widths
In this appendix, we discuss the maximum electron-hole separations and electron-hole wavefunction widths for one-dimensional momentum space to gain some insights into how the accuracy of the saddle-point approximation depends on the dephasing constant \(\Gamma\), the sideband index \(n\), and the ponderomotive energy \(U_{p}\). Intuitively, one expects that the recollision processes in HSG can be described by the semiclassical trajectories given by the saddle-point solutions if the maximum separations of the electron-hole pairs are much larger than the widths of their wavefunctions in real space.
We estimate the maximum electron-hole separations for the shortest classical recollision pathways within the
linear-in-time approximation. Along a shortest classical recollision pathway, an electron and a hole are created with zero relative kinetic momentum (\(\hbar k_{n}(t^{\prime}_{n})=0\)), and the maximum separation is reached at \(t_{\rm max}\) when the kinetic momentum \(\hbar k_{n}(t)\) goes back to zero. Under the linear-in-time approximation, from Eq. 16 and 21 with \(\Gamma=\Delta=0\), we see that
\[\omega\tilde{t}^{\prime}_{n}=-\omega\tilde{t}_{\rm max}=-(\frac{2n\hbar\omega} {9U_{\rm p}})^{1/4}. \tag{10}\]
Integrating the relative velocity \(v_{\rm eh}(t^{\prime\prime})=\hbar k_{n}(t^{\prime\prime})/\mu\) from \(t^{\prime}_{n}\) to \(t_{\rm max}\), we obtain the maximum electron-hole separation as
\[x_{\rm max}=|\int_{t^{\prime}_{n}}^{t_{\rm max}}dt^{\prime\prime}\frac{\hbar k (t^{\prime\prime})}{\mu}|=\frac{2\sqrt{3}}{9P_{\rm max}}(\frac{8n^{3}U_{\rm p }}{\hbar\omega})^{1/4}, \tag{11}\]
where \(\hbar P_{\rm max}=eF_{\rm max}/\omega\) is the maximum relative momentum obtainable from the THz field.
Next, we calculate the electron-hole wavefunction widths along the THz-field driving direction for one-dimensional momentum space. The electron-hole wavefunctions are equivalent to the microscopic polarization \(p_{\mathbf{k}(t)}\) in Eq. 51 [35]. For one-dimensional momentum space, the electron-hole wavefunctions can be calculated as
\[p_{k(t)}=\frac{i}{\hbar}{\bf d}\cdot{\bf F}_{\rm NIR}\int_{0}^{+\infty}d\tau e ^{i\mathbb{S}(P,t,\tau)-i\Omega t}, \tag{12}\]
with an action
\[\mathbb{S}(P,t,\tau) = -[(\frac{2P^{2}}{P_{\rm max}^{2}}+1)U_{\rm p}-(i\Gamma+\Delta)] \frac{\tau}{\hbar} \tag{13}\] \[+ \frac{8U_{\rm p}}{\hbar\omega}\frac{P}{P_{\rm max}}\sin\frac{ \omega\tau}{2}\sin[\omega(\frac{\tau}{2}-t)]\] \[+ \frac{U_{\rm p}}{\hbar\omega}\sin(\omega\tau)\cos[2\omega(\frac{ \tau}{2}-t)].\]
Using the Bessel-function identity introduced in Appendix B, we have the expansion,
\[e^{i\mathbb{S}(P,t,\tau)}= e^{-i[(2P^{2}/P_{\rm max}^{2}+1)U_{\rm p}-(i\Gamma+\Delta)]( \tau/\hbar)} \tag{14}\] \[\sum_{n_{1}}J_{n_{1}}[\frac{U_{\rm p}}{\hbar\omega}\sin(\omega \tau)]i^{n_{1}}e^{in_{1}\omega(\tau-2t)}\] \[\sum_{n_{2}}J_{n_{2}}[\frac{8U_{\rm p}}{\hbar\omega}\frac{P}{P_{ \rm max}}\sin\frac{\omega\tau}{2}]e^{in_{2}\omega(\frac{\tau}{2}-t)}.\]
Since the Bessel function \(J_{n_{2}}(x)\) is even (odd) for even (odd) \(n_{2}\), the terms with odd \(n_{2}\) do not contribute to sideband generation because of inversion symmetry. Including only the terms with even \(n_{2}\), we arrive at the following form of the electron-hole wavefunctions,
\[p_{k(t)}= \frac{i}{\hbar\omega}{\bf d}\cdot{\bf F}_{\rm NIR}\sum_{n\,{\rm even }}\Psi_{P}(n)e^{-i(\Omega+n\omega)t}, \tag{15}\]
where each sideband frequency \(\Omega+n\omega\) is associated with a momentum distribution function,
\[\Psi_{P}(n)= \int_{0}^{+\infty}d(\omega\tau)e^{-i[(2P^{2}/P_{\rm max}^{2}+1)U_{\rm p}-(n/2)\hbar\omega-(i\Gamma+\Delta)](\tau/\hbar)}\] \[\sum_{n^{\prime}}J_{2n^{\prime}}[\frac{8U_{\rm p}}{\hbar\omega}\frac{P}{P_{\rm max}}\sin\frac{\omega\tau}{2}]\] \[J_{n/2-n^{\prime}}[\frac{U_{\rm p}}{\hbar\omega}\sin(\omega\tau)]i^{n/2-n^{\prime}}. \tag{16}\]
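A direct (and admittedly slow) numerical sketch of this distribution function, with our function names and energies in units of \(\hbar\omega\); note that the \(n^{\prime}\) sum converges only once \(2\,n_{\rm sum}\) exceeds the largest Bessel argument \(8U_{\rm p}P/(\hbar\omega P_{\rm max})\), so the default truncation below is an assumption suitable only for modest parameters:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import jv

def Psi_P(n, p, Gamma, Delta, Up, n_sum=80, cutoff=60.0):
    # momentum distribution function for one-dimensional momentum space,
    # with n even and p = P/P_max; the integral is damped by
    # exp(-Gamma * omega*tau), which justifies the finite cutoff
    def integrand(x):
        pref = np.exp(-1j * ((2.0 * p**2 + 1.0) * Up - 0.5 * n
                             - (1j * Gamma + Delta)) * x)
        s = sum(jv(2 * k, 8.0 * Up * p * np.sin(x / 2.0))
                * jv(n // 2 - k, Up * np.sin(x)) * 1j ** (n // 2 - k)
                for k in range(-n_sum, n_sum + 1))
        return pref * s
    re = quad(lambda x: integrand(x).real, 0.0, cutoff, limit=2000)[0]
    im = quad(lambda x: integrand(x).imag, 0.0, cutoff, limit=2000)[0]
    return re + 1j * im
```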
Fig. 9 (a), (b), and (c) show the dependence of the momentum distribution function \(\Psi_{P}(n)\) on the dephasing constant \(\Gamma\), the sideband index \(n\), and the ponderomotive energy \(U_{\rm p}\), respectively. We observe that \(\Psi_{P}(n)\) tends to be more localized for weaker dephasing, smaller sideband index, and larger ponderomotive energy. The peaks at around \(\pm P_{\rm max}\) correspond to the saddle-point solution \(P_{n}\) in Eq. 119 and its negative counterpart.

Figure 9: Momentum distributions of electron-hole wavefunctions. (a) The momentum distribution functions \(\Psi_{P}(n)\) for two dephasing constants, \(\Gamma=\hbar\omega\) (black curve) and \(\Gamma=5\hbar\omega\) (red curve). (b) The momentum distribution functions \(\Psi_{P}(n)\) for \(n=10\) (red curve) and \(n=40\) (dark green curve). (c) The momentum distribution functions \(\Psi_{P}(n)\) for two values of the ponderomotive energy, \(U_{\rm p}=2\times 10^{2}\hbar\omega\) (blue curve) and \(U_{\rm p}=2\times 10^{3}\hbar\omega\) (red curve). The red curves in (a), (b), and (c) represent the same momentum distribution function \(\Psi_{P}(n)\) calculated for the 10th-order sideband with parameters \(U_{\rm p}=2\times 10^{3}\hbar\omega\), \(\Gamma=5\hbar\omega\), and \(\Delta=0\). The two curves in each frame are calculated by using the same parameters except for the one shown in the legend.
Since the maximum separation \(x_{\rm max}\) (in units of \(P_{\rm max}^{-1}\)) is larger for higher-order sidebands and larger ponderomotive energy, one expects the saddle-point approximation to be more accurate for relatively high-order sidebands and relatively strong dephasing. The dependence of the accuracy on the ponderomotive energy, by contrast, is set by the competition between the maximum electron-hole separations and the electron-hole wavefunction widths.
## Appendix D Corrections to the linear-in-time approximation
In this appendix, we derive the correction term to the linear-in-time approximation in the algebraic form, Eq. 33.
The THz field strength near the node at \(\omega t=\pi/2\) can in general be expanded in a Taylor series,
\[F_{\rm THz}(t)=-F_{\rm max}[\omega\tilde{t}-\frac{(\omega\tilde{t})^{3}}{6}+ \cdots]. \tag{120}\]
From the Newtonian equation of motion \(\hbar\dot{k}_{n}(t)=-eF_{\rm THz}(t)\), the kinetic momentum \(\hbar k_{n}(t)\) can also be written as a Taylor series,
\[\hbar k_{n}(t)= \hbar k_{n}(t^{\prime}_{n})+\frac{eF_{\rm max}}{\omega}\{\frac{1} {2}[(\omega\tilde{t})^{2}-(\omega\tilde{t}^{\prime}_{n})^{2}]\] \[-\frac{1}{24}[(\omega\tilde{t})^{4}-(\omega\tilde{t}^{\prime}_{n })^{4}+\cdots]\}. \tag{121}\]
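The series in Eq. 121 follows from a single integration of Eq. 120; the short symbolic check below (a sketch, not from the paper) confirms the coefficients \(1/2\) and \(1/24\).

```python
import sympy as sp

u, u0 = sp.symbols("u u0", real=True)   # u = omega*t~,  u0 = omega*t~'_n
w = sp.Symbol("w", real=True)           # integration variable
# hbar dk/dt = -e F_THz = e F_max (w - w**3/6 + ...), and dt = dw/omega, so
# the momentum gained between t'_n and t, in units of e F_max / omega, is:
gained = sp.integrate(w - w**3/6, (w, u0, u))
target = sp.Rational(1, 2)*(u**2 - u0**2) - sp.Rational(1, 24)*(u**4 - u0**4)
print(sp.simplify(gained - target))     # -> 0, reproducing Eq. (121)
```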
Putting this solution into the first saddle-point equation, Eq. 9, yields
\[\zeta_{0}\sqrt{\frac{\hbar\omega}{2U_{\rm p}}}= \frac{1}{6}[(\omega\tilde{t}_{n})-(\omega\tilde{t}^{\prime}_{n})] \{[(\omega\tilde{t}_{n})+2(\omega\tilde{t}^{\prime}_{n})]\] \[-\frac{1}{20}[(\omega\tilde{t}_{n})^{3}+2(\omega\tilde{t}_{n})^{ 2}(\omega\tilde{t}^{\prime}_{n})+3(\omega\tilde{t}_{n})(\omega\tilde{t}^{ \prime}_{n})^{2}\] \[+4(\omega\tilde{t}^{\prime}_{n})^{3}]+\cdots\}. \tag{122}\]
The solution of \(k_{n}(t)\) at \(t_{n}\) provides another equation for the time variables \(\tilde{t}^{\prime}_{n}\) and \(\tilde{t}_{n}\),
\[(\zeta_{n}+\zeta_{0})\sqrt{\frac{\hbar\omega}{2U_{\rm p}}}= \frac{1}{2}[(\omega\tilde{t}_{n})^{2}-(\omega\tilde{t}^{\prime}_{n })^{2}]\] \[-\frac{1}{24}[(\omega\tilde{t}_{n})^{4}-(\omega\tilde{t}^{\prime} _{n})^{4}+\cdots]. \tag{123}\]
Here we have used Eqs. 19 and 20 to eliminate the kinetic momenta at \(t^{\prime}_{n}\) and \(t_{n}\). To obtain the higher-order corrections in \(\hbar\omega/U_{\rm p}\) to the solutions of \(t^{\prime}_{n}\) and \(t_{n}\), we set up a perturbation theory starting from the ansatzes,
\[\omega\tilde{t}^{\prime}_{n}=\delta^{\prime}_{1/4}+\delta^{\prime} _{3/4}, \tag{124}\] \[\omega\tilde{t}_{n}=\delta_{1/4}+\delta_{3/4}, \tag{125}\]
where the factors \(\delta^{\prime}_{1/4}\) and \(\delta_{1/4}\) are the solutions of \(\omega\tilde{t}^{\prime}_{n}\) and \(\omega\tilde{t}_{n}\) of the order \((\hbar\omega/U_{\rm p})^{1/4}\) under the linear-in-time approximation, given by Eqs. 21 and 22, and the factors \(\delta^{\prime}_{3/4}\) and \(\delta_{3/4}\) are correction terms of the order \((\hbar\omega/U_{\rm p})^{3/4}\). Putting these ansatzes into Eqs. 122 and 123 and keeping the lowest-order terms in \(\hbar\omega/U_{\rm p}\), we obtain the following two linear equations for the variables \(\delta^{\prime}_{3/4}\) and \(\delta_{3/4}\),
\[(2\delta_{1/4}+\delta^{\prime}_{1/4})\delta_{3/4}+(\delta_{1/4}- 4\delta^{\prime}_{1/4})\delta^{\prime}_{3/4}\] \[= \frac{1}{20}(\delta_{1/4}-\delta^{\prime}_{1/4})[\delta^{3}_{1/4} +2\delta^{2}_{1/4}\delta^{\prime}_{1/4}\] \[+3\delta_{1/4}(\delta^{\prime}_{1/4})^{2}+4(\delta^{\prime}_{1/4} )^{3}], \tag{126}\]
\[\delta_{1/4}\delta_{3/4}-\delta^{\prime}_{1/4}\delta^{\prime}_{3/4}=\frac{1}{2 4}[\delta^{4}_{1/4}-(\delta^{\prime}_{1/4})^{4}]. \tag{127}\]
Solving these linear equations yields
\[\delta_{3/4}= \frac{1}{120}[5\delta^{3}_{1/4}-4\delta^{2}_{1/4}\delta^{\prime} _{1/4}\] \[-7\delta_{1/4}(\delta^{\prime}_{1/4})^{2}-4(\delta^{\prime}_{1/4} )^{3}], \tag{128}\] \[\delta^{\prime}_{3/4}= \frac{1}{120}[5(\delta^{\prime}_{1/4})^{3}-4(\delta^{\prime}_{1/4 })^{2}\delta_{1/4}\] \[-7(\delta^{\prime}_{1/4})\delta^{2}_{1/4}-4(\delta_{1/4})^{3}]. \tag{129}\]
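Equations 128 and 129 can be verified directly: substituting them back into the linear system of Eqs. 126 and 127 must give zero identically. A minimal symbolic check (not from the paper):

```python
import sympy as sp

a, b = sp.symbols("d14 dp14")   # a = delta_{1/4}, b = delta'_{1/4}
X = sp.Rational(1, 120)*(5*a**3 - 4*a**2*b - 7*a*b**2 - 4*b**3)   # Eq. (128)
Y = sp.Rational(1, 120)*(5*b**3 - 4*b**2*a - 7*b*a**2 - 4*a**3)   # Eq. (129)

eq126 = (2*a + b)*X + (a - 4*b)*Y - sp.Rational(1, 20)*(a - b)*(
    a**3 + 2*a**2*b + 3*a*b**2 + 4*b**3)
eq127 = a*X - b*Y - sp.Rational(1, 24)*(a**4 - b**4)
print(sp.expand(eq126), sp.expand(eq127))    # -> 0 0
```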
Substituting \(\delta^{\prime}_{1/4}\) and \(\delta_{1/4}\) with the right-hand sides of Eq. 21 and 22, after some straightforward algebra, we obtain
\[\delta^{\prime}_{3/4}= (\frac{2\hbar\omega}{9U_{\rm p}})^{3/4}\frac{1}{120(\zeta_{n}- \zeta_{0})^{3/2}}\] \[[23\zeta^{2}_{0}(2\zeta_{0}-3\zeta_{n})+\zeta^{2}_{n}(30\zeta_{0}- 17\zeta_{n})], \tag{130}\] \[\delta_{3/4}= -(\frac{2\hbar\omega}{9U_{\rm p}})^{3/4}\frac{1}{120(\zeta_{n}- \zeta_{0})^{3/2}}\] \[[\zeta^{2}_{0}(17\zeta_{0}-30\zeta_{n})+23\zeta^{2}_{n}(3\zeta_{0}- 2\zeta_{n})]. \tag{131}\]
To derive the correction term to the semiclassical action \(S^{(t,\tau)}_{\rm sc}(t_{n},\tau_{n})\) of the order \((\hbar\omega/U_{\rm p})^{3/4}\), we approximate the semiclassical action as the following Taylor polynomial,
\[\frac{1}{\hbar}S^{(t,\tau)}_{\rm sc}(t_{n},\tau_{n})=n\omega t_{n}+ \frac{i\Gamma+\Delta}{\hbar\omega}\omega\tau_{n}-\frac{U_{\rm p}}{24\hbar\omega}\] \[(\omega\tau_{n})^{3}[\frac{(\omega\tau_{n})^{2}}{15}+(\omega\tilde{t}^{\prime}_{n}+\omega\tilde{t}_{n})^{2}-\frac{(\omega\tau_{n})^{2}}{15}(\omega\tilde{t}^{\prime}_{n}+\omega\tilde{t}_{n})^{2}\] \[-12(\omega\tilde{t}^{\prime}_{n}+\omega\tilde{t}_{n})^{4}+\frac{(\omega\tau_{n})^{5}}{420}]. \tag{132}\]
Using the identities \(\zeta^{2}_{n}-\zeta^{2}_{0}=n\) and \(\zeta^{2}_{0}=(i\Gamma+\Delta)/(\hbar\omega)\) and the solutions of \(t^{\prime}_{n}\) and \(t_{n}\) up to the order of \((\hbar\omega/U_{\rm p})^{3/4}\), we arrive at a form of the semiclassical action up to the order of \((\hbar\omega/U_{\rm p})^{3/4}\),
\[\frac{1}{\hbar}S^{(t,\tau)}_{\rm sc}(t_{n},\tau_{n})= q_{1/4}(n,\frac{i\Gamma+\Delta}{\hbar\omega})(\frac{\hbar\omega}{U_{\rm p}})^{1/4}\] \[+q_{3/4}(n,\frac{i\Gamma+\Delta}{\hbar\omega})(\frac{\hbar \omega}{U_{\rm p}})^{3/4}, \tag{133}\] |
2304.11185 | Gravitational-Wave Phasing of Quasi-Circular Compact Binary Systems to the Fourth-and-a-Half post-Newtonian Order | The inspiral phase of gravitational waves emitted by spinless compact binary systems is derived through the fourth-and-a-half post-Newtonian (4.5PN) order beyond quadrupole radiation, and the leading amplitude mode ($\ell$, m) = (2, 2) is obtained at 4PN order. We also provide the radiated flux, as well as the phase in the stationary phase approximation. Rough numerical estimates for the contribution of each PN order are provided for typical systems observed by current and future gravitational wave detectors. | Luc Blanchet, Guillaume Faye, Quentin Henry, François Larrouturou, David Trestini | 2023-04-21T18:00:06Z | http://arxiv.org/abs/2304.11185v4 | # Gravitational-Wave Phasing of Quasi-Circular Compact Binary Systems to the Fourth-and-a-Half post-Newtonian Order
###### Abstract
The inspiral phase of gravitational waves emitted by spinless compact binary systems is derived through the fourth-and-a-half post-Newtonian (4.5PN) order beyond quadrupole radiation, and the leading amplitude mode \((\ell,\mathrm{m})=(2,2)\) is obtained at 4PN order. We also provide the radiated flux, as well as the phase in the stationary phase approximation. Rough numerical estimates for the contribution of each PN order are provided for typical systems observed by current and future gravitational wave detectors.
Footnote †: preprint: DESY-23-043
At the time when the LIGO and Virgo gravitational-wave detectors were approved, there was no theoretical prediction available for gravitational waves (GWs) generated by compact binary systems, apart from that of the famous Einstein quadrupole formula [1; 2; 3]. However, it was soon realized that, given the frequency band and the expected sensitivity of these ground-based detectors, the waveform modeling was to be drastically improved in order to extract all the potential information from the signal, at least in the case of the inspiral of two neutron stars [4; 5]. The breakthrough came with the merging of the post-Newtonian (PN) and the multipolar post-Minkowskian (MPM) expansions into a single formalism [6; 7; 8; 9; 10], that was applied with success to derive step by step the waveform of compact binary systems up to 3.5PN order [11; 12; 13; 14; 15; 16; 17; 18; 19; 20] (the results are also known at 4PN order for the spin-orbit coupling and the spin-spin coupling, see _e.g._[21; 22]). Since then, many works (outlined below) have aimed at extending the precision of this result to the next level, namely 4PN or even 4.5PN order beyond the Einstein quadrupole formula.
This Letter provides the final results of these efforts, _i.e._, the GW phasing of non-spinning compact binary systems on quasi-circular orbits up to 4.5PN order, as well as the dominant GW mode, given by \((\ell,\mathrm{m})=(2,2)\), at 4PN order. Ready to be used for building accurate PN template banks for the detection and analysis of the inspiral phase of compact binaries, they should be important for third generation ground-based detectors (Einstein Telescope and Cosmic Explorer), future space-borne detectors (LISA and TianQin) and of course the current second-generation detectors (LIGO, Virgo and KAGRA). All results presented in this Letter are to be found in the ancillary file [23] associated with the companion paper [24].
Besides improving the detectors' data analysis, the motivation for computing high PN orders is also to perform high-accuracy tests of general relativity (GR), since the PN coefficients directly probe the non-linear structure of the theory. By confronting results from the PN expansion against data, one can put constraints on potential deviations from GR [25; 26]. This has already allowed for the confirmation of the signature of GW tails [27; 28], and is promising for tests with future multi-band detections between LISA and ground-based detectors [29].
We denote by \(f(t)\) the frequency of the dominant \((2,2)\) mode of the GW as measured by an observer in the asymptotically flat region far from the source (recall that this is twice the orbital frequency), and by \(\psi(t)=\pi\int\!\mathrm{d}tf(t)\) the corresponding half-phase. As usual, we define the directly-measurable PN parameter \(x=\mathcal{O}(c^{-2})\) by
\[x\equiv\left(\frac{\pi Gmf}{c^{3}}\right)^{2/3}\,, \tag{1}\]
where \(m=m_{1}+m_{2}\) is the binary's total mass, \(m_{1}\) and \(m_{2}\) being the constant masses of the progenitors. For circular orbits, \(x\) may be defined invariantly from the Killing vector of the helical symmetry in the asymptotically flat spacetime. Since compact binaries tend to have circularized by the time they enter the detector's frequency band [30], we only consider the case of quasi-circular orbits, for which the time-evolution of the frequency and phase (or "chirp") is
entirely driven by the energy flux-balance equation,
\[\frac{\mathrm{d}E}{\mathrm{d}t}=-\mathcal{F}\,, \tag{2}\]
where \(E\) denotes the invariant energy of the compact binary and \(\mathcal{F}\) the total energy flux (or GW luminosity). Both \(E\) and \(\mathcal{F}\) in the balance equation are unique functions of the PN parameter \(x\) and the two masses. They have to be evaluated with the same relative PN precision, in the present case 4.5PN (\(\sim x^{9/2}\)). From Eq. (2), we derive a simple ordinary differential equation for the frequency as a function of time, and, once it is solved, a further integration yields the phase as a function of frequency.
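To make this logic concrete, the sketch below (an illustration under stated assumptions, not the paper's code) truncates \(E\) and \(\mathcal{F}\) at Newtonian order, so that the balance equation (2) reduces to \(\dot{x}=\tfrac{64}{5}(c^{3}/Gm)\,\nu x^{5}\), and integrates it for a fiducial \(1.4+1.4\,M_{\odot}\) binary from \(f=30\) Hz up to the Schwarzschild ISCO.

```python
import numpy as np
from scipy.integrate import solve_ivp

G, c, Msun = 6.674e-11, 2.998e8, 1.989e30
m1 = m2 = 1.4 * Msun
m, nu = m1 + m2, m1 * m2 / (m1 + m2)**2

def x_of_f(f):                            # PN parameter of Eq. (1)
    return (np.pi * G * m * f / c**3) ** (2.0 / 3.0)

def dxdt(t, x):                           # Newtonian truncation of Eq. (2)
    return (64.0 / 5.0) * (c**3 / (G * m)) * nu * x**5

def hit_isco(t, x):                       # stop at x = 1/6 (Schwarzschild ISCO)
    return x[0] - 1.0 / 6.0
hit_isco.terminal = True

sol = solve_ivp(dxdt, (0.0, 200.0), [x_of_f(30.0)], events=hit_isco, rtol=1e-10)
print(f"Newtonian-order time from 30 Hz to ISCO: {sol.t_events[0][0]:.1f} s")
```

Including the higher-order terms of Eqs. (3) and (4) in `dxdt` refines this estimate order by order.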
The invariant energy \(E\) follows from the conservative dynamics of the compact binary at 4PN order, which have been obtained by various groups using different methods: (i) the Arnowitt-Deser-Misner (ADM) Hamiltonian formalism [31; 32; 33; 34] yielded the first derivation of the 4PN energy, although with an ambiguity parameter obtained by matching the near-zone computation to results imported from gravitational self-force (GSF) [35]; (ii) the Fokker Lagrangian formalism in harmonic coordinates [36; 37; 38; 39] derived the complete result, without ambiguity and without resorting to GSF, by using a specific regularization procedure which was proven to be equivalent to dimensional regularization; (iii) the effective field theory (EFT) approach [40; 41; 42; 43; 44; 45; 46; 47] rederived the 4PN energy by using dimensional regularization. From this series of works, the binary's invariant energy was obtained as the Noetherian quantity associated with temporal translation, and reads at 4PN order (see _e.g._[48; 49] for partial results up to 6PN order):
\[E=-\frac{m\nu c^{2}x}{2}\Bigg{\{}1+\bigg{(}-\frac{3}{4}-\frac{\nu}{12}\bigg{)}x+\bigg{(}-\frac{27}{8}+\frac{19}{8}\nu-\frac{\nu^{2}}{24}\bigg{)}x^{2}\] \[\qquad\qquad+\bigg{[}-\frac{675}{64}+\bigg{(}\frac{34445}{576}-\frac{205}{96}\pi^{2}\bigg{)}\nu-\frac{155}{96}\nu^{2}-\frac{35}{5184}\nu^{3}\bigg{]}x^{3}\] \[\qquad\qquad+\bigg{[}-\frac{3969}{128}+\bigg{(}-\frac{123671}{5760}+\frac{9037}{1536}\pi^{2}+\frac{896}{15}\gamma_{\mathrm{E}}+\frac{448}{15}\ln(16x)\bigg{)}\nu\] \[\qquad\qquad\qquad+\bigg{(}-\frac{498449}{3456}+\frac{3157}{576}\pi^{2}\bigg{)}\nu^{2}+\frac{301}{1728}\nu^{3}+\frac{77}{31104}\nu^{4}\bigg{]}x^{4}+\mathcal{O}\big{(}x^{5}\big{)}\Bigg{\}}\,. \tag{3}\]
We denote by \(\nu\equiv m_{1}m_{2}/m^{2}\) the symmetric mass ratio (\(\gamma_{\mathrm{E}}\) is the Euler constant). Since there are no terms of half-integer PN order for circular orbits, this expression is actually valid up to 4.5PN order (as indicated by the final error term).
The second input is the energy flux, which we have computed using the PN-MPM formalism applied to compact binaries at 4.5PN beyond the leading quadrupole formula. Crucial to this computation was the recently-completed source mass quadrupole moment at 4PN order [50; 51; 52; 53], the source current quadrupole moment at 3PN order [54] and the non-linear tail-of-memory effect [55; 56]. We provide the technical details of the derivation in the companion paper [24], and report here only the final result:
\[\mathcal{F}=\frac{32c^{5}}{5G}\nu^{2}x^{5}\Bigg{\{}1+\bigg{(}-\frac{1247}{336}-\frac{35}{12}\nu\bigg{)}x+4\pi x^{3/2}+\bigg{(}-\frac{44711}{9072}+\frac{9271}{504}\nu+\frac{65}{18}\nu^{2}\bigg{)}x^{2}+\bigg{(}-\frac{8191}{672}-\frac{583}{24}\nu\bigg{)}\pi x^{5/2}\] \[\qquad\qquad+\bigg{[}\frac{6643739519}{69854400}+\frac{16}{3}\pi^{2}-\frac{1712}{105}\gamma_{\mathrm{E}}-\frac{856}{105}\ln(16\,x)+\bigg{(}-\frac{134543}{7776}+\frac{41}{48}\pi^{2}\bigg{)}\nu-\frac{94403}{3024}\nu^{2}-\frac{775}{324}\nu^{3}\bigg{]}x^{3}\] \[\qquad\qquad+\bigg{(}-\frac{16285}{504}+\frac{214745}{1728}\nu+\frac{193385}{3024}\nu^{2}\bigg{)}\pi x^{7/2}\] \[\qquad\qquad+\bigg{[}-\frac{323105549467}{3178375200}+\frac{232597}{4410}\gamma_{\mathrm{E}}-\frac{1369}{126}\pi^{2}+\frac{39931}{294}\ln 2-\frac{47385}{1568}\ln 3+\frac{232597}{8820}\ln x\] \[\qquad\qquad+\bigg{(}-\frac{1452202403629}{1466942400}+\frac{41478}{245}\gamma_{\mathrm{E}}-\frac{267127}{4608}\pi^{2}+\frac{479062}{2205}\ln 2+\frac{47385}{392}\ln 3+\frac{20739}{245}\ln x\bigg{)}\nu\] \[\qquad\qquad+\bigg{(}\frac{1607125}{6804}-\frac{3157}{384}\pi^{2}\bigg{)}\nu^{2}+\frac{6875}{504}\nu^{3}+\frac{5}{6}\nu^{4}\bigg{]}x^{4}\] \[\qquad\qquad+\bigg{[}\frac{265978667519}{745113600}-\frac{6848}{105}\gamma_{\mathrm{E}}-\frac{3424}{105}\ln(16\,x)+\bigg{(}\frac{2062241}{22176}+\frac{41}{12}\pi^{2}\bigg{)}\nu\]
\[-\frac{133112905}{290304}\nu^{2}-\frac{3719141}{38016}\nu^{3}\Bigg{]}\pi x^{9/2}+ \mathcal{O}\big{(}x^{5}\big{)}\Bigg{\}}\,. \tag{4}\]
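As a rough illustration of how the series behaves, the snippet below (truncated at 2.5PN; an illustration only, not the full Eq. (4)) evaluates the successive relative corrections inside the braces of the flux at \(x=0.1\) for equal masses.

```python
import math

def flux_terms(x, nu):
    """First relative PN corrections inside the braces of Eq. (4)."""
    return {
        "1PN":   (-1247/336 - 35*nu/12) * x,
        "1.5PN": 4*math.pi * x**1.5,
        "2PN":   (-44711/9072 + 9271*nu/504 + 65*nu**2/18) * x**2,
        "2.5PN": (-8191/672 - 583*nu/24) * math.pi * x**2.5,
    }

for order, value in flux_terms(x=0.1, nu=0.25).items():
    print(f"{order:>6}: {value:+.5f}")
```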
In the test-mass limit \(\nu\to 0\), we exactly retrieve the result of linear black-hole perturbation theory [57; 58; 59; 60; 61]. Since BH perturbations have recently been extended numerically to second order in the mass ratio \(\nu\)[62; 63; 64], it would be interesting to verify the consistency of this numerical result with the PN prediction (4). Note also that in the case of black holes, the contributions due to the absorption by the BH horizons are not included in the PN calculation, and should be added separately. The BH absorption is a 4PN effect for Schwarzschild black holes [65], and a 2.5PN effect for spinning ones [66; 67; 68; 69; 70].
With both (3) and (4) in hand, we apply the flux-balance equation (2) and readily obtain the time evolution of the GW frequency. Sophisticated techniques exist to increase the precision of the PN results and the overlap with numerical relativity [71; 72; 73; 74], but we do not discuss them here and simply present the results in the form of a fully expanded Taylor PN series. We employ the time variable
\[\tau\equiv\frac{\nu c^{3}}{5Gm}\big{(}t_{0}-t\big{)}\,, \tag{5}\]
where \(t\) is the coordinate time in the asymptotic radiative coordinate system, and \(t_{0}\) an integration constant. We have the freedom to redefine it as \(t_{0}\longrightarrow t_{0}+\alpha\,\frac{Gm}{c^{3}}\) where \(\alpha\) is any constant, which amounts to the replacement \(\tau\longrightarrow\tau[1+\alpha\,\nu/(5\tau)]\). Although \(t_{0}\) is not uniquely defined, it might be formally interpreted as the instant of coalescence, when \(x\rightarrow+\infty\), and then it satisfies \(t_{0}-t=\mathcal{O}(c^{5})\) in the PN regime. At the 4PN order, using the fact that \(\tau^{-1}=\mathcal{O}(c^{-8})\) is a small 4PN quantity, we conveniently adjust \(\alpha\) so as to simplify as much as possible the result:
\[x =\frac{\tau^{-1/4}}{4}\Bigg{\{}1+\bigg{(}\frac{743}{4032}+\frac{11}{48}\nu\bigg{)}\tau^{-1/4}-\frac{1}{5}\pi\,\tau^{-3/8}\] \[\qquad+\bigg{(}\frac{19583}{254016}+\frac{24401}{193536}\nu+\frac{31}{288}\nu^{2}\bigg{)}\tau^{-1/2}+\bigg{(}-\frac{11891}{53760}+\frac{109}{1920}\nu\bigg{)}\pi\,\tau^{-5/8}\] \[\qquad+\bigg{[}-\frac{10052469856691}{6008596070400}+\frac{1}{6}\pi^{2}+\frac{107}{420}\gamma_{\rm E}-\frac{107}{3360}\ln\bigg{(}\frac{\tau}{256}\bigg{)}\] \[\qquad+\bigg{(}\frac{3147553127}{780337152}-\frac{451}{3072}\pi^{2}\bigg{)}\nu-\frac{15211}{442368}\nu^{2}+\frac{25565}{331776}\nu^{3}\bigg{]}\tau^{-3/4}\] \[\qquad+\bigg{(}-\frac{113868647}{433520640}-\frac{31821}{143360}\nu+\frac{294941}{3870720}\nu^{2}\bigg{)}\pi\tau^{-7/8}\] \[\qquad+\bigg{[}-\frac{2518977598355703073}{377935885951303680}+\frac{9203}{215040}\gamma_{\rm E}-\frac{9049}{258048}\pi^{2}+\frac{14873}{1128960}\ln 2+\frac{47385}{1605632}\ln 3-\frac{9203}{3440640}\ln\tau\] \[\qquad+\bigg{(}\frac{718143266031997}{576825222758400}+\frac{244493}{1128960}\gamma_{\rm E}-\frac{65577}{1835008}\pi^{2}+\frac{15761}{47040}\ln 2-\frac{47385}{401408}\ln 3-\frac{244493}{18063360}\ln\tau\bigg{)}\nu\] \[\qquad+\bigg{(}-\frac{1502014727}{8323596288}+\frac{2255}{393216}\pi^{2}\bigg{)}\nu^{2}-\frac{258479}{33030144}\nu^{3}+\frac{1195}{262144}\nu^{4}\bigg{]}\tau^{-1}\ln\tau\] \[\qquad+\bigg{[}-\frac{9965202491753717}{5768252227584000}+\frac{107}{600}\gamma_{\rm E}+\frac{23}{600}\pi^{2}-\frac{107}{4800}\ln\bigg{(}\frac{\tau}{256}\bigg{)}\] \[\qquad+\bigg{(}\frac{8248609881163}{2746786775040}-\frac{3157}{30720}\pi^{2}\bigg{)}\nu-\frac{3590973803}{20808990720}\nu^{2}-\frac{520159}{1634992128}\nu^{3}\bigg{]}\pi\,\tau^{-9/8}+\mathcal{O}\big{(}\tau^{-5/4}\big{)}\Bigg{\}}\,. \tag{6}\]
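As a worked example of how Eqs. (1), (5) and (6) fit together, the leading term \(x\simeq\tau^{-1/4}/4\) can be inverted to estimate the time left before \(t_{0}\); the numbers below assume a fiducial \(1.4+1.4\,M_{\odot}\) binary at \(f=30\) Hz and keep only the Newtonian term, so they are indicative only.

```python
import math

G, c, Msun = 6.674e-11, 2.998e8, 1.989e30
m1 = m2 = 1.4 * Msun
m, nu = m1 + m2, m1 * m2 / (m1 + m2)**2

f = 30.0                                          # GW frequency in Hz
x = (math.pi * G * m * f / c**3) ** (2.0 / 3.0)   # Eq. (1)
tau = (4.0 * x) ** -4                             # leading term of Eq. (6)
t_left = 5.0 * G * m * tau / (nu * c**3)          # invert Eq. (5)
print(f"x = {x:.4f},  tau = {tau:.3e},  t_0 - t ~ {t_left:.0f} s")
```

This gives roughly a minute of inspiral left in band for such a binary, consistent with the Newtonian chirp estimate above.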
Then, the GW half-phase \(\psi\) of the dominant harmonics is related to the binary's orbital phase \(\phi\) by
\[\psi=\phi-\frac{2\pi G\mathrm{M}f}{c^{3}}\ln\!\left(\frac{f}{f_{0}}\right), \tag{7}\]
where \(\mathrm{M}\) is the ADM mass of the binary, and \(f_{0}\) is an arbitrary unphysical scale, reflecting the different origins of time between the local coordinates covering the source and the radiative coordinates. The logarithmic phase modulation
was determined in [75; 76] and is physically due to the scattering of GWs on the Schwarzschild background associated with M (_i.e._ GW tails). While the GW half-phase \(\psi\) and the corresponding GW frequency \(f=\dot{\psi}/\pi\) are directly measurable, the orbital phase \(\phi\) can only be inferred _via_ the theoretical prediction (7). When expressing the results in terms of the GW observables \(\psi\) and \(f\), the arbitrary scale \(f_{0}\) is canceled out (see Sec. VI B in the detailed paper [24]). The explicit expression of the time-domain GW half-phase \(\psi(t)\) in terms of \(x(t)\) [given by (6)] reads
\[\psi =\psi_{0}-\frac{x^{-5/2}}{32\nu}\Bigg{\{}1+\bigg{(}\frac{3715}{1008}+\frac{55}{12}\nu\bigg{)}x-10\pi x^{3/2}\] \[\quad+\bigg{(}\frac{15293365}{1016064}+\frac{27145}{1008}\nu+\frac{3085}{144}\nu^{2}\bigg{)}x^{2}+\bigg{(}\frac{38645}{1344}-\frac{65}{16}\nu\bigg{)}\pi x^{5/2}\ln x\] \[\quad+\bigg{[}\frac{12348611926451}{18776862720}-\frac{160}{3}\pi^{2}-\frac{1712}{21}\gamma_{\rm E}-\frac{856}{21}\ln(16\,x)\] \[\qquad\qquad+\bigg{(}-\frac{15737765635}{12192768}+\frac{2255}{48}\pi^{2}\bigg{)}\nu+\frac{76055}{6912}\nu^{2}-\frac{127825}{5184}\nu^{3}\bigg{]}x^{3}\] \[\quad+\bigg{(}\frac{77096675}{2032128}+\frac{378515}{12096}\nu-\frac{74045}{6048}\nu^{2}\bigg{)}\pi x^{7/2}\] \[\quad+\bigg{[}\frac{2550713843998885153}{2214468081745920}-\frac{9203}{126}\gamma_{\rm E}-\frac{45245}{756}\pi^{2}-\frac{252755}{2646}\ln 2-\frac{78975}{1568}\ln 3-\frac{9203}{252}\ln x\] \[\qquad\qquad+\bigg{(}-\frac{680712846248317}{337983528960}-\frac{488986}{1323}\gamma_{\rm E}+\frac{109295}{1792}\pi^{2}-\frac{1245514}{1323}\ln 2+\frac{78975}{392}\ln 3-\frac{244493}{1323}\ln x\bigg{)}\nu\] \[\qquad\qquad+\bigg{(}\frac{7510073635}{24385536}-\frac{11275}{1152}\pi^{2}\bigg{)}\nu^{2}+\frac{1292395}{96768}\nu^{3}-\frac{5975}{768}\nu^{4}\bigg{]}x^{4}\] \[\quad+\bigg{[}-\frac{93098188434443}{150214901760}+\frac{1712}{21}\gamma_{\rm E}+\frac{80}{3}\pi^{2}+\frac{856}{21}\ln(16x)\] \[\qquad\qquad+\bigg{(}\frac{1492917260735}{1072963584}-\frac{2255}{48}\pi^{2}\bigg{)}\nu-\frac{45293335}{1016064}\nu^{2}-\frac{10323755}{1596672}\nu^{3}\bigg{]}\pi x^{9/2}+\mathcal{O}\big{(}x^{5}\big{)}\Bigg{\}}\,, \tag{8}\]
where the integration constant \(\psi_{0}\) is determined by initial conditions, _e.g._, when the wave frequency enters the detector's band. The results above, Eqs. (6)-(8), give the prediction of Einstein's general relativity for the GW frequency and phase chirp of non-spinning compact binaries up to 4.5PN precision.
Up to now we dealt with the time-domain GW half-phase \(\psi(t)\). It is useful (especially for data-analysis purposes) to also control the frequency-domain GW half-phase, which we denote \(\Psi(F)\). Its PN expansion is obtained by using the stationary phase approximation (SPA) [77] and reads:
\[\Psi_{\rm SPA} =2\pi F\,T_{0}+\Psi_{0}\] \[\quad+\frac{3\,v^{-5}}{128\nu}\Bigg{\{}1+\bigg{(}\frac{3715}{756}+ \frac{55}{9}\nu\bigg{)}v^{2}-16\pi v^{3}\] \[\qquad\qquad+\bigg{(}\frac{15293365}{508032}+\frac{27145}{504}\nu +\frac{3085}{72}\nu^{2}\bigg{)}v^{4}+\bigg{(}\frac{38645}{252}-\frac{65}{3} \nu\bigg{)}\pi v^{5}\ln v\] \[\qquad\qquad+\bigg{[}\frac{11583231236531}{4694215680}-\frac{640}{ 3}\pi^{2}-\frac{6848}{21}\gamma_{\rm E}-\frac{6848}{21}\ln(4v)+\bigg{(}-\frac {15737765635}{3048192}+\frac{2255}{12}\pi^{2}\bigg{)}\nu\] \[\qquad\qquad+\frac{76055}{1728}\nu^{2}-\frac{127825}{1296}\nu^{3} \bigg{]}v^{6}\] \[\quad+\bigg{[}\frac{77096675}{254016}+\frac{378515}{1512}\nu-\frac {74045}{756}\nu^{2}\bigg{]}\pi v^{7}\]
\[+\Bigg{[}-\frac{2550713843998885153}{276808510218240}+\frac{90490}{189}\pi^{2}+\frac{36812}{63}\gamma_{\rm E}+\frac{1011020}{1323}\ln 2+\frac{78975}{196}\ln 3+\frac{18406}{63}\ln v\] \[\qquad+\bigg{(}\frac{680712846248317}{42247941120}-\frac{109295}{224}\pi^{2}+\frac{3911888}{1323}\gamma_{\rm E}+\frac{9964112}{1323}\ln 2-\frac{78975}{49}\ln 3+\frac{1955944}{1323}\ln v\bigg{)}\nu\] \[\qquad+\bigg{(}-\frac{7510073635}{3048192}+\frac{11275}{144}\pi^{2}\bigg{)}\nu^{2}-\frac{1292395}{12096}\nu^{3}+\frac{5975}{96}\nu^{4}\Bigg{]}v^{8}\ln v\] \[+\Bigg{[}\frac{105344279473163}{18776862720}-\frac{640}{3}\pi^{2}-\frac{13696}{21}\gamma_{\rm E}-\frac{13696}{21}\ln(4v)\] \[\qquad+\bigg{(}-\frac{1492917260735}{134120448}+\frac{2255}{6}\pi^{2}\bigg{)}\nu+\frac{45293335}{127008}\nu^{2}+\frac{10323755}{199584}\nu^{3}\Bigg{]}\pi v^{9}+\mathcal{O}\big{(}v^{10}\big{)}\Bigg{\}}\,, \tag{9}\]
where \(v\equiv\big{(}\frac{\pi Gm\,F}{c^{3}}\big{)}^{1/3}\) with \(F\) being the Fourier frequency, and where \(T_{0}\) and \(\Psi_{0}\) are two integration constants. Again we have adjusted \(T_{0}\) in order to simplify the result (and we have absorbed the usual \(-\frac{\pi}{4}\) into \(\Psi_{0}\)). The coefficients up to 3.5PN, as well as the 4.5PN piece, are already in use; see _e.g._ App. A of [78].
In order to get intuition on the relative contribution of each PN order to the signal, we provide in Table 1 rough numerical estimates for the number of accumulated GW cycles in the frequency band of current and future detectors. Our naive estimation does not take the various detector noises into account, and a more realistic estimation should be performed [79]. Nevertheless, it can be useful to gain insight on the behavior of the PN expansion, which seems to converge well, as we see from Table 1. For all the typical compact binaries in Table 1, we find that the 4PN and 4.5PN orders amount to about a tenth of a cycle (less than 1 radian). This suggests that systematic errors due to the PN modeling may be dominated by statistical errors and negligible for LISA. However, this should be confirmed by detailed investigations along the lines of [80].
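The Newtonian entries of Table 1 can be roughly reproduced from the leading term of Eq. (8) alone. The sketch below makes the same sharp band-edge and ISCO-exit assumptions as the table; since only the leading term and round physical constants are used, small differences from the quoted values are expected.

```python
import math

G, c, Msun = 6.674e-11, 2.998e8, 1.989e30

def newtonian_cycles(m1, m2, f_lo, f_hi):
    """GW cycles from the Newtonian term of Eq. (8): Delta(psi)/pi."""
    m = (m1 + m2) * Msun
    nu = (m1 * m2) / (m1 + m2) ** 2
    f_isco = c**3 / (6**1.5 * math.pi * G * m)   # Schwarzschild ISCO
    f_hi = min(f_hi, f_isco)
    x = lambda f: (math.pi * G * m * f / c**3) ** (2.0 / 3.0)   # Eq. (1)
    return (x(f_lo)**-2.5 - x(f_hi)**-2.5) / (32.0 * math.pi * nu)

print(f"{newtonian_cycles(1.4, 1.4, 30.0, 1e3):9.1f}")    # LIGO/Virgo BNS
print(f"{newtonian_cycles(10.0, 10.0, 30.0, 1e3):9.1f}")  # LIGO/Virgo BBH
print(f"{newtonian_cycles(1e5, 1e5, 1e-4, 0.1):9.1f}")    # LISA MBH binary
```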
Besides the chirp described by the results (6)-(8), it is also important to compute the wave amplitude, in view of the data analysis of LISA [81; 82; 83] and high-accuracy comparisons with numerical relativity (see _e.g._[84; 85; 86; 87]). We decompose the waveform, at leading order in \(1/R\) where \(R\) is the distance to the source, onto a basis of spin-weighted spherical harmonics (following the conventions of [88; 89])
\[h_{+}-{\rm i}h_{\times}=\frac{8Gm\nu x}{Rc^{2}}\,\sqrt{\frac{\pi}{5}}\sum_{ \ell=2}^{+\infty}\sum_{\rm m=-\ell}^{\ell}H_{\ell\rm m}e^{-{\rm i}m\psi}Y_{-2} ^{\ell\rm m}\,, \tag{10}\]
\begin{table}
\begin{tabular}{|l||c|c||c|c||c|c|} \hline Detector & \multicolumn{2}{c||}{LIGO/Virgo} & \multicolumn{2}{c||}{ET} & \multicolumn{2}{c|}{LISA} \\ \hline Masses (\(M_{\odot}\)) & \(1.4\times 1.4\) & \(10\times 10\) & \(1.4\times 1.4\) & \(500\times 500\) & \(10^{5}\times 10^{5}\) & \(10^{7}\times 10^{7}\) \\ \hline PN order & \multicolumn{6}{c||}{cumulative number of cycles} \\ \hline \hline Newtonian & \(2\,562.599\) & \(95.502\) & \(744\,401.36\) & \(37.90\) & \(28\,095.39\) & \(9.534\) \\ \hline
1PN & \(143.453\) & \(17.879\) & \(4\,433.85\) & \(9.60\) & \(618.31\) & \(3.386\) \\ \hline
1.5PN & \(-94.817\) & \(-20.797\) & \(-1\,005.78\) & \(-12.63\) & \(-265.70\) & \(-5.181\) \\ \hline
2PN & \(5.811\) & \(2.124\) & \(23.94\) & \(1.44\) & \(11.35\) & \(0.677\) \\ \hline
2.5PN & \(-8.105\) & \(-4.604\) & \(-17.01\) & \(-3.42\) & \(-12.47\) & \(-1.821\) \\ \hline
3PN & \(1.858\) & \(1.731\) & \(2.69\) & \(1.43\) & \(2.59\) & \(0.876\) \\ \hline
3.5PN & \(-0.627\) & \(-0.689\) & \(-0.93\) & \(-0.59\) & \(-0.91\) & \(-0.383\) \\ \hline
4PN & \(-0.107\) & \(-0.064\) & \(-0.12\) & \(-0.04\) & \(-0.12\) & \(-0.013\) \\ \hline
4.5PN & \(0.098\) & \(0.118\) & \(0.14\) & \(0.10\) & \(0.14\) & \(0.065\) \\ \hline \end{tabular}
\end{table}
Table 1: Contribution of each PN order to the total number of accumulated cycles inside the detector’s frequency band, for typical (but non-spinning) quasi-circular compact binaries observed by current and future detectors. We have approximated the frequency bands of LIGO/Virgo, Einstein Telescope (ET) and LISA with step functions, respectively between \(\left[30\,{\rm Hz},10^{3}\,{\rm Hz}\right]\), \(\left[1\,{\rm Hz},10^{4}\,{\rm Hz}\right]\) and \(\left[10^{-4}\,{\rm Hz},10^{-1}\,{\rm Hz}\right]\). When the merger occurs within the frequency band of the detector, the exit frequency is taken to be the Schwarzschild ISCO, \(f_{\rm ISCO}=c^{3}/(6^{3/2}\pi Gm)\). The contributions due to the non-linearities of GR (_e.g._, tails) increase with the PN order and are detailed in [24].
where the half-phase variable is given by (8). All \(H_{\ell\text{m}}\) modes are currently known at 3.5PN order for spinning, non-precessing, quasi-circular orbits [88; 89; 90; 91; 92]. Although we were able to derive the phase with 4.5PN accuracy, the same precision for the modes is yet out of reach, since, even though the 4.5PN radiation-reaction terms in the equations of motion are known [93; 94], neither the source quadrupole moment nor the non-linear contributions to the GW propagation are fully controlled at 4.5PN order (only the contributions that enter the 4.5PN flux for circular orbits are known). We thus report the extension of the dominant quadrupole mode \((\ell,\text{m})=(2,2)\) for non-spinning, quasi-circular orbits up to 4PN order:
\[H_{22}=1 +\left(-\frac{107}{42}+\frac{55}{42}\nu\right)\!x+2\pi x^{3/2}+\left(-\frac{2173}{1512}-\frac{1069}{216}\nu+\frac{2047}{1512}\nu^{2}\right)\!x^{2}+\left[-\frac{107\pi}{21}+\left(\frac{34\pi}{21}-24\,\text{i}\right)\nu\right]x^{5/2}\] \[+\left[\frac{27027409}{646800}-\frac{856}{105}\,\gamma_{\text{E}}+\frac{428\,\text{i}\,\pi}{105}+\frac{2\pi^{2}}{3}+\left(-\frac{278185}{33264}+\frac{41\pi^{2}}{96}\right)\!\nu-\frac{20261}{2772}\nu^{2}+\frac{114635}{99792}\nu^{3}-\frac{428}{105}\ln(16x)\right]\!x^{3}\] \[+\left[-\frac{2173\pi}{756}+\left(-\frac{2495\pi}{378}+\frac{14333\,\text{i}}{162}\right)\!\nu+\left(\frac{40\pi}{27}-\frac{4066\,\text{i}}{945}\right)\!\nu^{2}\right]\!x^{7/2}\] \[+\left[-\frac{846557506853}{12713500800}+\frac{45796}{2205}\gamma_{\text{E}}-\frac{22898}{2205}\text{i}\pi-\frac{107}{63}\pi^{2}+\frac{22898}{2205}\ln(16x)\right.\] \[\left.\qquad+\left(-\frac{336005827477}{4237833600}+\frac{15284}{441}\gamma_{\text{E}}-\frac{219314}{2205}\text{i}\pi-\frac{9755}{32256}\pi^{2}+\frac{7642}{441}\ln(16x)\right)\!\nu\right.\] \[\left.\qquad+\left(\frac{256450291}{7413120}-\frac{1025}{1008}\pi^{2}\right)\!\nu^{2}-\frac{81579187}{15567552}\nu^{3}+\frac{26251249}{31135104}\nu^{4}\right]\!x^{4}+\mathcal{O}\!\left(x^{9/2}\right). \tag{11}\]
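To see how quickly the amplitude series converges, the snippet below (truncated at 2PN for brevity; an illustration, not the full Eq. (11)) evaluates the first few contributions to \(H_{22}\) at \(x=0.1\) for equal masses.

```python
import math

def h22_terms(x, nu):
    """Leading relative terms of the (2,2) amplitude, Eq. (11), to 2PN."""
    return [
        1.0,
        (-107/42 + 55*nu/42) * x,
        2*math.pi * x**1.5,
        (-2173/1512 - 1069*nu/216 + 2047*nu**2/1512) * x**2,
    ]

terms = h22_terms(x=0.1, nu=0.25)
print([f"{t:+.5f}" for t in terms], "sum:", f"{sum(terms):+.5f}")
```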
Satisfyingly, this result is in perfect agreement with linear black-hole perturbation theory in the limit when \(\nu\to 0\); see App. B of [58]. Again, it would be interesting to compare the PN prediction (11) with second-order (numerical or analytical) BH perturbation theory.
We thank Adam Pound for discussions on BH perturbation and GSF results, Bala Iyer for his longstanding support and interest in this project, and Alessandra Buonanno for suggestions. F.L. is grateful to the Institut d'Astrophysique de Paris for its hospitality during this project. He received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (grant agreement No 817791). G.F. thanks IIT Madras for a Visiting Faculty Fellow position under the IoE program during the completion of this work.
|